There was a "kernel panic" - at least sort of.

errpt | head
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
A6DF45AA   1205044411 I O RMCdaemon      The daemon is started.
67145A39   1205044111 U S SYSDUMP        SYSTEM DUMP
F48137AC   1205043911 U O minidump       COMPRESSED MINIMAL DUMP
0975DD6C   1205043911 P S ABEND          KERNEL ABNORMALLY TERMINATED
9DBCFDEE   1205044311 T O errdemon       ERROR LOGGING TURNED ON
E87EF1BE   1204150011 P O dumpcheck      The largest dump device is too small.
A6DF45AA   1204012511 I O RMCdaemon      The daemon is started.
67145A39   1204012311 U S SYSDUMP        SYSTEM DUMP
F48137AC   1204012111 U O minidump       COMPRESSED MINIMAL DUMP

How do I analyze the dumpfile? How can I trace what happened?
Your dump space is too small, so it could not store a full dump. sysdumpdev -e will give you an estimate of how much dump space you'll need to capture a full dump. I would suggest you provide a dump LV with at least 1.5x or 2x the estimated size.

sysdumpdev -l will show you the current dump device configuration. You can modify the dump devices with its other flags, or via smit sysdumpdev. A dump device is simply a logical volume of type dump. It's recommended that the primary dump device live on local disk, preferably within rootvg.

Since you at least have the minimal dump, the best course of action (assuming you have IBM support) is to open a case with IBM and upload the dump file within a snap (snap -ac) according to their instructions. They will analyze the snap and dump, and (hopefully) suggest corrective actions. Best case, you stumbled upon a specific bug that has already been fixed; more likely they will recommend you upgrade to the latest service pack for your technology level.
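The 1.5x-2x sizing rule can be sketched with a small shell helper. Since sysdumpdev only exists on AIX, the estimate line below is a hypothetical sample (the exact wording varies by AIX level); on a real system you would feed the output of sysdumpdev -e into the same parsing step:

```shell
# Hypothetical sample of `sysdumpdev -e` output (exact wording varies by AIX level)
estimate_line='0453-041 Estimated dump size in bytes: 1073741824'

# Extract the byte count (last field) and double it for headroom
bytes=$(printf '%s\n' "$estimate_line" | awk '{print $NF}')
recommended=$((bytes * 2))

echo "Estimated dump size:              $bytes bytes"
echo "Recommended dump LV size (2x):    $recommended bytes"
```

The doubled figure is what you would size the dump logical volume to, so a full dump still fits if the estimate grows between reboots.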
What to do after a kernel panic on AIX?
I'm running Linux Mint 21.1 Xfce and it was all good up to and including 5.15.0-60-generic, but after updating to -67 I started getting an "out of memory" error at boot (hit any key), followed by "kernel panic, not syncing, VFS unable to mount rootfs on unknown block". Booting trusty old version -60 via advanced options worked fine, so I figured I'd just skip -67 and wait for the next version. But now -69 is out and doing the same thing.

A bit of Googling turned up these:

https://www.geekswarrior.com/2019/07/solved-how-to-fix-kernel-panic-on-linux.html
https://forums.linuxmint.com/viewtopic.php?t=338544

So I tried this:

sudo mount /dev/nvme0n1p1 /mnt
sudo mount --bind /dev /mnt/dev
sudo mount --bind /dev/pts /mnt/dev/pts
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt
update-initramfs -u -k 5.15.0-69-generic
update-grub2

But I only ended up losing my Windows GRUB entry, and -69 STILL won't boot. It's fairly new hardware (13700K, DDR5, 4070 Ti) so I wondered if that had anything to do with it, but it works fine on -60 and earlier. I'm happy enough to stick with an older kernel for now, but eventually that's going to become undesirable security-wise if I'm way out of date. Any help would be appreciated.
I found a related answer: https://unix.stackexchange.com/a/717710

It says to modify MODULES and COMPRESS in initramfs.conf. Using an editor of your choice (you will need sudo privileges):

Set MODULES=dep
Set COMPRESS=xz
Execute sudo update-initramfs -u
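For reference, on Mint/Ubuntu the file in question is /etc/initramfs-tools/initramfs.conf (a sketch, assuming the stock initramfs-tools layout); after the edit the relevant lines would look like:

```
# /etc/initramfs-tools/initramfs.conf (excerpt)
# Include only the modules the current hardware needs, and compress with xz
MODULES=dep
COMPRESS=xz
```

Then rebuild the image with sudo update-initramfs -u. MODULES=dep produces a much smaller initramfs, which is what works around the boot-time "out of memory" error.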
Kernel panic when booting new kernels
I'm trying to get my TAS2505 audio chip working with my arm64 Linux system (i.MX8MM). It gets detected as an audio device, however when I try to play any kind of audio I get a kernel panic.

dmesg:

[    0.594275] of_get_named_gpiod_flags: parsed 'reset-gpios' property of node '/soc@0/bus@30800000/i2c@30a50000/tlv320aic32x4-hifi@18[0]' - status (0)
[    0.594333] tlv320aic32x4 2-0018: Looking up ldoin-supply from device tree
[    0.594343] tlv320aic32x4 2-0018: Looking up ldoin-supply property in node /soc@0/bus@30800000/i2c@30a50000/tlv320aic32x4-hifi@18 failed
[    0.594380] tlv320aic32x4 2-0018: Looking up iov-supply from device tree
[    0.594501] tlv320aic32x4 2-0018: Looking up dv-supply from device tree
[    0.594613] tlv320aic32x4 2-0018: Looking up av-supply from device tree
[    1.163560] ALSA device list:
[    1.163567]   #0: tas2505-hifi

aplay -vv /home/test.wav

Playing WAVE '/home/test.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo
Plug PCM: Hardware PCM card 0 'tas2505-hifi' device 0 subdevice 0
Its setup is:
  stream           : PLAYBACK
  access           : RW_INTERLEAVED
  format           : S16_LE
  subformat        : STD
  channels         : 2
  rate             : 48000
  exact rate       : 48000 (48000/1)
  msbits           : 16
  buffer_size      : 24000
  period_size      : 6000
  period_time      : 125000
  tstamp_mode      : NONE
  tstamp_type      : MONOTONIC
  period_step      : 1
  avail_min        : 6000
  period_event     : 0
  start_threshold  : 24000
  stop_threshold   : 24000
  silence_threshold: 0
  silence_size     : 0
  boundary         : 6755399441055744000
  appl_ptr         : 0
  hw_ptr           : 0
#################################### +                | 75%

The debug console gives this:

[  122.210201] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
[  122.218990] Mem abort info:
[  122.221782]   ESR = 0x96000044
[  122.224835]   EC = 0x25: DABT (current EL), IL = 32 bits
[  122.230147]   SET = 0, FnV = 0
[  122.233200]   EA = 0, S1PTW = 0
[  122.236340]   FSC = 0x04: level 0 translation fault
[  122.241216] Data abort info:
[  122.244094]   ISV = 0, ISS = 0x00000044
[  122.247928]   CM = 0, WnR = 1
[  122.250894] user pgtable: 4k pages, 48-bit VAs, pgdp=00000000465c8000
[  122.257336] [0000000000000000] pgd=0000000000000000, p4d=0000000000000000
[  122.264130] Internal error: Oops: 96000044 [#1] PREEMPT SMP
[  122.269703] Modules linked in:
[  122.272759] CPU: 0 PID: 2065 Comm: aplay Not tainted 5.15.32-karo+gc01cf92b4155 #1
[  122.280331] Hardware name: Ka-Ro TX8M-1610 module on GOcontroll Moduline Screen for av123z7m-n17 screen (DT)
[  122.290157] pstate: 800000c5 (Nzcv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[  122.297121] pc : sdma_transfer_init+0x1e8/0x330
[  122.301661] lr : sdma_transfer_init+0x19c/0x330
[  122.306194] sp : ffff8000097b39c0
[  122.309507] x29: ffff8000097b39c0 x28: ffff000002e68298 x27: ffff000002e6c6b0
[  122.316649] x26: 00000000000000c0 x25: 0000000000000000 x24: 00000000000003c2
[  122.323791] x23: 0000000000000020 x22: ffff800009355200 x21: ffff000002e68080
[  122.330933] x20: ffff000006243500 x19: ffff000002e68298 x18: ffffffffffffffff
[  122.338075] x17: 203a6c656e6e6168 x16: 632063696c637963 x15: ffff8000092a12ec
[  122.345218] x14: 0000000000000000 x13: 000000000000068c x12: ffff8000097b35e0
[  122.352359] x11: ffff8000091c19e0 x10: 00000000fffff000 x9 : 0000000000000000
[  122.359501] x8 : ffff800009355280 x7 : 0000000000000000 x6 : 000000000000003f
[  122.366642] x5 : 0000000000000040 x4 : 0000000000000000 x3 : 0000000000000004
[  122.373783] x2 : 0000000000000000 x1 : 0000000000000000 x0 : 0000000001830020
[  122.380926] Call trace:
[  122.383371]  sdma_transfer_init+0x1e8/0x330
[  122.387557]  sdma_prep_dma_cyclic+0xc4/0x3f0
[  122.391830]  snd_dmaengine_pcm_trigger+0xec/0x1c0
[  122.396540]  dmaengine_pcm_trigger+0x18/0x24
[  122.400814]  snd_soc_pcm_component_trigger+0x164/0x230
[  122.405957]  soc_pcm_trigger+0xbc/0x1c0
[  122.409796]  snd_pcm_do_start+0x38/0x44
[  122.413637]  snd_pcm_action_single+0x48/0xa4
[  122.417910]  snd_pcm_action+0x7c/0x9c
[  122.421573]  snd_pcm_start+0x24/0x30
[  122.425150]  __snd_pcm_lib_xfer+0x718/0x800
[  122.429335]  snd_pcm_common_ioctl+0x1508/0x1a7c
[  122.433867]  snd_pcm_ioctl+0x34/0x50
[  122.437444]  __arm64_sys_ioctl+0xb8/0xe0
[  122.441369]  invoke_syscall+0x48/0x114
[  122.445123]  el0_svc_common.constprop.0+0x44/0xfc
[  122.449829]  do_el0_svc+0x28/0x90
[  122.453146]  el0_svc+0x28/0x80
[  122.456205]  el0t_64_sync_handler+0xa4/0x130
[  122.460478]  el0t_64_sync+0x1a0/0x1a4
[  122.464147] Code: b90026c0 52800400 531b6af7 72a03060 (b9000320)
[  122.470241] ---[ end trace 34c93087276a38a7 ]---
[  122.474858] Kernel panic - not syncing: Oops: Fatal exception
[  122.480603] SMP: stopping secondary CPUs
[  122.484527] Kernel Offset: disabled
[  122.488013] CPU features: 0x00002001,20000842
[  122.492369] Memory Limit: none

I'm kinda lost on how I should continue on this.
Turns out there were multiple issues, but only one causing the panic: I was missing the i.MX SDMA firmware, which needs to be included in the rootfs. So I had to bitbake a reference Yocto build, mount its rootfs, pull the firmware out of /lib/firmware/imx/sdma, and put it in my custom rootfs. After putting that in, the kernel panic went away. Audio still wasn't working, but that was due to faults in my device tree, which were impossible to troubleshoot while the kernel kept panicking.
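The fix described above (copying the SDMA firmware from a reference rootfs into the custom one) can be sketched as follows. The mount points are stand-ins created with mktemp for demonstration, and the firmware filename sdma-imx7d.bin is hypothetical; only the /lib/firmware/imx/sdma path comes from the post:

```shell
# Scratch directories standing in for the two mounted root filesystems
ref_rootfs=$(mktemp -d)   # would be the mounted Yocto reference image
my_rootfs=$(mktemp -d)    # would be the custom rootfs

# The i.MX SDMA driver loads its firmware from /lib/firmware/imx/sdma
mkdir -p "$ref_rootfs/lib/firmware/imx/sdma" "$my_rootfs/lib/firmware/imx/sdma"
echo 'dummy firmware blob' > "$ref_rootfs/lib/firmware/imx/sdma/sdma-imx7d.bin"

# Copy the firmware across, preserving the path the kernel expects
cp "$ref_rootfs/lib/firmware/imx/sdma/"*.bin "$my_rootfs/lib/firmware/imx/sdma/"

ls "$my_rootfs/lib/firmware/imx/sdma"
```

On the real target the destination would be the mounted custom rootfs, and the kernel finds the blob at boot without any further configuration.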
kernel panic when trying to play audio
I have an Ubuntu 20.04 VM hosted in Azure, deployed from the prevailing image at the time. It runs a couple of Docker containers, shuts down every day at 17h00, and starts up at 06h30 every morning.

When I checked today, the VM was inaccessible. I eventually found that the machine kept rebooting itself. From the Serial Log in Azure, I saw repeated kernel panics, with the VM restarting automatically. I tracked it down to this bug: https://bugs.launchpad.net/ubuntu/+source/linux-aws-5.13/+bug/1977919

Basically, a change was introduced in Ubuntu 5.13.0-1028.33~20.04.1-azure 5.13.19, which was resolved shortly after in 5.13.0-1029. However, I didn't do updates. Checking Update Management in Azure shows no history of patching. No one else logged on to the machine between yesterday and today.

I attached the disk to a different VM and inspected the kernel logs. Startup yesterday looked like this:

Jun  9 06:08:29 myserver kernel: [    0.000000] Linux version 5.13.0-1025-azure (buildd@lcy02-amd64-007)

Today:

Jun 10 06:07:55 myserver kernel: [    0.000000] Linux version 5.13.0-1028-azure (buildd@lcy02-amd64-109)

In dpkg.log, I saw this right after startup yesterday:

2022-06-09 06:33:49 install linux-image-5.13.0-1028-azure:amd64 <none> 5.13.0-1028.33~20.04.1

How do I determine what triggered this?
Does the /var/log/unattended-upgrades directory exist? If it does, read the logs within. If the unattended-upgrades package has been installed (which may be the default on most Debian-based distributions unless you chose a minimal installation), then the systemd service apt-daily-upgrade.service (or /etc/cron.daily/apt if the distribution does not use systemd) will trigger an automatic security patch upgrade daily.
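To correlate this with what actually landed on the machine, you can grep /var/log/dpkg.log for kernel image installs. A sketch, using a temp file seeded with the exact line from the question so it is self-contained (on a real system you would grep the live log instead):

```shell
# Sample dpkg.log content (line copied from the question); a real check
# would target /var/log/dpkg.log and its rotated copies
log=$(mktemp)
cat > "$log" <<'EOF'
2022-06-09 06:33:49 install linux-image-5.13.0-1028-azure:amd64 <none> 5.13.0-1028.33~20.04.1
EOF

# Any 'install linux-image-*' entry shows when and which kernel was installed
grep 'install linux-image' "$log"
```

If the timestamps of such entries match the apt-daily-upgrade.service runs (journalctl -u apt-daily-upgrade.service), unattended-upgrades is confirmed as the trigger.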
How to determine why the kernel updated
I'm on a 32-bit CentOS 7 machine. I just run these commands and a kernel panic is observed:

cd repos/
git clone https://github.com/SergioBenitez/Rocket
cd Rocket/
cd examples/hello_world/
cargo run -v

kernel BUG at kernel/auditsc.c:1532!
invalid opcode: 0000 [#1] SMP

What should I do? Where should I report it? I have no idea how to react.
Solution

Dumped 32-bit CentOS 7 and installed 32-bit Ubuntu 16.04 LTS, which seems to be the last 32-bit Ubuntu LTS. No kernel panic is observed with 32-bit Ubuntu 16.04 LTS while installing Rust or building/running Rust applications.

History

This 32-bit machine previously ran Ubuntu 12.04 LTS and 14.04 LTS and the experience was smooth. So 16.04 LTS looked like a sensible choice.

Service/updates

The only problem is that Ubuntu 16.04 LTS goes out of service in April 2021. So no more updates! To work around that, another solution might be to install 32-bit Debian on the machine. The machine never had Debian before, so anything might happen :(

Final solution: openSUSE Tumbleweed 32-bit

Eventually I installed openSUSE Tumbleweed 32-bit, which is updated regularly thanks to being a rolling release. It works great =)
kernel bug at kernel/auditsc.c:1532
We got a few new machines: x3850 X6. All could PXE boot fine, except one machine, which gives the following kernel panic - looks like an exciting issue: we cannot even scroll up after the kernel panic occurs, after 30-40 seconds. It hangs so badly that I cannot even type anything. Does anybody have any clue what the problem could possibly be? If it is a HW bug, then what should we replace? CPUs? Motherboard?

- The BIOS settings are exactly the same as on the working machines.
- The firmware/BIOS versions are exactly the same as on the working machines.
- Tried a cold boot: the same kernel panic.
- Tried booting with the kernel parameter "acpi=off": it just hit the same kernel panic at ~18 sec, instead of the usual panic at 30-40 sec.
- Tried "noapic nomodeset xforcevesa": panic after 30-40 sec.
- Tried "acpi=off noapic nomodeset xforcevesa": panic after 30-40 sec.
- Tried the "isolcpus=0" boot param: same kernel panic after 30-40 sec.
- Tried to boot a slacko-5.6-PAE.iso: it booted normally! 3.10.5 SMP PAE. But we have to use SLES. The PAE kernel only sees ~65 GByte RAM, if that is useful info.
- Tried https://www.memtest86.com/downloads/memtest86-iso.zip to run a simple memtest, but after 59 seconds of running without memory errors, the machine froze. -> UPDATE: Memtest86+ from http://www.memtest.org/#downiso doesn't freeze.

Once I saw: "Kernel panic - not syncing: Watchdog detected hard LOCKUP on cpu 18" - there are 4 CPUs in the machine, each with 18 cores, so I don't know which one this refers to.

UPDATE: with the "maxcpus=0" kernel boot parameter, it finally booted, but I'm still investigating, because it still said: "A start job is running for Activation of LVM2 logical volumes (Xmin Xs / no limit)" - but maybe a CPU HW issue?
After an Emulex card driver upgrade (version 11.0.270.24 to 11.4.1186.3), it doesn't kernel panic any more.
Fixing recursive fault but reboot is needed on x3850 x6 SLES12 [closed]
I'm trying to install Kali Linux (2017.3 32-bit) in VirtualBox (5.2.2), but when I select 'Install' from the menu, I get an error saying "Kernel panic" and "bad EIP value". After that nothing happens. Please see the attached photo.

Other specs:
VM version: Linux 2.6 / 3.x / 4.x (32-bit)
PAE/NX enabled
1024 MB RAM

The VM is on a Win10 machine. The ISO checksum matches for my download. I have tried installation with Debian (32-bit) as well and it worked just fine.
You're running a 64-bit host OS (Windows 10), but only 32-bit guest options are available to you. The reason for this is that you need to enable the virtualisation settings (VT-x/AMD-V) in the BIOS.
Kali Linux installation - Kernel panic
I have my Linux root on an F2FS USB flash drive. The kernel is on another device accessible by the bootloader. I'm trying to start it with the parameters root=/dev/sda1 rootwait rootfstype=f2fs, but I always end up with a kernel panic:

VFS: Cannot open root device "sda1" or unknown-block(8,1): error -19
Please append a correct "root=" boot option; here are the available partitions:
0100     8192 ram0 (driver?)
0101     8192 ram1 (driver?)
0800  3913728 sda driver: sd
0801  3913728 sda1 973c7215-01
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,1)
---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,1)

sda1 is the correct device, and I'm able to mount it with no problems on another computer running Arch Linux. I partitioned it using fdisk and formatted it using mkfs.f2fs from f2fs-tools.

Why does the kernel panic? Am I missing the F2FS module? If so, how can I load it at boot time?
As @derobert pointed out, you have to build the kernel with F2FS support. In my case it wasn't even included as a loadable module. To build the kernel yourself:

Grab the source from kernel.org.
Get the default kernel config for your platform. (I got mine from here for the TI-Nspire calculator series.)
Modify it to include F2FS by setting CONFIG_F2FS_FS to y.
Save it as .config in the root of the downloaded kernel source, and simply build it using make.

You'll then find your fresh kernel in arch/arm/boot.
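The config change can be done with a one-line sed instead of an editor. A sketch on a small sample fragment so it is self-contained; on a real tree the target is the .config at the source root, and the option may also be absent entirely (in which case append it rather than substitute):

```shell
# Sample kernel .config fragment; a real one sits at the kernel source root
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
CONFIG_EXT4_FS=y
CONFIG_F2FS_FS=m
EOF

# Flip F2FS from module (=m) to built-in (=y) so no initrd is needed to mount root
sed -i 's/^CONFIG_F2FS_FS=m$/CONFIG_F2FS_FS=y/' "$cfg"

grep '^CONFIG_F2FS_FS=' "$cfg"
```

After editing, running make oldconfig (or just make) will sanity-check the changed option against its dependencies.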
Linux root on F2FS
I have a Gateway GM5474 running XBMCbuntu on a 500GB hard drive (the original, I believe, or maybe a slightly newer drive).

A little background: the system is still stock, and I was having issues getting any Linux kernel to install; I considered it a miracle that it installed at all. I tried all three hard drives I have until it installed on the 500GB. (The others are 500GB and 1TB.)

It was working fine, but now it randomly freezes up; on reboot there is nothing logged to syslog, nor is there anything in dmesg indicating an issue. I thought it may have been due to the screensaver, as I could not ping the machine after the screensaver had been running for longer than 20 minutes. I also can't turn off the screensaver for whatever reason; I tried the command-line method and the gconf method and neither affects the screensaver.

EDIT: It was not always the screensaver causing the issue; it seems to stall after a solid hour has elapsed.

I would like some advice, but if the general consensus is to reinstall using something suited to older hardware, then so be it.
First, run memtest86 to establish whether the CPU/mainboard/RAM/power supply are working properly. If you get errors, it could be any one of those - I'd say an 80% likelihood it's a RAM issue, 19% the mainboard, and 1% the power supply. It's most probably not a CPU issue. If you get errors and you have multiple memory modules, try removing all but one and test again.

"I tried all three hard drives I have until it installed on the 500GB. (the others are 500GB and 1TB)"

That sounds rather funky. Luckily, hard disks are the one thing in a computer that you can test very reliably, with SMART self-tests. Start a self-test as root on a disk with

smartctl -t long /dev/sdX

where /dev/sdX is the hard disk you want to test. You can do this while the disk is in use; the self-test runs in the background, won't block the disk, and won't damage any data. It will take a while though, depending on the hard disk size; smartctl will tell you exactly how long when you start it. Don't reboot or shut down your computer, as this will interrupt the self-test and you would have to start it again from scratch.

When it's done, query the result with

smartctl -a /dev/sdX | less

Scroll down to the SMART Self-test log block. If the topmost test says Completed without error, the disk is fine; if it reports any error, throw the disk away. The results have been 100% accurate for me; there's no uncertainty involved like with other computer components.
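The final check above can be scripted. A sketch that parses a saved smartctl report rather than a live device, using an abridged, illustrative SMART Self-test log block (real smartmontools output has more columns, but the "Completed without error" status text is what matters):

```shell
# Saved `smartctl -a /dev/sdX` output; abridged, illustrative sample
report=$(mktemp)
cat > "$report" <<'EOF'
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)
# 1  Extended offline    Completed without error       00%      8125
EOF

# The topmost entry is the most recent self-test; check whether it passed
if grep -q 'Completed without error' "$report"; then
  verdict="disk OK"
else
  verdict="disk FAILING - replace it"
fi
echo "$verdict"
```

On a live system you would pipe smartctl -a /dev/sdX straight into the grep once the long test has finished.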
XBMCbuntu freezes at random intervals
I'm using Red Hat Enterprise Linux 4, but when I upgraded the kernel, an error occurred. When the computer boots with the new kernel (2.6.9-100.EL), I receive the following errors:

mkrootdev: label /1 not found
mount: error 2 mounting ext3
mount: error 2 mounting none
switchroot: mount failed: 22
umount /initrd/dev failed: 2
Kernel panic - not syncing: Attempted to kill init!

When I boot the system with the old kernel (2.6.9-42.EL), it boots successfully. My question is: when I reboot the system, it attempts to boot with the new kernel every time, so I have to choose the old kernel by hand every time. How do I get rid of this problem? How can I uninstall the new kernel cleanly? Or how can I use the new kernel without problems?

My grub.conf looks something like this:

default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux ES (2.6.9-100.ELsmp)
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.9-100.ELsmp ro root=LABEL=/1 rhgb quiet
    initrd /boot/initrd-2.6.9-100.ELsmp.img
title Red Hat Enterprise Linux ES (2.6.9-100.EL)
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.9-100.EL ro root=LABEL=/1 rhgb quiet
    initrd /boot/initrd-2.6.9-100.EL.img
title Red Hat Enterprise Linux ES (2.6.9-42.ELsmp)
    root (hd0,0)
Get your machine running with the good kernel, then edit /etc/grub.conf so it defaults to your good kernel: check the line in grub.conf which says "default=0". Changing that will fix your manual-intervention boot issue. In your case it would need to be "default=3" to boot your old good SMP kernel.

Then look at removing your problem kernel with rpm -e; maybe do a dry run first with rpm -e --test.
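For reference, GRUB legacy counts title entries from 0, so with the four entries shown in the question the old SMP kernel (2.6.9-42.ELsmp) is entry 3. A sketch of the top of grub.conf after the change:

```
# /etc/grub.conf (excerpt) -- title entries are numbered from 0,
# so the fourth entry (2.6.9-42.ELsmp) is selected with default=3
default=3
timeout=5
```

No other lines need to change; the title blocks themselves stay as they are.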
Red Hat Kernel Upgrade problem
I had a Linux Mint system which was working fine until it froze, and the "safe restart" of Alt+SysRq+REISUB didn't work, so I had to hard shutdown. After this, the bootable SSD stopped being detected at boot, even as a storage device. I bought a new SSD - it was detected the first time as storage, but the second time it stopped showing up at boot. I tried lsblk and fdisk -l from a live USB, but they failed to show it. The live USB works fine, so the motherboard is fine; the data cables should be fine too, as there was no such issue before the system froze. What other problem can there be?
In recent years, there seems to have been a fairly large batch of bad SATA data cables on the market. Initially they seem fine, but something (an error in chemical composition?) causes them to age and degrade rapidly. Once aged, moving or bending the cables may make the failure happen sooner. Because of this, I would not be so quick to trust the cables just because "they used to work fine before". That's just it: they'll work fine, until they suddenly don't. Also, replacing the data cable is probably the cheapest thing to try, so it would make sense to do it first.

Another possibility would be a bad controller on the motherboard, but that would most likely be either a hardware bug (which would have been present since new; later drivers might improve things by implementing a workaround) or a hardware failure (requiring either an add-on card to replace the failing controller, or a motherboard replacement).
Internal SSD not detected in boot or lsblk
I've been trying to configure a kernel that will not require an initrd to boot. I haven't succeeded. The filesystem I'm attempting to boot from is ext4, and I have all the extended filesystems compiled into my kernel (not as modules, in the kernel itself). I'm using an Early-2011 MacBook Pro with a 2.5" 1TB WD hard drive.

My command line is root=PARTUUID=5c595262-cd6a-48f9-b199-6d72dae95b09 ro rootfstype=ext4 rootwait, and every time I boot I get a kernel panic about not being able to mount the root partition and the root= section of my command line being invalid. (See the update below.)

I have CONFIG_DEVTMPFS enabled, which mounts /dev before the root filesystem; since I originally posted this question I've switched to using a PARTUUID instead of specifying the dev path, so I shouldn't need it anymore, but it's still enabled. I have the CONFIG_EFI_STUB option enabled and am using that to boot, and my command line is hard-coded into the kernel.

Kernel panic when I try to boot (because it's not seeing my HDD)
My .config file

What configuration options do I need to change to get my system working without an initrd, talking to my HDD? I can't use an initrd because my only method of booting is directly loading my kernel EFI-style, and therefore, since the kernel itself can't load an initrd (as far as I know, a bootloader is required to load the initrd), I can't use one.

UPDATE: Since posting this question, I've determined that everything is configured fine, except my kernel isn't seeing my 2.5" internal 1TB SATA drive (which is why my kernel was originally saying the root argument was invalid: sda didn't exist). So what do I need to configure to get my kernel talking to my internal SATA HDD? (And should I post that as a separate question?)

Output of lsmod running with a slightly-modified configuration to support initial ram disks
lspci on that same slightly-modified configuration and initial ram disk
I was able to get it running by compiling my SATA controller support (libata) into my kernel; it was originally being compiled as a module, which is why it would run fine with an initial ram disk.
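As a sketch, the relevant .config symbols would look like the fragment below. CONFIG_SATA_AHCI is an assumption (an Early-2011 MacBook Pro most likely exposes its SATA controller in AHCI mode), so verify the actual driver in use with lspci -k before committing to it:

```
# Kernel .config (excerpt) -- SATA/SCSI-disk support built in (=y), not modular (=m)
CONFIG_ATA=y          # libata core
CONFIG_SATA_AHCI=y    # AHCI low-level driver (assumed controller type)
CONFIG_BLK_DEV_SD=y   # SCSI disk support, needed for /dev/sda to exist
```

All three must be =y: with any of them as =m, the driver lives in the initramfs you are trying to avoid, and the root device never appears.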
Trying to get my system running without an initrd
I have just installed FreeBSD 10.1 Release i386 on a new computer. I pretty much used the standard installer defaults, except I chose ZFS, RAID-Z1, encryption, and encrypted swap, and I did not install ports (it would consistently freeze when installing them). Installation seemed to go fine. When I rebooted, it prompted me for the password to decrypt the drive; I entered it, it gave various normal GELI messages, and then:

Trying to mount root from zfs:zroot/ROOT/default []...

Fatal double fault:
eip = 0xc186ad12
esp = 0xc7a3ef80
ebp = 0xc7a3f2e0
cpuid = 3; apic id = 03
panic: double fault
cpuid = 3
KDB: stack backtrace:
#0 0xc0b53ed2 at kdb_backtrace+0x52
#1 0xc0b1688f at panic+0x11f
#2 0xc101bedb at dblfault_handler+0xab

It then threatened to reboot in 15 seconds unless I pressed a key. Pressing a key pauses that countdown, but the only option at that point is to press another key to reboot. I've since rebooted several times, and the exact same thing has happened every time. I do not know how to proceed. Any advice would be appreciated. Thank you.

EDIT: Thinking the problem might be related to encryption, I tried the installation over again, the same way except not using encryption (neither the main nor swap). The exact same problem occurs.
Answering my own question: Switching from i386 to amd64 seems to have resolved the issue (and also the issue of the ports installation freezing). I originally did not use amd64 because I don't have an AMD processor, but I now see from the documentation that amd64 can be used with certain Intel processors as well, including the one I have.
FreeBSD "Fatal double fault" upon entering ZFS encryption password
Since our production server does not start up (a very important server - RHEL 7.2), we tried to access it in single-user mode according to this link: https://www.tecmint.com/boot-into-single-user-mode-in-centos-7/

After entering the single-user mode details using the VM console, Linux stops at the following. What can we do from this stage in order to recover the production server?
If the GRUB boot menu includes multiple kernel versions, try booting with an older version. (There should always be at least the current kernel and the kernel used by the OS installer: the latter has a version number like 0-rescue-<numbers>.)

If the boot is successful with an older kernel, then the problem might be a damaged/missing initramfs file. This is pretty common if your /boot filesystem ran out of disk space while installing a kernel update package, for example. (Each kernel version has its own initramfs file, so if the problem was caused during the most recent update, the older kernel and its initramfs will most likely work.) If the system is otherwise running normally with the old kernel, you can use a command like

mkinitrd /boot/initramfs-3.10.0-327.el7.img 3.10.0-327.el7

to recreate the initramfs file for the new kernel.

But if booting with an older kernel fails too, the problem might be something else. In that case, you should perform a rescue-mode boot from the installation media. In the case of VMware, that means making sure the virtual hardware includes a virtual CD-ROM drive, "inserting" an ISO image of RHEL 7.x installation media (preferably 7.2 or newer) into the virtual CD drive, and telling the VM to boot from CD.

Once the GRUB boot menu of the installation media appears, select "Troubleshooting" and then "Rescue a Red Hat Linux system". The installation program will load and ask for language & keyboard settings as with a normal installation, but then it will switch to rescue mode. It will even offer to automatically mount the disks of the installation-to-be-rescued for you, if that OS installation is not too badly damaged. Then it will give you a root command prompt you can use to further troubleshoot and apply fixes as necessary.

When in a rescue boot environment, your real root filesystem will be mounted at /mnt/sysimage. To be able to access it using normal pathnames (i.e. without prefixing /mnt/sysimage to everything), you can use the chroot /mnt/sysimage command, which will also be suggested to you just before entering the rescue command prompt. After using chroot /mnt/sysimage, you should be able to use any shell commands your installed OS has available. For example, if you find that the initramfs files for your kernels are missing from /boot, you can use the mkinitrd command (as described above) to recreate them.
Can't access single-user mode - what can we do to recover the Linux machine
I am developing a node.js application which frequently crashes my Debian Linux kernel: the computer becomes unresponsive and doesn't even respond to ping.

At this stage, I'm not even asking to analyze or fix the cause of the crashes; I don't have any information that could point to anything specific. The computer just stops responding; neither /var/log/messages nor dmesg shows any messages. So my question is: what tools can I use to gather some information regarding the crashes?

Here are some background details. My node.js application doesn't use the network stack. It just spawns two sub-processes with child_process.spawn and communicates with them through writing files, watching for file changes with fs.watch, and reading the files that have changed. The rest is just data processing.

I have tested this problem on three computers. On the first one (my main dev machine), the system freezes reliably after starting this application a few times. On the other computers (a PC similar to the main dev machine, and a DigitalOcean VPS), the application usually runs well, but after a few hundred runs it froze both of them too. It seems that my main dev machine is more prone to this problem, but because the freezes also happen on two unrelated machines, I assume it is not a pure hardware problem restricted to one PC.

Since the computer freezes immediately after starting the app, I am sure the app causes this problem. And since everything stops (including responses to pings), I assume that the Linux kernel has crashed.
Typically a Linux kernel crash would be visible on the system's console. However, in case it is indeed a kernel crash but not visible for whatever reason, you may want to confirm that first. For that, you could configure your system to auto-reboot after a kernel crash, as described in: Configure reboot on Linux kernel panic. If the system ends up rebooting, then it is indeed a kernel crash, and you can focus on that investigation path (there are plenty of related answers on Stack Exchange sites).

But from your description I think it's more likely to be a kernel hang or "too busy" condition; you could start here: How to investigate cause of total hang?

Finally, since as you observed the root cause seems most likely to be your application, I would assume it's somehow putting too much load on the system, causing it to become unresponsive. You could review your code for any lengthy/infinite loops and try to limit their impact: break out after a certain execution time (maybe use timeout exceptions) or after a certain number of iterations, etc. If the system becomes responsive again after a while, you'd get a better idea of which area of your code is at fault and maybe how it impacts the system.
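The auto-reboot-on-panic setup linked above boils down to a couple of sysctls. A sketch of the persistent configuration (the 10-second value is illustrative; any positive number of seconds works):

```
# /etc/sysctl.conf (excerpt)
kernel.panic = 10         # reboot 10 seconds after a kernel panic
kernel.panic_on_oops = 1  # escalate any oops to a panic, so it also triggers the reboot
```

Apply with sysctl -p, or try it once without persisting via sysctl -w kernel.panic=10. If the machine then starts rebooting itself during your test runs, you have confirmed a genuine kernel crash rather than a hang.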
Linux kernel crash: How to gather information?
Since the last upgrade to Android Studio (Android Studio Arctic Fox | 2020.3.1 Patch 2, Build #AI-203.7717.56.2031.7678000), I am experiencing frequent global system freezes in Fedora 33: nothing responds, not even the SSH server, so the only solution is a hard reboot. It always freezes while typing. Is there anyone here with the same problem?

EDIT: By request, I'm adding the output of dmesg --level=alert,crit,err,warn. I hope the information needed is still there:

[    0.165884] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[    0.165948]  #5 #6 #7
[    0.172490] pmd_set_huge: Cannot satisfy [mem 0xf8000000-0xf8200000] with a huge-page mapping due to MTRR override.
[    0.173015] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    0.287753] wait_for_initramfs() called before rootfs_initcalls
[    4.213705] systemd-sysv-generator[573]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    4.214895] systemd-sysv-generator[573]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    4.216144] systemd-sysv-generator[573]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    4.899093] ACPI Warning: SystemIO range 0x0000000000001828-0x000000000000182F conflicts with OpRegion 0x0000000000001800-0x000000000000187F (\PMIO) (20210331/utaddress-204)
[    4.899732] ACPI Warning: SystemIO range 0x0000000000001C40-0x0000000000001C4F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20210331/utaddress-204)
[    4.900844] ACPI Warning: SystemIO range 0x0000000000001C30-0x0000000000001C3F conflicts with OpRegion 0x0000000000001C00-0x0000000000001C3F (\GPRL) (20210331/utaddress-204)
[    4.901495] ACPI Warning: SystemIO range 0x0000000000001C30-0x0000000000001C3F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20210331/utaddress-204)
[    4.901500] ACPI Warning: SystemIO range 0x0000000000001C00-0x0000000000001C2F conflicts with OpRegion 0x0000000000001C00-0x0000000000001C3F (\GPRL)
[    4.902664] (20210331/utaddress-204)
[    4.902704] ACPI Warning:
[    4.903893] SystemIO range 0x0000000000001C00-0x0000000000001C2F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20210331/utaddress-204)
[    4.903896] lpc_ich: Resource conflict(s) found affecting gpio_ich
[    5.178789] at24 13-0050: supply vcc not found, using dummy regulator
[    5.179916] at24 13-0051: supply vcc not found, using dummy regulator
[    5.181186] at24 13-0052: supply vcc not found, using dummy regulator
[    5.182506] at24 13-0053: supply vcc not found, using dummy regulator
[    5.459435] kauditd_printk_skb: 96 callbacks suppressed
[    5.740341] ext4 filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
[    7.154729] Bluetooth: hci0: command 0x1005 tx timeout
[    9.202670] Bluetooth: hci0: command 0x0c23 tx timeout
[   11.250636] Bluetooth: hci0: command 0x0c14 tx timeout
[   13.298596] Bluetooth: hci0: command 0x0c25 tx timeout
[   15.346559] Bluetooth: hci0: command 0x0c38 tx timeout
[   17.394520] Bluetooth: hci0: command tx timeout
[ 1832.091017] IRQ 29: no longer affine to CPU4
[ 1832.093063] IRQ 28: no longer affine to CPU5
[ 1832.093066] IRQ 30: no longer affine to CPU5
[ 1832.094878] IRQ 23: no longer affine to CPU6
[ 1832.094882] IRQ 25: no longer affine to CPU6
[ 1832.094885] IRQ 32: no longer affine to CPU6
[ 1832.096684] IRQ 18: no longer affine to CPU7
[ 1832.096687] IRQ 19: no longer affine to CPU7
[ 1832.096690] IRQ 27: no longer affine to CPU7
[ 1834.430082] Bluetooth: hci0: command 0x0c16 tx timeout
[ 1836.478094] Bluetooth: hci0: command 0x2002 tx timeout
[ 1837.499045] ata3: link is slow to respond, please be patient (ready=0)
[ 1837.500053] ata1: link is slow to respond, please be patient (ready=0)
[ 1838.525986] Bluetooth: hci0: command 0x2003 tx timeout
[ 1840.574009] Bluetooth: hci0: command 0x201c tx timeout
[ 1842.495776] Bluetooth: hci0: command tx timeout
[18453.352264] IRQ 29: no longer affine to CPU4
[18453.354087] IRQ 23: no longer affine to CPU5
[18453.354091] IRQ 25: no longer affine to CPU5
[18453.355958] IRQ 16: no longer affine to CPU6
[18453.355963] IRQ 26: no longer affine to CPU6
[18453.355967] IRQ 32: no longer affine to CPU6
[18453.357790] IRQ 19: no longer affine to CPU7
[18453.357796] IRQ 28: no longer affine to CPU7
[18453.357799] IRQ 30: no longer affine to CPU7
[18455.781051] done.
[18457.872004] Bluetooth: hci0: command 0x1001 tx timeout
[18458.761887] ata1: link is slow to respond, please be patient (ready=0)
[18458.762881] ata3: link is slow to respond, please be patient (ready=0)
[18459.919872] Bluetooth: hci0: command 0x1009 tx timeout

EDIT 2: It just crashed again. Output of dmesg --level=alert,crit,err,warn:

[    0.165915] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[    0.165973]  #5 #6 #7
[    0.172521] pmd_set_huge: Cannot satisfy [mem 0xf8000000-0xf8200000] with a huge-page mapping due to MTRR override.
[ 0.173044] ENERGY_PERF_BIAS: Set to 'normal', was 'performance' [ 0.287783] wait_for_initramfs() called before rootfs_initcalls [ 7.086711] usb 3-5: device descriptor read/64, error -110 [ 8.324313] kauditd_printk_skb: 5 callbacks suppressed [ 8.940835] systemd-sysv-generator[574]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ 8.942438] systemd-sysv-generator[574]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ 8.944077] systemd-sysv-generator[574]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
[ 9.619007] ACPI Warning: SystemIO range 0x0000000000001828-0x000000000000182F conflicts with OpRegion 0x0000000000001800-0x000000000000187F (\PMIO) (20210331/utaddress-204) [ 9.619640] ACPI Warning: SystemIO range 0x0000000000001C40-0x0000000000001C4F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20210331/utaddress-204) [ 9.621931] ACPI Warning: SystemIO range 0x0000000000001C30-0x0000000000001C3F conflicts with OpRegion 0x0000000000001C00-0x0000000000001C3F (\GPRL) (20210331/utaddress-204) [ 9.622116] ACPI Warning: SystemIO range 0x0000000000001C30-0x0000000000001C3F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20210331/utaddress-204) [ 9.623233] ACPI Warning: SystemIO range 0x0000000000001C00-0x0000000000001C2F conflicts with OpRegion 0x0000000000001C00-0x0000000000001C3F (\GPRL) (20210331/utaddress-204) [ 9.624779] ACPI Warning: SystemIO range 0x0000000000001C00-0x0000000000001C2F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20210331/utaddress-204) [ 9.625544] lpc_ich: Resource conflict(s) found affecting gpio_ich [ 9.786915] at24 13-0050: supply vcc not found, using dummy regulator [ 9.789823] at24 13-0051: supply vcc not found, using dummy regulator [ 9.791691] at24 13-0052: supply vcc not found, using dummy regulator [ 9.793324] at24 13-0053: supply vcc not found, using dummy regulator [ 10.174554] ext4 filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff) [ 13.354089] kauditd_printk_skb: 87 callbacks suppressed EDIT 3 I've experienced around 10 freezes in a couple of days. I'm afraid Studio is going to harm my computer. Freezing continues despite that I upgraded the whole system to the last Fedora 34. Last output of dmesg --level=alert,crit,err,warn: [ 0.166421] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. 
[ 0.166484] #5 #6 #7 [ 0.173036] pmd_set_huge: Cannot satisfy [mem 0xf8000000-0xf8200000] with a huge-page mapping due to MTRR override. [ 0.173550] ENERGY_PERF_BIAS: Set to 'normal', was 'performance' [ 0.287346] wait_for_initramfs() called before rootfs_initcalls [ 3.397929] systemd-sysv-generator[567]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ 3.399641] systemd-sysv-generator[567]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ 3.400982] systemd-sysv-generator[567]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ 3.713080] systemd-journald[593]: File /var/log/journal/9ff77e59cfd04f078e4b747742c5981e/system.journal corrupted or uncleanly shut down, renaming and replacing. 
[ 4.036564] ACPI Warning: SystemIO range 0x0000000000001828-0x000000000000182F conflicts with OpRegion 0x0000000000001800-0x000000000000187F (\PMIO) (20210331/utaddress-204) [ 4.037179] ACPI Warning: SystemIO range 0x0000000000001C40-0x0000000000001C4F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20210331/utaddress-204) [ 4.038257] ACPI Warning: SystemIO range 0x0000000000001C30-0x0000000000001C3F conflicts with OpRegion 0x0000000000001C00-0x0000000000001C3F (\GPRL) (20210331/utaddress-204) [ 4.038808] ACPI Warning: SystemIO range 0x0000000000001C30-0x0000000000001C3F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20210331/utaddress-204) [ 4.039932] ACPI Warning: SystemIO range 0x0000000000001C00-0x0000000000001C2F conflicts with OpRegion 0x0000000000001C00-0x0000000000001C3F (\GPRL) (20210331/utaddress-204) [ 4.041794] ACPI Warning: SystemIO range 0x0000000000001C00-0x0000000000001C2F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20210331/utaddress-204) [ 4.042527] lpc_ich: Resource conflict(s) found affecting gpio_ich [ 4.468471] at24 13-0050: supply vcc not found, using dummy regulator [ 4.470405] at24 13-0051: supply vcc not found, using dummy regulator [ 4.474870] at24 13-0052: supply vcc not found, using dummy regulator [ 4.476069] at24 13-0053: supply vcc not found, using dummy regulator [ 4.637421] ext4 filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff) [ 5.321533] kauditd_printk_skb: 100 callbacks suppressed EDIT 4 Another freeze. journalctl output of the kernel crash: $ journalctl -S '2021-09-16 13:30:00' 13:43:57 Orion kernel: general protection fault, probably for non-canonical address 0x108ddb743acd5eb1: 0000 [#1] SMP PTI 13:43:57 Orion kernel: CPU: 4 PID: 2724 Comm: Xorg Not tainted 5.13.15-200.fc34.x86_64 #1 13:43:57 Orion kernel: Hardware name: Dell Inc. 
XPS 8700/0KWVT8, BIOS A07 03/13/2014 13:43:57 Orion kernel: RIP: 0010:kmem_cache_alloc_trace+0xac/0x220 13:43:57 Orion kernel: Code: aa 20 d1 44 49 8b 00 49 83 78 10 00 48 89 44 24 08 0f 84 43 01 00 00 48 85 c0 0f 84 3a 01 00 00 8b 4d 28 48 8b 7d 00 48 01 c1 <48> 8b 19 48 89 ce 48> 13:43:57 Orion kernel: RSP: 0018:ffffb7d901f3f700 EFLAGS: 00010206 13:43:57 Orion kernel: RAX: 108ddb743acd5e81 RBX: 0000000000000002 RCX: 108ddb743acd5eb1 13:43:57 Orion kernel: RDX: 0000000000728bcf RSI: 0000000000000dc0 RDI: 00000000000300c0 13:43:57 Orion kernel: RBP: ffff8dd200042600 R08: ffff8dd50ed300c0 R09: 0000000000000018 13:43:57 Orion kernel: R10: ffff8dd12d26f208 R11: ffffb7d901f3f988 R12: 0000000000000dc0 13:43:57 Orion kernel: R13: ffff8dd200042600 R14: 0000000000003000 R15: ffffffffc06922ce 13:43:57 Orion kernel: FS: 00007f32b7bcea80(0000) GS:ffff8dd50ed00000(0000) knlGS:0000000000000000 13:43:57 Orion kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 13:43:57 Orion kernel: CR2: 00007f32b778d000 CR3: 000000010984e003 CR4: 00000000001706e0 13:43:57 Orion kernel: Call Trace: 13:43:57 Orion kernel: nvkm_mem_new_type+0xae/0x2a0 [nouveau] 13:43:57 Orion kernel: nvkm_umem_new+0x130/0x220 [nouveau] 13:43:57 Orion kernel: nvkm_ioctl_new+0x129/0x1e0 [nouveau] 13:43:57 Orion kernel: ? nvkm_umem_search+0xe0/0xe0 [nouveau] 13:43:57 Orion kernel: nvkm_ioctl+0xdc/0x180 [nouveau] 13:43:57 Orion kernel: nvif_object_ctor+0x122/0x1c0 [nouveau] 13:43:57 Orion kernel: nvif_mem_ctor_type+0xc2/0x180 [nouveau] 13:43:57 Orion kernel: ? nvkm_vmm_ptes_get_map+0x2c/0x90 [nouveau] 13:43:57 Orion kernel: ? nvkm_vmm_map+0x18d/0x350 [nouveau] 13:43:57 Orion kernel: nouveau_mem_host+0xf3/0x190 [nouveau] 13:43:57 Orion kernel: nouveau_sgdma_bind+0x30/0x80 [nouveau] 13:43:57 Orion kernel: nouveau_bo_move+0x3c1/0x820 [nouveau] 13:43:57 Orion kernel: ? ttm_pool_type_take+0x7d/0x90 [ttm] 13:43:57 Orion kernel: ? 
ttm_pool_alloc+0xe6/0x590 [ttm] 13:43:57 Orion kernel: ttm_bo_handle_move_mem+0x90/0x170 [ttm] 13:43:57 Orion kernel: ttm_bo_validate+0x11c/0x150 [ttm] 13:43:57 Orion kernel: ttm_bo_init_reserved+0x239/0x2c0 [ttm] 13:43:57 Orion kernel: ttm_bo_init+0x4a/0xc0 [ttm] 13:43:57 Orion kernel: ? nv10_bo_put_tile_region.isra.0+0x80/0x80 [nouveau] 13:43:57 Orion kernel: nouveau_bo_init+0x7c/0x90 [nouveau] 13:43:57 Orion kernel: ? nv10_bo_put_tile_region.isra.0+0x80/0x80 [nouveau] 13:43:57 Orion kernel: ? nouveau_gem_new+0xf0/0xf0 [nouveau] 13:43:57 Orion kernel: nouveau_gem_new+0x7f/0xf0 [nouveau] 13:43:57 Orion kernel: nouveau_gem_ioctl_new+0x45/0xe0 [nouveau] 13:43:57 Orion kernel: ? nouveau_gem_new+0xf0/0xf0 [nouveau] 13:43:57 Orion kernel: drm_ioctl_kernel+0x86/0xd0 [drm] 13:43:57 Orion kernel: drm_ioctl+0x220/0x3e0 [drm] 13:43:57 Orion kernel: ? nouveau_gem_new+0xf0/0xf0 [nouveau] 13:43:57 Orion kernel: ? __ia32_sys_timer_getoverrun+0x40/0x50 13:43:57 Orion kernel: nouveau_drm_ioctl+0x55/0xa0 [nouveau] 13:43:57 Orion kernel: __x64_sys_ioctl+0x82/0xb0 13:43:57 Orion kernel: do_syscall_64+0x40/0x80 13:43:57 Orion kernel: entry_SYSCALL_64_after_hwframe+0x44/0xae 13:43:57 Orion kernel: RIP: 0033:0x7f32b84540ab 13:43:57 Orion kernel: Code: ff ff ff 85 c0 79 9b 49 c7 c4 ff ff ff ff 5b 5d 4c 89 e0 41 5c c3 66 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73> 13:43:57 Orion kernel: RSP: 002b:00007ffdd87e5148 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 13:43:57 Orion kernel: RAX: ffffffffffffffda RBX: 00007ffdd87e51a0 RCX: 00007f32b84540ab 13:43:57 Orion kernel: RDX: 00007ffdd87e51a0 RSI: 00000000c0306480 RDI: 000000000000000f 13:43:57 Orion kernel: RBP: 00000000c0306480 R08: 0000556e9a4293d0 R09: 0000000000000016 13:43:57 Orion kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000556e994bc7e0 13:43:57 Orion kernel: R13: 000000000000000f R14: 00007ffdd87e51a0 R15: 0000000000003000 13:43:57 Orion kernel: Modules linked in: tun cdc_acm md4 
nls_utf8 cifs dns_resolver fscache netfs libdes snd_seq_dummy snd_hrtimer dm_crypt trusted asn1_encoder xt_nat iptable_> 13:43:57 Orion kernel: drm_ttm_helper ttm i2c_algo_bit mxm_wmi crct10dif_pclmul crc32_pclmul crc32c_intel wmi drm_kms_helper ghash_clmulni_intel cec drm r8169 video uas usb_sto> 13:43:57 Orion kernel: ---[ end trace 6a741c284e912584 ]--- 13:43:57 Orion kernel: RIP: 0010:kmem_cache_alloc_trace+0xac/0x220 13:43:57 Orion kernel: Code: aa 20 d1 44 49 8b 00 49 83 78 10 00 48 89 44 24 08 0f 84 43 01 00 00 48 85 c0 0f 84 3a 01 00 00 8b 4d 28 48 8b 7d 00 48 01 c1 <48> 8b 19 48 89 ce 48> 13:43:57 Orion kernel: RSP: 0018:ffffb7d901f3f700 EFLAGS: 00010206 13:43:57 Orion kernel: general protection fault, probably for non-canonical address 0x108ddb743acd5eb1: 0000 [#1] SMP PTI 13:43:57 Orion kernel: CPU: 4 PID: 2724 Comm: Xorg Not tainted 5.13.15-200.fc34.x86_64 #1 13:43:57 Orion kernel: Hardware name: Dell Inc. XPS 8700/0KWVT8, BIOS A07 03/13/2014 13:43:57 Orion kernel: RIP: 0010:kmem_cache_alloc_trace+0xac/0x220 13:43:57 Orion kernel: Code: aa 20 d1 44 49 8b 00 49 83 78 10 00 48 89 44 24 08 0f 84 43 01 00 00 48 85 c0 0f 84 3a 01 00 00 8b 4d 28 48 8b 7d 00 48 01 c1 <48> 8b 19 48 89 ce 48> 13:43:57 Orion kernel: RSP: 0018:ffffb7d901f3f700 EFLAGS: 00010206 13:43:57 Orion kernel: RAX: 108ddb743acd5e81 RBX: 0000000000000002 RCX: 108ddb743acd5eb1 13:43:57 Orion kernel: RDX: 0000000000728bcf RSI: 0000000000000dc0 RDI: 00000000000300c0 13:43:57 Orion kernel: RBP: ffff8dd200042600 R08: ffff8dd50ed300c0 R09: 0000000000000018 13:43:57 Orion kernel: R10: ffff8dd12d26f208 R11: ffffb7d901f3f988 R12: 0000000000000dc0 13:43:57 Orion kernel: R13: ffff8dd200042600 R14: 0000000000003000 R15: ffffffffc06922ce 13:43:57 Orion kernel: FS: 00007f32b7bcea80(0000) GS:ffff8dd50ed00000(0000) knlGS:0000000000000000 13:43:57 Orion kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 13:43:57 Orion kernel: CR2: 00007f32b778d000 CR3: 000000010984e003 CR4: 00000000001706e0 13:43:57 
Orion kernel: Call Trace: 13:43:57 Orion kernel: nvkm_mem_new_type+0xae/0x2a0 [nouveau] 13:43:57 Orion kernel: nvkm_umem_new+0x130/0x220 [nouveau] 13:43:57 Orion kernel: nvkm_ioctl_new+0x129/0x1e0 [nouveau] 13:43:57 Orion kernel: ? nvkm_umem_search+0xe0/0xe0 [nouveau] 13:43:57 Orion kernel: nvkm_ioctl+0xdc/0x180 [nouveau] 13:43:57 Orion kernel: nvif_object_ctor+0x122/0x1c0 [nouveau] 13:43:57 Orion kernel: nvif_mem_ctor_type+0xc2/0x180 [nouveau] 13:43:57 Orion kernel: ? nvkm_vmm_ptes_get_map+0x2c/0x90 [nouveau] 13:43:57 Orion kernel: ? nvkm_vmm_map+0x18d/0x350 [nouveau] 13:43:57 Orion kernel: nouveau_mem_host+0xf3/0x190 [nouveau] 13:43:57 Orion kernel: nouveau_sgdma_bind+0x30/0x80 [nouveau] 13:43:57 Orion kernel: nouveau_bo_move+0x3c1/0x820 [nouveau] 13:43:57 Orion kernel: ? ttm_pool_type_take+0x7d/0x90 [ttm] 13:43:57 Orion kernel: ? ttm_pool_alloc+0xe6/0x590 [ttm] 13:43:57 Orion kernel: ttm_bo_handle_move_mem+0x90/0x170 [ttm] 13:43:57 Orion kernel: ttm_bo_validate+0x11c/0x150 [ttm] 13:43:57 Orion kernel: ttm_bo_init_reserved+0x239/0x2c0 [ttm] 13:43:57 Orion kernel: ttm_bo_init+0x4a/0xc0 [ttm] 13:43:57 Orion kernel: ? nv10_bo_put_tile_region.isra.0+0x80/0x80 [nouveau] 13:43:57 Orion kernel: nouveau_bo_init+0x7c/0x90 [nouveau] 13:43:57 Orion kernel: ? nv10_bo_put_tile_region.isra.0+0x80/0x80 [nouveau] 13:43:57 Orion kernel: ? nouveau_gem_new+0xf0/0xf0 [nouveau] 13:43:57 Orion kernel: nouveau_gem_new+0x7f/0xf0 [nouveau] 13:43:57 Orion kernel: nouveau_gem_ioctl_new+0x45/0xe0 [nouveau] 13:43:57 Orion kernel: ? nouveau_gem_new+0xf0/0xf0 [nouveau] 13:43:57 Orion kernel: drm_ioctl_kernel+0x86/0xd0 [drm] 13:43:57 Orion kernel: drm_ioctl+0x220/0x3e0 [drm] 13:43:57 Orion kernel: ? nouveau_gem_new+0xf0/0xf0 [nouveau] 13:43:57 Orion kernel: ? 
__ia32_sys_timer_getoverrun+0x40/0x50 13:43:57 Orion kernel: nouveau_drm_ioctl+0x55/0xa0 [nouveau] 13:43:57 Orion kernel: __x64_sys_ioctl+0x82/0xb0 13:43:57 Orion kernel: do_syscall_64+0x40/0x80 13:43:57 Orion kernel: entry_SYSCALL_64_after_hwframe+0x44/0xae 13:43:57 Orion kernel: RIP: 0033:0x7f32b84540ab 13:43:57 Orion kernel: Code: ff ff ff 85 c0 79 9b 49 c7 c4 ff ff ff ff 5b 5d 4c 89 e0 41 5c c3 66 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73> 13:43:57 Orion kernel: RSP: 002b:00007ffdd87e5148 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 13:43:57 Orion kernel: RAX: ffffffffffffffda RBX: 00007ffdd87e51a0 RCX: 00007f32b84540ab 13:43:57 Orion kernel: RDX: 00007ffdd87e51a0 RSI: 00000000c0306480 RDI: 000000000000000f 13:43:57 Orion kernel: RBP: 00000000c0306480 R08: 0000556e9a4293d0 R09: 0000000000000016 13:43:57 Orion kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000556e994bc7e0 13:43:57 Orion kernel: R13: 000000000000000f R14: 00007ffdd87e51a0 R15: 0000000000003000 13:43:57 Orion kernel: Modules linked in: tun cdc_acm md4 nls_utf8 cifs dns_resolver fscache netfs libdes snd_seq_dummy snd_hrtimer dm_crypt trusted asn1_encoder xt_nat iptable_> 13:43:57 Orion kernel: drm_ttm_helper ttm i2c_algo_bit mxm_wmi crct10dif_pclmul crc32_pclmul crc32c_intel wmi drm_kms_helper ghash_clmulni_intel cec drm r8169 video uas usb_sto> 13:43:57 Orion kernel: ---[ end trace 6a741c284e912584 ]--- 13:43:57 Orion kernel: RIP: 0010:kmem_cache_alloc_trace+0xac/0x220 13:43:57 Orion kernel: Code: aa 20 d1 44 49 8b 00 49 83 78 10 00 48 89 44 24 08 0f 84 43 01 00 00 48 85 c0 0f 84 3a 01 00 00 8b 4d 28 48 8b 7d 00 48 01 c1 <48> 8b 19 48 89 ce 48> 13:43:57 Orion kernel: RSP: 0018:ffffb7d901f3f700 EFLAGS: 00010206 13:43:57 Orion kernel: RAX: 108ddb743acd5e81 RBX: 0000000000000002 RCX: 108ddb743acd5eb1 13:43:57 Orion kernel: RDX: 0000000000728bcf RSI: 0000000000000dc0 RDI: 00000000000300c0 13:43:57 Orion kernel: RBP: ffff8dd200042600 R08: ffff8dd50ed300c0 
R09: 0000000000000018 13:43:57 Orion kernel: R10: ffff8dd12d26f208 R11: ffffb7d901f3f988 R12: 0000000000000dc0 13:43:57 Orion kernel: R13: ffff8dd200042600 R14: 0000000000003000 R15: ffffffffc06922ce 13:43:57 Orion kernel: FS: 00007f32b7bcea80(0000) GS:ffff8dd50ed00000(0000) knlGS:0000000000000000 13:43:57 Orion kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 13:43:57 Orion kernel: CR2: 00007f32b778d000 CR3: 000000010984e003 CR4: 00000000001706e0 13:44:32 Orion kernel: usb 4-5: USB disconnect, device number 2 13:44:32 Orion kernel: sd 6:0:0:0: [sde] Synchronizing SCSI cache 13:44:32 Orion kernel: sd 6:0:0:0: [sde] Synchronize Cache(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK 13:44:32 Orion kernel: usb 4-5: new SuperSpeed USB device number 3 using xhci_hcd 13:44:32 Orion kernel: usb 4-5: New USB device found, idVendor=0781, idProduct=5580, bcdDevice= 0.10 13:44:32 Orion kernel: usb 4-5: New USB device strings: Mfr=1, Product=2, SerialNumber=3 13:44:32 Orion kernel: usb 4-5: Product: Extreme 13:44:32 Orion kernel: usb 4-5: Manufacturer: SanDisk 13:44:32 Orion kernel: usb 4-5: SerialNumber: AA010527142139551888 13:44:32 Orion kernel: usb-storage 4-5:1.0: USB Mass Storage device detected 13:44:32 Orion kernel: scsi host9: usb-storage 4-5:1.0 13:44:32 Orion mtp-probe[251591]: checking bus 4, device 3: "/sys/devices/pci0000:00/0000:00:14.0/usb4/4-5" 13:44:32 Orion mtp-probe[251591]: bus: 4, device: 3 was not an MTP device 13:44:32 Orion mtp-probe[251597]: checking bus 4, device 3: "/sys/devices/pci0000:00/0000:00:14.0/usb4/4-5" 13:44:32 Orion mtp-probe[251597]: bus: 4, device: 3 was not an MTP device 13:44:33 Orion kernel: scsi 9:0:0:0: Direct-Access SanDisk Extreme 0001 PQ: 0 ANSI: 6 13:44:33 Orion kernel: sd 9:0:0:0: Attached scsi generic sg4 type 0 13:44:33 Orion kernel: sd 9:0:0:0: [sdn] 125045424 512-byte logical blocks: (64.0 GB/59.6 GiB) 13:44:33 Orion kernel: sd 9:0:0:0: [sdn] Write Protect is off 13:44:33 Orion kernel: sd 9:0:0:0: 
[sdn] Mode Sense: 33 00 00 08 13:44:33 Orion kernel: sd 9:0:0:0: [sdn] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA 13:44:33 Orion kernel: sdn: sdn1 sdn2 13:44:33 Orion kernel: sd 9:0:0:0: [sdn] Attached SCSI disk 13:45:01 Orion CROND[251616]: (root) CMD ([[ "`pgrep -x each5min`" ]] || each5min >& /dev/console) 13:45:01 Orion CROND[251615]: (root) CMDEND ([[ "`pgrep -x each5min`" ]] || each5min >& /dev/console) 13:45:06 Orion sshd[251632]: rexec line 123: Deprecated option UsePrivilegeSeparation 13:45:06 Orion audit[251633]: CRYPTO_KEY_USER pid=251633 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='op=destroy kind=server fp=SHA256:ee:51:4b:d5:b0:6a:97:cd:fa:74:d8:> 13:45:06 Orion audit[251632]: CRYPTO_SESSION pid=251632 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='op=start direction=from-server [email protected] ksize=> 13:45:06 Orion audit[251632]: CRYPTO_SESSION pid=251632 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='op=start direction=from-client [email protected] ksize=> 13:45:06 Orion kernel: general protection fault, probably for non-canonical address 0x108ddb743acd5eb1: 0000 [#2] SMP PTI 13:45:06 Orion kernel: CPU: 4 PID: 251632 Comm: sshd Tainted: G D 5.13.15-200.fc34.x86_64 #1 13:45:06 Orion kernel: Hardware name: Dell Inc. 
XPS 8700/0KWVT8, BIOS A07 03/13/2014 13:45:06 Orion kernel: RIP: 0010:__kmalloc+0xc7/0x260 13:45:06 Orion kernel: Code: 05 de 1d d1 44 49 8b 00 49 83 78 10 00 48 89 04 24 0f 84 70 01 00 00 48 85 c0 0f 84 67 01 00 00 8b 4d 28 48 8b 7d 00 48 01 c1 <48> 8b 19 48 89 ce 48> 13:45:06 Orion kernel: RSP: 0018:ffffb7d90d6a78c0 EFLAGS: 00010206 13:45:06 Orion kernel: RAX: 108ddb743acd5e81 RBX: 0000000000000000 RCX: 108ddb743acd5eb1 13:45:06 Orion kernel: RDX: 0000000000728bcf RSI: 0000000000000d40 RDI: 00000000000300c0 13:45:06 Orion kernel: RBP: ffff8dd200042600 R08: ffff8dd50ed300c0 R09: 0000000000000002 13:45:06 Orion kernel: R10: 00000000002c4051 R11: 0000000000000001 R12: 0000000000000d40 13:45:06 Orion kernel: R13: 0000000000000060 R14: ffff8dd200042600 R15: ffffffffbb4003b1 13:45:06 Orion kernel: FS: 00007f77fe857900(0000) GS:ffff8dd50ed00000(0000) knlGS:0000000000000000 13:45:06 Orion kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 13:45:06 Orion kernel: CR2: 00007f77ff4ed49f CR3: 000000010469c002 CR4: 00000000001706e0 13:43:57 Orion kernel: general protection fault, probably for non-canonical address 0x108ddb743acd5eb1: 0000 [#1] SMP PTI 13:43:57 Orion kernel: CPU: 4 PID: 2724 Comm: Xorg Not tainted 5.13.15-200.fc34.x86_64 #1 13:43:57 Orion kernel: Hardware name: Dell Inc. 
XPS 8700/0KWVT8, BIOS A07 03/13/2014 13:43:57 Orion kernel: RIP: 0010:kmem_cache_alloc_trace+0xac/0x220 13:43:57 Orion kernel: Code: aa 20 d1 44 49 8b 00 49 83 78 10 00 48 89 44 24 08 0f 84 43 01 00 00 48 85 c0 0f 84 3a 01 00 00 8b 4d 28 48 8b 7d 00 48 01 c1 <48> 8b 19 48 89 ce 48> 13:43:57 Orion kernel: RSP: 0018:ffffb7d901f3f700 EFLAGS: 00010206 13:43:57 Orion kernel: RAX: 108ddb743acd5e81 RBX: 0000000000000002 RCX: 108ddb743acd5eb1 13:43:57 Orion kernel: RDX: 0000000000728bcf RSI: 0000000000000dc0 RDI: 00000000000300c0 13:43:57 Orion kernel: RBP: ffff8dd200042600 R08: ffff8dd50ed300c0 R09: 0000000000000018 13:43:57 Orion kernel: R10: ffff8dd12d26f208 R11: ffffb7d901f3f988 R12: 0000000000000dc0 13:43:57 Orion kernel: R13: ffff8dd200042600 R14: 0000000000003000 R15: ffffffffc06922ce 13:43:57 Orion kernel: FS: 00007f32b7bcea80(0000) GS:ffff8dd50ed00000(0000) knlGS:0000000000000000 13:43:57 Orion kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 13:43:57 Orion kernel: CR2: 00007f32b778d000 CR3: 000000010984e003 CR4: 00000000001706e0 13:43:57 Orion kernel: Call Trace: 13:43:57 Orion kernel: nvkm_mem_new_type+0xae/0x2a0 [nouveau] 13:43:57 Orion kernel: nvkm_umem_new+0x130/0x220 [nouveau] 13:43:57 Orion kernel: nvkm_ioctl_new+0x129/0x1e0 [nouveau] 13:43:57 Orion kernel: ? nvkm_umem_search+0xe0/0xe0 [nouveau] 13:43:57 Orion kernel: nvkm_ioctl+0xdc/0x180 [nouveau] 13:43:57 Orion kernel: nvif_object_ctor+0x122/0x1c0 [nouveau] 13:43:57 Orion kernel: nvif_mem_ctor_type+0xc2/0x180 [nouveau] 13:43:57 Orion kernel: ? nvkm_vmm_ptes_get_map+0x2c/0x90 [nouveau] ....
Please install the proprietary NVIDIA drivers. The crash is in nouveau (the open-source driver for NVIDIA GPUs), which is known not to be stable enough. Please also consider filing a bug report: https://gitlab.freedesktop.org/drm/nouveau/-/issues/
Android studio freezing Fedora (nouveau and kernel crash)
1,538,243,897,000
I'm running on a minimal Ubuntu server 12.04.3 install and I installed a d-link DWA-160 usb wifi adapter as per the instructions shown in this page. After successfully connecting using these instructions (I basically ping google to confirm that I'm connected), I try to run apt-get update but end up getting what appears to be a kernel panic every time I do. The connection does not seem stable during the update process. For instance, as I'm typing this I've tried again and it seems stuck at : 9% [4 Release 3,980B/49.6 kB 8%] [Waiting for headers] [Waiting for headers] I usually get a kernel panic shortly thereafter. I'll try to provide info as needed.
Given the symptoms (crashes when there's a lot of network traffic, and you happen to be using a custom network driver), it's a bug in the network driver. From the page you link:

DWA 160 is also known to freeze under heavy network load. When this happens, the only solution is to unplug and replug the key. To date this bug has not been corrected. Because of all that, this wifi key is not, at this time, a very good deal for Linux users.

Report a bug to the providers of the driver. This isn't something that can be worked around, other than not using the driver or using a fixed version of it.
Kernel panic on "apt-get upgrade" with DWA-160
1,538,243,897,000
Since kernel 3.3.x there are frequent kernel panics during the boot process. It seems to be a kernel problem since I downgraded to 3.0 and the problems are gone. The kernel panics happen almost at every boot attempt. After 4 or 5 restarts the system boots normally. It also seems to affect only Sandy Bridge systems, and maybe Ivy Bridge too. Last kernel I checked was 3.4.9 with the same results.
So it seems TLP was causing the issues, specifically the part where radio devices are turned off during the boot sequence. I changed this line

DEVICES_TO_DISABLE_ON_STARTUP="bluetooth wwan"

to

DEVICES_TO_DISABLE_ON_STARTUP=""

and until now all boot attempts have run successfully. There is a new version of TLP out (0.3.7) that addresses a similar problem, so maybe it fixes this for good. For now I'm happy to have found the cause, and I'll mark this as resolved.
Occasionally kernel panics during boot since 3.3.x with Sandy Bridge
1,538,243,897,000
On my embedded device, I have this error showing up after kernel boot:

init.exe: Caught segmentation fault, core dumped

But I cannot understand why this is happening. If I do a battery cut (i.e. forcibly reboot my device) then the device boots and comes up fine. Any pointers will be extremely helpful. Is this some transient low-level memory issue? It is Linux 2.6.31 on an ARM architecture.
The output mentions it dumped core. Try doing:

gdb [binary] -c [corefile]

(passing the binary along with the core lets gdb resolve symbols). Then at the (gdb) prompt, do:

(gdb) bt

to get a backtrace. If the binary was not stripped, you might be in luck and at least have something to google for :-)

PS: The core file might be named core.PID, where PID was the PID of init.exe when it died.
init.exe: Caught segmentation fault, core dumped - what is the source of this error
1,506,378,771,000
I see that with Kubernetes it's possible to set node affinity for certain workloads. I'm wondering if there are any facilities in the various container technologies, such as Docker, rkt, etc., that allow you to pin processes to cores, or if this is even possible in multitenant environments. Perhaps it would imply a bare-metal setup?
If your system supports SMP (symmetric multiprocessing) with some combination of multiple physical CPUs, CPU cores, and logical CPUs, you can assign Docker containers to specific CPU resources.

Example commands for CPU affinity with Docker containers

The examples shown here pin a container started from the mycontainer image to specific CPU resources at creation time with the docker run command. Note that the image name comes before the command to run inside it, and that the old --cpuset flag has since been renamed --cpuset-cpus. When running the commands, substitute your image name and CPU numbers to suit your environment.

This command will confine the container to the first CPU (CPU 0):

# docker run --cpuset-cpus 0 mycontainer /bin/bash

Multiple CPUs can be specified. This command will confine the container to CPUs 0 and 1:

# docker run --cpuset-cpus 0,1 mycontainer /bin/bash

A range of CPUs can be specified. This command will confine the container to CPUs 0, 1 and 2:

# docker run --cpuset-cpus 0-2 mycontainer /bin/bash
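The same per-process pinning that --cpuset-cpus applies to a container is exposed to any process by the Linux scheduler API, which can be handy for experimenting with affinity outside a container runtime. A minimal Python sketch (Linux-only; the function name is illustrative):

```python
import os

def pin_to_cpus(pid, cpus):
    """Restrict process `pid` (0 means the calling process) to the given
    set of CPU indices and return the resulting affinity mask."""
    os.sched_setaffinity(pid, cpus)
    return os.sched_getaffinity(pid)

if __name__ == "__main__":
    # Example: pin the current process to the lowest-numbered CPU
    # it is currently allowed to run on.
    allowed = os.sched_getaffinity(0)
    print(pin_to_cpus(0, {min(allowed)}))
```

This only restricts one process, of course; the cgroup-based cpuset that Docker sets up applies to every process inside the container.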
How to achieve processor affinity in containers?
1,506,378,771,000
I am following this guide for installing Kubernetes with kubeadm, and as part of the installation process I need to set the following kernel parameters in sysctl.d/99-kubernetes-cni.conf:

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

I know that these belong to the br_netfilter module, since I can only see them with sysctl -a after loading this module. But what are they all about? Are they really necessary for running Kubernetes?
These parameters determine whether packets crossing a bridge are sent to iptables for processing. Most Kubernetes CNIs rely on iptables, so this is usually necessary for Kubernetes. The in-kernel default is to enable these settings, but many distributions disable them (see the previous link for details).
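To make this survive reboots, the module load and the sysctls are typically configured together; a sketch following systemd conventions (the file names are illustrative, matching the ones used in the question):

```
# /etc/modules-load.d/br_netfilter.conf
br_netfilter

# /etc/sysctl.d/99-kubernetes-cni.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

As the question notes, the net.bridge.* keys only exist once br_netfilter is loaded, which is why the module has to be loaded before the sysctl file is applied.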
What is the net.bridge.bridge-nf-call-iptables kernel parameter?
1,506,378,771,000
I'm trying to add a kubernetes repo to my Amazon Linux 2 instance and struggle with automatically adding GPG keys. This is my /etc/yum.repos.d/kubernetes.repo... [kubernetes] name=Kubernetes baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=1 repo_gpgcheck=1 gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg I then try to import the GPG keys: ~ # wget https://packages.cloud.google.com/yum/doc/yum-key.gpg \ https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg ~ # rpm --import *.gpg However when I run any yum command it still doesn't know the keys: # yum upgrade -y Loaded plugins: extras_suggestions, langpacks, priorities, update-motd kubernetes/signature | 454 B 00:00:00 Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg Importing GPG key 0xA7317B0F: Userid : "Google Cloud Packages Automatic Signing Key <[email protected]>" Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f From : https://packages.cloud.google.com/yum/doc/yum-key.gpg Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg kubernetes/signature | 1.4 kB 00:00:00 !!! https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for kubernetes Trying other mirror. No packages marked for update Even if I try to accept them manually it still doesn't work. # yum upgrade Loaded plugins: extras_suggestions, langpacks, priorities, update-motd kubernetes/signature | 454 B 00:00:00 Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg Importing GPG key 0xA7317B0F: Userid : "Google Cloud Packages Automatic Signing Key <[email protected]>" Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f From : https://packages.cloud.google.com/yum/doc/yum-key.gpg Is this ok [y/N]: y <<<<< Yes, I accept it! 
Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg kubernetes/signature | 1.4 kB 00:00:01 !!! https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for kubernetes Trying other mirror. No packages marked for update How can I add the key so that YUM accepts it?
This is a known issue (see https://github.com/kubernetes/kubernetes/issues/60134). Work around it by disabling GPG checks: set repo_gpgcheck=0 in /etc/yum.repos.d/kubernetes.repo. Credits to drakedevel, who writes: I think this is due to Amazon Linux 2 shipping an old version of GnuPG, and something about the repomd.xml.asc signature requires a newer version. GnuPG 2.0.22 outright rejects the signature on the repository metadata with assuming bad signature from key BA07F4FB due to an unknown critical bit. I haven't been able to figure out what critical bit it's referring to -- there don't appear to be any on the signature or key -- but whatever GnuPG 2.0.22 is upset about is most likely the root cause. This only affects the repomd signature, so there's zero reason to disable gpgcheck as several others have suggested. Disabling repo_gpgcheck is sufficient and preserves package signature verification (although it's still not an ideal workaround...)
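For reference, a sketch of what the workaround looks like in practice: the same /etc/yum.repos.d/kubernetes.repo as in the question, with only repo_gpgcheck flipped to 0, so package signatures are still verified while only the repomd signature check is skipped.

```ini
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
```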
Yum in Amazon Linux 2 still asks for GPG key even after "rpm --import" when adding Kubernetes repo
1,506,378,771,000
I have a Kubernetes cluster running applications (currently on a set of Vagrant CoreOS VMs on a local server) I want to be able to debug a particular application locally on my laptop, so I worked on setting up VPN into the cluster: a client/server VPN based on kylemanna/docker-openvpn, deployed as a regular Pod I created the cert/key pairs, client certs etc... I can connect to the VPN fine. Now, connecting to the VPN server doesn't get me much if I can't access the Services. I have the DNS addon running skyDNS in the cluster. I can nslookup my services from other pods in the cluster, so all that works fine, but I can't resolve Services by name on the VPN client. I can ping Pods by IP from the VPN client (in the subnet 10.2.0.0/16) but I can't resolve with DNS a nslookup from the client returns: nslookup myservice 10.3.0.10 Server: 10.3.0.10 Address: 10.3.0.10#53 ** server can't find myservice: SERVFAIL One of the problems of troubleshooting is that neither ping nor traceroute work on the DNS IP (from any pod), yet it resolves services, so nslookup is the way I know to check, but that is not very informative. The VPN host IP the Pod binds to is 192.168.10.152 The Kubernetes subnet is 10.2.0.0/16 The SkyDNS server is at 10.3.0.10 The VPN server subnet is 10.8.0.0/24 On the VPN server ifconfig gives: eth0 Link encap:Ethernet HWaddr 02:42:0A:02:16:45 inet addr:10.2.22.69 Bcast:0.0.0.0 Mask:255.255.255.0 tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 So 10.2.22.69 is the Pod IP and the VPN Server IP is 10.8.0.1 with the Gateway being 10.8.0.2 i guess. 
On the VPN server pod the routing table looks like: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface default 10.2.22.1 0.0.0.0 UG 0 0 0 eth0 10.2.22.0 * 255.255.255.0 U 0 0 0 eth0 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 10.8.0.2 * 255.255.255.255 UH 0 0 0 tun0 192.168.254.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 I can reach my applications by IP (and get data) but couldn't reach them via the service IP (the proxy IP, which is on the 10.3.0.0 subnet). I added the route route add -net 10.3.0.0/16 gw 10.8.0.2 to the VPN server and I can then use the service IP to get data, but nslookup just times out. I guess the traffic may not be coming back from the DNS. DNS is itself a proxied service in Kubernetes, so that adds a level of complexity. Not sure how to fix this.
finally my config looks like this: docker run -v /etc/openvpn:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig \ -u udp://192.168.10.152:1194 \ -n 10.3.0.10 \ -n 192.168.10.1 \ -n 8.8.8.8 \ -n 75.75.75.75 \ -n 75.75.75.76 \ -s 10.8.0.0/24 \ -N \ -p "route 10.2.0.0 255.255.0.0" \ -p "route 10.3.0.0 255.255.0.0" \ -p "dhcp-option DOMAIN-SEARCH cluster.local" \ -p "dhcp-option DOMAIN-SEARCH svc.cluster.local" \ -p "dhcp-option DOMAIN-SEARCH default.svc.cluster.local" -u for the VPN server address and port -n for all the DNS servers to use -s to define the VPN subnet (as it defaults to 10.2.0.0 which is used by Kubernetes already) -d to disable NAT -p to push options to the client -N to enable NAT: it seems critical for this setup on Kubernetes the last part, pushing the search domains to the client, was the key to getting nslookup etc.. to work. note that curl didn't work at first, but seems to start working after a few seconds. So it does work but it takes a bit of time for curl to be able to resolve.
OpenVPN server on Kubernetes cluster / DNS and Service resolution
1,506,378,771,000
As the Docker daemon gets deprecated in the upcoming Kubernetes version 1.20, I just wanted to start a test installation of a Kubernetes cluster with containerd. I am trying to install a new cluster running on Debian (buster) using containerd as the container runtime. But it looks to me like containerd is supported on Ubuntu but not on Debian? I did not find any solution or install guide for installing containerd as the container runtime on a Debian node. Can this be true? Does anybody know how to install containerd on Debian?
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add - echo "deb [arch=amd64] https://download.docker.com/linux/debian buster stable" | sudo tee /etc/apt/sources.list.d/docker.list sudo apt update sudo apt install containerd Docker uses containerd; that's why the package is available in its repositories. Alternative method: build containerd from source; see the "Build containerd from source" documentation and the Docker repository info.
How to install containerd on Debian?
1,506,378,771,000
How can I pass environment variables from sshd's environment to new SSH sessions? I'm running sshd in Kubernetes pods. Kubernetes sets various variables in the containers environment, including the API server: KUBERNETES_PORT=tcp://100.64.0.1:443 Now, my problem is that this environment variable is not passed to new SSH sessions, and they need it in order to configure kubectl.
There may be better ways to do that, but a quick way is to use the SetEnv directive from the command line of sshd: export FOO=bar sshd ... -o "SetEnv=FOO=$FOO" ... export FOO=foo BAR='baz quux' sshd ... -o "SetEnv=FOO=$FOO BAR=\"$BAR\"" ... The SetEnv directive is supported since OpenSSH 7.8 (check with sshd -V). As with all -o key=val options, only the first will be used. With older versions, you may source an automatically generated file from the users' ~/.ssh/rc (PermitUserRC) or from the initialization files of the login shell: When started via ssh, bash sources ~/.bashrc (and before it, on Debian-like distros, /etc/bash.bashrc) even when run in non-interactive mode [1]. Do not use PermitUserEnvironment because that allows a user to bypass their login shell and any ForceCommand via LD_PRELOAD. Test example with sshd running as an ordinary user: t=$(mktemp -d) ssh-keygen -qN '' -f "$t/key" export FOO=foo BAR='baz quux' /usr/sbin/sshd -h "$t/key" -p 2222 -o "PidFile=$t/pid" \ -o "SetEnv=FOO=\"$FOO\" BAR=\"$BAR\"" connect to it $ ssh -p 2222 localhost 'echo "$FOO" "$BAR"' foo baz quux You may use alias ssh0='ssh -o UserKnownHostsFile=/dev/null \ -o StrictHostKeyChecking=no -o LogLevel=ERROR' ssh0 ... if you want to prevent ssh from prompting for and adding the throwaway key to the known hosts file. [1] Bash determines if it's started by ssh by checking the SSH_CLIENT and SHLVL envvars. That's another way PermitUserEnvironment may be "useful" -- to bypass the /etc/bash.bashrc which is sourced before anything else on Debian-like distros: $ bash -xc '' <nothing> $ SHLVL= SSH_CLIENT=foo bash -xc '' + case $- in + return <stuff from /etb/bash.bashrc and ~/.bashrc>
How can I pass environment variables from sshd's environment to new SSH sessions?
1,506,378,771,000
While investigating sharing the PID namespace with containers, I noticed something interesting that I don't understand. When a container shares the PID namespace with the host, some processes have their environmental variables protected while others do not. Let's take, for example, mysql. I'll start a container with a env variable set: ubuntu@sandbox:~$ docker container run -it -d --env MYSQL_ROOT_PASSWORD=SuperSecret mysql 551b309513926caa9d5eab5748dbee2f562311241f72c4ed5d193c81148729a6 I'll start another container which shares the host PID namespace and try to access the environ file: ubuntu@sandbox:~$ docker container run -it --rm --pid host ubuntu /bin/bash root@1c670d9d7138:/# ps aux | grep mysql 999 18212 5.0 9.6 2006556 386428 pts/0 Ssl+ 17:55 0:00 mysqld root 18573 0.0 0.0 2884 1288 pts/0 R+ 17:55 0:00 grep --color=auto mysql root@1c670d9d7138:/# cat /proc/18212/environ cat: /proc/18212/environ: Permission denied Something is blocking my access to read the environmental variables. I was able to find out that I need CAP_SYS_PTRACE to read it in a container: ubuntu@sandbox:~$ docker container run -it --rm --pid host --cap-add SYS_PTRACE ubuntu /bin/bash root@079d4c1d66d8:/# cat /proc/18212/environ MYSQL_PASSWORD=HOSTNAME=551b30951392MYSQL_DATABASE=MYSQL_ROOT_PASSWORD=SuperSecretPWD=/HOME=/var/lib/mysqlMYSQL_MAJOR=8.0GOSU_VERSION=1.14MYSQL_USER=MYSQL_VERSION=8.0.30-1.el8TERM=xtermSHLVL=0MYSQL_ROOT_HOST=%PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binMYSQL_SHELL_VERSION=8.0.30-1.el8 However, not all processes are protected in this way. For example, I'll start another container ubuntu container with a env variable set and run the tail command. ubuntu@sandbox:~$ docker container run --rm --env SUPERSECRET=helloworld -d ubuntu tail -f /dev/null 42023615a4415cd4064392e890622530adee1f42a8a2c9027f4921a522d5e1f2 Now when I run the container with the shared pid namespace, I can access the environmental variables. 
ubuntu@sandbox:~$ docker container run -it --rm --pid host ubuntu /bin/bash root@3a774156a364:/# ps aux | grep tail root 19056 0.0 0.0 2236 804 ? Ss 17:57 0:00 tail -f /dev/null root 19176 0.0 0.0 2884 1284 pts/0 S+ 17:58 0:00 grep --color=auto tail root@3a774156a364:/# cat /proc/19056/environ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=42023615a441SUPERSECRET=helloworldHOME=/root What mechanism is preventing me from reading the mysqld environmental variables and not the tail -f process?
What mechanism is preventing me from reading the mysqld environmental variables and not the tail -f process? The fact that you're running with a different user ID in the first case. If we start up your two examples: docker run --name mysql -it -d --env MYSQL_ROOT_PASSWORD=SuperSecret mysql:latest docker run --name tail -it -d --env MYSQL_ROOT_PASSWORD=SuperSecret ubuntu:latest tail -f /dev/null And then look at the resulting processes: $ ps -fe n |grep -E 'tail|mysqld' | grep -v grep 999 422026 422005 2 22:50 pts/0 Ssl+ 0:00 mysqld 0 422170 422144 0 22:50 pts/0 Ss+ 0:00 tail -f /dev/null We see that mysqld is running as UID 999, while the tail command is running as UID 0. When we start up a new container in the host pid namespace, we can only read the environ for processes that are owned by the same UID and GID. So this works, because by default a container runs with UID 0: $ docker run --rm --pid host ubuntu:latest cat /proc/422170/environ | tr '\0' '\n' PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=e89c069d4674 TERM=xterm MYSQL_ROOT_PASSWORD=SuperSecret HOME=/root And this fails: $ docker run --rm --pid host ubuntu:latest cat /proc/422026/environ | tr '\0' '\n' cat: /proc/422026/environ: Permission denied We can only read the environ file for a process running under a different UID or GID if we have the CAP_SYS_PTRACE capability. 
The logic for this check is in the ptrace_may_access function in the kernel: if (uid_eq(caller_uid, tcred->euid) && uid_eq(caller_uid, tcred->suid) && uid_eq(caller_uid, tcred->uid) && gid_eq(caller_gid, tcred->egid) && gid_eq(caller_gid, tcred->sgid) && gid_eq(caller_gid, tcred->gid)) goto ok; if (ptrace_has_cap(tcred->user_ns, mode)) goto ok; We can make that failing example work by having the container run with the same UID and GID as the mysql process: $ docker run -u 999:999 --rm --pid host ubuntu:latest cat /proc/422026/environ | tr '\0' '\n' MYSQL_PASSWORD= HOSTNAME=bde980104dcd MYSQL_DATABASE= MYSQL_ROOT_PASSWORD=SuperSecret PWD=/ HOME=/var/lib/mysql MYSQL_MAJOR=8.0 GOSU_VERSION=1.14 MYSQL_USER= MYSQL_VERSION=8.0.31-1.el8 TERM=xterm SHLVL=0 MYSQL_ROOT_HOST=% PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin MYSQL_SHELL_VERSION=8.0.31-1.el8
What mechanism prevents me from reading /proc/<PID>/environ in containers with a PID namespace shared with the host?
1,506,378,771,000
I was looking for ways to tcpdump a Kubernetes pod. Each pod creates a virtual interface and it is hard to know which interface belongs to the actual pod. I found a method that helps identify the system-wide interface index. Basically, it is possible to cat /sys/class/net/eth0/iflink inside a container running in the target pod, and it will show the host network interface index. This way you can know the veth that belongs to the pod. So, it worked, but it left me thinking... why can I get system-wide information on the network if I'm supposed to be in a network namespace in the container? I shouldn't have access to the system-wide index; I should only see the virtual "namespace" index for the virtual interface. Is there any explanation of how /sys/class/net/eth0/iflink relates to the network namespaces inside K8s?
The iflink value isn't a system-wide value. It's a value valid in the peer network nsid, as displayed for example with all ip link show dev eth0 (since it's a veth interface with its peer in an other network namespace). For the little details, the peer network nsid isn't either a system-wide value but a value valid only in the current network namespace that links the (system-wide) peer network to this (system-wide) network namespace because at some time a related interface was moved across. Here's an example, using ip netns add and ip netns exec (which also correctly (re)mounts /sys in its own mount namespace to be able to use network entries in /sys). # ip netns add n1 # ip netns add n2 # ip netns add n3 # ip netns add n4 # ip -n n1 link add name ton2 index 42 type veth peer netns n2 name eth0 # ip -n n3 link add name ton4 index 42 type veth peer netns n4 name eth0 # ip netns exec n2 cat /sys/class/net/eth0/iflink 42 # ip -n n2 link show dev eth0 2: eth0@if42: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 0e:27:6f:19:05:02 brd ff:ff:ff:ff:ff:ff link-netns n1 # ip netns exec n4 cat /sys/class/net/eth0/iflink 42 # ip -n n4 link show dev eth0 2: eth0@if42: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether fe:09:53:11:84:d6 brd ff:ff:ff:ff:ff:ff link-netns n3 Both interfaces have a peer link with index value 42, but this doesn't represent the same interface: one value 42 is ton2's index value in netns n1, the other is ton4's index value in netns n3. If modern versions of ip link didn't resolve the link-nsid to the actual names as designated by ip netns add of the peer namespaces, they would both show as link-netnsid 0 below (because that's the only peer network namespace and the link-nsid starts by default at 0 unless set otherwise, in each network namespace), again with 0 not being system-wide. 
# stat -f -c %T /run/netns/n1 /run/netns/n3 nsfs nsfs # stat -c %i /run/netns/n1 /run/netns/n3 4026533318 4026533590 The actual system-wide value for the peer is the network namespace + the index. Here that would be 4026533318:42 and 4026533590:42. But it can be quite challenging to figure out this network namespace when it happens to not be the initial network namespace (as in this example) if one doesn't know how it was created (here with ip netns, which leaves a mounted reference in /run/netns). Additional information about this is available in this answer I made.
How can "/sys/class/net/eth0/iflink" have system wide information in a container
1,506,378,771,000
I have a CSV file that looks like: keyuat,carsim,logs-keyuat-carsim lowuat,carsimserver,logs-lowuat-carsimserver utils,dash,logs-utils-dash utils,lifecycle,logs-utils-lifecycle utils,lifecycle-nodejs,logs-utils-lifecycle-nodejs workshop,cashier,logs-workshop-cashier workshop,jfrog-dotnet,logs-workshop-jfrog-dotnet workshop,labelsengine,logs-workshop-labelsengine Based on this CSV file, I'm trying to run two commands that must go together: oc project $1 oc patch dc $2 -p '{"metadata":{"labels":{"logentries":"$3"}}}' Using real examples from above, the commands would be: oc project keyuat oc patch dc carsim -p '{"metadata":{"labels":{"logentries":"logs-keyuat-carsim"}}}' I have been trying with awk, but I always find an issue with special characters or a /r invalid character that I don't see. If instead of system I use print, I see some of my characters overlapping at the beginning of the line instead of adding in the end: awk -F , '{ cmd="oc project " $1 "\;" "\n" "oc patch dc " $2 " \-p '\''\{\"metadata\"\:\{\"labels\"\:\{\"logentries\"\:\"" $3"\""; print(cmd) }' ./csv/labels.csv "}}}oject keyuat; oc patch dc carsim -p '{"metadata":{"labels":{"logentries":"logs-keyuat-carsim "}}}oject lowuat; oc patch dc carsimserver -p '{"metadata":{"labels":{"logentries":"logs-keyuat-carsimserver "}}}oject utils; oc patch dc dash -p '{"metadata":{"labels":{"logentries":"logs-utils-dash awk -F , '{ cmd="oc project " $1 "\;" "oc patch dc " $2 " -p '\''\{\"metadata\"\:\{\"labels\"\:\{\"logentries\"\:\"" $3 "\"\}\}\}'\''"; system(cmd) }' ./csv/labels.csv awk: cmd. line:1: warning: escape sequence `\;' treated as plain `;' awk: cmd. line:1: warning: escape sequence `\{' treated as plain `{' awk: cmd. line:1: warning: escape sequence `\:' treated as plain `:' awk: cmd. line:1: warning: escape sequence `\}' treated as plain `}' Already on project "keyuat" on server "https://test-ocp.exampleusage.eu:443". 
Error from server (BadRequest): invalid character '\r' in string literal Already on project "lowuat" on server "https://test-ocp.exampleusage.eu:443". Error from server (BadRequest): invalid character '\r' in string literal Now using project "utils" on server "https://test-ocp.exampleusage.eu:443". Error from server (BadRequest): invalid character '\r' in string literal How can I correct this script?
In awk, you can easily print a singlequote with: \047 You could try (I just type it here, I can't test it right now... I hope it is correct!) # if needed: take out "^M" from the input file tr -d '\015' <inputfile.csv >inputfile_sanitized.csv # then parse it with: awk -F',' ' { cmd1="oc project " $1 cmd2="oc patch dc " $2 " -p \047{\"metadata\":{\"labels\":{\"logentries\":\"" $3 "\"}}}\047" ## once sure: you can here do: system(cmd1 " ; " cmd2); print cmd1 " ; " cmd2 }' inputfile_sanitized.csv
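If awk's system() keeps fighting you over quoting, a plain shell read loop avoids the escaping entirely and also handles the \r problem from the error messages (the CSV apparently has DOS line endings). A hedged sketch; it only echoes the commands (dry run) so you can inspect them before running anything:

```shell
# gen_oc_cmds: read CSV rows "project,dc,label" on stdin and print the
# two oc commands per row; a trailing CR is stripped so DOS line endings
# don't end up inside the JSON patch.
gen_oc_cmds() {
  while IFS=, read -r project dc label; do
    label=${label%"$(printf '\r')"}   # drop trailing carriage return, if any
    printf 'oc project %s\n' "$project"
    printf 'oc patch dc %s -p '\''{"metadata":{"labels":{"logentries":"%s"}}}'\''\n' \
      "$dc" "$label"
  done
}

# dry run on one sample row; replace the heredoc with < ./csv/labels.csv
gen_oc_cmds <<'EOF'
keyuat,carsim,logs-keyuat-carsim
EOF
```

Once the echoed commands look right, execute them with gen_oc_cmds < ./csv/labels.csv | sh.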
AWK to launch OS commands from items in a CSV file with a line break
1,506,378,771,000
minikube has a dashboard that I can view easily with minikube dashboard However, that only works if I run it on my own machine because it only answers on localhost: * Verifying dashboard health ... * Launching proxy ... * Verifying proxy health ... * Opening http://127.0.0.1:35781/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser... - http://127.0.0.1:35781/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ I'm running minikube on a remote machine on OpenStack, though, so I need to address it by its LAN address, not 127.0.0.1.
One method is to background the dashboard, $ minikube dashboard --url & [1] 356972 And then use kubectl proxy to listen to all addresses, kubectl proxy --address=0.0.0.0 --accept-hosts='.*'
How can I make the minikube dashboard answer on all ips 0.0.0.0?
1,506,378,771,000
when I mount a nas in kubernetes pod, shows this error: MountVolume.SetUp failed for volume "nfs-hades-mysql-pv1" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/19995168-e921-4b1e-abbd-01df68518f85/volumes/kubernetes.io~nfs/nfs-hades-mysql-pv1 --scope -- mount -t nfs -o nfsvers=3,noresvport 12d025e2-wlgf.cn-balabala.extreme.nas.aliyuncs.com:/share:/k8s/hades-pro/hadesdb/hadesmaster /var/lib/kubelet/pods/19995168-e921-4b1e-abbd-01df68518f85/volumes/kubernetes.io~nfs/nfs-hades-mysql-pv1 Output: Running scope as unit run-2863.scope. mount.nfs: access denied by server while mounting wlgf.balabala.extreme.nas.aliyuncs.com:/share:/k8s/hades-pro/hadesdb/hadesmaster and this is my PV config: kind: PersistentVolume apiVersion: v1 metadata: name: nfs-hades-mysql-pv1 selfLink: /api/v1/persistentvolumes/nfs-hades-mysql-pv1 uid: 71a4f185-b5a9-45c8-bd93-8a8e80ff1f0f resourceVersion: '64010004' creationTimestamp: '2021-05-19T10:46:03Z' labels: alicloud-pvname: hades-mysql-data-db finalizers: - kubernetes.io/pv-protection spec: capacity: storage: 10Gi nfs: server: 'wlgf.balabala.extreme.nas.aliyuncs.com:/share' path: /k8s/hades-pro/hadesdb/hadesmaster accessModes: - ReadWriteOnce claimRef: kind: PersistentVolumeClaim namespace: hades-pro name: data-hades-mysql-ha-mysqlha-0 uid: 7b9256df-5c82-4285-8157-de8468449bcf apiVersion: v1 resourceVersion: '63882330' persistentVolumeReclaimPolicy: Retain mountOptions: - nfsvers=3 - noresvport volumeMode: Filesystem status: phase: Bound this pod was on my k8ssalve3 node. when I mount this nas in my k8ssalve3 node host like this: sudo mount -t nfs -o v3 -wlgf.balabala.extreme.nas.aliyuncs.com:/share /home/miaoyou/nas I could bind success. This make me confusing. what should I do to fix? any special config with the kubernetes pod? 
Mounting like this on the host machine also works fine: sudo mount -t nfs -v v3 wlgf.cn-balabala.extreme.nas.aliyuncs.com:/share/k8s/hades-pro/hadesdb/hadesmaster /home/miaoyou/nas Why can't it be mounted in the Kubernetes pod?
I tweaked the PV like this to fix it: kind: PersistentVolume apiVersion: v1 metadata: name: nfs-hades-mysql-pv1 selfLink: /api/v1/persistentvolumes/nfs-hades-mysql-pv1 uid: 71a4f185-b5a9-45c8-bd93-8a8e80ff1f0f resourceVersion: '64010004' creationTimestamp: '2021-05-19T10:46:03Z' labels: alicloud-pvname: hades-mysql-data-db finalizers: - kubernetes.io/pv-protection spec: capacity: storage: 10Gi nfs: server: 'wlgf.balabala.extreme.nas.aliyuncs.com' path: /share/k8s/hades-pro/hadesdb/hadesmaster accessModes: - ReadWriteOnce claimRef: kind: PersistentVolumeClaim namespace: hades-pro name: data-hades-mysql-ha-mysqlha-0 uid: 7b9256df-5c82-4285-8157-de8468449bcf apiVersion: v1 resourceVersion: '63882330' persistentVolumeReclaimPolicy: Retain mountOptions: - nfsvers=3 - noresvport volumeMode: Filesystem status: phase: Bound The key change: move the share path into path instead of appending it to server, rather than following the server URL format from the official document.
Output: Running scope as unit run-2863.scope. mount.nfs: access denied by server while mounting
1,506,378,771,000
I have a simple secret.yaml file: env: USERNAME: user PASSWORD: pass I'm trying to use yq / jq to encode the values for creating a k8s secret, so my final result should be: apiVersion: v1 kind: Secret metadata: name: my-service type: Opaque data: USERNAME: <base64 encoded> PASSWORD: <base64 encoded> I was trying to use: yq r secret.yaml ".env" -j | jq -r 'to_entries[] | "\(.value)" | @base64' which gave me the base64-encoded values, but I'm failing to insert the encoded values into the final output. I'm trying to avoid using loops, but if I can't find a clean solution I'll use one. Please assist.
The jq expression you'd want to apply to the data is .env |= map_values(@base64) | { data: .env } This updates all values of the top-level env structure so that they are base64-encoded. It then "renames" the env key to data. With the particular yq utility that you seem to be using in the question, this is done through converting the YAML into JSON, applying jq to the generated JSON, and then converting it back to YAML: $ yq -j r file.yml | jq '.env |= map_values(@base64) | { data: .env }' | yq -P r - data: PASSWORD: cGFzcw== USERNAME: dXNlcg== With the yq utility from https://kislyuk.github.io/yq/ (i.e., not the one that is used in the question), this is done neater using $ yq -y '.env |= map_values(@base64) | { data: .env }' file.yml data: USERNAME: dXNlcg== PASSWORD: cGFzcw== Adding the extra static data could be done after running the above processing.
using yq to base64 encode k8s secret values
1,506,378,771,000
I have set up a private Docker registry accessible through the domain “makdom.ddns.net”. I can log in, push and pull images locally with no problem; even from the slave kube node I can do this. But when I write a kube Deployment file, it is unable to pull images from the private registry and fails. apiVersion: extensions/v1beta1 kind: Deployment metadata: name: ssh-deployment spec: template: metadata: labels: app: helloworld spec: containers: - name: ssh-demo image: makdom.ddns.net/my-ubuntu imagePullPolicy: IfNotPresent ports: - name: nodejs-port containerPort: 22 imagePullSecrets: - name: myregistrykey secrets: DOCKER_REGISTRY_SERVER="https://makdom.ddns.net/v1/" DOCKER_USER="user" DOCKER_PASSWORD="password" DOCKER_EMAIL="[email protected]" kubectl create secret docker-registry myregistrykey \ --docker-server=$DOCKER_REGISTRY_SERVER \ --docker-username=$DOCKER_USER \ --docker-password=$DOCKER_PASSWORD \ --docker-email=$DOCKER_EMAIL error: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 1m default-scheduler Successfully assigned ssh-deployment-7b7c7bf977-m6stk to kubes-slave Normal SuccessfulMountVolume 1m kubelet, kubes-slave MountVolume.SetUp succeeded for volume "default-token-mx7qq" Normal Pulled 1m (x3 over 1m) kubelet, kubes-slave Container image "makdom.ddns.net/my-ubuntu" already present on machine Normal Created 1m (x3 over 1m) kubelet, kubes-slave Created container Normal Started 1m (x3 over 1m) kubelet, kubes-slave Started container Normal Pulling 34s (x2 over 1m) kubelet, kubes-slave pulling image "makdom.ddns.net/my-ubuntu" Warning Failed 34s (x2 over 1m) kubelet, kubes-slave Failed to pull image "makdom.ddns.net/my-ubuntu": rpc error: code = Unknown desc = Error: image my-ubuntu:latest not found Warning Failed 34s (x2 over 1m) kubelet, kubes-slave Error: ErrImagePull Warning BackOff 19s (x6 over 1m) kubelet, kubes-slave Back-off restarting failed container
Known issue https://github.com/kubernetes/kubernetes/issues/57427, resolved in master in https://github.com/kubernetes/kubernetes/pull/57463 Targeted fix for 1.9.1 in https://github.com/kubernetes/kubernetes/pull/57472 Workarounds until then: If you have a .dockerconfigjson for your private registry already, you can manually specify the type and data key: kubectl create secret generic my-secret-name \ --type=kubernetes.io/dockerconfigjson \ --from-file .dockerconfigjson=/path/to/.dockerconfigjson If you don't have a .dockerconfigjson file already, you can fix up the secret produced by kubectl create secret docker-registry manually: add --dry-run -o yaml > secret.yaml change the type from kubernetes.io/dockercfg to kubernetes.io/dockerconfigjson change the data key from .dockercfg to .dockerconfigjson create the modified secret with kubectl create -f secret.yaml
unable to pull images in kubernetes from private registry
1,506,378,771,000
From kubectl get pods -o wide The output looks similar to some-pod-name-with-numbers-123 1/1 Running 0 6d4h 192.168.0.23 node-name-abcdeft-host.domain <none> <none> some-pod-name-with-numbers-1234 1/1 Running 0 4h38m 192.168.0.24 node-name-abcdeft-host.domain <none> <none> some-pod-name-with-numbers-1235 1/1 Running 0 2m38s 192.168.0.25 node-name-abcdeft-host.domain <none> <none> I get the time elapsed for the pods in a format like 4d3h15m3s (it looks like at most two consecutive units appear: days and hours, hours and minutes, or minutes and seconds, but I can't guarantee that). I need to check whether a pod has been around longer than some threshold X. I tried to find some way of extracting this field as seconds via kubectl, no luck there. I searched the internet for a pre-canned solution, could not find one. I'm thinking that I can use awk to convert the string into a number of seconds so I can then compare whether it is > $X; this value is configurable.
You don't need to fork an external process just to get the kubernetes elapsed time in seconds. The GNU awk code below has a user-defined function fx() which does that conversion, and from there on you can use it for your comparison purposes. kubectl get pods -o wide | awk ' BEGIN { a["d"] = 24*(\ a["h"] = 60*(\ a["m"] = 60*(\ a["s"] = 1))); } { print fx($5); # kubernetes time elapsed in seconds } function fx(ktym, f,u,n,t,i) { n = split(ktym, f, /[dhms]/, u) t = 0 for (i=1; i<n; i++) { t += f[i] * a[u[i]] } return t } ' In case you want to go with a user-defined function in bash, we can do the following. We have a function fx() which takes one arg, the time elapsed in kubernetes format, and it outputs a time in seconds. fx() { echo "$1" | sed -Ee ' s/[1-9][0-9]*[dhms]/&+ /g :a;s/[+]([^+].*)/\1+/;ta s/.$// :loop s/d/ 24h*/ s/h/ 60m*/ s/m/ 60s*/ s/s// t loop s/.*/echo "&" | dc -e "?p"/e ' } We are dynamically generating code for the Linux utility called dc, the desk calculator, in RPN (Reverse Polish Notation).
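If the GNU-specific features above (the four-argument split(), sed's e flag, dc) aren't available, the same conversion can be done in plain POSIX shell. A sketch, assuming the age string is a well-formed kubectl value such as 6d4h or 2m38s:

```shell
# age_to_seconds: convert a kubectl AGE string (e.g. 4d3h15m3s) to seconds
# using only POSIX parameter expansion and arithmetic.
age_to_seconds() {
  rest=$1 total=0
  while [ -n "$rest" ]; do
    num=${rest%%[dhms]*}        # digits before the next unit letter
    unit=${rest#"$num"}         # remainder starting at the unit letter
    unit=${unit%"${unit#?}"}    # keep just that one letter
    rest=${rest#"$num$unit"}
    case $unit in
      d) total=$((total + num * 86400)) ;;
      h) total=$((total + num * 3600)) ;;
      m) total=$((total + num * 60)) ;;
      s) total=$((total + num)) ;;
    esac
  done
  echo "$total"
}

age_to_seconds 4d3h15m3s    # prints 357303
```

A threshold check is then just [ "$(age_to_seconds "$age")" -gt "$X" ].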
How to convert a kubernetes style time elapsed into seconds so I can do some comparisons
1,506,378,771,000
When reading this Kubernetes blog post about Intel's CPU manager it mentions that you can avoid cross-socket traffic by having CPUs allocated on the socket near to the bus which connects to an external device. What does cross-socket traffic mean and what problems can it cause? These are my guesses: A CPU from one socket needs access to a device connected to a bus that is only accessible to CPUs in another socket, so instructions to that device must be written to memory to be executed by a CPU in this other socket A CPU from one socket needs access to a device connected to a bus that is only accessible to CPUs in another socket, so instructions to that device are sent directly to a CPU in this other socket to be forwarded to the device (not sure if this is even possible)
The authors of the Kubernetes blog post just speak gibberish trying to reinvent the wheel: one more PBS (portable batch system), which they call "CPU manager". To answer the question "What does cross-socket traffic mean and what problems can it cause?", it's necessary to say first that this is about multiprocessor computers, i.e. computer systems with two or more CPUs and, respectively, CPU sockets. Multiprocessor systems come in two different architectures: SMP (symmetric multiprocessing) and AMP (asymmetric multiprocessing). Most multiprocessor systems available at the moment are SMP systems. Such systems have so-called shared memory, which is visible to the independent physical CPUs as common main memory. There are two types of such systems according to the type of physical CPU interconnection: system bus and crossbar switch. Diagram of an SMP system with a crossbar switch: Diagram of an SMP system with a system bus: Most SMP systems use the system-bus type of CPU connection, and the Kubernetes blog post is about systems of that kind. SMP systems with system-bus CPU connections have both advantages and disadvantages. The most significant disadvantage is that they are NUMA (non-uniform memory access) systems. What does this mean? Every CPU socket is physically associated with its own memory bank, but the Linux kernel can't distinguish this association in SMP: the memory banks are seen by Linux as a single integral memory. Despite this fact, the NUMA phenomenon arises: a physical CPU's access to addresses in its own physical memory bank is faster than its access to memory bank(s) associated with other CPU socket(s). Thus, we naturally wish to avoid a physical CPU using addresses in the common main memory of the SMP system that belong to a physical memory bank connected to another physical CPU.
The "Limitations" part of the Kubernetes blog post refers to the NUMA phenomenon as the "cross-socket traffic" problem (citation): Users might want to get CPUs allocated on the socket near to the bus which connects to an external device, such as an accelerator or high-performance network card, in order to avoid cross-socket traffic. This type of alignment is not yet supported by CPU manager. By the way, the inability to assign a thread to some definite CPU which "is closer" to something is quite natural. The Linux kernel sees all CPU cores of the physical CPUs as equal, ordinary SMP processors, since it can't distinguish the physical CPUs of an SMP computer. There are some poor attempts to avoid using CPU cores which are "farther" away, by means of "warm cache" and "cold cache" signs, but they don't work effectively due to the nature of SMP systems. Please read additionally: NUMA (Non-Uniform Memory Access): An Overview Managing Process Affinity in Linux
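On a real Linux host, the NUMA topology described above can be inspected directly through sysfs; this is a minimal sketch assuming /sys is mounted at the standard location (a single-socket machine will show only node0, and cross-socket traffic can only occur when two or more nodes exist):

```shell
# List each NUMA node and the CPUs attached to it, straight from sysfs.
# Each node corresponds to one memory bank / CPU socket pairing.
for node in /sys/devices/system/node/node*; do
    [ -d "$node" ] || { echo "no NUMA information exposed"; break; }
    echo "$(basename "$node"): cpus=$(cat "$node/cpulist")"
done
```

Tools such as `numactl --hardware` and `lscpu` present the same information, including the relative access distances between nodes.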
What does cross-socket traffic mean?
1,506,378,771,000
I need to map a word "foo" to a domain name "foobar.com", so it should be possible to run ping foo or to curl it. alias foo='foobar.com' gives the issue below: curl -k foo curl: (6) Could not resolve host: foo This is run on Kubernetes, and the domain name's IP address is dynamic. Basically I want to map a K8s service name to an internal domain name whose IP is dynamic.
What you are trying to do here is a server alias. Option A (dirty, ugly) You need to edit the /etc/hosts file for this, e.g. ${IP} foo foo.com then you will be able to curl foo -H 'Host: foo.com' The host header is necessary, as the vhost alias is foo.com. Option B (clean and nice) Create a server alias; depending on your web server the config may differ. On my Apache httpd setup I use: /etc/httpd/sites-available - contains the config for each vhost /etc/httpd/sites-enabled - contains symlinks to the sites-available config files; this way I can activate/deactivate vhosts without modifying or renaming the files in sites-available, I just need to unlink to bring down a vhost. /etc/httpd/conf/httpd.conf contains: [...] IncludeOptional sites-enabled/*.conf [...] So let's say I had /etc/httpd/sites-available/foo.com.conf and a symlinked version at /etc/httpd/sites-enabled/foo.com.conf; then do cp /etc/httpd/sites-available/foo.com.conf /etc/httpd/sites-available/foo.conf modify foo.conf to match this: [...] ServerName foo ServerAlias foo [...] and then ln -s /etc/httpd/sites-available/foo.conf /etc/httpd/sites-enabled/foo.conf finally restart Apache: systemctl restart httpd Note that all these actions may require root or sudo (power user). Assuming your system has a DNS server, nothing else would be needed; otherwise the /etc/hosts file will need to be modified too, but when you curl you won't need to pass a host header, as the vhost alias will be foo. I.e. in this case curl http://foo will work fine, while with option A you need a host header too.
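For option A, the single line that has to go into /etc/hosts has this shape. The IP below is a placeholder from the documentation range (RFC 5737) - substitute the current address of foobar.com - and the sketch writes to a temp file because editing the real /etc/hosts needs root:

```shell
# Format of an /etc/hosts entry: IP address, canonical name, then aliases.
# 203.0.113.10 is a documentation placeholder, not a real address.
entry="203.0.113.10 foobar.com foo"
hosts=$(mktemp)
printf '%s\n' "$entry" >> "$hosts"   # for real use: >> /etc/hosts (as root)
grep '^203\.0\.113\.10' "$hosts"
rm -f "$hosts"
```

Because the question's setup maps a Kubernetes service to a dynamic IP, a static hosts entry goes stale whenever the address changes and has to be rewritten, which is part of why this option is the "dirty" one.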
Mapping a word "foo" to a domain name "foobar.com"
1,506,378,771,000
I try: # task - name: Add ldap oauth query password k8s: state: present definition: "{{ lookup('file', 'openshift-config/secrets/ldap-bind-pw.yaml.j2') }}" kubeconfig: "{{ install_directory }}/auth/kubeconfig" # openshift-config/secrets/ldap-bind-pw.yaml.j2 --- kind: Secret apiVersion: v1 metadata: name: ldap-bind-password namespace: openshift-config data: bindPassword: {{ vault_openshift_ldap_bind_pw | string | b64encode }} type: Opaque # vault.yaml vault_openshift_ldap_bind_pw: test1234 Error: <os-helper71.domain.com> Failed to connect to the host via ssh: Traceback (most recent call last): File "<stdin>", line 102, in <module> File "<stdin>", line 94, in _ansiballz_main File "<stdin>", line 40, in invoke_module File "/usr/lib/python3.6/runpy.py", line 205, in run_module return _run_module_code(code, init_globals, run_name, mod_spec) File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "/usr/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmp/ansible_k8s_payload_osgd8_f3/ansible_k8s_payload.zip/ansible/modules/clustering/k8s/k8s.py", line 279, in <module> File "/tmp/ansible_k8s_payload_osgd8_f3/ansible_k8s_payload.zip/ansible/modules/clustering/k8s/k8s.py", line 275, in main File "/tmp/ansible_k8s_payload_osgd8_f3/ansible_k8s_payload.zip/ansible/module_utils/k8s/raw.py", line 145, in __init__ File "/tmp/ansible_k8s_payload_osgd8_f3/ansible_k8s_payload.zip/ansible/module_utils/k8s/raw.py", line 145, in <listcomp> File "/usr/lib/python3/dist-packages/yaml/__init__.py", line 84, in load_all yield loader.get_data() File "/usr/lib/python3/dist-packages/yaml/constructor.py", line 31, in get_data return self.construct_document(self.get_node()) File "/usr/lib/python3/dist-packages/yaml/constructor.py", line 46, in construct_document for dummy in generator: File "/usr/lib/python3/dist-packages/yaml/constructor.py", line 398, in construct_yaml_map value = self.construct_mapping(node) 
File "/usr/lib/python3/dist-packages/yaml/constructor.py", line 204, in construct_mapping return super().construct_mapping(node, deep=deep) File "/usr/lib/python3/dist-packages/yaml/constructor.py", line 128, in construct_mapping "found unhashable key", key_node.start_mark) yaml.constructor.ConstructorError: while constructing a mapping in "<unicode string>", line 8, column 17: bindPassword: {{ vault_openshift_ldap_bind_pw | s ... ^ found unhashable key in "<unicode string>", line 8, column 18: bindPassword: {{ vault_openshift_ldap_bind_pw | st ... ^ fatal: [os-helper71.domain.com]: FAILED! => { "changed": false, "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1 } What's wrong? ansible version: 2.9.9 with python 3.8.6
You are using lookup('file', '/path/to/template.j2'), which retrieves the raw contents of the specified file. Instead, you have to use lookup('template', '/path/to/template.j2') if you want Jinja2 to render your template.
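Applied to the task from the question, the only change needed is the lookup plugin name; everything else stays as posted:

```yaml
# task - 'template' renders the Jinja2 expressions before k8s parses the YAML
- name: Add ldap oauth query password
  k8s:
    state: present
    definition: "{{ lookup('template', 'openshift-config/secrets/ldap-bind-pw.yaml.j2') }}"
    kubeconfig: "{{ install_directory }}/auth/kubeconfig"
```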
Ansible with Kubernetes: create k8s secret with ansible
1,506,378,771,000
I think about installing an as small as possible kubernetes setup on scaleway. The idea is to prepare myself with a kind of MVP that is able to run my applications components and turn it into a full blown redundant setup when usage grows. The tutorial at https://www.tauceti.blog/post/kubernetes-the-not-so-hard-way-with-ansible-at-scaleway-part-1/ mentions that etcd should not be installed on controller nodes but on separate VMs etcd needs at least three nodes What's the reason for recommending separate installs and could I run only one etcd on a controller ? Please consider that I only search for a functional setup and not a highly available.
etcd needs at least three nodes This is true of any distributed system that has a leader/master to maintain consistency, if you require it to be fault tolerant. In fact, you need an odd number of nodes, to ensure the cluster cannot be split evenly in two (due to a network outage, for example): when that happens, neither side can elect a leader and the whole cluster becomes unavailable. Three happens to be the minimum number that is able to tolerate a single node going down without affecting the uptime of the cluster. If you do not require a highly available system you can get away with a single node, but non-highly-available solutions are not recommended for production use for obvious reasons - you are always free to ignore this advice if your system is small enough and you understand the risks of the system falling over, or cannot justify the expense of the extra nodes. etcd should not be installed on controller nodes but on separate VMs This is also a stability/scalability issue - you are free to mix controller nodes with compute nodes in most distributed systems, but they can struggle in this setup under high load. If you don't have enough nodes to create a three-node cluster, then you don't have enough nodes to stress the system to a point where this will matter. Both of these issues can be addressed when your system grows to a point where the nodes start to struggle or you can warrant the cost of setting up the extra nodes. You could start off with one node for an MVP, but both Kubernetes' and etcd's guides are geared towards a distributed setup and you only really benefit from them when they are set up in a cluster. You may also encounter issues trying to grow it from one node to three nodes. If you can afford a three-node setup then I would start off with that, and just have all the nodes start with all of the services, splitting them out when you want to grow the setup further.
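The fault-tolerance arithmetic behind the three-node minimum is plain majority math: quorum is floor(n/2) + 1, and the cluster survives n minus quorum member failures. A quick sketch:

```shell
# For each cluster size, compute the quorum (majority) and how many member
# failures the cluster tolerates while still being able to elect a leader.
for n in 1 2 3 4 5; do
    quorum=$(( n / 2 + 1 ))
    tolerated=$(( n - quorum ))
    echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
done
```

Note that a 2-member cluster tolerates zero failures (arguably worse than one node, since either machine failing now breaks quorum), and 4 members tolerate no more failures than 3 do - which is why odd sizes are used and three is the practical minimum for fault tolerance.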
Minimal amount of etcd instances
1,602,273,208,000
I have the following yaml file: --- apiVersion: v1 kind: pod metadata: name: Tesing_for_Image_pull -----------> 1 spec: containers: - name: mysql ------------------------> 2 image: mysql ----------> 3 imagePullPolicy: Always ------------->4 command: ["echo", "SUCCESS"] -------------------> 5 After running kubectl create -f my_yaml.yaml I get the following error: error: error converting YAML to JSON: yaml: line 10: did not find expected key UPDATE: With yamllint I get the following error: root@debian:~# yamllint my_yaml.yaml my_yaml.yaml 8:9 error wrong indentation: expected 12 but found 8 (indentation) 11:41 error syntax error: expected <block end>, but found '<scalar>' Where is my problem and how can I solve it?
The simple pod example YAML for Kubernetes shows that the required 'metadata' and 'spec' elements are at the top level of the definition. The kubectl command is most likely failing because it cannot find the 'spec' element, which defines the specification of the pod. You seem to be testing the image pull configuration, and you have specified that you simply want to run echo SUCCESS inside the container. Considering both these conditions, it would be preferable to pull down the bash image instead of the mysql image. The following alternate YAML should work for your needs:
---
apiVersion: v1
kind: Pod
metadata:
  name: testing-for-image-pull
spec:
  containers:
  - name: bash
    image: bash
    imagePullPolicy: Always
    command: ["echo"]
    args: ["SUCCESS"]
The following changes have been made from the original YAML file: 1) The kind element has been corrected to the value Pod. 2) The name of the pod has been changed to fit Kubernetes requirements (a lowercase, DNS-like name). 3) The image and name elements have been modified to use the bash image. 4) The command definition has been changed to use the command and args keys instead. Note that YAML uses spaces instead of tabs for indentation, and the suggested style for YAML is two spaces per level of indentation instead of the traditional four. For more example YAML files, refer to the Kubernetes website repository on GitHub.
error converting YAML to JSON: yaml: line 10: did not find expected key
1,602,273,208,000
In my kubernetes (v1.28.7), docker uses containerd as underlying container management engine. (I guess I can call it Container Runtime Interface - CRI? ). This is how I assume that (look at the last line and scroll all the way to the right): lab@worker01:~$ sudo systemctl status docker ● docker.service - Docker Application Container Engine Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled) Active: active (running) since Wed 2024-03-27 14:22:36 UTC; 1h 11min ago TriggeredBy: ● docker.socket Docs: https://docs.docker.com Main PID: 946 (dockerd) Tasks: 7 Memory: 87.3M CPU: 1.080s CGroup: /system.slice/docker.service └─946 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd <--- HERE!!! containerd instead of docker. Question: If containerd is my CRI, why the only way to eg. list images or show running containers is "crictl"? sudo crictl image ls IMAGE TAG IMAGE ID SIZE docker.io/calico/cni v3.26.0 5d6f5c26c6554 93.3MB docker.io/calico/node v3.26.0 44f52c09decec 87.6MB docker.io/library/busybox latest ba5dc23f65d4c 2.16MB docker.io/library/nginx latest 92b11f67642b6 70.5MB docker.io/library/redis latest 170a1e90f8436 51.4MB k8s.gcr.io/metrics-server/metrics-server v0.6.2 25561daa66605 28.1MB registry.k8s.io/coredns/coredns v1.10.1 ead0a4a53df89 16.2MB registry.k8s.io/kube-proxy v1.28.7 123aa721f941b 28.1MB registry.k8s.io/pause 3.8 4873874c08efc 311kB registry.k8s.io/pause 3.9 e6f1816883972 322kB Why docker OR ctr shows no images: sudo ctr images ls REF TYPE DIGEST SIZE PLATFORMS LABELS sudo docker images ls REPOSITORY TAG IMAGE ID CREATED SIZE
Containerd allows clients to set a "namespace" in order to manage different sets of resources. For example, on my local system, running Docker 26.0.0, Docker uses containerd as the container runtime. There are a couple of running Docker containers: $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7cfbf97a9275 alpinelinux/darkhttpd "darkhttpd /var/www/…" 7 seconds ago Up 6 seconds 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp boring_thompson 0e1ede44350e kindest/node:v1.29.2 "/usr/local/bin/entr…" 3 weeks ago Up 12 hours 127.0.0.1:39949->6443/tcp kind-control-plane I don't see anything if I run ctr container ls: # ctr container ls CONTAINER IMAGE RUNTIME But if I use the moby namespace, I see the two Docker containers: # ctr --namespace moby container ls CONTAINER IMAGE RUNTIME 0e1ede44350e15fa2305f4b2dbfa0a5023de645bb535b05cac232e91069c4e7e - io.containerd.runc.v2 7cfbf97a9275edb79228d241c221b665659e3688bbc96ac879bb950db481e912 - io.containerd.runc.v2 Similarly, on a system running Kubernetes, running ctr container ls shows no containers in the default namespace, but if we use the k8s.io namespace, we see the Kubernetes-managed containers: # ctr --namespace k8s.io container ls CONTAINER IMAGE RUNTIME 007dc9290e81c88cc85cf1b74b50c535420f1e1b4188eca4dfbd46e14881d2ab registry.k8s.io/kube-apiserver-amd64:v1.29.2 io.containerd.runc.v2 00c5f27f9125eb7132277585d450c904f4ff9542f5f70130855d268debad0624 registry.k8s.io/pause:3.7 io.containerd.runc.v2 0f2968f76498a18b098bc5a11f1b8071e261d74e0790bc7df6a56f0b37e9b293 registry.k8s.io/kube-proxy-amd64:v1.29.2 io.containerd.runc.v2 ... Namespace support in containerd is described in this article.
kubernetes cluster - only crictl can actually see containers (containers assets)
1,602,273,208,000
I've been granted access to a k8s cluster in k8s, and I'd like to see what privileges I have in the cluster, like creating/deleting pods and other such kind of actions. So far I know a command to check a particular action like: kubectl auth can-i get pods But could be good to see all the other commands, thus if someone knows it I'd appreciate sharing it with me.
You can simply run: kubectl auth can-i --list According to the help: kubectl auth can-i --help --list=false: If true, prints all allowed actions.
How to list all allowed actions I can perform in kubernetes?
1,602,273,208,000
Debugging a tcp: out of memory error, I found that a process (from a container) has a lot of connection on CLOSE_WAIT status aka 08 when I cat /proc/XXX/net/tcp But neither netstat or ss were showing those leaked connections. 133: 0E03540A:9D9C 804CC2AD:01BB 08 00000000:00059D7A 00:00000000 00000000 0 0 316215 1 ffff8f677201df00 20 4 0 10 -1 134: 0E03540A:8316 80A7E940:01BB 08 00000000:00000000 00:00000000 00000000 0 0 255647 1 ffff8f67c9592600 20 4 1 10 -1 135: 0E03540A:8874 808C7D4A:01BB 08 00000000:00037EED 00:00000000 00000000 0 0 331603 1 ffff8f68e37a7200 20 4 1 10 -1 136: 0E03540A:E226 804CC2AD:01BB 08 00000000:0005E30B 00:00000000 00000000 0 0 215782 1 ffff8f67bd1edf00 20 4 0 10 -1 137: 0E03540A:DAEC 804CC2AD:01BB 08 00000000:0005B41A 00:00000000 00000000 0 0 216048 1 ffff8f67daf9af80 20 4 0 10 -1 138: 0E03540A:9AEA 8005FB8E:01BB 08 00000000:000D6360 00:00000000 00000000 0 0 243082 1 ffff8f67db637200 20 4 30 10 -1 140: 0E03540A:BAE4 800FB16C:01BB 08 00000000:000D8432 00:00000000 00000000 0 0 245062 1 ffff8f67640f8980 20 4 1 10 -1 141: 0E03540A:9754 804CC2AD:01BB 08 00000000:00003186 00:00000000 00000000 0 0 298890 1 ffff8f676e1a5f00 20 4 1 10 -1 142: 0E03540A:C6FC 800FB16C:01BB 08 00000000:000658C9 00:00000000 00000000 0 0 299343 1 ffff8f68dcef5580 20 4 0 10 -1 143: 0E03540A:CB24 804CC2AD:01BB 08 00000000:0005BBB4 00:00000000 00000000 0 0 316285 1 ffff8f6772019300 20 4 1 10 -1 144: 0E03540A:8204 80A7E940:01BB 08 00000000:0005DD3A 00:00000000 00000000 0 0 217390 1 ffff8f67dbc20000 20 4 0 10 -1 145: 0E03540A:8BC8 80016642:01BB 08 00000000:00059847 00:00000000 00000000 0 0 275095 1 ffff8f67b6d7a600 20 4 1 10 -1 146: 0E03540A:C612 8005FB8E:01BB 08 00000000:0003EC48 00:00000000 00000000 0 0 252281 1 ffff8f67cf014280 20 4 1 10 -1 Why netstat is not showing those connection and how to get them without digging into each process details ?
If you are looking at this from the host, you're in the initial network namespace rather than in the container's network namespace: these connections and states are not seen because they are not handled by the initial network namespace's network stack. When following an entry in the process directory in /proc, this entry is sometimes seen from the process' point of view, so it will sometimes display the relevant namespace information, but tools are not meant to rely on this. So you have to switch to the studied process' network namespace first. It's as simple as (with the root user): nsenter -t XXX --net -- ss -tn Or, to find the process(es) (as seen in the initial pid namespace, not the container's): nsenter -t XXX --net -- ss -tnp state CLOSE-WAIT Normally one would search per pod (or per container in other technologies) rather than per process. Various container technologies allow retrieving a process PID from the container name (eg: LXC's lxc-info -Hp -n containername or Docker's docker inspect --format '{{.State.Pid}}' containername), but I don't know if and how this information is available with Kubernetes if the backend is not Docker. Also, for some tools it's a bit more difficult than this, because for example /sys should be remounted for /sys/class/net to reflect the new network namespace's view of interfaces: now there would be two namespaces to change, the target process' network namespace and a temporary mount namespace (so as not to damage the initial one, nor use the target's, which might not have the required commands). Anyway, the ss command operates purely on sockets and wouldn't need this. For example, the obsolete brctl show command would require this to work properly: nsenter -t XXX --net -- unshare --mount -- sh -c 'mount -t sysfs sysfs /sys; brctl show'
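As an aside, the address fields in /proc/XXX/net/tcp shown in the question are little-endian hex and can be decoded without any extra tooling. A small POSIX-ish shell sketch:

```shell
# Decode an IPv4 "ADDRESS:PORT" field from /proc/net/tcp.
# The four IP bytes are stored in reverse order; the port is plain hex.
decode_addr() {
    hexip=${1%:*}; hexport=${1#*:}
    b1=$(printf '%s' "$hexip" | cut -c1-2)
    b2=$(printf '%s' "$hexip" | cut -c3-4)
    b3=$(printf '%s' "$hexip" | cut -c5-6)
    b4=$(printf '%s' "$hexip" | cut -c7-8)
    printf '%d.%d.%d.%d:%d\n' "0x$b4" "0x$b3" "0x$b2" "0x$b1" "0x$hexport"
}
decode_addr 0E03540A:9D9C   # local address of the first CLOSE_WAIT entry above
```

Run against the dump in the question, this yields 10.84.3.14:40348; decoding the remote addresses the same way shows they all end in port 0x01BB = 443, i.e. the leaked connections are all HTTPS.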
CLOSE_WAIT not visible on the kubernetes node
1,602,273,208,000
I've installed k3s on Debian Bullseye (on M1 Pro through qemu/UTM). k3s recommend to disable the swap. After reading the answers of the following questions: How to safely turn off swap permanently and reclaim the space? (on Debian Jessie) Disabling Swap on Debian Permanently I've : Disabled systemd swap service sudo systemctl mask "dev-*.swap" Removed the swap partition in /etc/fstab. Deleted the swap partition and extend the main partition to regain space Set the swapiness to 0 in /etc/sysctl.conf Now I have: root@debian:~# systemctl --type swap --all UNIT LOAD ACTIVE SUB DESCRIPTION 0 loaded units listed. root@debian:~# sysctl vm.swappiness vm.swappiness = 0 root@debian:~# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sr0 11:0 1 1024M 0 rom vda 254:0 0 10G 0 disk ├─vda1 254:1 0 512M 0 part /boot/efi └─vda2 254:2 0 9.5G 0 part / root@debian:~# free total used free shared buff/cache available Mem: 1000692 705588 34164 1704 260940 221484 Swap: 0 0 0 root@debian:~# swapon -s root@debian:~# But when I run k3s check-config, I still have: - swap: should be disabled What should I do in order to fully disable the swap in the eyes of k3s?
The swap activation probably happens early in the boot process, while the system is still running on the initramfs, so after removing the swap configuration items you should have run update-initramfs -u. I also don't see a systemctl stop "dev-*.swap" or swapoff -a anywhere: those would have been the commands to actually disable already-activated swap areas. systemctl mask will certainly prevent the swap units from starting, but it does nothing at all to swap areas that have already been activated. You should ensure any units you are masking with systemctl mask are stopped first.
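After running the stop/mask/update-initramfs sequence and rebooting, you can verify what the k3s complaint is essentially about - whether any swap area is still active - straight from /proc:

```shell
# /proc/swaps has one header line; anything beyond it is an active swap area.
if [ "$(wc -l < /proc/swaps)" -le 1 ]; then
    echo "swap fully disabled"
else
    echo "swap still active:"
    tail -n +2 /proc/swaps
fi
```

swapon -s and free read the same kernel state, so once this reports no areas, k3s check-config should stop flagging swap.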
Why k3s is still seeing swap on Debian Bullseye?
1,602,273,208,000
I installed kubernetes on Ubuntu server 20.04 . Master configured with flannel network without any problem . Initialize command: kubeadm init --pod-network-cidr=10.10.10.0/24 --apiserver-advertise-address=172.16.200.10 Network interfaces on master: cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450 [4/651] inet 10.10.10.1 netmask 255.255.255.0 broadcast 10.10.10.255 inet6 fe80::8e1:9ff:fe5e:56fb prefixlen 64 scopeid 0x20<link> ether be:cc:5a:4b:f0:c0 txqueuelen 1000 (Ethernet) RX packets 6565 bytes 400726 (400.7 KB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 3121 bytes 291909 (291.9 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255 ether 02:42:9e:d3:e7:73 txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.16.200.10 netmask 255.255.255.0 broadcast 172.16.200.255 inet6 fe80::20c:29ff:fe6b:dbf8 prefixlen 64 scopeid 0x20<link> ether 00:0c:29:6b:db:f8 txqueuelen 1000 (Ethernet) RX packets 7134 bytes 713894 (713.8 KB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 9077 bytes 2726670 (2.7 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450 inet 10.10.10.0 netmask 255.255.255.255 broadcast 10.10.10.0 inet6 fe80::ec9f:dfff:fe19:b2bd prefixlen 64 scopeid 0x20<link> ether ee:9f:df:19:b2:bd txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 14 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 222414 bytes 41222254 (41.2 MB) RX errors 
0 dropped 0 overruns 0 frame 0 TX packets 222414 bytes 41222254 (41.2 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 veth92c2b55e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450 inet6 fe80::bccc:5aff:fe4b:f0c0 prefixlen 64 scopeid 0x20<link> ether be:cc:5a:4b:f0:c0 txqueuelen 0 (Ethernet) RX packets 3252 bytes 244283 (244.2 KB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1602 bytes 148781 (148.7 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 vethdc37182b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450 inet6 fe80::d84e:c0ff:feda:9875 prefixlen 64 scopeid 0x20<link> ether da:4e:c0:da:98:75 txqueuelen 0 (Ethernet) RX packets 3313 bytes 248353 (248.3 KB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1540 bytes 144798 (144.7 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 but nodes result this error on /var/log/syslog after join to cluster. Jan 27 08:08:33 k8s-node1 kubelet[5393]: E0127 08:08:33.534423 5393 pod_workers.go:191] Error syncing pod 90b895af-b046-4149-ac18-bbec859c37a2 ("kube-flannel-ds-vsgx8_kube-system(90b895af-b046-4149-ac18-bbec859c37a2)"), skipping: failed to "StartContainer" for "kube-flannel" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-flannel pod=kube-flannel-ds-vsgx8_kube-system(90b895af-b046-4149-ac18-bbec859c37a2)" This is nodes status: root@k8s-master:~# kubectl get node -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME k8s-master Ready control-plane,master 42m v1.20.2 172.16.200.10 <none> Ubuntu 20.04.1 LTS 5.4.0-62-generic docker://19.3.8 k8s-node1 Ready <none> 35m v1.20.2 172.16.200.20 <none> Ubuntu 20.04.1 LTS 5.4.0-62-generic docker://19.3.8 What is this error ? How can fix that ? 
Update This is exists containers in Node1: root@k8s-node1:~# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 358f8cdc6141 f03a23d55e57 "/opt/bin/flanneld -…" 15 seconds ago Exited (1) 12 seconds ago k8s_kube-flannel_kube-flannel-ds-vsgx8_kube-system_90b895af-b046-4149-ac18-bbec859c37a2_7 e3a40bcc9db3 f03a23d55e57 "cp -f /etc/kube-fla…" 11 minutes ago Exited (0) 11 minutes ago k8s_install-cni_kube-flannel-ds-vsgx8_kube-system_90b895af-b046-4149-ac18-bbec859c37a2_0 7ccb8514bed7 43154ddb57a8 "/usr/local/bin/kube…" 11 minutes ago Up 11 minutes k8s_kube-proxy_kube-proxy-mvbw2_kube-system_5090f36a-4178-4318-aa15-19c61239045f_0 27ae4b2a1157 k8s.gcr.io/pause:3.2 "/pause" 11 minutes ago Up 11 minutes k8s_POD_kube-flannel-ds-vsgx8_kube-system_90b895af-b046-4149-ac18-bbec859c37a2_0 507f2d4ab72c k8s.gcr.io/pause:3.2 "/pause" 11 minutes ago Up 11 minutes k8s_POD_kube-proxy-mvbw2_kube-system_5090f36a-4178-4318-aa15-19c61239045f_0 Container Error: root@k8s-node1:~# docker logs k8s_kube-flannel_kube-flannel-ds-vsgx8_kube-system_90b895af-b046-4149-ac18-bbec859c37a2_7 ERROR: logging before flag.Parse: I0127 08:16:06.400204 1 main.go:519] Determining IP address of default interface ERROR: logging before flag.Parse: I0127 08:16:06.402530 1 main.go:532] Using interface with name eth0 and address 172.16.200.20 ERROR: logging before flag.Parse: I0127 08:16:06.402721 1 main.go:549] Defaulting external address to interface address (172.16.200.20) W0127 08:16:06.402840 1 client_config.go:608] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
ERROR: logging before flag.Parse: I0127 08:16:07.100353 1 kube.go:116] Waiting 10m0s for node controller to sync ERROR: logging before flag.Parse: I0127 08:16:07.100618 1 kube.go:299] Starting kube subnet manager ERROR: logging before flag.Parse: I0127 08:16:08.101341 1 kube.go:123] Node controller sync successful ERROR: logging before flag.Parse: I0127 08:16:08.101408 1 main.go:253] Created subnet manager: Kubernetes Subnet Manager - k8s-node1 ERROR: logging before flag.Parse: I0127 08:16:08.101423 1 main.go:256] Installing signal handlers ERROR: logging before flag.Parse: I0127 08:16:08.102395 1 main.go:391] Found network config - Backend type: vxlan ERROR: logging before flag.Parse: I0127 08:16:08.103389 1 vxlan.go:123] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false ERROR: logging before flag.Parse: E0127 08:16:08.104363 1 main.go:292] Error registering network: failed to acquire lease: node "k8s-node1" pod cidr not assigned ERROR: logging before flag.Parse: I0127 08:16:08.105069 1 main.go:371] Stopping shutdownHandler... Why in nodes, flannel containers exited ?!!!
That --pod-network-cidr 10.10.10.0/24 sounds wrong. If you do a kubectl describe node <your-master-node-name>, I would bet there is something like PodCIDR: 10.10.10.0/24. Your first node is already eating up the whole pod network CIDR, as the one you planned is too small. When flannel first starts on a Kubernetes node, it allocates itself a /24 subnet taken from the pod network range. I would assume you're simply out of IPs, so your second node cannot join the SDN.
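The arithmetic makes the problem concrete: flannel carves one /24 per node out of the pod network, so the number of nodes a given --pod-network-cidr can hold is 2^(24 - prefix). A sketch:

```shell
# How many /24 per-node subnets fit into a pod-network CIDR of a given size?
node_prefix=24                       # flannel's default per-node allocation
for cidr_prefix in 24 16; do
    nodes=$(( 1 << (node_prefix - cidr_prefix) ))
    echo "/${cidr_prefix} pod network -> room for ${nodes} node(s)"
done
```

With the /24 from the question, the master consumes the only available subnet, so k8s-node1 never gets a PodCIDR - hence the 'node "k8s-node1" pod cidr not assigned' error in the flannel logs. Re-initialising with something like --pod-network-cidr=10.244.0.0/16 (the range flannel's stock manifest expects) leaves room for 256 nodes.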
Kubernetes node error sync pod
1,602,273,208,000
On an Amazon Linux 2 instance, the command line is throwing the following connection refused error every time a command is run that references a file path. The same error is thrown when an https url is used in place of the file path. Why is this happening, and how can this problem be remediated so that the file can be read and used from the command line? Here is the console output: [kubernetes-host@ip-of-ec2-instance ~]$ sudo kubectl apply -f rbac-kdd.yaml | tee kubeadm-rbac-kdd.out unable to recognize "rbac-kdd.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused unable to recognize "rbac-kdd.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused [kubernetes-host@ip-of-ec2-instance ~]$ The relative path of the file is correct. The command is trying to apply calico to a Kubernetes cluster created by kubeadm, if that helps. But I am thinking this is a basic linux question. SELinux has been disabled on this Amazon Linux 2 EC2 instance. Would appreciate some pointers on this as I try to identify possible causes. PROBLEM ISOLATED: Also, the contents of .kube/config indicate port 6443 as follows: [kubernetes-host@ip-of-ec2-instance ~]$ cat /home/kubernetes-host/.kube/config apiVersion: v1 clusters: - cluster: certificate-authority-data: <encrypted-certificate-authority-data-here> server: https://ip-of-ec2-instance:6443 name: kubernetes contexts: - context: cluster: kubernetes user: kubernetes-admin name: kubernetes-admin@kubernetes current-context: kubernetes-admin@kubernetes kind: Config preferences: {} users: - name: kubernetes-admin user: client-certificate-data: <encrypted-client-certificate-data-here> client-key-data: <encrypted-client-key-data-here> [kubernetes-host@ip-of-ec2-instance ~]$ The problem seems to be that the kubectl apply command is using port 8080 while the Kubernetes apiserver is using port 6443. 
How can this mismatch be remediated so that the kubectl apply command uses port 6443? Further, kubectl is able to see that 6443 is the correct port, and curl can reach the correct 6443 port, as follows: [kubernetes-host@ip-of-ec2-instance ~]$ kubectl cluster-info Kubernetes master is running at https://ip-of-ec2-instance:6443 KubeDNS is running at https://ip-of-ec2-instance:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. [kubernetes-host@ip-of-ec2-instance ~]$ curl https://ip-of-ec2-instance:6443 curl: (60) SSL certificate problem: unable to get local issuer certificate More details here: https://curl.haxx.se/docs/sslcerts.html curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above. [kubernetes-host@ip-of-ec2-instance ~]$ [kubernetes-host@ip-of-ec2-instance ~]$ curl https://127.0.0.1:6443 curl: (60) SSL certificate problem: unable to get local issuer certificate More details here: https://curl.haxx.se/docs/sslcerts.html curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above. [kubernetes-host@ip-of-ec2-instance ~]$ Why is kubectl apply NOT able to map to port 6443, when kubectl cluster-info is able to map to the correct port?
This looks like you can't connect to the Kubernetes API server. This could be for many reasons:
The Kubernetes API server is not running.
The API server is not listening on TCP/8080.
The API server is not listening on the loopback address of 127.0.0.1.
The API server is not listening with HTTP (but with HTTPS).
A local firewall (such as iptables) is blocking the connection.
TCP Wrappers is blocking the connection.
A mandatory access control system such as SELinux is blocking the connection, but you said this was disabled. And if you have AppArmor installed on Amazon Linux, then I don't know if anyone can help you. :)
...and this list can go on to many more esoteric reasons why this connection won't happen.
Some remediation/troubleshooting steps:
Make sure the k8s API server is running (I don't know how you've installed it, so I can't suggest how you'd check; probably with systemctl status or docker ps).
Run ss -ln and check for something listening on 127.0.0.1:8080 or *:8080.
See if you can connect to the socket with something else: curl -k https://127.0.0.1:8080 to check HTTPS, or curl http://127.0.0.1:8080 for HTTP.
If your API server is running in a docker container, make sure it's listening on 8080 on the host; use docker ps or docker inspect to see the port forwarding.
Check the firewall with iptables -S; this is a long shot, as you won't often see rules blocking packets going to localhost.
Check /etc/hosts.deny for anything that might stop you (again, a long shot, because this doesn't usually get configured by accident).
Edit
After seeing some more of your troubleshooting data, I noticed that you're running kubectl as root, and your kubeconfig is in a user directory. You should run kubectl as the user "kubernetes-host" by just dropping the sudo at the beginning of your command. The kubeconfig file will direct kubectl to the right endpoint (address and port), but running as root, kubectl will not check in /home/kubernetes-host/.kube/config.
So try:
kubectl apply -f rbac-kdd.yaml
If you have to run as root for some reason, you should:
1) Question the life choices that led you here.
2) Run sudo kubectl apply --kubeconfig=/home/kubernetes-host/.kube/config -f rbac-kdd.yaml to explicitly use the config in the kubernetes-host user's home directory.
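An alternative to passing --kubeconfig on every invocation is exporting KUBECONFIG once (path taken from the question); this is just a sketch, and sudo -E is only relevant if root really is unavoidable:

```shell
# Point kubectl at the user's config instead of root's (path from the question)
export KUBECONFIG=/home/kubernetes-host/.kube/config
echo "$KUBECONFIG"
# then: kubectl apply -f rbac-kdd.yaml
# or, if root is unavoidable, -E makes sudo keep the variable:
#   sudo -E kubectl apply -f rbac-kdd.yaml
```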
unable to recognize file. connection refused
1,602,273,208,000
I would like to bind my service on all nodes to ports 80 and 443, so that I will be redirected via a DNS name (kubernetes) to any node that redirects me directly to the service via HTTP/S and then to the deployment (nginx). However, I don't know exactly how this works, because the range of the NodePorts only goes from 30000 to 32xxx. Here is my setup DNS-Name IPv4 k8s-master 172.25.35.47 k8s-node-01 172.25.36.47 k8s-node-02 172.25.36.8 kubernetes 172.25.36.47 kubernetes 172.25.36.8 My yaml-file apiVersion: v1 kind: Service metadata: name: proxy spec: ports: - name: http nodePort: 80 port: 80 protocol: TCP targetPort: 80 - name: https nodePort: 443 port: 443 protocol: TCP targetPort: 443 selector: name: proxy type: NodePort --- apiVersion: apps/v1 kind: Deployment metadata: name: proxy labels: name: proxy spec: selector: matchLabels: name: proxy replicas: 1 template: metadata: labels: name: proxy spec: containers: - name: nginx image: nginx:latest ports: - name: http containerPort: 80 protocol: TCP - name: https containerPort: 443 protocol: TCP Which type of service provide me a function to expose this ports or how I can realize my mental setup? Volker
You have three options:
simple port forwarding
externalIPs
keepalived
Simple port forwarding
Run the following on all servers:
sudo iptables -t nat -I PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports <nodeport>
sudo iptables -t nat -I OUTPUT -p tcp -o lo --dport 443 -j REDIRECT --to-ports <nodeport>
Replace <nodeport> with the port you chose for the NodePort. This requires you to run a command on all machines, and is a bit hacky. A better solution would be:
externalIPs
link to docs
This allows you to bind any port on a specific node, which will then be routed through the cluster. This does introduce a single point of failure, obviously, which can be fixed with:
keepalived
keepalived is a very simple piece of software. It creates a virtual IP address, which is moved to point to a different node when the master fails. It effectively creates an alias IP address for the master keepalived server. A good start would be keepalived-vip, which automatically sets up keepalived for services you give it.
Conclusion
I personally use keepalived-vip for this, as it fits my network model much better, but if your clients can access any of your servers, then simple port forwarding is the simplest way to go about it.
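A hedged sketch of the externalIPs variant for the Service from the question (the IP must be one of your node addresses; 172.25.36.47 is taken from the question's table):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: proxy
spec:
  type: NodePort
  selector:
    name: proxy
  externalIPs:
    - 172.25.36.47   # node IP from the question; traffic to this IP:443 reaches the service
  ports:
    - name: https
      port: 443
      targetPort: 443
```

With externalIPs, kube-proxy answers on the service's real port (443 here) on that address, so the 30000-32767 NodePort restriction does not apply to it.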
kubernetes export ports
1,602,273,208,000
Here is the shell script I want to execute. for example, I have 100 kubernetes pods with 100 different vehicles. I want a list to be generated of cars and choose 10 particular cars from the list. My goal is to deploy a new image into the selected car's pod. I am able to generate a list of cars through the array and grep. #!/bin/bash array=$(kubectl get pods -n ns -o=name | grep "cars" ) declare array INDEX=1 for i in ${array[@]}; do echo ${INDEX} $i INDEX=$(expr $INDEX + 1) ## this will make a numbered list. done sample output 1 cars-bmw 2 cars-audi 3 cars-volkswagen 4 cars-jaguar 5 cars-ferrari 6 cars-toyota 7 cars-honda How to achieve selection of cars through either name or just selecting the number of the car? Please suggest if there is another way to do this. or suggest a better practice for the shell script. while [[ read -p {array[@]} ]]; do echo "choosing car " {array[@]} kubectl edit pod -n ns $podname -o yaml done exit 1
You could use select:

select car in "${array[@]}" exit; do
    case $car in
        exit) break;;
        *)
            printf 'Choosing car: %s\n' "$car"
            kubectl edit pod -n ns "$car" -o yaml
            ;;
    esac
done

Of course this would depend on you properly setting the array first by using the array=() syntax:

array=($(kubectl get pods -n ns -o=name | grep "cars"))
How to Get user input from the stdout of for loop index
1,602,273,208,000
I have just come across an article describing the process of installing the containerd runtime, and I'm a little dubious about the command mentioned; maybe it's a typo, but I want to get clarity on it. The command is as follows curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.lis Now, as far as I know, apt-key add - is used to add the key, with the contents read from the piped standard input (which is what the - is for), but what about the echo after it? If this is a separate command, shouldn't it be separated by || or a semicolon ;? I know the command is fetching a key from the repo and then updating the apt sources list, but I'm confused about the syntax of the command.
That's a typo. The correct commands are: curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list The first command downloads a gpg signature from https://packages.cloud.google.com/apt/doc/apt-key.gpg which is then piped to sudo apt-key add - (the - means "read from standard input") which adds the key to the list of known apt keys. Then, the second command prints out the line describing the relevant repository and this is piped to sudo tee as a way of writing to /etc/apt/sources.list.d/kubernetes.list. Taken together, the two commands add a new, signed repository to your system.
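As a side note on the asker's || suggestion: independent commands are separated by a newline, ;, or && (|| would run the second command only if the first failed). A minimal illustration, with printf standing in for the real commands:

```shell
printf 'first\n' ; printf 'second\n'    # ';' runs both unconditionally
true  && printf 'third\n'               # '&&' runs the second only on success
false || printf 'fallback\n'            # '||' runs the second only on failure
```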
multiple commands in linux shell
1,602,273,208,000
I am European export BOOTSTRAP_TIMEZONE="UTC +2" I can not check, something is wrong with Github right now. If that is not right what should I change?
It's quite common for servers to be set to UTC in Europe, but not wholly ubiquitous. Only you can decide if using something other than UTC is a good idea. Timezones are a rabbit hole of utter madness where the more you know, the more you realise you have to learn. Unless you have a very specific reason why you know you MUST set your timezone to a fixed UTC offset, you should always use timezones from the IANA timezone database. You can commonly look up the named timezones from Wikipedia here or use tzselect. In the case of "Berlin", the timezone you are looking for is: Europe/Berlin. Unlike almost every other solution, the IANA database is kept up to date with political changes etc. It's indexed by geographic location [by city], which works almost everywhere in the world. There are always exceptions. So if the local government in Berlin decides to change timezone (yes, this type of thing happens), then a software patch will most likely fix the difference. As long as you keep your software up to date, and as long as the local legislature gives enough warning, this should mean your server is always in the right time zone.
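Assuming the variable from the question is consumed as an ordinary timezone setting (its exact semantics belong to whatever bootstrap tooling reads it, so treat the first line as an assumption), a named IANA zone looks like this:

```shell
export BOOTSTRAP_TIMEZONE="Europe/Berlin"
# Standard tools accept the same name via TZ; on a system with tzdata
# this prints CET or CEST depending on the date
TZ="Europe/Berlin" date +%Z
```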
How time zone variable for Kubernetes should look like?
1,602,273,208,000
I am trying to install a package using apt-get on Debian. However, for some reason, it also wants to upgrade ca-certificates. Can I force apt-get install to skip upgrading ca-certificates, as the upgrade is failing due to a read-only file system? I am looking for a workaround, as I don't want to mess with the certificates of the host (it's a kubernetes api-server host).
The upgrade is failing because the file system is read-only, which means you won’t be able to install procps either. The general answer to prevent upgrades is to put a hold on the package: apt-mark hold ca-certificates but in your case you might need to rebuild the container image instead.
Can I force apt-get install to skip upgrading ca-certificates as the upgrade is failing on Debian GNU/Linux 9?
1,712,303,986,000
I want to know what resources: {} in pod.spec.containers.resources in Kubernetes means?
You can get documentation using kubectl for the fields of Kubernetes resources. For example: $ kubectl explain pod.spec.containers.resources KIND: Pod VERSION: v1 FIELD: resources <ResourceRequirements> DESCRIPTION: Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ ResourceRequirements describes the compute resource requirements. FIELDS: claims <[]ResourceClaim> Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. limits <map[string]Quantity> Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests <map[string]Quantity> Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ If you need more information about any of the subfields, just append that field to the command $ kubectl explain pod.spec.containers.resources.claims $ kubectl explain pod.spec.containers.resources.limits $ kubectl explain pod.spec.containers.resources.requests ... In your case, the {} means that none of the fields are set.
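For contrast with the empty {}, a filled-in version of that field might look like this (the values are arbitrary examples, not recommendations):

```yaml
resources:
  requests:          # minimum the scheduler reserves for the container
    cpu: "250m"
    memory: "64Mi"
  limits:            # hard caps enforced at runtime
    cpu: "500m"
    memory: "128Mi"
```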
what does {} in containers section in kubernetes mean?
1,712,303,986,000
I'm running Kubernetes Jobs in which I set limits and requests both to the same number of CPUs. In some of these jobs I'm occasionally seeing OutOfcpu errors When I kubectl describe pods PODNAME I see the following message: Pod Node didn't have enough resource: cpu, requested: 8000, used: 11453, capacity: 16000 That pretty clearly indicates why the OutOfcpu occurred. But my Limits.cpu == Requests.cpu == 8. Limits: cpu: 8 ephemeral-storage: 500Gi memory: 10Gi Requests: cpu: 8 ephemeral-storage: 300Gi memory: 2Gi So as far as I understand I should have been throttled at 8 CPUs and fenced off from the node running out of CPU resources for the pod. I've only noticed this recently, our Kubernetes version is 1.22.5 as of a reasonably recent upgrade.
There is an open issue with a long thread about this bug. It was introduced in k8s v1.22 and seems to be a race condition that can occur when pods get scheduled on a node where another pod is terminated. The terminated pod isn't seen by the scheduler anymore, but still uses resources on the node (cpu, memory). https://github.com/kubernetes/kubernetes/issues/106884
Kubernetes OutOfcpu error when Requests.cpu == Limits.cpu
1,712,303,986,000
When I want to use kubeadm kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:03:28Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"} to upload a config file like this: kubeadm config upload from-file --kubeconfig kubeadm-config.yaml it give me error: [root@k8smasterone config]# kubeadm config upload from-file --kubeconfig kubeadm-config.yaml invalid subcommand: "upload" To see the stack trace of this error execute with --v=5 or higher I have read the kubeadm manual and did not find any replace with upload, what should I do to upload the config file?
In newer versions I do it like this: kubeadm init phase upload-config kubeadm --config kubeadm-config.yaml More info: https://github.com/kubernetes/kubeadm/issues/988
invalid subcommand: "upload" when using kubeadm to upload config
1,712,303,986,000
I am following a lab on Kubernetes and Mongodb but all the Pods are always in 0/1 state what does it mean? how do i make them READY 1/1 [root@master-node ~]# kubectl get pod NAME READY STATUS RESTARTS AGE mongo-express-78fcf796b8-wzgvx 0/1 Pending 0 3m41s mongodb-deployment-8f6675bc5-qxj4g 0/1 Pending 0 160m nginx-deployment-64bd7b69c-wp79g 0/1 Pending 0 4h44m kubectl get pod nginx-deployment-64bd7b69c-wp79g -o yaml [root@master-node ~]# kubectl get pod nginx-deployment-64bd7b69c-wp79g -o yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: "2021-07-27T17:35:57Z" generateName: nginx-deployment-64bd7b69c- labels: app: nginx pod-template-hash: 64bd7b69c name: nginx-deployment-64bd7b69c-wp79g namespace: default ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: nginx-deployment-64bd7b69c uid: 5b1250dd-a209-44be-9efb-7cf5a63a02a3 resourceVersion: "15912" uid: d71047b4-d0e6-4d25-bb28-c410639a82ad spec: containers: - image: nginx:1.14.2 imagePullPolicy: IfNotPresent name: nginx ports: - containerPort: 8080 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-2zr6k readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: kube-api-access-2zr6k projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: 
apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2021-07-27T17:35:57Z" message: '0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn''t tolerate.' reason: Unschedulable status: "False" type: PodScheduled phase: Pending qosClass: BestEffort kubectl describe pod nginx-deployment-64bd7b69c-wp79g
[root@master-node ~]# kubectl describe pod nginx-deployment-64bd7b69c-wp79g Name: nginx-deployment-64bd7b69c-wp79g Namespace: default Priority: 0 Node: <none> Labels: app=nginx pod-template-hash=64bd7b69c Annotations: <none> Status: Pending IP: IPs: <none> Controlled By: ReplicaSet/nginx-deployment-64bd7b69c Containers: nginx: Image: nginx:1.14.2 Port: 8080/TCP Host Port: 0/TCP Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2zr6k (ro) Conditions: Type Status PodScheduled False Volumes: kube-api-access-2zr6k: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 2m53s (x485 over 8h) default-scheduler 0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
You seem to have only one server for the K8s cluster. In a typical K8s cluster, the master, or the control plane, is usually kept separate from the servers running workloads. To this effect, it has a 'taint', which is basically a property that repels pods. With the taint in place, pods cannot be scheduled on the master. You can see this information in the 'status.conditions.message' element in the kubectl get pod output: message: '0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master:}, that the pod didn't tolerate.' Pods can define tolerations, which allow them to be scheduled to nodes that have the corresponding taints. That mechanism is explained in detail within the docs: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ The toleration config should look something like this (untested): tolerations: - key: "node-role.kubernetes.io/master" operator: "Exists" effect: "NoSchedule" In your case, it may be easier to use approach mentioned in this SO question. Specify an explicit nodeName: master element in your pod definitions. This should skip the taint mechanism and allow your pods to be scheduled. Another option is to remove the taint from the master node, as discussed here: https://stackoverflow.com/q/43147941
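A hedged sketch of the nodeName workaround mentioned above; the value must match the node's name as shown by kubectl get nodes (master-node here is guessed from the question's shell prompt):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pinned
spec:
  nodeName: master-node   # assigns the pod directly, skipping the scheduler and its taint check
  containers:
    - name: nginx
      image: nginx:1.14.2
```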
kubectl get pod READY 0/1 state
1,712,303,986,000
I use Microk8s kubectl. I tried to record events inside a pod (named e.g., mypod) using sysdig by setting a filter k8s.pod.name=mypod (specifically through $sysdig k8s.pod.name=mypod). But sysdig does not show any logged event. For example, it doesn't show any system call when I attempt to $cat /etc/passwd of the container inside the pod. p.s. I can get the events inside docker containers (not in pods) using sysdig while setting the filter container.name=mycontainer. I would appreciate any possible explanation/solution about this.
Hello, you need to connect to the k8s API server; here is a description from the man page: -k, --k8s-api Enable Kubernetes support by connecting to the API server specified as argument. E.g. "http://admin:[email protected]:8080". The API server can also be specified via the environment variable SYSDIG_K8S_API. Once you connect to the k8s API, that filter will work.
Sysdig doesn’t record events inside Kubernetes pods
1,712,303,986,000
Kubernetes uses the TERM signal when scaling down a ReplicaSet. This is dangerous for my application. I use the Tomcat application server, which has its own shutdown mechanism for terminating the server. When I send operating system (Linux) signals (9, 15, ...) to the JVM, the JVM does not know what to do with the threads inside it, so threads are killed before they complete successfully. Request -----> BackEnd API -----> [Process] -----> Response ^ Send TERM (Signal) while thread doing Process Is there any way to change the Kubernetes container shutdown mechanism? I want Kubernetes to use: catalina.sh stop
Kubernetes lifecycle hooks are a good solution for problems like this: Container Lifecycle Hooks
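A sketch of a preStop hook that runs Tomcat's own shutdown script before Kubernetes sends TERM (the catalina.sh path is an assumption; use your image's actual CATALINA_HOME, and size terminationGracePeriodSeconds to how long a clean shutdown takes):

```yaml
spec:
  terminationGracePeriodSeconds: 60
  containers:
    - name: tomcat
      image: tomcat:9
      lifecycle:
        preStop:
          exec:
            # assumed install path inside the image
            command: ["/usr/local/tomcat/bin/catalina.sh", "stop"]
```

The preStop handler runs and completes before the container receives SIGTERM, giving the application server a chance to drain its threads first.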
kubernetes replicaSet shutdown policy
1,712,303,986,000
As you know current CentOS Atomic Host Docker version is 1.13.1 ... Do you recommend using this version in production environment? If not, should i update it or find another Docker Linux host? Note: I plan to build a K8s PaaS.
Atomic is reaching its End Of Life. Red Hat would recommend you consider using Red Hat CoreOS (based on the former CoreOS), or in your case, Fedora CoreOS. Both of these may be used when deploying OpenShift. For vanilla Kubernetes, it might be complicated and little documented, but not impossible in theory. Meanwhile, note that tools such as Kubespray would set up the proper repositories when installing your container runtime.
RHEL/CentOS Atomic Host Docker version is old
1,712,303,986,000
I have a containerized unimrcp server and it is running as kubernetes pod. When I go inside container and do ps -ef its output is like this: [root@unimrcp-0 fd]# ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 99 13:13 ? 01:07:38 ./unimrcpserver root 75 1 0 13:13 ? 00:00:00 [arping] <defunct> root 76 1 0 13:13 ? 00:00:00 [arping] <defunct> root 154 0 0 13:14 pts/0 00:00:00 /bin/bash root 209 154 0 14:21 pts/0 00:00:00 ps -ef Also if I do cat /proc/[pid]/fd/1 then I am seeing some corrupted output like this: unknown command: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ Why there is no controlling terminal attached with the process. I have disabled the Unimrcp from logging to stdout. Also the CPU utilization is 99%. Can someone please help to solve this? 
This is the entrypoint of the container #!/bin/sh source /ip-conf.sh; set_control_media_network "UNIMRCP" CONTROL_IP=$(get_control_ipv4) MEDIA_IP=$(get_media_ipv4) LOG_LEVEL=$(echo $LOG_LEVEL | tr -s " " | xargs) LOG_OUTPUT=$(echo $LOG_OUTPUT | tr -s " " | xargs) LOG_HEADERS=$(echo $LOG_HEADERS | tr -s " " | xargs) sed -i 's+<priority>.*</priority>+''<priority>'$LOG_LEVEL'</priority>+g' /usr/local/unimrcp/conf/logger.xml sed -i 's+<output>.*</output>+''<output>'$LOG_OUTPUT'</output>+g' /usr/local/unimrcp/conf/logger.xml sed -i 's+<headers>.*</headers>+''<headers>'$LOG_HEADERS'</headers>+g' /usr/local/unimrcp/conf/logger.xml sed -i 's+<!-- <ip>.*</ip> -->+''<ip>'$CONTROL_IP'</ip>+g' /usr/local/unimrcp/conf/unimrcpserver.xml sed -i 's+<!-- <rtp-ip>.*</rtp-ip> -->+''<rtp-ip>'$MEDIA_IP'</rtp-ip>+g' /usr/local/unimrcp/conf/unimrcpserver.xml cd /usr/local/unimrcp/bin/ exec ./unimrcpserver This is the output of the ls -l at the /proc/1/fd/ inside the unimrcp container total 0 lrwx------ 1 root root 64 Jan 2 12:04 0 -> /dev/null l-wx------ 1 root root 64 Jan 2 12:04 1 -> pipe:[17601930] l-wx------ 1 root root 64 Jan 2 12:04 10 -> pipe:[17605635] lrwx------ 1 root root 64 Jan 2 12:04 11 -> socket:[17605636] lrwx------ 1 root root 64 Jan 2 12:04 12 -> anon_inode:[eventpoll] lrwx------ 1 root root 64 Jan 2 12:04 13 -> anon_inode:[eventfd] lrwx------ 1 root root 64 Jan 2 12:04 14 -> anon_inode:[eventpoll] lrwx------ 1 root root 64 Jan 2 12:04 15 -> anon_inode:[eventfd] lrwx------ 1 root root 64 Jan 2 12:04 16 -> anon_inode:[eventpoll] lrwx------ 1 root root 64 Jan 2 12:04 17 -> socket:[17602110] lrwx------ 1 root root 64 Jan 2 12:04 18 -> socket:[17602111] lrwx------ 1 root root 64 Jan 2 12:04 19 -> anon_inode:[eventpoll] l-wx------ 1 root root 64 Jan 2 12:04 2 -> pipe:[17601931] lrwx------ 1 root root 64 Jan 2 12:04 20 -> socket:[17603083] lrwx------ 1 root root 64 Jan 2 12:04 21 -> socket:[17603084] lr-x------ 1 root root 64 Jan 2 12:04 22 -> /dev/urandom lrwx------ 1 root 
root 64 Jan 2 12:04 23 -> socket:[17603087] lrwx------ 1 root root 64 Jan 2 12:04 24 -> socket:[17603088] l-wx------ 1 root root 64 Jan 2 12:04 3 -> /usr/local/unimrcp/log/unimrcpserver_2020.01.02_12.04.08.988860.log lrwx------ 1 root root 64 Jan 2 12:04 4 -> anon_inode:[eventpoll] lr-x------ 1 root root 64 Jan 2 12:04 5 -> pipe:[17605633] l-wx------ 1 root root 64 Jan 2 12:04 6 -> pipe:[17605633] lrwx------ 1 root root 64 Jan 2 12:04 7 -> socket:[17605634] lrwx------ 1 root root 64 Jan 2 12:04 8 -> anon_inode:[eventpoll] lr-x------ 1 root root 64 Jan 2 12:04 9 -> pipe:[17605635]
The issue was the process not having a TTY attached to it. A TTY is the device a process uses for input and output. Since there was no TTY, the Unimrcp process used its fd 1 (stdout) for one of its thread communications (fd 1 was attached to the process's pipe). Hence some junk characters were sent to stdout (I do not know exactly why). After attaching a TTY to the process, the process's fd 1 pointed to /dev/pts/0, which is the pseudo-terminal. Now I am able to see the logs in readable format. Added these lines to the pod yaml file, which solved the issue:

containers:
- name: unimrcp
  tty: true
  stdin: true
Linux process is sending some junk characters to the STDOUT. No Controlling terminal attached to it
1,712,303,986,000
After deploying all kubernetes ressources I wanna open port 443. I added it to my whitelist table but it is still closed. Same already happened to me for port 80. After flushing all tables, deleting all kubernetes ressources and setup the firewall from scratch (including whitelisted port 80) before deploying kubernetes again port 80 was finally open. Now I prefer understanding why I can not open port 443 instead of doing all that again. I found out that there is the table KUBE-FIREWALL (see below), which blocks everything by default. And this is my main question: Does the rules of KUBE-FIREWALL have a higher priority than my table TCP? And if, how I can change the priority? INPUT Chain INPUT (policy DROP) target prot opt source destination cali-INPUT all -- anywhere anywhere /* cali:Cz_u1IQiXIMmKD4c */ f2b-sshd tcp -- anywhere anywhere multiport dports ssh KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */ KUBE-FIREWALL all -- anywhere anywhere ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED ACCEPT all -- anywhere anywhere DROP all -- anywhere anywhere ctstate INVALID ACCEPT icmp -- anywhere anywhere icmp echo-request ctstate NEW UDP udp -- anywhere anywhere ctstate NEW TCP tcp -- anywhere anywhere tcp flags:FIN,SYN,RST,ACK/SYN ctstate NEW REJECT udp -- anywhere anywhere reject-with icmp-port-unreachable REJECT tcp -- anywhere anywhere reject-with tcp-reset REJECT all -- anywhere anywhere reject-with icmp-proto-unreachable cali-INPUT Chain cali-INPUT (1 references) target prot opt source destination ACCEPT all -- anywhere anywhere /* cali:msRIDfJRWnYwzW4g */ mark match 0x10000/0x10000 cali-wl-to-host all -- anywhere anywhere [goto] /* cali:y4fKWmWkTnYGshVX */ MARK all -- anywhere anywhere /* cali:JnMb-hdLugWL4jEZ */ MARK and 0xfff0ffff cali-from-host-endpoint all -- anywhere anywhere /* cali:NPKZwKxJ-5imzORj */ ACCEPT all -- anywhere 
anywhere /* cali:aes7S4xZI-7Jyw63 */ /* Host endpoint policy accepted packet. */ mark match 0x10000/0x10000 KUBE-FIREWALL claus@vmd33301:~$ sudo iptables -L KUBE-FIREWALL Chain KUBE-FIREWALL (2 references) target prot opt source destination DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000 TCP Chain TCP (1 references) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:ssh ACCEPT tcp -- anywhere anywhere tcp dpt:http ACCEPT tcp -- anywhere anywhere tcp dpt:https
Edit 2: The port was closed because nothing was listening to it :)

Edit 1: The list order is important, but KUBE-FIREWALL only drops marked packets — I missed the mark match 0x8000/0x8000 at the end of the rule. Therefore it should work. My guess is that one of the cali rules (or fail2ban?) claims port 443. There is no way to know without the full iptables output.

--- Original answer below ---

Yes, TCP has a lower priority because it is lower in the list. Not only is the KUBE-FIREWALL chain evaluated before your TCP chain, it ends in a rule that DROPs all remaining traffic. Your TCP rule is therefore never evaluated. You can insert your TCP chain entrypoint above the KUBE-FIREWALL chain using iptables -I INPUT ..., or insert it above a specific line number using iptables -I INPUT 2 ... (to insert above line 2). You can see the line numbers by adding --line-numbers to your iptables command (iptables -nvL --line-numbers).
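To illustrate the ordering point from the original answer — a generic sketch in iptables-save notation, not the asker's actual ruleset: rules within a chain are evaluated top to bottom, so for the whitelist to get a chance before Kubernetes' rules, the jump to the TCP chain has to precede the jump to KUBE-FIREWALL.

```text
*filter
:INPUT DROP [0:0]
# evaluated first: new TCP connections get a chance to hit the whitelist chain
-A INPUT -p tcp -m conntrack --ctstate NEW -j TCP
# evaluated second: kubernetes' chain (which drops packets marked 0x8000/0x8000)
-A INPUT -j KUBE-FIREWALL
COMMIT
```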
iptables priority
1,712,303,986,000
I have two OpenSUSE Micro Leap 5.5 VMs in the same network called m0.k8b.intranet.domain and m1.k8b.intranet.domain. Both are clean preconfigured images from https://get.opensuse.org/leapmicro/5.5/. Both virtual machines have containerd, kubeadm and kubelet installed, but kubectl only on m0, as in https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ and https://blog.kubesimplify.com/how-to-install-a-kubernetes-cluster-with-kubeadm-containerd-and-cilium-a-hands-on-guide. In general https://blog.kubesimplify.com/how-to-install-a-kubernetes-cluster-with-kubeadm-containerd-and-cilium-a-hands-on-guide was followed, with m0 being the master and m1 being the slave, but when it was time for m0 to join the m1 cluster I got the following error: error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "a4rvza"
After checking https://stackoverflow.com/questions/68387634/the-cluster-info-configmap-does-not-yet-contain-a-jws-signature-for-token-id-cj and https://serverfault.com/questions/728727/kubernetes-stuck-on-containercreating, it turns out that the creation of the ...-controller-manager-... pod is failing because it is trying to create a file at /usr/libexec, which is read-only due to the immutable nature of the distro.
Kubernetes cannot join server cluster
1,712,303,986,000
I have a K8S deployment loading a volume as raw block (through volumeDevices):

apiVersion: apps/v1
kind: Deployment
...
spec:
  replicas: 1
  ...
  containers:
  - name: somepod
    image: ubuntu:latest
    command: ["sh","-c", "--"]
    args: ["mount /dev/block /data && while true; do sleep 1000000; done"]
    securityContext:
      privileged: true
    volumeDevices:
    - devicePath: /dev/block
      name: vol
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: vol

This works as expected. What I want to achieve: I want to mount /dev/block in the container without having to grant it privileged access (ergo, no root). I have complete control over the base image, whose default user is 1001, added to a nonroot group. When k8s adds /dev/block to the container, from what I can tell it assigns it a random group, e.g. 993:

brw-rw----. 1 root 993 259, 16 Dec 15 09:00 block

From my understanding this is out of my control (e.g. I cannot tell k8s to mount it under a known group). Things I tried:

Formatting the filesystem as ext4, adding an /etc/fstab line: /dev/block /data ext4 user,uid=1001,auto 0 0
Adding securityContext: fsGroup: 1001
Formatting the filesystem as ntfs, adding an /etc/fstab line: /dev/block /data ntfs user,uid=1001,auto 0 0
Installing and using pmount in the container. Fails because my user is not part of the /dev/block group.
Using a postStart hook (useless, since it shares the same permissions as the main runtime).
Using a privileged initContainer to mount the volume from /dev/block to an emptyDir /data. From my understanding the initContainer and the container should share the emptyDir, but since the data lives on the mounted volume this doesn't work.

Things I've yet to try: the guestmount suggested here and here. A possible point of failure might be my possibly incorrect /etc/fstab settings, because whenever I try to mount as a user I still get permission issues on /dev/block regardless.
Why I'm using a block volume: I'm running EKS and I want to have the data in 'ReadWriteMany' shared across several pods in the same Availability Zone. I've looked into using an EFS volume instead of EBS in io2, but I have price/latency concerns. Related questions:

Use "mount -o" with a non-root user
How to allow non-superusers to mount any filesystem?
https://superuser.com/questions/519824/mounting-ext4-drive-with-specified-user-permission
What ended up being a solution for me is using an AWS EC2 feature (exposed by Karpenter) that allows provisioning a node with a copy of a given volume snapshot. My Karpenter EC2NodeClass looks something like:

apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: test
spec:
  amiFamily: AL2
  ...
  blockDeviceMappings:
  - deviceName: /dev/xvda
    ebs:
      deleteOnTermination: true
      volumeSize: 100Gi
      volumeType: gp2
  - deviceName: /dev/xvdc
    ebs:
      deleteOnTermination: true
      snapshotID: snap-ID-NUMBER
      volumeSize: 40Gi
      volumeType: gp2
  ...
  userData: |-
    #!/bin/bash
    set -x
    ...
    mkdir -p /home/ec2-user/data/
    mount -o defaults /dev/nvme2n1 /home/ec2-user/data/
    ...

There was a bit of trial and error here, but the main takeaways are:

AWS supplies a disk copied from the snapshotID that I provide; it gets added under /dev/xvdc.
This corresponds to /dev/nvme2n1 in my AMI's case, although that is not guaranteed for all AMIs/architectures.
I mount the filesystem in the EC2 instance's userData.

Additionally, to ensure that the data is updated, I run an aws s3 sync as part of the userData. The same behavior can be replicated without Karpenter, just with the AWS EC2 API.
Mount a filesystem in a K8S pod without privileged
1,712,303,986,000
I'm encountering an issue where a Kubernetes namespace is stuck in the 'Terminating' state. Running kubectl get ns cattle-monitoring-system -o json|jq produces error messages related to custom.metrics.k8s.io/v1beta1 and shows a DiscoveryFailed condition in the namespace status: E1213 08:02:39.979034 953148 memcache.go:287] couldn't get resource list for custom.metrics.k8s.io/v1beta1: the server is currently unable to handle the request … { "apiVersion": "v1", "kind": "Namespace", … "status": { "conditions": [ { "lastTransitionTime": "2023-12-12T14:53:40Z", "message": "Discovery failed for some groups, 1 failing: unable to retrieve the complete list of server APIs: custom.metrics.k8s.io/v1beta1: the server is currently unable to handle the request", "reason": "DiscoveryFailed", "status": "True", "type": "NamespaceDeletionDiscoveryFailure" }, … ] } } How can I resolve this issue to successfully delete the namespace?
The issue you're encountering is related to the Kubernetes API server's inability to discover the custom.metrics.k8s.io/v1beta1 API, which is blocking the namespace from terminating. Here are the steps to troubleshoot and resolve this: Check for Associated APIService Investigate if there's an APIService for custom.metrics.k8s.io/v1beta1: kubectl get apiservices.apiregistration.k8s.io -o json|jq '.items[]|select(.metadata.name=="v1beta1.custom.metrics.k8s.io")' Output should be something like { "apiVersion": "apiregistration.k8s.io/v1", "kind": "APIService", ... "status": { "conditions": [ { "message": "service/example-service in \"example-namespace\" is not present", "reason": "ServiceNotFound", "status": "False", "type": "Available" } ] } } Verify No Active Components Related to APIService: Issue the following command. Adjust example-service according to the output you generated earlier. for c in configmaps secrets deployments statefulsets pods services; do echo "Checking for $c related to example-service" kubectl get $c -A | grep example-service done If there's no output, it confirms there are no active components related to the APIService in the cluster. Delete the APIService kubectl delete apiservice v1beta1.custom.metrics.k8s.io After deleting the APIService, the namespace should proceed to terminate. Monitor the cluster for any unexpected issues.
Kubernetes Namespace Stuck in 'Terminating'
1,455,898,365,000
I have a remote machine running Debian 8 (Jessie) with lightdm installed. I want it to start in no-GUI mode, but I don't want to remove all X-related stuff to still be able to run it though SSH with the -X parameter. So how to disable X server autostart without removing it? I tried systemctl stop lightdm, it stops the lightdm, but it runs again after reboot. I also tried systemctl disable lightdm, but it basically does nothing. It renames lightdm's scripts in /etc/rc*.d directories, but it still starts after reboot, so what am I doing wrong? And I can't just update-rc.d lightdm stop, because it's deprecated and doesn't work.
The disable didn't work because the Debian /etc/X11/default-display-manager logic is winding up overriding it. In order to make text boot the default under systemd (regardless of which distro, really): systemctl set-default multi-user.target To change back to booting to the GUI, systemctl set-default graphical.target I confirmed those work on my Jessie VM and Slashback confirmed it on Stretch, too. PS: You don't actually need the X server on your machine to run X clients over ssh. The X server is only needed where the display (monitor) is.
How to disable X server autostart in Debian Jessie?
1,455,898,365,000
What are LightDM and GDM? In Linux I have heard of both, but I don't know what they are. Where are they used? Are they related to the display?
LightDM is an X display manager that aims to be lightweight, fast, extensible and multi-desktop. It uses various front-ends, so-called greeters, to draw login interfaces. Key features are: a well-defined greeter API allowing multiple GUIs; support for all display manager use cases, with plugins where appropriate; low code complexity; fast performance. LightDM offers at least the same functionality as GDM but has a simpler code base and does not load any GNOME libraries. LightDM is the default display manager for Ubuntu. LightDM configuration is governed by the configuration files in /etc/lightdm/lightdm.conf.d/. To add your own configuration, create a new file in that directory such as /etc/lightdm/lightdm.conf.d/my.conf. GDM (the GNOME Display Manager) is a display manager (a graphical login program) for the X11 and Wayland windowing systems. It is a highly configurable reimplementation of xdm, the X Display Manager. GDM allows you to log into your system with the X Window System running and supports running several different X sessions on your local machine at the same time. The X Window System by default uses the XDM display manager, and resolving XDM configuration issues typically involves editing a configuration file; GDM allows users to customize or troubleshoot settings without having to resort to a command line.
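To make the drop-in mechanism concrete — the file name and setting below are illustrative, not from the question — a minimal LightDM drop-in could look like:

```text
# /etc/lightdm/lightdm.conf.d/my.conf
# [Seat:*] is the section name on recent LightDM; older versions used [SeatDefaults]
[Seat:*]
greeter-hide-users=false
```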
What is LightDM and GDM
1,455,898,365,000
When I log in with LightDM on my laptop running Debian Unstable, it recently started to hang for around 2 minutes until journalctl shows the message kernel: random: crng init done. When I press random keys on my keyboard while it hangs, it logs in faster (around 10 seconds). Before I didn't have this issue, is there any way I can fix it? Edit: using linux-image-4.15.0-3-amd64 instead of linux-image-4.16.0-1-amd64 works, but I don't want to use an older kernel.
Looks like some component of your system blocks while trying to obtain random data from the kernel (i.e. reading from /dev/urandom or calling getrandom()) due to insufficient entropy (randomness) being available. I do not have a ready explanation for which component on your system actually blocks but, regardless of the root cause, as pointed out by Bigon in his answer, it appears to be a kernel bug introduced in 4.16:

This bug is introduced by the "crng_init > 0" to "crng_init > 1" change in this commit: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=43838a23a05fbd13e47d750d3dfd77001536dd33 This change inadvertently impacts urandom_read, causing the crng_init==1 state to be treated as uninitialized and causing urandom to block, despite this state existing specifically to support non-cryptographic needs at boot time: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/char/random.c#n1863 Reverting 43838a23a05f ("random: fix crng_ready() test") fixes the bug (tested with 4.16.5-1), but this may cause security concerns (CVE-2018-1108 is mentioned in 43838a23a05f). I am testing a more localised fix that should be more palatable to upstream. (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=897572#82)

Still, you may try using haveged or rng-tools to gather entropy faster.
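To see how starved the pool is while the system hangs, you can read the kernel's entropy estimate from procfs (a standard path on Linux; how low it has to be before reads block depends on the kernel version):

```shell
# Kernel's current entropy estimate, in bits; very low values at boot
# correlate with getrandom()/urandom blocking until "crng init done".
cat /proc/sys/kernel/random/entropy_avail
```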
When I log in, it hangs until crng init done
1,455,898,365,000
I have started running Jessie (Debian 8) with a LightDM/Xfce desktop on my HTPC after it ground to a near-halt on W7. One of the things that I cannot get past is having to type the password -- not a normal thing to do for watching TV. Following the instructions on the Debian Wiki I got as far as my login being automatically selected. But this still requires the password, and half-fixes like empty / trivial passwords are not allowed. Is it possible to go straight to the Xfce session without login/password?
I solved it using the Debian wiki page and this page on LinuxServe -- especially the comment! When I run /usr/sbin/lightdm --show-config I get two files: /etc/lightdm/lightdm.conf and /usr/share/lightdm/lightdm.conf.d/01_debian.conf. These I edited so that /usr/share/lightdm/lightdm.conf.d/01_debian.conf says:

greeter-session=lightdm-greeter
session-wrapper=/etc/X11/Xsession

and /etc/lightdm/lightdm.conf says:

autologin-user=username
autologin-user-timeout=0

The trick was that, as the comment at the end of the second link says, the autologin settings need to be in the [SeatDefaults] section of the file. There are two places where the lines appear, commented out, and I had uncommented the first place. It was a bit strange because in normal settings files for Debian, lines like these don't appear twice -- but I should have taken a better look!
auto login on xfce in jessie
1,455,898,365,000
Is there any way that I can lock the screen CLI-style? I'm trying to target as many desktop managers as possible (mostly LightDM, but GDM, KDM, SLiM, XScreensaver, etc. would be great too), but I can only dig things up for GDM and XScreensaver. For GDM, it would be: gnome-screensaver-command -l For XScreensaver: xscreensaver-command -lock Is there a similar command for KDM and LightDM?
I Googled/emailed around a bit and got these two commands. To lock the screen: xflock4 To activate user switching: gdmflexiserver For Lightdm, this file resides in a strange spot (at least on Arch Linux): /usr/lib/lightdm/lightdm/gdmflexiserver I merged these two into XFCE's logout button dialog, in case anyone's interested, so the patch is available here: https://aur.archlinux.org/packages.php?ID=52816
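A hedged convenience sketch (my own scaffolding, not part of the answer): a small wrapper that tries the commands above in order and runs the first locker found on $PATH. xdg-screensaver lock (from xdg-utils) is an extra candidate I added; which lockers exist depends on your desktop.

```shell
# Try common screen-lock commands in order; run the first one installed.
lock_screen() {
    for locker in "xflock4" \
                  "xdg-screensaver lock" \
                  "gnome-screensaver-command -l" \
                  "xscreensaver-command -lock"; do
        cmd=${locker%% *}   # the binary name is the first word
        if command -v "$cmd" >/dev/null 2>&1; then
            echo "locking with: $cmd"
            $locker         # intentional word splitting: command plus its args
            return 0
        fi
    done
    echo "no known screen locker found" >&2
    return 1
}
```

Bound to a hotkey, this covers the LightDM/Xfce case via xflock4 and falls back to the GDM and XScreensaver commands from the question.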
Locking the screen via CLI?
1,455,898,365,000
How do I change the default session I get when I log in? I'm on Debian jessie. I tried changing settings on gdm3, tried installing lightdm and following this but it's just not working. For more specificity, I'm trying to default to gnome-classic instead of gnome. I want to turn on the computer, log in as any arbitrary user, and see gnome-classic, not gnome3 (preferably I'd remove the gnome3 default session, if there's a way to do that).
On Debian, you should set the x-session-manager default command to choose your default session manager: # update-alternatives --config x-session-manager There, you can select the session manager you want GDM3 to use by default. If gnome-session-classic does not appear in the listing, try creating the link on your own. Something like the following: # update-alternatives --install /usr/bin/x-session-manager x-session-manager /usr/bin/gnome-session-classic 60 Then you should be able to select gnome-classic with update-alternatives --config x-session-manager. To customize the session managers listed by GDM, I think the only way is to go to /usr/share/xsessions and create/remove Desktop Entry files there. The format is easy to understand, but in case you need help, you can consult the Desktop Entry specification or the GNOME documentation about Desktop Entry files.
How do I change my default session?
1,455,898,365,000
That's a question I've seen several times for several Linux flavours, so let's try to be exhaustive. What is the method to execute a script/command/program before and after a user logs into their desktop session?
Introduction

How to run a program in the graphical environment before a user logs in depends on your display manager. A display manager is in charge of providing a login interface and setting up your graphical environment once you are logged in. The most important ones are the following:

GDM is the GNOME display manager.
LightDM is a cross-desktop display manager that can use various front-ends written in any toolkit.
LXDM is the LXDE display manager, but independent of the LXDE desktop environment.
SDDM is a modern display manager for X11 and Wayland aiming to be fast, simple and beautiful.

We will review how to set up the execution of commands when the display manager appears, before any user has logged in, and how to execute something once someone is finally logged in. If you don't know which one you're running, you can refer to this question: Is there a simple linux command that will tell me what my display manager is?

IMPORTANT

Before I start: you are going to edit files that, except where mentioned, execute commands as root. Do not remove existing content in those files unless you know what you're doing, and be careful what you put in them. A mistake could remove your ability to log in.

GDM

Be careful with GDM: it runs all scripts as root, an exit code other than 0 could limit your ability to log in, and GDM will wait for your script to finish, making it unresponsive for as long as your command runs. For a complete explanation, read the GDM documentation.

Before Login
If you need to run commands before a user has logged in, you can edit the file /etc/gdm3/Init/Default. This file is a shell script that is executed before the login screen is displayed to the user.

After Login
If you need to execute things once a user has logged in but before their session has been initialized, edit the file /etc/gdm3/PostLogin/Default. If you want to execute commands after session initialization (environment, graphical environment, login...), edit the file /etc/gdm3/PreSession/Default.

LightDM

I will talk about lightdm.conf and not about /etc/lightdm/lightdm.conf.d/*.conf; either works — what is important is to know the options you can use. Be careful with LightDM: you may already have several other scripts starting, so read your config file carefully before editing it. Also, the order in which you put those scripts might influence the way the session loads. LightDM works a bit differently from the others: you put options in the main configuration file to indicate scripts that will be executed. Edit the main LightDM conf file /etc/lightdm/lightdm.conf. You should add a first line with [Seat:*], as indicated here: Later versions of lightdm (15.10 onwards) have replaced the obsolete [SeatDefaults] with [Seat:*]

Before Login
Add a line greeter-setup-script=/my/path/to/script — this script is executed when LightDM shows the login interface.

After Login
Add a line session-setup-script=/script/to/start/script — this runs the script as root after a user successfully logs in.

LXDM

Before Login
If you want to execute commands before anyone has logged in, you can edit the shell script /etc/lxdm/LoginReady.

After Login
If you want to execute commands after someone has logged in, but as root, you can edit the shell script /etc/lxdm/PreLogin. And if you want to run commands as the logged-in user, you can edit the script /etc/lxdm/PostLogin.

SDDM

Before Login
Modify the script located at /usr/share/sddm/scripts/Xsetup. This script is executed before the login screen appears and is mostly used to adjust monitor displays in X11. I am not sure what the equivalent would be for Wayland.

After Login
SDDM will source the script located at /usr/share/sddm/scripts/Xsession, which in turn sources the user's dotfiles depending on their default shell. For bash, it sources ~/.bash_profile (among others), and for zsh, it sources ${ZDOTDIR:-$HOME}/.zprofile (among others).
You can take this opportunity to modify those files to also run any other command you need after logging in.
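As a tiny illustration of what one of the hook scripts above can do — the function wrapper and the LOGFILE override are my own scaffolding so the snippet can be exercised, not part of any display manager — the body of, say, /etc/gdm3/PostLogin/Default could append a login record to a log file:

```shell
# Sketch: record who logged in and when. In a real hook file the body
# would run directly (as root, typically with default log path).
log_session() {
    logfile=${LOGFILE:-/var/log/session-hook.log}
    printf '%s logged in at %s\n' "${USER:-unknown}" "$(date -u)" >> "$logfile" || return 1
    # Return 0: GDM can treat a non-zero status from its hooks as fatal.
    return 0
}
```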
How to execute command before user login on linux
1,455,898,365,000
My setup is: Debian testing (stretch), up to date LightDM with autologin enabled Awesome window manager bash, in ROXTerm or XTerm I don't seem to be able to set own environment variables and get it sourced at X session startup. Here's what I tried: using ~/.bash_profile worked on my previous OS, but I learned from this answer that it isn't sourced on X startup in Debian and it's derivatives I did mv .bash_profile .profile as suggested, but it didn't work too because, as I learned later from here, ~/.profile isn't sourced when display manager launches X session the answer from above question suggests use of ~/.xsessionrc. This also didn't work because, as I learned from here, it is sourced only by /etc/X11/Xsession which LightDM doesn't execute Arch Linux wiki claims that LightDM sources ~/.xprofile files, but that didn't work too. Trying advice from that last site, I made my ~/.xinitrc like this: export QT_STYLE_OVERRIDE=GTK+ [ -f ~/.xprofile ] && source ~/.xprofile ~/.screenlayout/default.sh awesome And my ~/.xprofile like this: [[ -f ~/.bashrc ]] && . ~/.bashrc source /etc/bash_completion.d/virtualenvwrapper export GDK_NATIVE_WINDOWS=1 export WORKON_HOME=$HOME/env/ Sadly, after logging in and starting X session, I see that none of these variables are set: red@localhost:~$ echo $QT_STYLE_OVERRIDE red@localhost:~$ echo $GDK_NATIVE_WINDOWS red@localhost:~$ echo $WORKON_HOME How do I set them up properly?
~/.xinitrc is only read when you start a GUI session with startx (or otherwise calling xinit) after logging in in text mode. So that won't help you. Whether ~/.bash_profile, ~/.profile, ~/.xprofile and ~/.xsessionrc are read when logging in with a display manager depends on how the display manager is configured and what session type you select when logging in. As far as I can tell, at least on Debian jessie (I haven't looked if this has changed since then): /usr/share/lightdm/lightdm.conf.d/01_debian.conf tells Lightdm to use /etc/X11/Xsession as the session startup script. /etc/X11/Xsession (via /etc/X11/Xsession.d/40x11-common_xsessionrc) loads $USERXSESSIONRC which is ~/.xsessionrc. So ~/.xsessionrc should work, at least on Debian jessie. On Debian, ~/.pam_environment should work to set environment variables for any login method. Alternatively, you can set environment variables and run programs from Awesome via ~/.config/awesome/rc.lua (call posix.setenv("QT_STYLE_OVERRIDE", "GTK+") to set an environment variable).
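Putting the two working options together — a sketch using the variables from the question; note the two files use different syntaxes:

```text
# ~/.xsessionrc  (plain shell, sourced by /etc/X11/Xsession on Debian)
export QT_STYLE_OVERRIDE=GTK+
export GDK_NATIVE_WINDOWS=1
export WORKON_HOME="$HOME/env/"

# ~/.pam_environment  (pam_env syntax, NOT shell; applies to any login method)
QT_STYLE_OVERRIDE DEFAULT=GTK+
GDK_NATIVE_WINDOWS DEFAULT=1
```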
Can't export environment variables on X session start
1,455,898,365,000
I've looked at the man pages and the configuration files but found nothing (but maybe I'm missing something). I see only many options for autologin. From a few googles I've read that the KDE version does work this way, but I would like to continue using the GTK version. update I still haven't found a solution and sometime I come back to this issue, looking around have seen some screenshots of lightdm-gtk-greeter that support themes and a popup menu for the user list. I'm surely missing something in the configuration or need to install some package. I'm using openbox not gnome, below a picture of my poor greeter. howto make it remember last user and focus to the password? update 2 I'm using a GNU/Debian/unstable (jessie/sid) here some details of the installed lightdm* $ dpkg -l "*lightdm*" | grep ^ii ii liblightdm-gobject-1-0 1.10.0-3 i386 simple display manager (gobject library) ii lightdm 1.10.0-3 i386 simple display manager ii lightdm-gtk-greeter 1.8.4-1 i386 simple display manager (GTK+ greeter) $ apt-cache show lightdm-gtk-greeter|grep Homepage Homepage: https://launchpad.net/lightdm-gtk-greeter and a debug log (launched from desktop eventually I can add the version from /var/log) $ /usr/sbin/lightdm --test-mode --debug [+0.00s] DEBUG: Logging to /home/alex/.cache/lightdm/log/lightdm.log [+0.00s] DEBUG: Starting Light Display Manager 1.10.0, UID=1000 PID=477 [+0.00s] DEBUG: Loading configuration dirs from /usr/share/lightdm/lightdm.conf.d [+0.00s] DEBUG: Loading configuration from /usr/share/lightdm/lightdm.conf.d/01_debian.conf [+0.00s] DEBUG: Loading configuration dirs from /usr/local/share/lightdm/lightdm.conf.d [+0.00s] DEBUG: Loading configuration dirs from /etc/xdg/lightdm/lightdm.conf.d [+0.00s] DEBUG: Loading configuration from /etc/lightdm/lightdm.conf [+0.00s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager [+0.00s] DEBUG: Running in user mode [+0.00s] DEBUG: Using Xephyr for X servers [+0.00s] DEBUG: Registered seat module xlocal [+0.00s] 
DEBUG: Registered seat module xremote [+0.00s] DEBUG: Registered seat module unity [+0.00s] DEBUG: Registered seat module surfaceflinger [+0.01s] DEBUG: Adding default seat [+0.01s] DEBUG: Seat: Starting [+0.01s] DEBUG: Seat: Creating greeter session [+0.01s] WARNING: Error getting user list from org.freedesktop.Accounts: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.Accounts was not provided by any .service files [+0.01s] DEBUG: Loading user config from /etc/lightdm/users.conf [+0.01s] DEBUG: User alex added [+0.01s] DEBUG: User trustno1 added [+0.01s] DEBUG: Seat: Creating display server of type x [+0.01s] DEBUG: Could not run plymouth --ping: Failed to execute child process "plymouth" (No such file or directory) [+0.01s] DEBUG: Seat: Starting local X display [+0.01s] DEBUG: DisplayServer x-1: Logging to /home/alex/.cache/lightdm/log/x-1.log [+0.01s] DEBUG: DisplayServer x-1: Writing X server authority to /home/alex/.cache/lightdm/run/root/:1 [+0.06s] DEBUG: DisplayServer x-1: Launching X Server [+0.08s] DEBUG: Launching process 482: /usr/bin/Xephyr :1 -seat seat0 -auth /home/alex/.cache/lightdm/run/root/:1 -nolisten tcp [+0.08s] DEBUG: DisplayServer x-1: Waiting for ready signal from X server :1 [+0.08s] DEBUG: Acquired bus name org.freedesktop.DisplayManager [+0.08s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0 /bin/rm: impossibile rimuovere "/var/lib/lightdm-data/lightdm": Permesso negato [+0.16s] DEBUG: Got signal 10 from process 482 [+0.16s] DEBUG: DisplayServer x-1: Got signal from X server :1 [+0.16s] DEBUG: DisplayServer x-1: Connecting to XServer :1 [+0.16s] DEBUG: Seat: Display server ready, starting session authentication [+0.16s] DEBUG: Session: Not setting XDG_VTNR [+0.16s] DEBUG: Session pid=487: Started with service 'lightdm-greeter', username 'alex' ** (process:487): WARNING **: Error getting user list from org.freedesktop.Accounts: 
GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.Accounts was not provided by any .service files [+0.18s] DEBUG: Session pid=487: Authentication complete with return value 0: Success [+0.18s] DEBUG: Seat: Session authenticated, running command [+0.18s] DEBUG: Session pid=487: Not setting XDG_VTNR [+0.18s] DEBUG: Session pid=487: Running command /usr/sbin/lightdm-gtk-greeter [+0.18s] DEBUG: Creating shared data directory /var/lib/lightdm-data/alex [+0.18s] DEBUG: Session pid=487: Logging to /home/alex/.cache/lightdm/log/x-1-greeter.log [+0.26s] DEBUG: Session pid=487: Greeter connected version=1.10.0 [+0.79s] DEBUG: Session pid=487: Greeter start authentication [+0.79s] DEBUG: Session: Not setting XDG_VTNR [+0.79s] DEBUG: Session pid=504: Started with service 'lightdm', username '(null)' [+0.79s] DEBUG: Session pid=504: Got 1 message(s) from PAM [+0.79s] DEBUG: Session pid=487: Prompt greeter with 1 message(s) [+3.23s] DEBUG: Got signal 2 from process 0 [+3.23s] DEBUG: Caught Interrupt signal, shutting down [+3.23s] DEBUG: Stopping display manager [+3.23s] DEBUG: Seat: Stopping [+3.23s] DEBUG: Seat: Stopping display server [+3.23s] DEBUG: Sending signal 15 to process 482 [+3.23s] DEBUG: Seat: Stopping session [+3.23s] DEBUG: Session pid=487: Sending SIGTERM [+3.23s] DEBUG: Seat: Stopping session [+3.23s] DEBUG: Session pid=504: Sending SIGTERM [+3.23s] DEBUG: Session pid=504: Terminated with signal 2 [+3.23s] DEBUG: Session: Failed during authentication [+3.23s] DEBUG: Seat: Session stopped [+3.23s] DEBUG: Session pid=487: Terminated with signal 2 [+3.23s] DEBUG: Seat: Session stopped [+3.23s] DEBUG: Process 482 exited with return value 0 [+3.23s] DEBUG: DisplayServer x-1: X server stopped [+3.23s] DEBUG: DisplayServer x-1: Removing X server authority /home/alex/.cache/lightdm/run/root/:1 [+3.23s] DEBUG: Seat: Display server stopped [+3.23s] DEBUG: Seat: Stopped [+3.23s] DEBUG: Display manager stopped [+3.23s] DEBUG: Stopping daemon 
[+3.23s] DEBUG: Exiting with return value 0 here the /etc config files $ grep -v ^# /etc/lightdm/*.conf /etc/lightdm/keys.conf:[keyring] /etc/lightdm/lightdm.conf:[LightDM] /etc/lightdm/lightdm.conf:[SeatDefaults] /etc/lightdm/lightdm.conf:greeter-session=lightdm-gtk-greeter /etc/lightdm/lightdm.conf:greeter-hide-users=true /etc/lightdm/lightdm.conf:greeter-allow-guest=false /etc/lightdm/lightdm.conf:[XDMCPServer] /etc/lightdm/lightdm.conf:[VNCServer] /etc/lightdm/lightdm-gtk-greeter.conf:[greeter] /etc/lightdm/lightdm-gtk-greeter.conf:background=/usr/share/images/desktop-base/login-background.svg /etc/lightdm/lightdm-gtk-greeter.conf:theme-name=Adwaita /etc/lightdm/lightdm-gtk-greeter.conf:xft-antialias=true /etc/lightdm/lightdm-gtk-greeter.conf:xft-hintstyle=hintfull /etc/lightdm/lightdm-gtk-greeter.conf:xft-rgba=rgb /etc/lightdm/lightdm-gtk-greeter.conf:show-indicators=~language;~session;~power /etc/lightdm/users.conf:[UserAccounts] /etc/lightdm/users.conf:minimum-uid=500 /etc/lightdm/users.conf:hidden-users=nobody nobody4 noaccess /etc/lightdm/users.conf:hidden-shells=/bin/false /usr/sbin/nologin update 3 I've check version 1.1.6-2/stable with no results. Installed accountsservice. (in the while lightdm-gtk-greeter dist-upgraded to 1.8.5-1) below /usr/share/lightdm/lightdm.conf.d/01_debian.conf commented out are values as coming from the Debian installation, I changed them (but seem they are overwritten by /etc/lightdm/lighdm.conf) [SeatDefaults] #greeter-session=lightdm-greeter #greeter-hide-users=true greeter-session=lightdm-gtk-greeter greeter-hide-users=false session-wrapper=/etc/X11/Xsession also changed greeter-hide-users in /etc/lightdm/lightdm.conf, the popup menu now appear, it show others..., user alexis bold as if it is the current or default but below the popup there's still the user input text field with focus and empty (I'll update the screenshot and the test/debug log later).
Update (after comments): Try changing greeter-hide-users=true to greeter-hide-users=false in /etc/lightdm/lightdm.conf. It seems it's needed in all lightdm .conf files. You may need to use lightdm-set-defaults [OPTION...] to fix it. The full available options are in the file /usr/share/doc/lightdm/lightdm.conf.gz (if installed).

Update: In Debian it is important to set it in the right [section]; [SeatDefaults] in /etc/lightdm/lightdm.conf should win. Use lightdm --show-config to see changed settings and which files they are in (relative to default values).

Original: Maybe you can try adding the PPA ppa:lightdm-gtk-greeter-team/daily and installing LightDM GTK+ Greeter 1.6.0; it seems to solve your problem automatically, as you can see here. I found the PPA on this page of the Launchpad blog posts. You can download it directly from here. Good luck.
Configure Lightdm (GTK) for last saved or a default user and focus on password?
1,455,898,365,000
How can I set user pictures on Debian with lightdm? Xfce does not seem to have a GUI facility for user management at all. I have tried installing gnome-system-tools, which contains a user management dialog (users-admin), but I can't find where to set the picture there. I remember Gnome had an "About me" dialog, but that was GNOME 2. GNOME 3 probably also has something like that, but I don't want to install GNOME if I can simply add a PNG file and/or edit a config file somewhere for lightdm to look up.
I don't know if there is a way to do it with a GUI, but you could place a .face icon file in your home directory. That may cause issues, however. An alternative is to use the AccountsService. Edit/create the file /var/lib/AccountsService/users/<username>, and add the following lines:

[User]
Icon=/somewhere/pathToIcon.icon

Make sure the lightdm user has read access to the icon (IDK, but maybe 755 permissions?) Source: This Arch Linux wiki page.
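For the AccountsService route, here is a small sketch of generating that file. It writes to a temp directory so it can run unprivileged; the real target path is /var/lib/AccountsService/users/<username> (root-writable), and the user name and icon path below are made up:

```shell
#!/bin/sh
# Sketch: build an AccountsService user entry.
# Real path: /var/lib/AccountsService/users/$user (needs root to write).
user=alice                       # hypothetical user name
icon=/home/alice/.face           # hypothetical icon path
dest=$(mktemp -d)                # stand-in for /var/lib/AccountsService/users
printf '[User]\nIcon=%s\n' "$icon" > "$dest/$user"
cat "$dest/$user"
```

To apply it for real you would run the same printf as root against the real path and then restart lightdm (or the accounts-daemon) to pick it up.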
Set lightdm user picture
1,455,898,365,000
I configured a media-center server running Debian with LightDM, Leapcast and Plex. What I want to accomplish is the following. There should be three LightDM seats: one default seat on which I can log in; one seat that runs Leapcast (Chromecast emulator); and one seat that runs Plex Media Center. The last seat should show up as default. Below you can find the relevant part of my lightdm.conf:

[Seat:0]
vt=7

[Seat:1]
allow-guests=false
autologin-user=media-center
autologin-timeout=0
greeter-hide-users=true
session-setup-script=start-chromecast
vt=8

[Seat:2]
allow-guests=false
autologin-user=media-center
autologin-timeout=0
greeter-hide-users=true
session-setup-script=start-media-center
vt=9

This configuration file enables the three seats, as intended. The problem I am facing now is that the default seat (the seat that is shown after booting) seems to be random: sometimes tty7 shows up at boot time (showing the login screen) and sometimes tty8 shows up (which auto-logins and runs Leapcast). Another problem is that when tty7 shows up as default, tty8 is not started automatically. So what I want is to be able to choose the default virtual terminal and make virtual terminals 8 and 9 (Leapcast and Plex) start automatically.
For question #1: LightDM doesn't have that functionality builtin, but you can hack it. In /etc/lightdm/lightdm.conf, add a greeter-setup-script. The script can then use sleep (to wait for things to settle) and chvt to switch to whatever virtual terminal you like. (In your case, you'd want chvt 7). For question #2: I think if you have it switch to tty8 and tty9 before switching to tty7, lightdm will start all three. (You might need a sleep in between the switching to let lightdm get started.)
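A sketch of what such a greeter-setup-script could look like. The path and the DRY_RUN switch are inventions for this demo — chvt needs root and a real console, so by default the script only prints the commands it would run:

```shell
#!/bin/sh
# Hypothetical /etc/lightdm/switch-vts, referenced from lightdm.conf as
#   greeter-setup-script=/etc/lightdm/switch-vts
# DRY_RUN=1 (the default here) prints the commands instead of executing them,
# so the logic can be exercised without root or a console.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "would run: $*" || "$@"; }

run sleep 2    # let lightdm settle before switching
run chvt 8     # visit seat 1 so it autologins and starts Leapcast
run chvt 9     # visit seat 2 so it starts Plex
run chvt 7     # end on the seat you want shown by default
# Remember: the real script must exit 0, or lightdm stops the seat.
```

Set DRY_RUN=0 only in the real deployment, where the script runs as root under lightdm.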
Autostart all LightDM seats and show one as default
1,455,898,365,000
I am running Xubuntu 18.04. When I lock the session, the screen gets turned off instantly. I am using stock lightdm as display manager and light-locker for locking the session. From my viewpoint the following sequence of events happens:

1. I initiate locking by running xflock4 via keyboard shortcut or clicking "Lock Screen" in the Whisker (Main) menu.
2. VT8 becomes active, a new lightdm greeter is spawned on this VT, and the physical screen turns off at the same time. My usual VT7 gets seized in the background by light-locker, which draws the "This session is locked" screen.
3. If I press some button on the keyboard or move the mouse, the screen turns on. If I press Control-Alt-F7, I see the light-locker lock screen in my original session. If I press Control-Alt-F8 I go back to the greeter where I can enter my password.
4. After entering the password, VT7 becomes active and the light-locker white-on-black lock screen is no longer shown. If I later go back to VT8 with Control-Alt-F8, I see a completely black screen with only a blinking cursor (seems to be in text mode).

If the session times out and gets locked automatically, I also end up with the screen turned off. That can happen several times a day. I am using an external monitor which is very slow to turn on again. It takes around 10 seconds and that is quite annoying every single time. I would rather keep it on for an hour or more on the password dialog before timing it out and turning off automatically. Also there is a non-zero chance of getting a system freeze due to buggy Intel (KMS?) drivers when doing a VT switch and turning off the screen at almost the same time. I skimmed through the lightdm and lightdm greeter docs and found no hints on how to prevent that.

Update 1: I discovered an "Action" applet for the xfce4-panel that can "switch" user sessions without turning off the screen. This essentially locks the session with light-locker and shows the greeter on a new VT.
After some digging I discovered a command to show the greeter, dm-tool switch-to-greeter. As a workaround I have reassigned the keyboard shortcut for locking the screen from xflock4 to dm-tool switch-to-greeter. But the problem of automatic locking turning off the screen still annoys me. What is interesting is that dm-tool lock and light-locker-command --lock (xflock4 calls it) behave the same and produce a turned-off monitor. If I uninstall light-locker (with a full reboot) and run dm-tool lock, the screen also turns off. So this should not be related to light-locker...

Update 2: The question is how to keep the screen turned on when locking the session via light-locker on timeout, or locking manually with xflock4 — not how to disable timeouts for locking.
As of Ubuntu 20.04 (LTS), Xfce comes with a native screensaver. If I lock the session in any way (lock icon click in Whisker, timeout, the xflock4 command), the screensaver starts. If I move the mouse or press a mouse/keyboard button, the unlock dialog appears. The monitor stays on all this time.
Prevent lightdm from turning off screen when locking session
1,455,898,365,000
If I restart the Display Manager, e.g lightdm, will X be restarted as well ?
It depends on your configuration: either the X window server is started on its own and the display manager process attached afterwards, or the display manager starts the X window server itself. I have the X server started by kdm in OpenSuse 12.1:

kdm(4655)─┬─Xorg(4671)
          └─kdm(4698)───startkde(4800)─┬─gpg-agent(4877)
                                       ├─kwrapper4(4977)
                                       └─ssh-agent(4878)

If you use Unix with Xorg and pstree you can check with:

pstree -p `ps -H -C Xorg -o ppid --no-header`

or

ps -H -C Xorg -o ppid --no-header | xargs pstree -p
Does restarting of Display Manager (e.g lightdm) restart X server as well?
1,455,898,365,000
Every time I boot my PC, Ubuntu keeps dropping me to TTY 1, where I have to log in and then do sudo lightdm start just to log in again, which is very annoying. I have already tried removing and re-adding it with update-rc.d:

update-rc.d lightdm defaults

but it just does not work. Anyone got an idea which logfiles to check or what to do to get it working again? I use Mint 12 with Gnome3.
This is how I fixed this problem. First you need to stop lightdm if it's running:

sudo service lightdm stop

Then you need the X server to create a fresh xorg.conf; I did this by renaming my old one:

sudo mv /etc/X11/xorg.conf /etc/X11/xorg.old

Then I deleted my current drivers:

sudo aptitude remove --purge nvidia-current

IMPORTANT: if you had or have drivers from the Nvidia site then you need to uninstall them as well. That means you will have to download them again if you don't have the .run file anymore, and then do [nvidia-installer] --uninstall, where nvidia-installer is the installer you just downloaded. Then you properly install the current drivers:

sudo aptitude install nvidia-current

The next step would be to type startx and hope for Gnome2 (or the default window manager of your distribution) to come up, which worked for me.

The reason for this error seemed to be some kind of conflict between X11 and the Nvidia drivers. While at boot time I got the error that the Nvidia kernel module could not be loaded, I was still able to start lightdm once I was in TTY1. The cause might have been that I previously had the 290 version of the Nvidia drivers installed and then downgraded to nvidia-current (280) via aptitude, which might have caused some leftovers to remain and conflict with the older drivers (290 vs 280).

Note that you will have to reconfigure your desktop environment after applying these steps. If you don't get a graphics-accelerated UI (i.e. Gnome3, Unity 3D), run:

sudo nvidia-xconfig
Lightdm won't start automatically on boot
1,455,898,365,000
When I run in my gnome-terminal:

service lightdm status

everything looks OK and it is as expected:

lightdm start/running, process 1221

But when I run:

service --status-all 2>&1 | grep lightdm

the output is:

[ ? ] lightdm

which, because of the ?, AFAIK means that the lightdm service doesn't have a status command. Now I want to understand where these contradictory results come from. Is this a bug?
This just seems to be a bug in the service script. The behaviour is different for --status-all than for a single process. For a single process, service just uses exec to hand over to the init script itself (in this case /etc/init.d/lightdm). Here is the relevant snippet:

if [ -x "${SERVICEDIR}/${SERVICE}" ]; then
   exec env -i LANG="$LANG" PATH="$PATH" TERM="$TERM" "$SERVICEDIR/$SERVICE" ${ACTION} ${OPTIONS}

For --status-all it tries to parse the output of the init.d script itself. Downloading the sysvinit-tools package for Ubuntu 13.10 and comparing it to my version (Debian Jessie), you can see that there has been a change made to that part of the code (most likely to fix exactly this kind of bug). Compare this (Ubuntu 13.10) snippet (I have marked the changed lines with #<<<):

if [ -z "${SERVICE}" -a $# -eq 1 -a "${1}" = "--status-all" ]; then
   cd ${SERVICEDIR}
   for SERVICE in * ; do
     case "${SERVICE}" in
       functions | halt | killall | single| linuxconf| kudzu)
           ;;
       *)
         if ! is_ignored_file "${SERVICE}" \
             && [ -x "${SERVICEDIR}/${SERVICE}" ]; then
           if ! grep -qs "\(^\|\W\)status)" "$SERVICE"; then         #<<<
             #printf " %s %-60s %s\n" "[?]" "$SERVICE:" "unknown" 1>&2
             echo " [ ? ]  $SERVICE" 1>&2
             continue

And

if [ -z "${SERVICE}" -a $# -eq 1 -a "${1}" = "--status-all" ]; then
   cd ${SERVICEDIR}
   for SERVICE in * ; do
     case "${SERVICE}" in
       functions | halt | killall | single| linuxconf| kudzu)
           ;;
       *)
         if ! is_ignored_file "${SERVICE}" \
             && [ -x "${SERVICEDIR}/${SERVICE}" ]; then
           out=$(env -i LANG="$LANG" PATH="$PATH" TERM="$TERM" "$SERVICEDIR/$SERVICE" status 2>&1)   #<<<
           retval=$?                                                 #<<<
           if echo "$out" | egrep -iq "usage:"; then                 #<<<
             #printf " %s %-60s %s\n" "[?]" "$SERVICE:" "unknown" 1>&2
             echo " [ ? ]  $SERVICE" 1>&2
             continue
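Here is a minimal reconstruction of why that old grep heuristic prints [ ? ]: it only looks for a literal "status)" case label in the script's text, so any init script whose status handling doesn't match the regex is reported as unknown. The fake init script and service name below are made up for the demo:

```shell
#!/bin/sh
# Reproduce the old sysvinit-tools check against a fake init script whose
# status handling goes through a fallthrough (*) branch, not "status)".
f=$(mktemp)
cat > "$f" <<'EOF'
#!/bin/sh
case "$1" in
  start|stop) echo "$1ing" ;;
  *) do_status_somehow ;;
esac
EOF
# Same regex the old service script used:
if grep -qs "\(^\|\W\)status)" "$f"; then
  echo "has status"
else
  echo "[ ? ] fakeservice"
fi
```

The newer code avoids this by actually invoking the script with the status action and inspecting its output instead of grepping the source.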
Lightdm service - contradictory results for `status` command
1,455,898,365,000
I am using LightDM with the Slick Greeter. On my system I have two user accounts for myself. They have the same name, but different user names. I want to separate private and professional work. Problem: in the greeter, there is no visual difference between them. I only see the name (not the user name).

Hacky solutions:

- know the order of the accounts
- use different window managers (because they have different icons and they are visible next to the user name)

Obviously, all of the above solutions are plain out stupid. Better ideas? I am willing to change the greeter, but not the session manager LightDM, nor my names (because they are used in other programs, such as email).
Change your avatar: create the image file as /home/username/.face.

Source: https://wiki.archlinux.org/index.php/LightDM#Changing_your_avatar
LightDM: how to distinguish users with the same name?
1,455,898,365,000
OS: Linux Mint 18.3 Cinnamon 64-bit. I almost got a heart attack after the reboot that followed this, in an important VM:

sudo apt-get install lightdm
... I chose LightDM as the default DM ...
sudo apt-get purge mdm

I underestimated it, as after the reboot LightDM failed to start. What's more, I could not get to a virtual console; there was just an underscore blinking. After yet another reboot, the same behavior. I fixed it after a while over SSH by purging LightDM and installing MDM back. What is the correct procedure to replace MDM with LightDM on Linux Mint?
Following the official statement:

sudo apt-get install slick-greeter lightdm lightdm-settings apparmor

did not yet get me to the result. I had to create a file /etc/lightdm/lightdm.conf with contents:

[Seat:*]
allow-guest=false
What is the correct procedure to replace MDM with LightDM on Linux Mint?
1,455,898,365,000
I am setting up Debian computers for my high school and I would like to somehow customize LightDM to display system and network information at the login screen. The big picture is the following: people log into the computers thanks to LDAP authentication. But we have a non-optimal network situation. First, the computer could be physically disconnected from the network. Second, the DHCP server could fail to give the computer an IP address in time. Third, the LDAP server might be down. Currently, the standard way to detect this is that your login credentials are rejected. But then you might as well have mistyped them, or simply forgotten them. So there are various causes for a failure to log in, and we unfortunately cannot expect most teachers to even begin to understand them, which leads to frustration and low-value reports of "I cannot log in". So I would like to display an information window stating either "Network cable seems to be disconnected" or "Network is responding, waiting for an IP address" or "Network seems down" or "Network is ready for authentication". How can I run a program that would compute such information, then display and maybe update it on the login screen?
Probably the proper way to do this is to write your own greeter (the thing that shows the "login:" prompt etc). If you are familiar with web technologies you might write your own webkit greeter as in this example. Or you might try running an X11 application from the hook provided by lightdm. In the file /etc/lightdm/lightdm.conf add a line like

greeter-setup-script=/home/meuh/myinfo

in the [SeatDefaults] section, and in this executable script do something simple like

#!/bin/bash
#--beware running as root
(sleep 2 && xlogo) &
#--must return 0 or lightdm stops
exit 0

where xlogo is some suitable application. I tested this only with lightdm --test-mode --debug, which you can run whilst you are logged in; it shows you in a window what you might really get. You will need to test this for real and work out whether the window can get iconified or killed, and whether it dies when someone actually logs on. Also make sure you don't stay root in the script, and put it somewhere safer. There are logs in ~/.cache/lightdm/log/. As an application you could use something like conky, which can be fairly easily configured to present system information on the root screen.
Displaying system information at graphical login prompt
1,455,898,365,000
I'm installing Arch Linux, but I have a problem with the installation of my display manager, lightdm. When I install and enable lightdm, I think the keyboard layout changes from Portuguese to US. Changing the keymapping with:

# localectl set-locale LANG=pt_PT.UTF-8

only works until lightdm launches; then the keymapping reverts to US. The same occurs when I try to change it permanently to KEYMAP=br-latin1 with:

# sudo nano /etc/vconsole.conf
Ok, I have found the answer to my question; I believe it can be applied to languages other than Portuguese. Setting the LightDM keyboard layout to Portuguese is done with this command:

localectl set-x11-keymap pt

I did this as root, and I think it is the right way, but I believe you can do it as a normal user too. I found this on a Fedora forum. For set-x11-keymap to work, we first need to install the package libxkbcommon; without it you will get an error message. I found that on the Arch Linux forum.

UPDATE I asked this question also in the Arch Linux forum, and this answer solves the problem too: adding these settings to the file /etc/X11/xorg.conf.d/20-keyboard.conf:

Section "InputClass"
        Identifier "system-keyboard"
        MatchIsKeyboard "on"
        Option "XkbLayout" "pt"
        Option "XkbModel" "pc105"
EndSection

and reboot...
Change lightdm keyboard layout US to Portuguese Arch Linux
1,455,898,365,000
I upgraded my OS to the latest version, Linux Mint 18.2 Sarah. After upgrading, my NumLock is off at start-up. I changed my display manager from mdm to lightdm. This method is not working.
Jaleks' answer was only almost right for me, but this finally worked (Linux Mint 18.2 Cinnamon, manually updated to LightDM from LM 18.1). Install numlockx:

sudo apt install numlockx

After that, edit the /usr/share/lightdm/lightdm.conf.d/90-slick-greeter.conf file and add the following line at the end:

greeter-setup-script=/usr/bin/numlockx on
NumLock is off on start-up on Linux Mint 18.2
1,455,898,365,000
At some point in the past, I must have disabled the lightdm service with:

systemctl disable lightdm.service

or something similar on my Debian with Cinnamon desktop. Unfortunately, I will now need it, but running:

systemctl start lightdm.service

every time the computer boots up does not make me happy, so... How do I re-enable the lightdm service? Because just running:

systemctl enable lightdm.service

yields the following error:

Synchronizing state of lightdm.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable lightdm
The unit files have no installation config (WantedBy=, RequiredBy=, Also=, Alias=
settings in the [Install] section, and DefaultInstance= for template units).
This means they are not meant to be enabled using systemctl.
Possible reasons for having this kind of units are:
• A unit may be statically enabled by being symlinked from another unit's
  .wants/ or .requires/ directory.
• A unit's purpose may be to act as a helper for some other unit which has
  a requirement dependency on it.
• A unit may be started when needed via activation (socket, path, timer,
  D-Bus, udev, scripted systemctl call, ...).
• In case of template units, the unit is meant to be enabled with some
  instance name specified.

Also tried before: dpkg-reconfigure lightdm, to no avail.
It looks like your service is masked. To unmask it, run: systemctl unmask lightdm.service and afterwards, run: systemctl daemon-reload
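For context on what unmasking undoes: masking a unit just symlinks its name to /dev/null so systemd refuses to start it. Here is a sandboxed sketch of that mechanism, using a temp directory as a stand-in for the real /etc/systemd/system path:

```shell
#!/bin/sh
# "Masked" means the unit file path is a symlink to /dev/null.
# Demonstrate in a temp dir instead of /etc/systemd/system.
dir=$(mktemp -d)
ln -s /dev/null "$dir/lightdm.service"
if [ "$(readlink "$dir/lightdm.service")" = /dev/null ]; then
  echo "lightdm.service is masked"
fi
# "systemctl unmask" essentially removes that symlink again:
rm "$dir/lightdm.service"
[ -e "$dir/lightdm.service" ] || echo "lightdm.service unmasked"
```

You can confirm the real state on a live system with systemctl is-enabled lightdm.service, which prints "masked" for a masked unit.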
How do I re-enable the lightdm.service?
1,455,898,365,000
I changed my graphics card from an Nvidia card to a Radeon R200 and now LightDM won't start, even though I installed the new drivers. I'm using Arch. Can anyone please help me?

EDIT: I can't copy the logs because I can't open my browser on my computer. In the lightdm logs I have "greeter display server failed to start" (I have lightdm-greeter-gtk) and also "no screens found". In the x-0.log file of the Xorg logs I have "Screen 0 deleted because of no matching config section".
The most likely cause is a missing or wrong driver. The arch wiki is a great source for how to solve these problems: https://wiki.archlinux.org/title/ATI#Selecting_the_right_driver
LightDM won't boot after changing graphics card
1,455,898,365,000
I am using Linux Mint 19.2 Cinnamon, with ZSH as the login shell for my user. When I log in, I see that my $PATH includes $HOME/.local/bin two times, even though it is specified only once across all my ZSH startup scripts (in $HOME/.zshenv). I looked at the other dotfiles in my $HOME, and found the offending modification of $PATH in $HOME/.profile:

# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/.local/bin" ] ; then
    PATH="$HOME/.local/bin:$PATH"
fi

This was surprising, because the documentation of ZSH says that .profile is only sourced in a "compatibility" mode, when it is being invoked as sh or ksh:

Zsh tries to emulate sh or ksh when it is invoked as sh or ksh respectively; more precisely, it looks at the first letter of the name by which it was invoked, excluding any initial 'r' (assumed to stand for 'restricted'), and if that is 'b', 's' or 'k' it will emulate sh or ksh. Furthermore, if invoked as su (which happens on certain systems when the shell is executed by the su command), the shell will try to find an alternative name from the SHELL environment variable and perform emulation based on that.

I checked where .profile was being sourced from by appending the following lines at the end of it:

logger "Opened .profile with this shell: $SHELL"
PARENT_COMMAND=$(ps -o args= $PPID)
logger "Parent command: $PARENT_COMMAND"

After logging out and logging back in again, the syslog contained the following:

Opened .profile with this shell: /usr/bin/zsh
Parent command: lightdm --session-child 13 20

What is the sequence of events that leads the display manager lightdm to source .profile even though my $SHELL is ZSH, and what is the rationale behind it?
From the Arch-Linux Wiki page for lightdm (which I imagine may be valid for your situation in Linux Mint too):

If you are migrating from xinit, you will notice that the display is not launched by your shell. This is because, as opposed to your shell starting the display (and the display inheriting the environment of your shell), LightDM starts your display and does not source your shell. LightDM launches the display by running a wrapper script and that finally exec's your graphic environment. By default, /etc/lightdm/Xsessions.conf is run.

Environment variables

The script checks and sources /etc/profile, ~/.profile, /etc/xprofile and ~/.xprofile, in that order. If you are using a shell that does not source any of these files, you can create an ~/.xprofile to do so. (In this example, the login shell is zsh)

~/.xprofile

#!/bin/sh
[[ -f ~/.config/zsh/.zshenv ]] && source ~/.config/zsh/.zshenv

So, your ~/.profile gets sourced because that's what the wrapper script of lightdm does when it starts. The wiki text furthermore has a specific example for zsh users which includes sourcing your ~/.zshenv from ~/.xprofile. The #!-line in that example doesn't make much sense though, as the file is sourced and therefore shouldn't need it, and I wonder if they got the order of parsing right, as it would make more sense to prioritise ~/.xprofile over ~/.profile (i.e. don't read ~/.profile at all if ~/.xprofile exists).

If the suggestion in that wiki doesn't do it for you, you can get your ~/.profile to source ~/.zshenv when read by zsh by using

case $SHELL in
    (*/zsh) . "${ZDOTDIR:-$HOME}/.zshenv"; return ;;
esac

or

if [ "$SHELL" = "/usr/bin/zsh" ]; then
    . "${ZDOTDIR:-$HOME}/.zshenv"
    return
fi

at its top.
Why does lightdm source my .profile even though my login shell is zsh?
1,455,898,365,000
I just installed Mint 16 and I see that the root user is not available at the login screen. I logged in as a normal user, went to the "Login Window" option and there set "Allow root login". Then I restarted the PC and still I don't see the root user in the login window. I also did the below, but it didn't work either:

sudo passwd root
sudo sh -c 'echo "greeter-show-manual-login=true" >> /etc/lightdm/lightdm.conf'
Linux Mint 16 uses the Mint-X theme by default which only displays the password box for chosen non-root users. In order to enable the User entry field (from which you will be able to specify root) do this. From Menu ==> Administration ==> Login Window ==> Theme choose Clouds and logout.
Enable root login from GUI
1,455,898,365,000
I have Manjaro/XFCE/LightDM and a GeForce GT 710, and I followed this guide to get rid of tearing. I added those three lines to my X configuration. After I log in at the greeter, the screen only shows the background image and the mouse cursor. After I press Ctrl+Alt+F2 and Ctrl+Alt+F7 I can see the desktop and the applications that auto-start. Without the three lines, the desktop and applications are visible right away. Yes, I googled. My X configurations:

/etc/X11/xorg.conf:

# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 375.26 (buildmeister@swio-display-x86-rhel47-01) Thu Dec 8 19:07:46 PST 2016

Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "Screen0"
    InputDevice "Keyboard0" "CoreKeyboard"
    InputDevice "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "auto"
    Option "Device" "/dev/psaux"
    Option "Emulate3Buttons" "no"
    Option "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Keyboard0"
    Driver "kbd"
EndSection

Section "Monitor"
    Identifier "Monitor0"
    VendorName "Unknown"
    ModelName "Unknown"
    HorizSync 28.0 - 33.0
    VertRefresh 43.0 - 72.0
    Option "DPMS"
EndSection

Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    Option "metamodes" "nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "Device0"
    Monitor "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
    EndSubSection
EndSection

/etc/X11/xorg.conf.d/00-keyboard.conf:

# Read and parsed by systemd-localed. It's probably wise not to edit this file
# manually too freely.
Section "InputClass"
    Identifier "system-keyboard"
    MatchIsKeyboard "on"
    Option "XkbLayout" "de"
    Option "XkbModel" "pc105"
    Option "XkbOptions" "terminate:ctrl_alt_bksp,grp:alt_shift_toggle"
EndSection

/etc/X11/xorg.conf.d/90-mhwd.conf:

# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 375.26 (buildmeister@swio-display-x86-rhel47-01) Thu Dec 8 19:07:46 PST 2016

Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "Screen0"
    InputDevice "Keyboard0" "CoreKeyboard"
    InputDevice "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "auto"
    Option "Device" "/dev/psaux"
    Option "Emulate3Buttons" "no"
    Option "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Keyboard0"
    Driver "kbd"
EndSection

Section "Monitor"
    Identifier "Monitor0"
    VendorName "Unknown"
    ModelName "Unknown"
    HorizSync 28.0 - 33.0
    VertRefresh 43.0 - 72.0
    Option "DPMS"
EndSection

Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    Option "NoLogo" "1"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "Device0"
    Monitor "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
    EndSubSection
    Option "metamodes" "nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"
    Option "AllowIndirectGLXProtocol" "off"
    Option "TripleBuffer" "on"
EndSection

Section "Extensions"
    Option "Composite" "Enable"
EndSection

Section "InputClass"
    Identifier "Keyboard Defaults"
    MatchIsKeyboard "yes"
    Option "XkbOptions" "terminate:ctrl_alt_bksp"
EndSection

/var/log/Xorg.0.log:

[ 13766.904] X.Org X Server 1.20.1
X Protocol Version 11, Revision 0
[ 13766.904] Build Operating System: Linux Arch Linux
[ 13766.904] Current Operating System: Linux runlikehell 4.14.66-1-MANJARO #1 SMP PREEMPT Wed Aug 22 21:45:26 UTC 2018 x86_64
[ 13766.904] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.14-x86_64
root=UUID=6ddc54eb-3d4c-4bab-b2b9-9a3e01d25a7a rw resume=UUID=3290b951-2ffd-4a02-b65c-19a4b8a9962c quiet splash [ 13766.904] Build Date: 09 August 2018 06:37:34PM [ 13766.904] [ 13766.904] Current version of pixman: 0.34.0 [ 13766.904] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 13766.904] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 13766.904] (==) Log file: "/var/log/Xorg.0.log", Time: Fri Aug 31 17:29:08 2018 [ 13766.904] (==) Using config file: "/etc/X11/xorg.conf" [ 13766.904] (==) Using config directory: "/etc/X11/xorg.conf.d" [ 13766.904] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 13766.904] (==) ServerLayout "Layout0" [ 13766.904] (**) |-->Screen "Screen0" (0) [ 13766.904] (**) | |-->Monitor "Monitor0" [ 13766.905] (**) | |-->Device "Device0" [ 13766.905] (**) | |-->GPUDevice "Device0" [ 13766.905] (**) |-->Input Device "Keyboard0" [ 13766.905] (**) |-->Input Device "Mouse0" [ 13766.905] (==) Automatically adding devices [ 13766.905] (==) Automatically enabling devices [ 13766.905] (==) Automatically adding GPU devices [ 13766.905] (==) Automatically binding GPU devices [ 13766.905] (==) Max clients allowed: 256, resource mask: 0x1fffff [ 13766.905] (WW) `fonts.dir' not found (or not valid) in "/usr/share/fonts/100dpi". [ 13766.905] Entry deleted from font path. [ 13766.905] (Run 'mkfontdir' on "/usr/share/fonts/100dpi"). [ 13766.905] (WW) `fonts.dir' not found (or not valid) in "/usr/share/fonts/75dpi". [ 13766.905] Entry deleted from font path. [ 13766.905] (Run 'mkfontdir' on "/usr/share/fonts/75dpi"). [ 13766.905] (==) FontPath set to: ... 
[ 13766.905] (==) ModulePath set to "/usr/lib/xorg/modules" [ 13766.905] (WW) Ignoring unrecognized extension "Composite" [ 13766.905] (WW) Hotplugging is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled. [ 13766.905] (WW) Disabling Keyboard0 [ 13766.905] (WW) Disabling Mouse0 [ 13766.905] (II) Module ABI versions: [ 13766.905] X.Org ANSI C Emulation: 0.4 [ 13766.905] X.Org Video Driver: 24.0 [ 13766.905] X.Org XInput driver : 24.1 [ 13766.905] X.Org Server Extension : 10.0 [ 13766.905] (++) using VT number 7 [ 13766.906] (II) systemd-logind: logind integration requires -keeptty and -keeptty was not provided, disabling logind integration [ 13766.906] (II) xfree86: Adding drm device (/dev/dri/card0) [ 13766.907] (**) OutputClass "nvidia" ModulePath extended to "/usr/lib/nvidia/xorg,/usr/lib/xorg/modules,/usr/lib/xorg/modules" [ 13766.907] (**) OutputClass "nvidia" setting /dev/dri/card0 as PrimaryGPU [ 13766.908] (--) PCI:*(1@0:0:0) 10de:128b:1462:8c93 rev 161, Mem @ 0xfd000000/16777216, 0xf0000000/134217728, 0xf8000000/33554432, I/O @ 0x0000e000/128, BIOS @ 0x????????/131072 [ 13766.909] (WW) Open ACPI failed (/var/run/acpid.socket) (No such file or directory) [ 13766.909] (II) LoadModule: "glx" [ 13766.909] (II) Loading /usr/lib/nvidia/xorg/libglx.so [ 13766.913] (II) Module glx: vendor="NVIDIA Corporation" [ 13766.913] compiled for 4.0.2, module version = 1.0.0 [ 13766.913] Module class: X.Org Server Extension [ 13766.913] (II) NVIDIA GLX Module 396.54 Tue Aug 14 22:37:05 PDT 2018 [ 13766.913] (II) LoadModule: "nvidia" [ 13766.913] (II) Loading /usr/lib/xorg/modules/drivers/nvidia_drv.so [ 13766.914] (II) Module nvidia: vendor="NVIDIA Corporation" [ 13766.914] compiled for 4.0.2, module version = 1.0.0 [ 13766.914] Module class: X.Org Video Driver [ 13766.914] (II) NVIDIA dlloader X Driver 396.54 Tue Aug 14 22:15:03 PDT 2018 [ 13766.914] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs [ 13766.940] (II) Loading sub module "fb" [ 
13766.940] (II) LoadModule: "fb" [ 13766.941] (II) Loading /usr/lib/xorg/modules/libfb.so [ 13766.941] (II) Module fb: vendor="X.Org Foundation" [ 13766.941] compiled for 1.20.1, module version = 1.0.0 [ 13766.941] ABI class: X.Org ANSI C Emulation, version 0.4 [ 13766.941] (II) Loading sub module "wfb" [ 13766.941] (II) LoadModule: "wfb" [ 13766.941] (II) Loading /usr/lib/xorg/modules/libwfb.so [ 13766.941] (II) Module wfb: vendor="X.Org Foundation" [ 13766.941] compiled for 1.20.1, module version = 1.0.0 [ 13766.941] ABI class: X.Org ANSI C Emulation, version 0.4 [ 13766.941] (II) Loading sub module "ramdac" [ 13766.941] (II) LoadModule: "ramdac" [ 13766.941] (II) Module "ramdac" already built-in [ 13766.944] (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32 [ 13766.944] (==) NVIDIA(0): RGB weight 888 [ 13766.944] (==) NVIDIA(0): Default visual is TrueColor [ 13766.944] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0) [ 13766.944] (II) Applying OutputClass "nvidia" options to /dev/dri/card0 [ 13766.944] (**) NVIDIA(0): Option "AllowEmptyInitialConfiguration" [ 13766.944] (**) NVIDIA(0): Enabling 2D acceleration [ 13767.226] (--) NVIDIA(0): Valid display device(s) on GPU-0 at PCI:1:0:0 [ 13767.226] (--) NVIDIA(0): CRT-0 [ 13767.226] (--) NVIDIA(0): DFP-0 (boot) [ 13767.226] (--) NVIDIA(0): DFP-1 [ 13767.228] (II) NVIDIA(0): NVIDIA GPU GeForce GT 710 (GK208) at PCI:1:0:0 (GPU-0) [ 13767.228] (--) NVIDIA(0): Memory: 2097152 kBytes [ 13767.228] (--) NVIDIA(0): VideoBIOS: 80.28.a6.00.22 [ 13767.228] (II) NVIDIA(0): Detected PCI Express Link width: 8X [ 13767.230] (--) NVIDIA(GPU-0): CRT-0: disconnected [ 13767.230] (--) NVIDIA(GPU-0): CRT-0: 400.0 MHz maximum pixel clock [ 13767.230] (--) NVIDIA(GPU-0): [ 13767.245] (--) NVIDIA(GPU-0): Acer K242HQL (DFP-0): connected [ 13767.245] (--) NVIDIA(GPU-0): Acer K242HQL (DFP-0): Internal TMDS [ 13767.245] (--) NVIDIA(GPU-0): Acer K242HQL (DFP-0): 330.0 MHz maximum pixel clock [ 13767.245] (--) NVIDIA(GPU-0): [ 
13767.245] (--) NVIDIA(GPU-0): DFP-1: disconnected [ 13767.245] (--) NVIDIA(GPU-0): DFP-1: Internal TMDS [ 13767.245] (--) NVIDIA(GPU-0): DFP-1: 165.0 MHz maximum pixel clock [ 13767.245] (--) NVIDIA(GPU-0): [ 13767.248] (==) NVIDIA(0): [ 13767.248] (==) NVIDIA(0): No modes were requested; the default mode "nvidia-auto-select" [ 13767.248] (==) NVIDIA(0): will be used as the requested mode. [ 13767.248] (==) NVIDIA(0): [ 13767.248] (II) NVIDIA(0): Validated MetaModes: [ 13767.248] (II) NVIDIA(0): "DFP-0:nvidia-auto-select" [ 13767.248] (II) NVIDIA(0): Virtual screen size determined to be 1920 x 1080 [ 13767.251] (--) NVIDIA(0): DPI set to (93, 94); computed from "UseEdidDpi" X config [ 13767.251] (--) NVIDIA(0): option [ 13767.252] (II) NVIDIA: Using 6144.00 MB of virtual memory for indirect memory [ 13767.252] (II) NVIDIA: access. [ 13767.254] (II) NVIDIA(0): ACPI: failed to connect to the ACPI event daemon; the daemon [ 13767.254] (II) NVIDIA(0): may not be running or the "AcpidSocketPath" X [ 13767.254] (II) NVIDIA(0): configuration option may not be set correctly. When the [ 13767.254] (II) NVIDIA(0): ACPI event daemon is available, the NVIDIA X driver will [ 13767.255] (II) NVIDIA(0): try to use it to receive ACPI event notifications. For [ 13767.255] (II) NVIDIA(0): details, please see the "ConnectToAcpid" and [ 13767.255] (II) NVIDIA(0): "AcpidSocketPath" X configuration options in Appendix B: X [ 13767.255] (II) NVIDIA(0): Config Options in the README. 
[ 13767.275] (II) NVIDIA(0): Setting mode "DFP-0:nvidia-auto-select" [ 13767.284] (==) NVIDIA(0): Disabling shared memory pixmaps [ 13767.284] (==) NVIDIA(0): Backing store enabled [ 13767.284] (==) NVIDIA(0): Silken mouse disabled [ 13767.284] (**) NVIDIA(0): DPMS enabled [ 13767.285] (WW) NVIDIA(0): Option "PrimaryGPU" is not used [ 13767.285] (WW) NVIDIA(0): Option "NoLogo" is not used [ 13767.285] (II) Loading sub module "dri2" [ 13767.285] (II) LoadModule: "dri2" [ 13767.285] (II) Module "dri2" already built-in [ 13767.285] (II) NVIDIA(0): [DRI2] Setup complete [ 13767.285] (II) NVIDIA(0): [DRI2] VDPAU driver: nvidia [ 13767.285] (II) Initializing extension Generic Event Extension [ 13767.285] (II) Initializing extension SHAPE [ 13767.285] (II) Initializing extension MIT-SHM [ 13767.285] (II) Initializing extension XInputExtension [ 13767.285] (II) Initializing extension XTEST [ 13767.285] (II) Initializing extension BIG-REQUESTS [ 13767.286] (II) Initializing extension SYNC [ 13767.286] (II) Initializing extension XKEYBOARD [ 13767.286] (II) Initializing extension XC-MISC [ 13767.286] (II) Initializing extension SECURITY [ 13767.286] (II) Initializing extension XFIXES [ 13767.286] (II) Initializing extension RENDER [ 13767.286] (II) Initializing extension RANDR [ 13767.287] (II) Initializing extension COMPOSITE [ 13767.287] (II) Initializing extension DAMAGE [ 13767.287] (II) Initializing extension MIT-SCREEN-SAVER [ 13767.287] (II) Initializing extension DOUBLE-BUFFER [ 13767.287] (II) Initializing extension RECORD [ 13767.287] (II) Initializing extension DPMS [ 13767.287] (II) Initializing extension Present [ 13767.287] (II) Initializing extension DRI3 [ 13767.287] (II) Initializing extension X-Resource [ 13767.287] (II) Initializing extension XVideo [ 13767.288] (II) Initializing extension XVideo-MotionCompensation [ 13767.288] (II) Initializing extension XFree86-VidModeExtension [ 13767.288] (II) Initializing extension XFree86-DGA [ 13767.288] (II) 
Initializing extension XFree86-DRI [ 13767.288] (II) Initializing extension DRI2 [ 13767.288] (II) Initializing extension GLX [ 13767.288] (II) Initializing extension GLX [ 13767.288] (II) Indirect GLX disabled. [ 13767.288] (II) Initializing extension NV-GLX [ 13767.288] (II) Initializing extension NV-CONTROL [ 13767.288] (II) Initializing extension XINERAMA [ 13767.328] (II) config/udev: Adding input device Power Button (/dev/input/event2) [ 13767.328] (**) Power Button: Applying InputClass "libinput keyboard catchall" [ 13767.328] (**) Power Button: Applying InputClass "system-keyboard" [ 13767.328] (**) Power Button: Applying InputClass "Keyboard Defaults" [ 13767.328] (II) LoadModule: "libinput" [ 13767.328] (II) Loading /usr/lib/xorg/modules/input/libinput_drv.so [ 13767.330] (II) Module libinput: vendor="X.Org Foundation" [ 13767.330] compiled for 1.20.0, module version = 0.28.0 [ 13767.330] Module class: X.Org XInput Driver [ 13767.330] ABI class: X.Org XInput driver, version 24.1 [ 13767.330] (II) Using input driver 'libinput' for 'Power Button' [ 13767.330] (**) Power Button: always reports core events [ 13767.330] (**) Option "Device" "/dev/input/event2" [ 13767.330] (**) Option "_source" "server/udev" [ 13767.330] (II) event2 - Power Button: is tagged by udev as: Keyboard [ 13767.330] (II) event2 - Power Button: device is a keyboard [ 13767.330] (II) event2 - Power Button: device removed [ 13767.342] (**) Option "config_info" "udev:/sys/devices/LNXSYSTM:00/LNXPWRBN:00/input/input2/event2" [ 13767.342] (II) XINPUT: Adding extended input device "Power Button" (type: KEYBOARD, id 6) [ 13767.342] (**) Option "xkb_model" "pc105" [ 13767.342] (**) Option "xkb_layout" "de" [ 13767.342] (**) Option "xkb_options" "terminate:ctrl_alt_bksp" [ 13767.365] (II) event2 - Power Button: is tagged by udev as: Keyboard [ 13767.365] (II) event2 - Power Button: device is a keyboard [ 13767.365] (II) config/udev: Adding input device Video Bus (/dev/input/event3) [ 13767.365] 
(**) Video Bus: Applying InputClass "libinput keyboard catchall" [ 13767.365] (**) Video Bus: Applying InputClass "system-keyboard" [ 13767.365] (**) Video Bus: Applying InputClass "Keyboard Defaults" [ 13767.365] (II) Using input driver 'libinput' for 'Video Bus' [ 13767.365] (**) Video Bus: always reports core events [ 13767.365] (**) Option "Device" "/dev/input/event3" [ 13767.365] (**) Option "_source" "server/udev" [ 13767.366] (II) event3 - Video Bus: is tagged by udev as: Keyboard [ 13767.366] (II) event3 - Video Bus: device is a keyboard [ 13767.366] (II) event3 - Video Bus: device removed [ 13767.402] (**) Option "config_info" "udev:/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/device:03/LNXVIDEO:01/input/input3/event3" [ 13767.402] (II) XINPUT: Adding extended input device "Video Bus" (type: KEYBOARD, id 7) [ 13767.402] (**) Option "xkb_model" "pc105" [ 13767.402] (**) Option "xkb_layout" "de" [ 13767.402] (**) Option "xkb_options" "terminate:ctrl_alt_bksp" [ 13767.403] (II) event3 - Video Bus: is tagged by udev as: Keyboard [ 13767.403] (II) event3 - Video Bus: device is a keyboard [ 13767.403] (II) config/udev: Adding input device Power Button (/dev/input/event1) [ 13767.403] (**) Power Button: Applying InputClass "libinput keyboard catchall" [ 13767.403] (**) Power Button: Applying InputClass "system-keyboard" [ 13767.403] (**) Power Button: Applying InputClass "Keyboard Defaults" [ 13767.403] (II) Using input driver 'libinput' for 'Power Button' [ 13767.403] (**) Power Button: always reports core events [ 13767.403] (**) Option "Device" "/dev/input/event1" [ 13767.403] (**) Option "_source" "server/udev" [ 13767.404] (II) event1 - Power Button: is tagged by udev as: Keyboard [ 13767.404] (II) event1 - Power Button: device is a keyboard [ 13767.404] (II) event1 - Power Button: device removed [ 13767.422] (**) Option "config_info" "udev:/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input1/event1" [ 13767.422] (II) XINPUT: Adding extended input 
device "Power Button" (type: KEYBOARD, id 8) [ 13767.422] (**) Option "xkb_model" "pc105" [ 13767.422] (**) Option "xkb_layout" "de" [ 13767.422] (**) Option "xkb_options" "terminate:ctrl_alt_bksp" [ 13767.423] (II) event1 - Power Button: is tagged by udev as: Keyboard [ 13767.423] (II) event1 - Power Button: device is a keyboard [ 13767.424] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=7 (/dev/input/event11) [ 13767.424] (II) No input driver specified, ignoring this device. [ 13767.424] (II) This device may have been added with another device file. [ 13767.424] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=3 (/dev/input/event10) [ 13767.424] (II) No input driver specified, ignoring this device. [ 13767.424] (II) This device may have been added with another device file. ... mouse ... [ 13767.459] (II) No input driver specified, ignoring this device. [ 13767.459] (II) This device may have been added with another device file. [ 13767.460] (II) config/udev: Adding input device HD-Audio Generic Front Mic (/dev/input/event4) [ 13767.460] (II) No input driver specified, ignoring this device. [ 13767.460] (II) This device may have been added with another device file. [ 13767.460] (II) config/udev: Adding input device HD-Audio Generic Rear Mic (/dev/input/event5) [ 13767.460] (II) No input driver specified, ignoring this device. [ 13767.460] (II) This device may have been added with another device file. [ 13767.460] (II) config/udev: Adding input device HD-Audio Generic Line (/dev/input/event6) [ 13767.460] (II) No input driver specified, ignoring this device. [ 13767.460] (II) This device may have been added with another device file. [ 13767.460] (II) config/udev: Adding input device HD-Audio Generic Line Out (/dev/input/event7) [ 13767.460] (II) No input driver specified, ignoring this device. [ 13767.460] (II) This device may have been added with another device file. 
[ 13767.460] (II) config/udev: Adding input device HD-Audio Generic Front Headphone (/dev/input/event8) [ 13767.460] (II) No input driver specified, ignoring this device. [ 13767.460] (II) This device may have been added with another device file. [ 13767.461] (II) config/udev: Adding input device AT Translated Set 2 keyboard (/dev/input/event0) [ 13767.461] (**) AT Translated Set 2 keyboard: Applying InputClass "libinput keyboard catchall" [ 13767.461] (**) AT Translated Set 2 keyboard: Applying InputClass "system-keyboard" [ 13767.461] (**) AT Translated Set 2 keyboard: Applying InputClass "Keyboard Defaults" [ 13767.461] (II) Using input driver 'libinput' for 'AT Translated Set 2 keyboard' [ 13767.461] (**) AT Translated Set 2 keyboard: always reports core events [ 13767.461] (**) Option "Device" "/dev/input/event0" [ 13767.461] (**) Option "_source" "server/udev" [ 13767.461] (II) event0 - AT Translated Set 2 keyboard: is tagged by udev as: Keyboard [ 13767.461] (II) event0 - AT Translated Set 2 keyboard: device is a keyboard [ 13767.461] (II) event0 - AT Translated Set 2 keyboard: device removed [ 13767.478] (**) Option "config_info" "udev:/sys/devices/platform/i8042/serio0/input/input0/event0" [ 13767.478] (II) XINPUT: Adding extended input device "AT Translated Set 2 keyboard" (type: KEYBOARD, id 10) [ 13767.478] (**) Option "xkb_model" "pc105" [ 13767.478] (**) Option "xkb_layout" "de" [ 13767.479] (**) Option "xkb_options" "terminate:ctrl_alt_bksp" [ 13767.480] (II) event0 - AT Translated Set 2 keyboard: is tagged by udev as: Keyboard [ 13767.480] (II) event0 - AT Translated Set 2 keyboard: device is a keyboard /home/sjngm/.xsession-errors: gpg-agent[25981]: WARNUNG: "--write-env-file" ist eine veraltete Option - sie hat keine Wirkung. 
gpg-agent: Ein gpg-agent läuft bereits - ein weiterer wird nicht gestartet (xfce4-session:25974): xfce4-session-WARNING **: 17:29:12.371: gpg-agent returned no PID in the variables (xfce4-session:25974): xfce4-session-WARNING **: 17:29:12.371: xfsm_manager_load_session: Something wrong with /home/sjngm/.cache/sessions/xfce4-session-runlikehell:0, Does it exist? Permissions issue? Warning: Unsupported high keycode 372 for name <I372> ignored X11 cannot support keycodes above 255. This warning only shows for the first high keycode. Warning: Unsupported high keycode 372 for name <I372> ignored X11 cannot support keycodes above 255. This warning only shows for the first high keycode. ** (xfce4-clipman:25994): WARNING **: 17:29:13.061: Unable to register GApplication: Für die Schnittstelle org.gtk.Application auf /org/xfce/clipman wurde bereits ein Objekt exportiert (xfce4-clipman:25994): GLib-GIO-CRITICAL **: 17:29:13.061: g_application_get_is_remote: assertion 'application->priv->is_registered' failed (xfce4-clipman:25994): GLib-WARNING **: 17:29:13.061: g_set_application_name() called multiple times (xfwm4:25984): xfwm4-WARNING **: 17:29:13.411: Error waiting on vblank with DRI: Das Argument ist ungültig (wrapper-2.0:26035): Gtk-WARNING **: 17:29:14.405: Negative content width -3 (allocation 1, extents 2x2) while allocating gadget (node button, owner GtkToggleButton) (pamac-tray:26015): Gdk-CRITICAL **: 17:29:14.420: gdk_window_thaw_toplevel_updates: assertion 'window->update_and_descendants_freeze_count > 0' failed (wrapper-2.0:26072): Gtk-WARNING **: 17:29:14.420: gtk_widget_size_allocate(): attempt to allocate widget with width -3 and height 30 (xfce4-clipman:25994): Gdk-CRITICAL **: 17:29:14.441: gdk_window_thaw_toplevel_updates: assertion 'window->update_and_descendants_freeze_count > 0' failed (wrapper-2.0:26071): Gtk-WARNING **: 17:29:14.456: gtk_widget_size_allocate(): attempt to allocate widget with width -3 and height 30 (wrapper-2.0:26078): Gtk-WARNING **: 
17:29:14.531: gtk_widget_size_allocate(): attempt to allocate widget with width -3 and height 30 (wrapper-2.0:26069): Gtk-WARNING **: 17:29:14.581: gtk_widget_size_allocate(): attempt to allocate widget with width -3 and height 30 (wrapper-2.0:26076): Gtk-WARNING **: 17:29:14.592: Negative content width -1 (allocation 1, extents 1x1) while allocating gadget (node button, owner GtkToggleButton) (wrapper-2.0:26076): Gtk-WARNING **: 17:29:14.592: gtk_widget_size_allocate(): attempt to allocate widget with width -2 and height 29 /etc/lightdm/lightdm.conf (only the uncommented lines): [LightDM] run-directory=/run/lightdm [Seat:*] session-wrapper=/etc/lightdm/Xsession [XDMCPServer] [VNCServer] /etc/lightdm/lightdm-gtk-greeter.conf: [greeter] background = /usr/share/backgrounds/maia.png theme-name = Vertex-Maia icon-theme-name = Vertex-Maia font-name = Cantarell 10 xft-antialias = true xft-hintstyle = hintfull show-clock = false position = 50%,center 50%,center screensaver-timeout = 60 hide-user-image = true /var/log/lightdm/lightdm.log: [+0.00s] DEBUG: Logging to /var/log/lightdm/lightdm.log [+0.00s] DEBUG: Starting Light Display Manager 1.26.0, UID=0 PID=25883 [+0.00s] DEBUG: Loading configuration dirs from /usr/share/lightdm/lightdm.conf.d [+0.00s] DEBUG: Loading configuration dirs from /usr/local/share/lightdm/lightdm.conf.d [+0.00s] DEBUG: Loading configuration dirs from /etc/xdg/lightdm/lightdm.conf.d [+0.00s] DEBUG: Loading configuration from /etc/lightdm/lightdm.conf [+0.00s] DEBUG: Registered seat module local [+0.00s] DEBUG: Registered seat module xremote [+0.00s] DEBUG: Registered seat module unity [+0.00s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager [+0.00s] DEBUG: Monitoring logind for seats [+0.00s] DEBUG: New seat added from logind: seat0 [+0.00s] DEBUG: Seat seat0: Loading properties from config section Seat:* [+0.00s] DEBUG: Seat seat0: Starting [+0.00s] DEBUG: Seat seat0: Creating greeter session [+0.00s] DEBUG: Seat seat0: Creating display 
server of type x [+0.01s] DEBUG: Using VT 7 [+0.01s] DEBUG: Seat seat0: Starting local X display on VT 7 [+0.01s] DEBUG: XServer 0: Logging to /var/log/lightdm/x-0.log [+0.01s] DEBUG: XServer 0: Writing X server authority to /run/lightdm/root/:0 [+0.01s] DEBUG: XServer 0: Launching X Server [+0.01s] DEBUG: Launching process 25889: /usr/bin/X :0 -seat seat0 -auth /run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch [+0.01s] DEBUG: XServer 0: Waiting for ready signal from X server :0 [+0.01s] DEBUG: Acquired bus name org.freedesktop.DisplayManager [+0.01s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0 [+0.01s] DEBUG: Loading users from org.freedesktop.Accounts [+0.01s] DEBUG: User /org/freedesktop/Accounts/User1000 added [+0.07s] DEBUG: Seat seat0 changes active session to c15 [+0.60s] DEBUG: Got signal 10 from process 25889 [+0.60s] DEBUG: XServer 0: Got signal from X server :0 [+0.60s] DEBUG: XServer 0: Connecting to XServer :0 [+0.61s] DEBUG: Seat seat0: Display server ready, starting session authentication [+0.61s] DEBUG: Session pid=25902: Started with service 'lightdm-greeter', username 'lightdm' [+0.63s] DEBUG: Session pid=25902: Authentication complete with return value 0: Success [+0.63s] DEBUG: Seat seat0: Session authenticated, running command [+0.63s] DEBUG: Session pid=25902: Running command /usr/bin/lightdm-gtk-greeter [+0.63s] DEBUG: Creating shared data directory /var/lib/lightdm-data/lightdm [+0.63s] DEBUG: Session pid=25902: Logging to /var/log/lightdm/seat0-greeter.log [+0.70s] DEBUG: Activating VT 7 [+0.70s] DEBUG: Activating login1 session c47 [+0.71s] DEBUG: Seat seat0 changes active session to c47 [+0.71s] DEBUG: Session c47 is already active [+0.89s] DEBUG: Greeter connected version=1.26.0 api=1 resettable=false [+1.09s] DEBUG: Greeter start authentication for sjngm [+1.09s] DEBUG: Session pid=25945: Started with service 'lightdm', username 'sjngm' [+1.09s] DEBUG: Session pid=25945: Got 1 message(s) from PAM 
[+1.09s] DEBUG: Prompt greeter with 1 message(s) I'm probably missing that one file that you need, so please let me know what you need (and also what you don't, as I'm limited to 30k characters here...).
Update: I found another possible solution on the Nvidia forums: Delete the file ~/.config/xfce4/xfconf/xfce-perchannel-xml/displays.xml. I have the same issue and narrowed it down to the Composition Pipeline setting. When it is enabled, the XFCE desktop does not become visible; instead, the LightDM background stays on screen. Switching between TTYs forces the screen to update in a way that resolves the issue, but this is cumbersome. If you remove the "metamodes" setting from your X configuration, the problem should go away, but you will again have screen tearing. What worked for me was to run the following command on session startup (you can add it in XFCE's Session and Startup settings) to force the screen to update with the Composition Pipeline enabled: nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceCompositionPipeline = On }" Note that you might want to add ForceFullCompositionPipeline = On as well. This is obviously a workaround; I don't know what the root cause is or how to fix it. Other display managers seem to work fine with the setting enabled.
XFCE/LightDM: Tearing-Fix vs. No Desktop After Logging in
1,455,898,365,000
I have stripped my full-desktop Debian 7 x64 down to the bare minimum, leaving just Xorg and LightDM for a kiosk application. I have changed the LightDM configuration to auto-login a user. When I boot, this works fine: Xorg starts up and the user is logged in without a prompt. However, upon login an xterm window starts in the upper left-hand corner of the screen. I have tried in vain to figure out what actually started that xterm! I would of course want to replace it with my kiosk binary wrapped in a watchdog script.
In Debian, xterm is automatically started if no window manager is selected. Even if you have not the slightest idea who started xterm, the easiest way to find out is: as root, rename /usr/bin/xterm to /usr/bin/xterm_. Create a /usr/bin/xterm script: #!/bin/bash ( echo $$; ps -f --forest ) >/tmp/xterm.txt Make it executable (chmod +x /usr/bin/xterm), log in again, then take a look at the output in /tmp/xterm.txt.
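As a side note, the shim trick generalizes: any binary looked up via $PATH can be temporarily replaced by a logging stub. Below is a sketch that rehearses the same idea in a scratch directory prepended to $PATH, so /usr/bin stays untouched; the stub body and the /tmp/xterm.txt log path mirror the answer's script.

```shell
# Create a scratch dir and put a logging stub named "xterm" in it.
tmpdir=$(mktemp -d)
cat > "$tmpdir/xterm" <<'EOF'
#!/bin/bash
# Record our PID and the process tree so the caller can be identified.
( echo "$$"; ps -f --forest ) > /tmp/xterm.txt
EOF
chmod +x "$tmpdir/xterm"

# With the scratch dir first on $PATH, callers run the stub instead.
PATH="$tmpdir:$PATH" xterm
head -n 1 /tmp/xterm.txt   # first line is the stub's PID
```

Applying it system-wide, as in the answer, just swaps the scratch directory for /usr/bin (and requires renaming the real binary back afterwards).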
Who started xterm in my debian xterm+lightdm kiosk system?
1,455,898,365,000
I have just installed Arch Linux on a virtual machine and I managed to install LightDM by following the instructions given in https://wiki.archlinux.org/index.php/LightDM https://wiki.archlinux.org/index.php/Display_Manager but LightDM looks like this. However, I want it to look like the default one in Ubuntu. How can that be done? PS: I am running Xfce 4 as the desktop environment.
[The ArchWiki looks dead currently, so I don't know what is contained in the instructions you linked to.] To change the look of LightDM, you need to install a theme (a greeter) and configure it. This page suggests that the relevant Arch packages might be lightdm-unity-greeter or lightdm-webkit-greeter.
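Once a greeter package is installed, LightDM is pointed at it in /etc/lightdm/lightdm.conf. A minimal sketch, assuming the webkit greeter's session name is lightdm-webkit-greeter (check what your package actually ships):

```ini
[Seat:*]
greeter-session=lightdm-webkit-greeter
```

On older LightDM releases the section is named [SeatDefaults] rather than [Seat:*].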
ubuntu like lightdm in arch linux
1,562,851,255,000
When typing backspace in an empty password box in LightDM, a noisy beep sound comes out. Is there a way to mute this beeping?
One solution is to turn off the beep with xset b off. This can be made to take effect for the LightDM greeter by adding it to the greeter-setup-script line of the lightdm.conf file at /etc/lightdm/lightdm.conf. Edit your /etc/lightdm/lightdm.conf config file to look like the following: [Seat:*] # ... greeter-setup-script=xset b off Gotcha: Ensure that you edit the greeter-setup-script setting in your [Seat:*] section, otherwise the setting won't take effect—see the https://linux.debian.bugs.dist.narkive.com/2p9KbtdF/bug-686264-lightdm-greeter-setup-script-not-executed#post6 post.
Disable beeping in lightdm
1,562,851,255,000
I'm suspending my laptop very often, by manually calling the pm-suspend command. Most of the time it works without problems. However, sometimes it resumes with a blank screen. Either a reboot or issuing sudo /etc/init.d/lightdm restart on TTY1 (Ctrl+Alt+F1) makes it work, but I lose all of my unsaved documents and working layout, of course. Is there a way to make LightDM start on TTY1 without restarting it?
Root of the problem It turns out that the exact problem was issuing a screen lock command while the laptop's lid is closed: sleep 5s; physlock -d Run the above command and immediately close the laptop lid. Wait about 10 seconds, then open the lid. A password prompt will wait for your password input. When the correct password is entered, you'll end up with a totally blank screen. Actual Solution The current workaround is running xrandr --auto on TTY7, within the same my-suspend script: echo "Locking display" physlock -d echo "suspending..." pm-suspend echo "Performing workaround for LightDM bug" while :; do xrandr --auto && break || sleep 1s done Answer to the original issue When this xrandr --auto command is issued on another TTY, it doesn't work even though DISPLAY=:0 is set beforehand. However, the following procedure works: Switch to TTY1 (Ctrl + Alt + F1): Issue the following command: $ while :; do DISPLAY=:0 xrandr --auto && break || sleep 1s; done This command will keep failing every second with the following error: xrandr: Configure crtc 0 failed xrandr: Configure crtc 0 failed xrandr: Configure crtc 0 failed ... Switch to TTY7 (Ctrl + Alt + F7) Wait 1 second Voila!
LightDM sometimes resumes with a blank display after suspend
1,562,851,255,000
As you can see in the image below, on login lightdm has a certain desktop environment pre-selected, and also has the option to switch over to others. Where in the file system is this information stored? I do not see it in the /etc/lightdm folder, nor in /usr/share/lightdm, and from a cursory glance /etc/X11 doesn't contain it either.
On Debian and derivatives, display managers are supposed to offer to the user the session types described in .desktop files in /usr/share/xsessions and /usr/share/wayland-sessions. (Source: empirical observation and this thread, I can't find the authoritative document.) Each session manager or window manager package is supposed to drop a file in these directories. Display managers may offer additional choices, for example if they have a special affinity to some particular session manager, or if they have a guest session feature. And some (at least the old-fashioned xdm) don't give the user a choice. But lightdm does follow the general rule.
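To illustrate the mechanism, here is a small sketch of the harvest a display manager performs over such a directory — run against a scratch directory holding one invented session file rather than the real /usr/share/xsessions:

```shell
dir=$(mktemp -d)

# An invented session entry, shaped like /usr/share/xsessions/xfce.desktop.
cat > "$dir/xfce.desktop" <<'EOF'
[Desktop Entry]
Name=Xfce Session
Exec=startxfce4
Type=Application
EOF

# Build the session chooser: one "Name -> command" pair per .desktop file.
for f in "$dir"/*.desktop; do
    printf '%s -> %s\n' "$(sed -n 's/^Name=//p' "$f")" \
                        "$(sed -n 's/^Exec=//p' "$f")"
done
# prints: Xfce Session -> startxfce4
```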
Where does lightdm store its desktop environment settings?
1,562,851,255,000
I have an Ubuntu 19.10-based distro with LightDM installed. I changed the username recently, but lightdm keeps displaying the old username. Is there a way I can fix this? I have tried fiddling with /etc/lightdm/lightdm.conf with no success. Attached are pictures demonstrating what I am talking about. Is this a lightdm issue? Or some other configuration that hasn't been modified? Is there a way to fix this? Thanks!
It looks like the name batcastle is the username, while live is the so-called finger name (stored in the GECOS field). Both are stored in the file /etc/passwd. The username (that is, the login name) is somewhat more complicated to change, because most likely you want to have the home directory called /home/$(whoami). To change the username, use usermod. usermod -l newusername -d /home/newusername -m oldusername You may want to change the ownerships of the files and directories accordingly. This should not be necessary, because on the filesystem, the owner information is only stored as the numerical ID: chown -R newusername /home/newusername To change the finger name, use chfn: chfn -f 'John Doe' username For more details, please refer to the man pages of these commands.
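To see where each name lives, split a passwd entry on ':' — field 1 is the login name, field 5 (the GECOS field) is the finger name that the greeter displays. The sample line below is invented from the names in the question:

```shell
line='batcastle:x:1000:1000:live:/home/batcastle:/bin/bash'
echo "$line" | awk -F: '{ printf "username=%s fingername=%s\n", $1, $5 }'
# prints: username=batcastle fingername=live
```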
Lightdm displaying wrong username
1,562,851,255,000
Problem I'm transitioning the configuration of my multihead monitors from using some rather ugly scripts to /etc/X11/xorg.conf.d/10-monitor.conf. My layout has two monitors of 1920x1200, one rotated left. The scripts were able to configure this just fine using the following command: xrandr \ --output "DP-1" \ --mode 1920x1200 \ --pos 1200x360 \ --rotate normal \ --primary \ --output "DP-2" \ --mode 1920x1200 \ --pos 0x0 \ --rotate left I've tried to translate this to configuration: Section "Monitor" Identifier "DP-1" Option "Primary" "true" Option "Position" "1200 360" EndSection Section "Monitor" Identifier "DP-2" Option "Rotate" "left" EndSection This unfortunately has the side effect of setting the resolution of the rotated screen to 1600×1200, even though the preferred mode is still 1920×1200: $ xrandr […] DP-2 connected 1200x1600+0+0 left (normal left inverted right x axis y axis) 518mm x 324mm 1920x1200 59.95 + 1920x1080 60.00 1600x1200 60.00* […] How can I write configuration which will use the rotated monitor's preferred resolution of 1920x1200? Non-solutions Explicitly setting the screen size to fit both monitors: Section "Screen" Driver "radeon" SubSection "Display" Virtual 3120 1920 EndSubSection EndSection Explicitly setting the preferred mode for DP-2 (Option "PreferredMode" "1920x1200") caused the other screen to be reduced to 1600×1200, so that's probably a clue. Workaround Force the resolution by using xrandr --output DP-2 --mode 1920x1200.
What worked in the end was to explicitly set the virtual screen size and the preferred mode for both of the screens: Section "Monitor" Identifier "DP-1" Option "Primary" "true" Option "Position" "1200 360" Option "PreferredMode" "1920x1200" EndSection Section "Monitor" Identifier "DP-2" Option "Rotate" "left" Option "PreferredMode" "1920x1200" EndSection Section "Screen" Driver "radeon" SubSection "Display" Virtual 3120 1920 EndSubSection EndSection
X11 ignores preferred mode
1,562,851,255,000
I'm using pam_exec to do some root tasks. They take some time, and I'd like to tell the user to wait a moment. I'm doing the tasks there, and not later, because I need: Root permissions To rsync the home files from a server BEFORE anything gets loaded on the desktop My problem is: I can't show a window displaying anything. I'm loading this script from pam_exec to test whether this is a $DISPLAY issue or a user issue: #!/bin/bash case "$PAM_TYPE" in 'open_session') echo Plain exec &> /tmp/pamexec_output yad &>> /tmp/pamexec_output echo Set display &>> /tmp/pamexec_output DISPLAY=:0 yad &>> /tmp/pamexec_output echo Set user lightdm &>> /tmp/pamexec_output sudo -u lightdm yad &>> /tmp/pamexec_output echo Set user $PAM_USER &>> /tmp/pamexec_output sudo -u $PAM_USER yad &>> /tmp/pamexec_output echo Set user lightdm and display &>> /tmp/pamexec_output DISPLAY=:0 sudo -u lightdm yad &>> /tmp/pamexec_output echo Set user $PAM_USER and display &>> /tmp/pamexec_output DISPLAY=:0 sudo -u $PAM_USER yad &>> /tmp/pamexec_output echo PS AUX &>> /tmp/pamexec_output ps aux &>> /tmp/pamexec_output ;; esac I couldn't find my answer. No windows are shown, and this is the output: Plain exec No protocol specified No protocol specified (yad:25314): Gtk-WARNING **: cannot open display: :0 Set display No protocol specified No protocol specified (yad:25317): Gtk-WARNING **: cannot open display: :0 Set user lightdm No protocol specified No protocol specified (yad:25321): Gtk-WARNING **: cannot open display: :0 Set user jorge.suarez No protocol specified No protocol specified (yad:25325): Gtk-WARNING **: cannot open display: :0 Set user lightdm and display No protocol specified No protocol specified (yad:25328): Gtk-WARNING **: cannot open display: :0 Set user jorge.suarez and display No protocol specified No protocol specified (yad:25331): Gtk-WARNING **: cannot open display: :0 As a bonus, here is the final output, from ps aux.
Maybe that could help: PS AUX USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.2 26684 2488 ? Ss Jan24 0:01 /sbin/init root 2 0.0 0.0 0 0 ? S Jan24 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? S Jan24 0:00 [ksoftirqd/0] root 5 0.0 0.0 0 0 ? S Jan24 0:00 [kworker/u:0] root 6 0.0 0.0 0 0 ? S Jan24 0:00 [migration/0] root 7 0.0 0.0 0 0 ? S Jan24 0:00 [watchdog/0] root 8 0.0 0.0 0 0 ? S< Jan24 0:00 [cpuset] root 9 0.0 0.0 0 0 ? S< Jan24 0:00 [khelper] root 10 0.0 0.0 0 0 ? S Jan24 0:00 [kdevtmpfs] root 11 0.0 0.0 0 0 ? S< Jan24 0:00 [netns] root 12 0.0 0.0 0 0 ? S Jan24 0:00 [sync_supers] root 13 0.0 0.0 0 0 ? S Jan24 0:00 [bdi-default] root 14 0.0 0.0 0 0 ? S< Jan24 0:00 [kintegrityd] root 15 0.0 0.0 0 0 ? S< Jan24 0:00 [kblockd] root 16 0.0 0.0 0 0 ? S< Jan24 0:00 [ata_sff] root 17 0.0 0.0 0 0 ? S Jan24 0:00 [khubd] root 18 0.0 0.0 0 0 ? S< Jan24 0:00 [md] root 21 0.0 0.0 0 0 ? S Jan24 0:00 [khungtaskd] root 22 0.0 0.0 0 0 ? S Jan24 0:00 [kswapd0] root 23 0.0 0.0 0 0 ? SN Jan24 0:00 [ksmd] root 24 0.0 0.0 0 0 ? SN Jan24 0:00 [khugepaged] root 25 0.0 0.0 0 0 ? S Jan24 0:00 [fsnotify_mark] root 26 0.0 0.0 0 0 ? S Jan24 0:00 [ecryptfs-kthrea] root 27 0.0 0.0 0 0 ? S< Jan24 0:00 [crypto] root 35 0.0 0.0 0 0 ? S< Jan24 0:00 [kthrotld] root 36 0.0 0.0 0 0 ? S Jan24 0:00 [scsi_eh_0] root 37 0.0 0.0 0 0 ? S Jan24 0:08 [scsi_eh_1] root 38 0.0 0.0 0 0 ? S Jan24 0:00 [kworker/u:2] root 59 0.0 0.0 0 0 ? S< Jan24 0:00 [devfreq_wq] root 206 0.0 0.0 0 0 ? S Jan24 0:02 [jbd2/vda5-8] root 207 0.0 0.0 0 0 ? S< Jan24 0:00 [ext4-dio-unwrit] root 227 0.0 0.1 30844 1256 ? S Jan24 0:00 mountall --daemon root 302 0.0 0.0 17224 640 ? S Jan24 0:00 upstart-udev-bridge --daemon root 305 0.0 0.2 24524 2172 ? Ss Jan24 0:00 /sbin/udevd --daemon root 431 0.0 0.0 0 0 ? S< Jan24 0:00 [kpsmoused] root 504 0.0 0.1 19192 1032 ? Ss Jan24 0:00 rpcbind -w root 513 0.0 0.0 0 0 ? S Jan24 0:00 [jbd2/vda6-8] root 514 0.0 0.0 0 0 ? S< Jan24 0:00 [ext4-dio-unwrit] root 532 0.0 0.0 15180 404 ? 
S Jan24 0:00 upstart-socket-bridge --daemon root 570 0.0 0.0 0 0 ? S Jan24 0:00 [jbd2/vda7-8] root 573 0.0 0.0 0 0 ? S< Jan24 0:00 [ext4-dio-unwrit] root 655 0.0 0.0 0 0 ? S< Jan24 0:00 [rpciod] root 658 0.0 0.0 0 0 ? S< Jan24 0:00 [nfsiod] root 669 0.0 0.2 49948 2716 ? Ss Jan24 0:00 /usr/sbin/sshd -D 102 679 0.0 0.2 27184 2536 ? Ss Jan24 0:17 dbus-daemon --system --fork --activation=upstart root 709 0.0 0.3 79036 3096 ? Ss Jan24 0:00 /usr/sbin/modem-manager root 717 0.0 0.1 21180 1692 ? Ss Jan24 0:00 /usr/sbin/bluetoothd syslog 733 0.0 0.1 249464 1404 ? Sl Jan24 0:02 rsyslogd -c5 root 738 0.0 0.5 229848 5260 ? Ssl Jan24 0:05 NetworkManager root 746 0.0 0.0 0 0 ? S< Jan24 0:00 [krfcommd] root 756 0.0 0.6 188336 5844 ? Sl Jan24 0:23 /usr/lib/policykit-1/polkitd --no-debug statd 773 0.0 0.1 21496 1312 ? Ss Jan24 0:00 rpc.statd -L avahi 780 0.0 0.1 34396 1716 ? S Jan24 0:00 avahi-daemon: registering [ctdeskxyy.local] avahi 781 0.0 0.0 34268 472 ? S Jan24 0:00 avahi-daemon: chroot helper root 787 0.0 0.1 7256 1508 ? S Jan24 0:00 /sbin/dhclient -d -4 -sf /usr/lib/NetworkManager/nm-dhcp-client.action -pf /var/run/sendsigs.omit.d/network-manager.dhclient-eth0.pid -lf /var/lib/dhcp/dhclient-05584152-142d-425d-b5b9-1e63697e0637-eth0.lease -cf /var/run/nm-dhclient-eth0.conf eth0 colord 808 0.0 1.2 491876 11588 ? Sl Jan24 0:01 /usr/lib/x86_64-linux-gnu/colord/colord root 938 0.0 0.0 19980 932 tty4 Ss+ Jan24 0:00 /sbin/getty -8 38400 tty4 root 948 0.0 0.0 19980 940 tty5 Ss+ Jan24 0:00 /sbin/getty -8 38400 tty5 root 961 0.0 0.2 69768 1904 tty3 Ss Jan24 0:00 /bin/login -- root 964 0.0 0.0 19980 936 tty6 Ss+ Jan24 0:00 /sbin/getty -8 38400 tty6 root 983 0.0 0.0 4452 812 ? Ss Jan24 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket root 984 0.0 0.1 19104 1024 ? Ss Jan24 0:00 cron daemon 985 0.0 0.0 16900 372 ? Ss Jan24 0:00 atd root 991 0.0 0.3 262560 3396 ? Ssl Jan24 0:01 lightdm whoopsie 993 0.0 0.5 202176 5024 ? Ssl Jan24 0:01 whoopsie nobody 1001 0.0 0.1 33016 1252 ? 
S Jan24 0:00 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts --bind-interfaces --pid-file=/var/run/sendsigs.omit.d/network-manager.dnsmasq.pid --listen-address=127.0.0.1 --conf-file=/var/run/nm-dns-dnsmasq.conf --cache-size=0 --proxy-dnssec root 1023 0.0 0.1 701376 1192 ? Ssl Jan24 0:00 /usr/sbin/nscd nslcd 1111 0.0 0.2 443796 1916 ? Ssl Jan24 0:00 /usr/sbin/nslcd root 1254 0.0 0.4 586496 4152 ? Sl Jan24 0:08 /usr/sbin/console-kit-daemon --no-daemon root 1362 0.0 0.0 0 0 ? S Jan24 0:01 [flush-253:0] root 1534 0.0 0.2 76052 2032 tty1 Ss Jan24 0:00 /bin/login -- root 1536 0.0 0.4 219940 4272 ? Sl Jan24 0:01 /usr/lib/upower/upowerd rtkit 1567 0.0 0.1 160644 1136 ? SNl Jan24 0:00 /usr/lib/rtkit/rtkit-daemon 1000 1782 0.0 0.4 19556 4676 tty1 S Jan24 0:00 -bash root 1930 0.0 0.2 66712 1904 tty1 S Jan24 0:00 sudo su root 1931 0.0 0.1 66472 1816 tty1 S Jan24 0:00 su root 1941 0.0 0.2 17260 2324 tty1 S+ Jan24 0:01 bash root 1965 0.0 0.0 0 0 ? S Jan24 0:00 [lockd] root 2505 0.0 0.3 193524 3628 ? Sl Jan24 0:01 /usr/lib/udisks/udisks-daemon root 2506 0.0 0.0 45512 804 ? S Jan24 0:00 udisks-daemon: not polling any devices root 5436 0.0 0.4 98476 4256 ? Ss 08:01 0:00 /usr/sbin/cupsd -F root 11778 0.0 0.0 0 0 ? S< 08:01 0:00 [xfs_mru_cache] root 11779 0.0 0.0 0 0 ? S< 08:01 0:00 [xfslogd] root 11780 0.0 0.0 0 0 ? S< 08:01 0:00 [xfsdatad] root 11781 0.0 0.0 0 0 ? S< 08:01 0:00 [xfsconvertd] root 11784 0.0 0.0 0 0 ? S 08:01 0:00 [jfsIO] root 11785 0.0 0.0 0 0 ? S 08:01 0:00 [jfsCommit] root 11786 0.0 0.0 0 0 ? S 08:01 0:00 [jfsSync] root 13718 0.0 0.0 0 0 ? Z Jan24 0:00 [lightdm] <defunct> root 14197 0.0 0.1 24520 1640 ? S 08:03 0:00 /sbin/udevd --daemon root 14198 0.0 0.0 0 0 ? S< 08:03 0:00 [iprt] root 16911 0.0 0.2 69768 1904 tty2 Ss Jan24 0:00 /bin/login -- root 18750 0.0 0.3 124052 3712 ? Sl 09:12 0:00 /usr/lib/accountsservice/accounts-daemon root 21715 0.0 0.0 0 0 ? Z 09:20 0:00 [lightdm] <defunct> 4004 23593 0.0 0.3 207504 3592 ? 
Sl 09:35 0:00 /usr/lib/deja-dup/deja-dup/deja-dup-monitor root 24087 0.0 0.0 0 0 ? S 09:42 0:00 [kworker/0:1] root 24355 0.0 0.0 0 0 ? S 09:47 0:00 [kworker/0:2] root 24581 0.1 2.6 122056 24912 ? SN 09:48 0:00 /usr/bin/python /usr/sbin/aptd root 25026 0.1 0.0 0 0 ? S 09:52 0:00 [kworker/0:0] root 25134 1.8 2.6 148396 25280 tty7 Ss+ 09:53 0:00 /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch root 25226 0.0 0.3 155028 3136 ? Sl 09:53 0:00 lightdm --session-child 12 385 lightdm 25258 0.1 0.5 344020 5200 ? S<l 09:53 0:00 /usr/bin/pulseaudio --start --log-target=syslog lightdm 25263 0.0 0.3 95984 3240 ? S 09:53 0:00 /usr/lib/pulseaudio/pulse/gconf-helper root 25313 0.0 0.1 16516 1376 ? Ss 09:54 0:00 /bin/bash /usr/local/lib/puppet-files/gestion-sesiones.sh log=/tmp/cosaaaa root 25334 0.0 0.1 14144 1020 ? R 09:54 0:00 ps aux 4004 26833 0.0 0.5 362740 5676 ? S<l Jan24 0:01 /usr/bin/pulseaudio --start --log-target=syslog 4004 26836 0.0 0.3 95968 3256 ? S Jan24 0:00 /usr/lib/pulseaudio/pulse/gconf-helper root 28396 0.0 0.0 0 0 ? Z Jan24 0:00 [lightdm] <defunct> 4004 29073 0.0 0.4 19536 4660 tty2 S+ Jan24 0:00 -bash 4004 29256 0.0 0.4 19488 4468 tty3 S+ Jan24 0:00 -bash Another interesting finding. This script: #!/bin/bash case "$PAM_TYPE" in 'open_session') ( sleep 5 yad &> /tmp/pam_output ) & ;; esac It works, but the window is being showed way after the desktop is loaded. So that won't help. It also works at logout, no problem there. Any ideas on how to solve this problem?
Instead of having the notifications occur in pam_exec, you could have pam_exec write to a file (like you are, with /tmp/pam_output), and have a separate daemon executed by lightdm before the user logs in, which monitors /tmp/pam_output and pops up a note when it sees new output. The background process run by lightdm would have the X environment and X11 cookies set up already, and would be run in the context of the lightdm user instead of root, which is more secure anyway. See this documentation on starting a script when the greeter starts.
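As a concrete sketch of that split — note that the file path, function names, and the use of notify-send here are all hypothetical illustrations, not part of the answer — the pam_exec side could append one line per event, and a watcher started from lightdm could tail the file and pop up notifications:

```shell
#!/bin/sh
# Hypothetical sketch: pam_exec appends events to a log file; a watcher
# started by lightdm (e.g. via greeter-setup-script) tails it and shows
# notifications. PAM_LOG, log_pam_event and watch_pam_events are invented
# names for illustration.

PAM_LOG=${PAM_LOG:-/tmp/pam_output}

# Run from pam_exec as root; only writes to the log, no X access needed.
log_pam_event() {
    printf '%s %s %s\n' "$(date +%s)" "${PAM_TYPE:-unknown}" "${PAM_USER:-?}" \
        >> "$PAM_LOG"
}

# Run as the lightdm user, where DISPLAY and the X authority already exist.
watch_pam_events() {
    tail -n 0 -F "$PAM_LOG" | while read -r _ts type user; do
        # notify-send is one option; yad (as in the question) would also work
        notify-send "PAM event" "$type for $user"
    done
}
```

The point of the split is that log_pam_event runs in PAM's root context with no display, while watch_pam_events runs in a context that already has the X environment.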
Display dialog from pam_exec environment at login?
1,562,851,255,000
I've been using Awesome+LightDM with the GTK greeter on Arch Linux for some years, and I'm in the process of moving to NixOS. One issue with this has been the screen locker. I've mapped Windows-l to light-locker-command --lock. When activating that the screen goes black, then turns off. To get back to LightDM I have to press Ctrl-Alt-F7 and wait for about 10 seconds while some weird message about "being redirected to the unlock dialog" displays. I've tried installing and enabling both the "gtk" and "mini" greeters (not at the same time), but after restarting X neither of these seems to be used. How do I set either of them up? The relevant part of the configuration:

services = {
  xserver = {
    displayManager.lightdm.enable = true;
    enable = true;
    layout = "us";
    libinput.enable = true;
    windowManager = {
      awesome.enable = true;
      default = "awesome";
    };
    xkbOptions = "compose:caps";
    xkbVariant = "dvorak-alt-intl";
  };
};

I also tried enabling programs.slock, but that doesn't integrate with lightdm.
Set up a locker:

services.xserver.xautolock.enable = true;

Then install xlockmore and use it from your awesome configuration:

awful.key({ modkey }, "l",
    function () awful.spawn("xautolock -locknow") end,
    {description = "lock the screen", group = "client"}),
How to configure a useable screen locker in Awesome+LightDM+NixOS?
1,562,851,255,000
I've seen similar questions many times on AskUbuntu, but most answers were about unity-helpers or gconf ...canonical... etc., so that approach doesn't actually seem to work here. The problem is that I decided to move from GDM to LightDM. Yep, it works, but I can't set a background image for it - I always get a black background color instead of the picture. My configs:

tempos@parmasse ~ $ cat /etc/lightdm/lightdm-gtk-greeter.conf
#
# logo = Logo file to use, either an image absolute path, or a path relative to the greeter data directory
# background = Background file to use, either an image path or a color (e.g. #772953)
# theme-name = GTK+ theme to use
# icon-theme-name = Icon theme to use
# font-name = Font to use
# xft-antialias = Whether to antialias Xft fonts (true or false)
# xft-dpi = Resolution for Xft in dots per inch (e.g. 96)
# xft-hintstyle = What degree of hinting to use (hintnone, hintslight, hintmedium, or hintfull)
# xft-rgba = Type of subpixel antialiasing (none, rgb, bgr, vrgb or vbgr)
# show-language-selector (true or false)
#
[greeter]
#logo=
background=/usr/share/backgrounds/lightdm.jpg
#background=#772953
#theme-name=Adwaita
#icon-theme-name=gnome
#font-name=
#xft-antialias=
#xft-dpi=
#xft-hintstyle=
#xft-rgba=
show-language-selector=true

The file itself:

tempos@parmasse ~ $ ls -la /usr/share/backgrounds/lightdm.jpg
-rwxrwxrwx 1 root root 1362684 авг 14 12:36 /usr/share/backgrounds/lightdm.jpg
Thanks all. It seems that it was some bug - in lightdm itself (meaning package-specific, or in some libraries) - or possibly it was simply installed with some errors/bugs. I'm currently trying to install many different things, like compiz, awesome, enlightenment, lightdm and others, so I can't be sure. The fact is that today both lightdm and lightdm-gtk-greeter received updates, and this fixed the background problem even with the original images and config.
Change lightdm background
1,562,851,255,000
I'm trying to get LightDM to work, but for some reason it just shows a black screen when I get to the user selection screen; GDM works fine though. I followed the ArchWiki doc and it's strange since it didn't involve any complex configuration:

- Installed lightdm from yaourt (no errors)
- In /etc/inittab, changed d:3:initdefault: to d:5:initdefault: and x:5:respawn:/usr/sbin/gdm -nodaemon to x:5:respawn:/usr/sbin/lightdm >& /dev/null

Not sure if it's something related to VirtualBox or the actual install. Thanks in advance!
>&/dev/null is bash syntax. If you have ash and not bash as /bin/sh (I don't know which is the default on Arch), you need to write it the portable way: >/dev/null 2>&1 (or >/dev/null 2>/dev/null, which opens /dev/null twice). Note that the order matters: 2>&1 >/dev/null would copy stderr to the original stdout before stdout is redirected, so stderr would not be discarded. Check the system logs (ls -ltr /var/log, look at the recently modified files after you modify /etc/inittab). I might have guessed the problem wrong; the logs are likely to have enough information to find what's really going wrong.
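To see why the order of the two redirections matters, here is a small self-contained illustration (not from the answer; the function name is invented):

```shell
#!/bin/sh
# Order-sensitivity of POSIX redirections: redirections are applied left to
# right, and N>&M duplicates fd M as it points *right now*.

noisy() {
    echo "to stdout"
    echo "to stderr" >&2
}

# Portable replacement for bash's `>& /dev/null`: point stdout at /dev/null
# first, then point stderr at the same place as stdout.
both_discarded=$( { noisy >/dev/null 2>&1; } 2>&1 )

# Wrong order: 2>&1 copies stderr to the *current* stdout (here, the command
# substitution's pipe) before stdout itself is pointed at /dev/null.
stderr_leaks=$( noisy 2>&1 >/dev/null )

echo "both_discarded=[$both_discarded]"   # empty: everything was discarded
echo "stderr_leaks=[$stderr_leaks]"       # "to stderr": stderr escaped
```

Running it shows that only the first form silences both streams, which is why `>/dev/null 2>&1` is the portable equivalent of bash's `>& /dev/null`.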
LightDM shows a black screen in my Arch Guest VM
1,562,851,255,000
I run XFCE on Debian with two monitors next to each other. The default positioning of the monitors is mixed up. I.e., when the mouse leaves the left edge of the physically left monitor it isn't blocked; instead the mouse appears at the right edge of the physically right monitor. Of course I want it the other way around: the physically "outer" edges of the monitors should block, while the physically "inner" edges are pass-through. When logged in, I swapped the position of the monitors in the display settings of XFCE (using xfce4-display-settings) and it works flawlessly. But the problem persists on the login screen, which I think lightdm is in charge of. What setting do I need to change to affect the monitor arrangement on the login screen?
Generating the command for the adequate configuration

Install the graphical tool arandr: besides allowing you to change the current user's layout easily, it can save this current layout in the form of a shell script that runs the xrandr command with all the required parameters to recreate the same layout. Example of generated "configuration":

#!/bin/sh
xrandr --output eDP-1 --mode 1920x1080 --pos 0x0 --rotate normal --output DP-1-1 --primary --mode 1920x1080 --pos 1920x0 --rotate normal --output HDMI-1 --off --output DP-1-3 --off --output DP-1-2 --off

Using the configuration within lightdm.conf

The parameter display-setup-script, usually described in a comment in lightdm.conf:

# display-setup-script = Script to run when starting a greeter session (runs as root)

can run such a command. Its environment will already be configured properly (e.g.: DISPLAY is set etc.). Just copy the generated configuration script into /etc/lightdm/ with appropriate permissions and execute it from LightDM within a Seat block. For example, if the script above is copied, kept executable and named /etc/lightdm/dp-right-of-edp.sh, then within the default [Seat:*] block (or possibly [SeatDefaults], depending on what's configured in the local installation), add this line:

display-setup-script = /etc/lightdm/dp-right-of-edp.sh

Of course it's possible to add logic to the script to dynamically select between multiple sets of configurations using any tool for probing (including xrandr itself, despite its output not being easy to parse). It should also be possible to restrict the Seat where such a configuration is applied, so it's not applied on remote XDMCP: this configuration could then instead be moved under a block named [Seat:seat0], which appears to be the main physical seat, instead of the default [Seat:*].
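For the dynamic selection mentioned at the end, one hypothetical building block (the function name and the layout-selection sketch are invented for illustration, not from the answer) is a tiny filter that pulls the connected output names out of `xrandr --query`-style text:

```shell
#!/bin/sh
# Hypothetical helper for a dynamic display-setup-script. Reads
# `xrandr --query`-style lines on stdin and prints one connected output
# name per line, so the calling script can decide which layout to apply.

connected_outputs() {
    awk '$2 == "connected" { print $1 }'
}

# Illustrative use inside a display-setup-script (commented out so this
# sketch runs without an X server):
#   case "$(xrandr --query | connected_outputs | sort | paste -sd+ -)" in
#       DP-1-1+eDP-1) exec /etc/lightdm/dp-right-of-edp.sh ;;
#       *)            : ;;   # unknown combination: keep the default layout
#   esac
```

Keeping the probing in a separate filter like this makes the selection logic testable without a running X server.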
Lightdm multi screen arrangement
1,562,851,255,000
I'm using Debian Buster where I have Kerberos, LDAP and SSSD working. I was mounting my home directory on the client using NFS; however, I realized it was insecure, so I implemented Kerberos mounting. However, when trying to log in through lightdm on boot, the screen goes black and boots me back to the lightdm login screen with no error. I found this in /var/log/syslog:

Error reading existing Xauthority: Failed to open file “/home/ben/.Xauthority”: Permission denied
Error writing X authority: Failed to open X authority /home/ben/.Xauthority: Permission denied

I logged in as root from tty1 then did su ben, ran kinit, and it seems I can't read/write to any file in my home directory that is owned by me - only ones with read set on other. Here is my /etc/exports from my server:

/home/ 192.168.16.0/24(rw,sec=krb5p,sync,fsid=0,crossmnt,no_subtree_check)

Here is my /etc/fstab:

192.168.16.20:/home /home nfs defaults,exec 0 0

Client's keytab file as requested:

host/client@DOMAIN
host/client@DOMAIN
nfs/client@DOMAIN
nfs/client@DOMAIN

NFS principals in kadmin on the server:

nfs/server@DOMAIN
nfs/client@DOMAIN

I've been debugging this for some time and I'm really struggling to get anywhere. The mount looks like it's mounted correctly. My user has a Kerberos ticket. The permissions look perfect and I can read/write with the same user just fine on the server. Please let me know if you need any more information to help fix this issue.

Update

I found this in the auth log of the server when I try to log in on the client:

NEEDED_PREAUTH: ben@DOMAIN for krbtgt/DOMAIN@DOMAIN, Additional pre-authentication required
ISSUE: authtime 1622558991, etypes {rep=18 tkt=18 ses=18}, ben@DOMAIN for krbtgt/DOMAIN@DOMAIN

However, I don't know why, as I'm running NTP on the server and have ntpdate on the client pointing to the server. Also, if I run watch -n 1 date -R on the client and server and place the terminal windows side by side, they show the exact same time.
This error also appears when authenticating with kinit, so I'm not sure if it's related to the issue.
So I never found an error pointing me in the right direction. However, I suspected id mapping was at fault due to my user showing uid 1000 and the uid on the home directory showing 1000 too. After messing with config files and rebooting the server and client several times, I solved the issue.

The Solution

Add the following lines to /etc/idmapd.conf on the server, in the [General] section:

Domain = domain
Local-Realms = DOMAIN
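A quick client-side sanity check for this kind of id-mapping problem — the function name here is invented, this is just a sketch — is to compare the uid that stat reports for a file you own with your own uid; with broken NFSv4 id mapping the owner typically shows up as nobody (65534) or 4294967294 instead:

```shell
#!/bin/sh
# Hypothetical sanity check for NFSv4 id mapping: a file you own on the
# server should stat as your own uid on the client, not as nobody (65534)
# or 4294967294. Uses GNU coreutils stat (-c %u), as on Debian.

owned_by_me() {
    path=$1
    [ "$(stat -c %u "$path")" = "$(id -u)" ]
}

# Example use on the client, after kinit:
#   owned_by_me "$HOME" || echo "id mapping looks broken for $HOME"
```

Running this against the NFS-mounted home directory before and after editing idmapd.conf gives a fast way to confirm whether the mapping fix took effect.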
Can't read ~/.Xauthority after implementing NFS Kerberos mount
1,562,851,255,000
I'm trying to start awesome windows manager as a subprocess of ssh-agent. It worked when I used startx (ssh-agent startx). But now I'm trying to make it work under lightdm. lightdm starts /usr/bin/xinitrcsession-helper: #!/bin/bash exec $HOME/.xinitrc ~/.xinitrc: ssh-agent awesome And what I get is: 509 1 lightdm /usr/bin/lightdm 526 509 Xorg /usr/lib/Xorg :0 -seat seat0 -auth /run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch 877 509 lightdm lightdm --session-child 14 21 1003 877 xinitrcse /bin/bash /usr/bin/xinitrcsession-helper 1028 1003 awesome awesome 1029 1028 ssh-a ssh-agent awesome And set | grep SSH returns nothing. Then I start another xterm (ssh-agent xterm) and it works: 1636 1 xterm xterm 1638 1636 bash bash 1651 1638 vim vim 9435 1651 xterm xterm 9447 9435 ssh-a ssh-agent xterm 9449 9435 bash bash 10464 9449 ps ps -eHo pid,ppid,comm,args 10465 9449 les less The strange thing here is that ssh-agent is a child of a program it starts. Can you explain that? And how do I run awesome so that the programs I start after that could see ssh-agent? UPD Regarding ssh-agent being a child to the command it runs. That is made to be able to replace command with ssh-agent command. So, ssh-agent forks, and parent execs the command. UPD My bad, I was using xbindkeys to start xterm, and the former happened to be started before ssh-agent. Like in, xbindkeys && ssh-agent awesome. So, it didn't have SSH_* variables to pass to xterm. Or so is my most probable explanation. When using awesome's builtin facilities to start xterm, environment variables are passed down all right.
In your update you mentioned that you start xterm from xbindkeys, and since you run xbindkeys && ssh-agent awesome, xbindkeys will not have the SSH-related environment, and as a consequence of that, xterm won't either. To solve this, I would suggest

eval "$(ssh-agent)"
xbindkeys && awesome

Now, this would set the variables for both xbindkeys and awesome (which may well be what you need and want), but it would not automatically kill the ssh-agent process when you log out. For that, you could use (with bash)

eval "$(ssh-agent)"
trap 'eval "$(ssh-agent -k)"' EXIT
xbindkeys && awesome

or something similar. This would call ssh-agent -k, which would kill the agent, as soon as that shell exited or was terminated by TERM, HUP or INT. Running eval on the output of ssh-agent -k would just unset the SSH variables, and it may not be needed (since the script is about to exit anyway), so the trap may be set up to run just ssh-agent -k >/dev/null instead.

The thing about ssh-agent being a child process of the command that it starts just looks strange. ssh-agent forks off the actual agent process, and then replaces the original process with that of the command it's supposed to run (using exec()). The result is that the original process (xterm in your second process tree) is the parent of the agent:

	/*
	 * Fork, and have the parent execute the command, if any, or present
	 * the socket data.  The child continues as the authentication agent.
	 */
	if (D_flag || d_flag) {
		log_init(__progname,
		    d_flag ? SYSLOG_LEVEL_DEBUG3 : SYSLOG_LEVEL_INFO,
		    SYSLOG_FACILITY_AUTH, 1);
		format = c_flag ? "setenv %s %s;\n" : "%s=%s; export %s;\n";
		printf(format, SSH_AUTHSOCKET_ENV_NAME, socket_name,
		    SSH_AUTHSOCKET_ENV_NAME);
		printf("echo Agent pid %ld;\n", (long)parent_pid);
		fflush(stdout);
		goto skip;
	}
	pid = fork();
	if (pid == -1) {
		perror("fork");
		cleanup_exit(1);
	}
	if (pid != 0) {		/* Parent - execute the given command. */
		close(sock);
		snprintf(pidstrbuf, sizeof pidstrbuf, "%ld", (long)pid);
		if (ac == 0) {
			format = c_flag ? "setenv %s %s;\n" : "%s=%s; export %s;\n";
			printf(format, SSH_AUTHSOCKET_ENV_NAME, socket_name,
			    SSH_AUTHSOCKET_ENV_NAME);
			printf(format, SSH_AGENTPID_ENV_NAME, pidstrbuf,
			    SSH_AGENTPID_ENV_NAME);
			printf("echo Agent pid %ld;\n", (long)pid);
			exit(0);
		}
		if (setenv(SSH_AUTHSOCKET_ENV_NAME, socket_name, 1) == -1 ||
		    setenv(SSH_AGENTPID_ENV_NAME, pidstrbuf, 1) == -1) {
			perror("setenv");
			exit(1);
		}
		execvp(av[0], av);
		perror(av[0]);
		exit(1);
	}

(the child process then continues executing the rest of the code)

This allows you to kill the agent without much consequence to the command that you'd want to run, for example.
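The start-then-clean-up pattern at the beginning of the answer can be sketched generically — here with a plain background `sleep` standing in for the agent so the sketch runs without ssh-agent installed; the function names are invented:

```shell
#!/bin/sh
# Generic version of the "start agent, kill it on exit" pattern. A sleep
# process stands in for ssh-agent; with the real thing you would instead
# run eval "$(ssh-agent)" and trap 'ssh-agent -k >/dev/null' EXIT.

start_agent() {
    sleep 60 &             # stand-in for the forked agent process
    AGENT_PID=$!
    export AGENT_PID       # analogous to SSH_AGENT_PID after eval
}

stop_agent() {
    kill "$AGENT_PID" 2>/dev/null
}

start_agent
trap stop_agent EXIT       # the agent dies when this shell exits
```

The trap ties the agent's lifetime to the session script's, which is exactly what the trap-on-EXIT line in the answer achieves for the real ssh-agent.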
How do I use ssh-agent as a wrapper program?