| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,406,268,808,000 |
Is there some general way to find out the name of the driver which I have to install on my Linux system given only the hardware name? Maybe some centralized webpage or application which gathers all the hardware information and its related driver? Or is searching in a web browser all I can do? What do you do in these cases?
For example, I want to know the driver name for the hardware "Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) SATA Controller"
|
LKDDb
You can search for drivers that are included in the Linux Kernel here, http://cateee.net/lkddb/web-lkddb/. The primary page is here, http://cateee.net/lkddb/.
About LKDDb
LKDDb is an attempt to build a comprehensive database of hardware and
protocols known by Linux kernels. The driver database includes numeric
identifiers of hardware, the kernel configuration menu needed to build
the driver, and the driver filename. The database is built
automatically from kernel sources, so it is very easy to keep
the database updated.
Drivers not included
You typically have to search the Linux kernel sources by hardware name to see if the kernel provides a driver out of the box. If not, you'll need to go to the manufacturer's website, or, if it's a reference design from Intel, NVIDIA, or another chip maker, search their site for the corresponding drivers.
What drivers am I using?
To see what driver/modules are being used by hardware you already have you can use the tool lspci -v.
For example:
$ lspci -v
00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 02)
Subsystem: Lenovo Device 2193
Flags: bus master, fast devsel, latency 0
Capabilities: <access denied>
Kernel driver in use: agpgart-intel
00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02) (prog-if 00 [VGA controller])
Subsystem: Lenovo Device 215a
Flags: bus master, fast devsel, latency 0, IRQ 45
Memory at f2000000 (64-bit, non-prefetchable) [size=4M]
Memory at d0000000 (64-bit, prefetchable) [size=256M]
I/O ports at 1800 [size=8]
Expansion ROM at <unassigned> [disabled]
Capabilities: <access denied>
Kernel driver in use: i915
Kernel modules: i915
Notice the lines that say "Kernel driver in use" and "Kernel modules".
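If you want just the device-to-driver pairs, an awk one-liner over that output works. The sketch below runs it against a saved two-line sample so the result is predictable; on a live system you would pipe `lspci -v` in directly.

```shell
# Save a two-line sample of `lspci -v` output, then pair each device
# line with its "Kernel driver in use" line.
cat > /tmp/lspci.sample <<'EOF'
00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02)
        Kernel driver in use: i915
EOF
awk '/^[0-9a-f]/ { dev = $0 } /Kernel driver in use/ { print dev " -> " $NF }' /tmp/lspci.sample
```

For a similarly compact per-device driver view, `lspci -k` prints the kernel driver and modules directly.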
What drivers/modules does my Kernel already have loaded?
You can look to the Kernel's /proc filesystem for this info:
$ less /proc/modules
tcp_lp 2111 0 - Live 0xffffffffa00fc000
aesni_intel 12131 1 - Live 0xffffffffa0185000
cryptd 7111 1 aesni_intel, Live 0xffffffffa013c000
aes_x86_64 7758 1 aesni_intel, Live 0xffffffffa0128000
aes_generic 26908 2 aesni_intel,aes_x86_64, Live 0xffffffffa00f3000
fuse 61966 3 - Live 0xffffffffa030b000
cpufreq_powersave 1154 0 - Live 0xffffffffa00f0000
sunrpc 201569 1 - Live 0xffffffffa0580000
vboxpci 13918 0 - Live 0xffffffffa0576000
vboxnetadp 18145 0 - Live 0xffffffffa056c000
...
You can also use the command lsmod to get this info in a prettier format:
$ lsmod | less
Module Size Used by
tcp_lp 2111 0
aesni_intel 12131 1
cryptd 7111 1 aesni_intel
aes_x86_64 7758 1 aesni_intel
aes_generic 26908 2 aesni_intel,aes_x86_64
fuse 61966 3
cpufreq_powersave 1154 0
sunrpc 201569 1
vboxpci 13918 0
vboxnetadp 18145 0
...
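If you just need a yes/no answer for one module, a small helper over /proc/modules is enough. The demo below runs it against a saved sample line rather than the live file, so its outcome doesn't depend on what your kernel has loaded.

```shell
# is_loaded NAME [FILE]: succeed if module NAME appears in a
# /proc/modules-style listing (FILE defaults to the live /proc/modules).
is_loaded() {
    awk -v m="$1" '$1 == m { found = 1 } END { exit !found }' "${2:-/proc/modules}"
}

# Demo against a saved sample line rather than the live file:
printf 'fuse 61966 3 - Live 0xffffffffa030b000\n' > /tmp/modules.sample
is_loaded fuse /tmp/modules.sample && echo "fuse is loaded"
```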
module info
You can use the command modinfo to find out more about a particular module:
$ modinfo tcp_lp
filename: /lib/modules/2.6.35.14-106.fc14.x86_64/kernel/net/ipv4/tcp_lp.ko
description: TCP Low Priority
license: GPL
author: Wong Hoi Sing Edison, Hung Hing Lun Mike
srcversion: 8BFC408F81AB96C2D21A317
depends:
vermagic: 2.6.35.14-106.fc14.x86_64 SMP mod_unload
What drivers/modules are available to my kernel?
You can look through this directory to see all the kernel drivers/modules that are provided by your system for use with your kernel:
$ ls /lib/modules/`uname -r`
build modules.alias modules.builtin.bin modules.drm modules.modesetting modules.pcimap modules.usbmap
extra modules.alias.bin modules.ccwmap modules.ieee1394map modules.networking modules.seriomap source
kernel modules.block modules.dep modules.inputmap modules.ofmap modules.symbols updates
misc modules.builtin modules.dep.bin modules.isapnpmap modules.order modules.symbols.bin vdso
You can list them out with this command:
$ find /lib/modules/`uname -r` -type f | less
/lib/modules/2.6.35.14-106.fc14.x86_64/modules.dep.bin
/lib/modules/2.6.35.14-106.fc14.x86_64/modules.ieee1394map
/lib/modules/2.6.35.14-106.fc14.x86_64/modules.networking
/lib/modules/2.6.35.14-106.fc14.x86_64/modules.dep
/lib/modules/2.6.35.14-106.fc14.x86_64/modules.isapnpmap
/lib/modules/2.6.35.14-106.fc14.x86_64/modules.builtin
/lib/modules/2.6.35.14-106.fc14.x86_64/modules.seriomap
/lib/modules/2.6.35.14-106.fc14.x86_64/modules.usbmap
...
References
Howto: Display List of Modules or Device Drivers In the Linux Kernel
| Find driver (which is not automatically installed) for a specific hardware |
1,406,268,808,000 |
I'd like to put a script in /etc/pm/suspend.d/ that needs network access (for a very short time) before allowing the system to suspend. However, even with scripts named "001_something" in /etc/pm/suspend.d/ and /usr/lib/pm-utils/sleep.d/ I do not get any network access. It seems this is disabled before the scripts are run.
Why is networking disabled? How can I enable it?
Also, I am unable to make use of the pm-suspend.log in /var/log. It seems the file for the suspend part is overwritten as soon as the system is resumed?
The following has been observed in daemon.log:
Feb 7 22:09:04 zenbook NetworkManager[3606]: <info> sleep requested (sleeping: no enabled: yes)
Feb 7 22:09:04 zenbook NetworkManager[3606]: <info> sleeping or disabling...
Feb 7 22:09:04 zenbook NetworkManager[3606]: <info> (wlan0): now unmanaged
Feb 7 22:09:04 zenbook NetworkManager[3606]: <info> (wlan0): device state change: activated -> unmanaged (reason 'sleeping') [100 10 37]
Feb 7 22:09:04 zenbook NetworkManager[3606]: <info> (wlan0): deactivating device (reason 'sleeping') [37]
I am using Debian Testing with Gnome 3.
EDIT: The problem is not related to pm-utils. As far as I know, NetworkManager disables the network (in nm-manager.c:do_sleep_wake). I don't know how to solve this, yet. See NetworkManager: disabled network when sending system to sleep
|
1. Quirks?
First I would confirm that your suspend is actually functioning correctly, not just appearing to work. Take a look at the quirks page:
Sleep Quirk Debugger
2. Is your 001_something script executable?
Check to make sure that your 001_something script is executable!
% chmod +x 001_something
3. Does your 001_something script look correct?
Check to make sure your script conforms to what pm-utils is expecting.
Example script
#!/bin/bash
case "$1" in
hibernate|suspend)
ACTION BEFORE SUSPEND/HIBERNATE
;;
thaw|resume)
ACTION AFTER RESUME
;;
*)
;;
esac
exit $?
NOTE: Are you putting your attempts to use the network in the correct portion of the case statement (the hibernate|suspend branch)?
4. file in .d directory functioning (/etc/pm/suspend.d/ or /usr/lib/pm-utils/sleep.d/)?
Next I would confirm that your 001_something script is in fact getting picked up by suspend/hibernate, by having it simply echo some string out to a file so you know it's working.
echo "yup I'm working" > /tmp/pmck_`date +%Y-%T`.log
You should then see files such as pmck_2013-16:08:11.log in /tmp.
5. /var/log?
If the above .d directory is functioning, I would make a 001_something that copies the /var/log/pm-suspend.log file you think is getting overwritten to some other file under /tmp; that way you can at least confirm that logging is correct. This may give you some further insight into what's happening.
cp /var/log/pm-suspend.log /tmp/pmlg_`date +%Y-%T`.log
6. Sleep hook number?
Also, can you change the name of your hook file to 00-something instead of 001_something? I'm not sure it matters, but the man page indicates these values:
SLEEP HOOK ORDERING CONVENTION
00 - 49
User and most package supplied hooks. If a hook assumes that all of the usual services and userspace infrastructure
is still running, it should be here.
50 - 74
Service handling hooks. Hooks that start or stop a service belong in this range. At or before 50, hooks can assume
that all services are still enabled.
75 - 89
Module and non-core hardware handling. If a hook needs to load/unload a module, or if it needs to place non-video
hardware that would otherwise break suspend or hibernate into a safe state, it belongs in this range. At or before
75, hooks can assume all modules are still loaded.
90 - 99
Reserved for critical suspend hooks.
7. Network connectivity?
Add the following to your 001_something script:
TMP=/tmp/pmip_`date +%Y-%T`.log
# network status?
ip link show > $TMP
# dns working?
dig google.com +answer >> $TMP
# can we ping google?
ping -c 5 www.google.com >> $TMP
8. Bug with pm-utils, HAL, and Wheezy?
I came across this Debian bug report and wonder if it might be the cause of your problem. The bug describes an issue with HAL and pm-utils; it sounds like removing HAL fixes the networking issue.
9. More verbose pm-utils debugging
Additionally there is this link which offers advice for suspend/resume issues specific to Debian. There is mention of a way to increase the logging of pm-utils by setting a variable, PM_DEBUG=true in the /usr/lib/pm-utils/pm-functions file.
excerpt
Enabling Debugging for pm-utils
The log of suspend and resume processes are in file
/var/log/pm-suspend.log. It contains moderately verbose information by
default. More information can be enabled for debugging by inserting
line export PM_DEBUG=true into the beginning of file
/usr/lib/pm-utils/pm-functions.
Perhaps this might be helpful in giving you more insight into what's going on with pm-utils!
10. ACPI shutting down network prior to pm-utils?
If the issue doesn't appear to be with pm-utils, it may be because of ACPI. When you close the lid on your laptop, an ACPI event is triggered, and that event has an action associated with it.
EVENT File
% more /etc/acpi/events/lm_lid
event=button[ /]lid
action=/etc/acpi/actions/lm_lid.sh %e
ACTION File
% more /etc/acpi/actions/lm_lid.sh
#! /bin/sh
test -f /usr/sbin/laptop_mode || exit 0
# lid button pressed/released event handler
/usr/sbin/laptop_mode auto
Taking a closer look at laptop_mode, you'll see that this tool is responsible for a variety of things, one of which is managing the status of your network devices.
laptop-mode maintains a directory, /etc/laptop-mode/conf.d, similar to other Unix tools. In there are files related to the Ethernet and wireless networking devices.
In the primary config file, /etc/laptop-mode/laptop-mode.conf, there is the ability to turn on more verbose messaging. Perhaps this will shed some additional light on what's going on:
VERBOSE_OUTPUT=1
Summary of above things to try based on the OP's feedback
1: Suspend works as far as battery usage and the sleep LED on my notebook are concerned. Otherwise I do not understand how the mentioned web page would help me find out more.
2: It is.
3: It looks correct.
4: I get those files.
5: I get the corresponding log files, but these are not helpful to me.
6: 00 instead of 001 does not show any difference.
7: Things in this section just test for network connectivity. As said in my question, I do not have network connectivity as soon as the script is run. The wlan0 device is down. The log files: http://paste.debian.net/231760.
NOTE: I did not have dig installed (error msg. in paste.debian.net log), however it is clear that no network access is available (as said). I can see that it is down by inspecting the output of iwconfig, ip link show, ping, ... The perl script is the script in question.
BTW, as soon as the first line of /usr/lib/pm-utils/bin/pm-action is executed (from upowerd), the network is down already.
8: hal was installed, removing it does not change anything.
| pm-utils: No network in suspend scripts? |
1,406,268,808,000 |
This is for academic purpose. I want to know which commands are executed when we do something in GUI, for example creating a folder. I want to show that both the mkdir shell command and create folder option from GUI does the same thing.
|
You can observe what a process does with the strace command. strace shows the system calls performed by a process. Everything¹ a process does that affects its environment is done through system calls. For example, creating a directory can only be done by ultimately calling the mkdir system call. The mkdir shell command is a thin wrapper around the system call of the same name.
To see what mkdir is doing, run
strace mkdir foo
You'll see a lot of calls other than mkdir (76 in total for a successful mkdir on my system), starting with execve, which loads the process's binary image; then calls to load the libraries and data files used by the program, calls to allocate memory, calls to observe the system state, … Finally the command calls mkdir and wraps up, finishing with exit_group.
To observe what a GUI program is doing, start it and observe it during only one action. Find out the process ID of the program (with ps x, htop or any other process viewer), then run
strace -o file_manager.mkdir.strace -p1234
This puts the trace from process 1234 in the file file_manager.mkdir.strace. Press Ctrl+C to stop strace without stopping the program. Note that something like entering the name of the directory may involve thousands or tens of thousands of system calls: handling mouse movements, focus changes and so on is a lot more complex at that level than creating a directory.
You can select what system calls are recorded in the strace output by passing the -e option. For example, to omit read, write and select:
strace -e \!read,write,select …
To only record mkdir calls:
strace -e mkdir …
¹ Ok, almost everything. Shared memory only involves a system call for the initial setup.
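Once you have a trace file saved with -o, pulling out just the mkdir calls afterwards is a plain grep. The sketch below fabricates a two-line log so it is self-contained; with a real trace you would grep the file strace wrote.

```shell
# Fabricate a two-line trace log, then pull out just the mkdir calls.
cat > /tmp/file_manager.mkdir.strace <<'EOF'
openat(AT_FDCWD, "/usr/lib/locale/locale.archive", O_RDONLY) = 3
mkdir("foo", 0777) = 0
EOF
grep '^mkdir(' /tmp/file_manager.mkdir.strace
```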
| How to know which commands are executed when I do something in GUI |
1,406,268,808,000 |
Here's my problem:
I have a laptop running Arch that I just keep on at home. It's got a good 4 hour battery life, but sometimes my daughter is playing near where it's kept and ends out pulling the plug. Well, when I get home 5 hours later, my laptop had a hard shutdown.
Additionally, sometimes I'll leave it suspended and forget about it for a day or so - same problem.
Here's my proposition:
So my thought was that I could make a cron job that runs every 15 or 30 minutes or something, checking the battery life. If the battery life is < N minutes left, I could just hibernate the laptop. This would work fine if my laptop is in normal 'on'. But if I'm suspended, not so much. So my question is two fold - is there a better way to do this, and if not, is it possible to do some sort of monitoring in suspend mode - basically just run that cron job?
Here's what worked:
Following the uswsusp instructions on the Arch wiki, I installed uswsusp from the AUR. Using the following command:
wayne@jughead:~$ swapon -s
Filename Type Size Used Priority
/dev/sda2 partition 530140 56744 -1
I discovered /dev/sda2 was the name of my swap partition. So I set this in my /etc/suspend.conf
snapshot device = /dev/snapshot
resume device = /dev/sda2
I added uresume in my mkinitcpio.conf here:
HOOKS="base udev autodetect pata scsi sata resume uresume filesystems usbinput fsck"
I created /etc/pm/config.d/module and put
SLEEP_MODULE=uswsusp
in it.
Since my laptop was not recognized (# s2ram --test displayed Machine unknown) I had to use the --force option.
In /usr/lib/pm-utils/module.d/uswsusp I also changed all of the s2ram options to s2both.
|
Sounds like you want suspend-to-both (hybrid suspend), which does all the steps of hibernating, including writing RAM to disk, but doesn't actually turn the machine off; instead, it goes into S3 (standby). If you wake the machine up before the battery dies, resuming is fairly quick; if the battery dies, it's just as if you'd hibernated it.
| Is it possible to automatically wake from suspend? |
1,406,268,808,000 |
I have a CentOS release 5.4 linux box on Amazon EC2 that I'm trying to set up to be monitored via Nagios. The machine is in the same security group as the nagios server, but it seems to be unresponsive to pings or NRPE checks, although apparently port 22 is open.
The CentOS box can ping itself using its internal IP address, and it can ping the Nagios server, but the server cannot ping the CentOS box.
I know the CentOS box is using iptables, here are the contents of the /etc/sysconfig/iptables file (some ips changed for security):
# Generated by iptables-save v1.3.5 on May 16 11:28:45 2012
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [56:6601]
-A INPUT -s 149.15.0.0/255.255.0.0 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -s 72.14.1.153 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -s 184.119.28.174 -p tcp -m tcp --dport 5666 -j ACCEPT
COMMIT
# Completed on May 16 11:28:45 2012
The part that really gets me is that even after I do /etc/init.d/iptables stop:
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
I am still unable to ping the box or do NRPE checks on it.
What else could be preventing ping or other connections? I'm not sure what else to try.
Here is a list of processes found with sudo ps -A:
aio/0
atd
bash
cqueue/0
crond
dbus-daemon
dhclient
events/0
hald
hald-runner
init
kauditd
kblockd/0
khelper
khubd
kjournald
kmirrord
kmpathd/0
kpsmoused
kseriod
ksoftirqd/0
kswapd0
kthread
master
migration/0
mingetty
nscd
pdflush
pickup
qmgr
sshd
su
syslog-ng
udevd
watchdog/0
xenbus
xenwatch
xinetd
|
I don't think it's related to the ping problem, but if you want to turn SELinux off temporarily, you have this option:
setenforce 0
This puts SELinux from enforcing into permissive mode; to check its state, run
sestatus
To disable SELinux permanently you can use system-config-securitylevel, or edit /etc/selinux/config with nano or vi and change the parameter from SELINUX=enforcing to SELINUX=disabled.
As for me, I suspect there is a rule in Amazon EC2 that prevents ping traffic between your machines...
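It is also worth noting that the ruleset quoted in the question has a default DROP policy on INPUT and only accepts TCP ports 22 and 5666, so it never accepts ICMP at all. If the firewall were still active, a rule along these lines (a sketch, not the poster's actual config) would be needed to admit echo requests:

```
-A INPUT -p icmp --icmp-type echo-request -j ACCEPT
```

Since stopping iptables didn't help here, though, the EC2 security group, which must explicitly allow ICMP, remains the likelier culprit.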
| What prevents a machine from responding to pings? |
1,406,268,808,000 |
Inside the /etc/fstab file, in the sixth column, there is a number that corresponds to whether a filesystem should be scanned for errors. Possible values are:
0 - skip
1 - high priority
2 - low priority
Why was fsck 'priority' introduced in /etc/fstab?
|
The field exists so you can define the order in which filesystems are checked. Different partitions on the same drive should not be checked at the same time, since the IO going to each filesystem would compete with the other and slow the whole process down. Filesystems on different physical disks can be set to check in the same pass to speed up the whole process, since the IO to separate disks does not compete.
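As a sketch with hypothetical devices, a typical layout gives the root filesystem pass 1 and puts filesystems on a second disk into the same later pass so they can be checked in parallel:

```
# <device>   <mount> <type> <options> <dump> <pass>
/dev/sda1    /       ext4   defaults  0      1
/dev/sda2    /home   ext4   defaults  0      2
/dev/sdb1    /data   ext4   defaults  0      2
```

Here sda2 and sdb1 share pass 2; after the root check finishes, they are on different physical disks and so can be checked concurrently.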
| Why was fsck priority introduced in /etc/fstab? |
1,406,268,808,000 |
Who is responsible for creating the "/sys/class/drm" directory structure, more specifically the "/sys/class/drm/card0-LVDS-1" directory?
I am using kernel-2.6.38 and an nVidia card.
|
The DRM module is responsible for that subtree in SysFS. You can browse the source code for that in drivers/gpu/drm/drm_sysfs.c.
The subdirectories are per-connector, with a name of the form card%d-%s with %d replaced by an index (that I know nothing about) and %s replaced with the connector name.
Five files per device should show up:
Connection status
Enabled (or not)
DPMS state
Mode list
EDID
For some devices, you'll get extra information for sub-connectors too.
| /sys/class/drm directory structure |
1,406,268,808,000 |
I want to write a game which runs in a terminal. I do some terminal coloring and wanted to use some Unicode characters for nice ASCII art "graphics". But a lot of Unicode characters aren't supported in the Linux terminal (the non-X terminal; I don't know what you call it... VT100? I mean the terminal which uses text mode for output, not graphics mode, so the same font as in the BIOS is used to display the text.)
For example, I wanted to draw half character "pixels" using the "half block" characters U+2580 (▀) and U+2584 (▄) but these are not supported in the terminal. (These are only examples - I want to use a lot more special characters...)
Which characters does this font support? Is there any document or table listing these characters? Is this device-dependent or is there any "standard"?
|
That terminal is called the Linux console, or sometimes a “vt” (short for virtual terminal). The terminology can be confusing, especially since it's used inconsistently and sometimes incorrectly. You can find more information on terminology by reading What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'?.
The Linux console supports user-configured fonts, so the answer to your question is “whatever the user set up”. The utility to change the font is consolechars, part of the Linux console tools. Only 8-bit fonts are supported by the hardware, though you can partly work around this by supporting unicode-encoded output but only having 256 glyphs (other characters are ignored). Read the lct documentation (online as of this writing, it should be included in your distribution's package) for more information.
If you use the Linux framebuffer, you can have proper unicode support, either directly or through fbterm.
The half-block characters are included in IBM code page 437, which is supported in ROM by most PC video adapters. Depending on what characters you need, this may be enough.
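For what it's worth, the two half blocks can be emitted by their raw UTF-8 byte sequences using portable octal printf escapes, which sidesteps any question of \u support in your printf:

```shell
# U+2580 UPPER HALF BLOCK (0xE2 0x96 0x80) and
# U+2584 LOWER HALF BLOCK (0xE2 0x96 0x84), as octal printf escapes.
printf '\342\226\200 \342\226\204\n'
```

On a UTF-8 terminal this prints the two blocks; on the text-mode console you'll see them only if the loaded font maps those code points.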
Note that very few people use the Linux console these days. Some people cannot use it for various reasons (not running Linux, running on a remote X terminal, having a video adapter where text mode is buggy, …). I don't recommend spending much energy on supporting it.
| Charset / font in the Linux console |
1,406,268,808,000 |
I do not understand how namespaces interact with /proc. I assumed that /proc returns values based on the process that queries them.
For example, let's determine the PID of the current process inside the global PID namespace:
$ bwrap --bind / / readlink /proc/self
6182
This makes sense to me. However, when I isolate readlink in its own PID namespace:
$ bwrap --bind / / --unshare-pid readlink /proc/self
6177
I get the same result! To get the PID inside the namespace, I need to add --proc /proc:
$ bwrap --bind / / --unshare-pid --proc /proc readlink /proc/self
2
But shouldn't /proc always take the context of the reading process into account? Why is the extra procfs required and how is it related to the readlink process?
If I do not create a new PID namespace, the extra procfs makes no difference:
$ bwrap --bind / / --proc /proc readlink /proc/self
6179
|
This is one of the gotchas of namespaces. With
bwrap --bind / / --unshare-pid readlink /proc/self
you’ve created a new PID namespace, and a new mount namespace (because bwrap does that by default), but you’re explicitly bind-mounting the external / into that mount namespace. The result is that, inside the new mount namespace, /proc is the same as outside — try
bwrap --bind / / --unshare-pid ps -ef
The key feature here is described in man pid_namespaces:
A /proc filesystem shows (in the /proc/[pid] directories) only
processes visible in the PID namespace of the process that
performed the mount, even if the /proc filesystem is viewed from
processes in other namespaces.
(emphasis mine). You can see /proc memorising the appropriate PID namespace here.
So readlink sees /proc through the eyes of the PID namespace that performed the mount, not through its own PID namespace.
Adding --proc=/proc mounts /proc anew, inside the forked bwrap in the new PID namespace, so its contents reflect the new PID namespace.
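The per-reader behaviour of /proc/self itself is easy to demonstrate without bwrap (assuming a Linux /proc): every invocation of readlink is a separate process, so each one resolves /proc/self to a different PID.

```shell
# Two separate readlink processes each see their own PID in /proc/self.
a=$(readlink /proc/self)
b=$(readlink /proc/self)
echo "$a $b"
```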
| How does /proc interact with PID namespaces? |
1,406,268,808,000 |
I want to make sure that only one script can be run by any regular user of a system at a time. There can be multiple users logged in and each of them should only be able to run a command after any running commands of other users have finished.
A long time ago on UNIX I used the batch command with a proper "one task only" queue definition to serialize script execution. It solved a lot of lock management problems (only a simple timeout needed to be set in the scripts).
Now on Linux the batch command behaves differently: one task is started each minute, and tasks run in parallel until a load average of 1.5 is reached.
I made my own lock management shell library to serialize execution but I wonder if there is a standard command for doing that.
|
flock is really excellent for this. You can use flock in a wrapper around your shell script, use it on the command line, or incorporate it into your script itself.
The best thing about flock is that while it waits, it doesn't wait in a busy loop.
It also always cleans up the lock when your process exits / flock exits.
Methods based on atomic file/directory creation can get locked out if the process exits without cleaning up (or if there is a kernel panic, or power failure, ...).
With flock, the Linux kernel does the cleanup.
From the manual,
(
flock -s 200
# ... commands executed under lock ...
) 200>/var/lock/mylockfile
In this form you can wrap a specific block of code in your shell script.
Or you can run it like this,
/usr/bin/flock /tmp/lockfile command
If you don't want to block/wait indefinitely, you can specify a timeout:
-w --timeout <secs> wait for a limited amount of time
Or just use a non blocking argument:
-n --nonblock fail rather than wait
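To watch the non-blocking case fail while a lock is held, here is a self-contained sketch using a throwaway lock file in /tmp:

```shell
# Take an exclusive lock on fd 9, then show that a second,
# non-blocking flock on the same file fails immediately.
(
    flock 9
    flock -n /tmp/flockdemo.lock -c 'echo got lock' || echo lock busy
) 9>/tmp/flockdemo.lock
```

This prints "lock busy"; drop the outer flock 9 and it prints "got lock" instead.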
| How to serialize command execution on linux? |
1,406,268,808,000 |
Say I have a USB device with a vendor id (VID) of 0123 and a product id (PID) of abcd.
0123:abcd
According to USB.org, product id assignment is entirely up to a manufacturer.
Product IDs (PIDs) are assigned by each vendor as they see fit
So there's nothing stopping a misguided vendor from selling a wide range of USB devices, all needing different drivers, and all using the same vendor and product ids.
USB Device A (needs driver X) -> 0123:abcd
USB Device B (needs driver Y) -> 0123:abcd
USB Device C (needs driver Z) -> 0123:abcd
USB.org acknowledges that this potential vendor behavior can be problematic.
Duplicate numbers may cause driver error
In a case where the ids are reused for cards needing different drivers, is there anything the OS can do to determine the appropriate driver?
Are there any other fields presented by the USB device that can be used (or are typically used) to infer the appropriate driver? I'm assuming only vendor id and product id are used to make that determination.
Or will a typical *nix system assume that there's a one <-> one relationship between 0123:abcd and the driver that should be used, and so all it can do is choose the 1 driver it thinks is appropriate?
I'm guessing, if only vendor id and product id are typically used, that only manual user intervention in loading the proper driver will work, and that there's not much else to do aside from being upset at the vendor for making things confusing.
|
There are some other pieces of information which can be used to select a device driver: a version number, the device class, subclass and protocol, and the interface class, subclass and protocol. (For the driver side of things on Linux, look at the USB_DEVICE macros. You can get an idea of the information available by looking at the output of lsusb -v.)
As you’d expect that’s still not enough, so before a driver is actually registered for a device, the kernel calls a probe function in the driver. That function gets to decide whether the device is actually supported by the driver. Generally speaking though, on Linux, devices with the same id but different implementations are handled by the same driver, which avoids having to map multiple drivers to one device. To see the exceptions to this rule, you can run
find /lib/modules/$(uname -r) -name \*.ko\* | xargs /sbin/modinfo | awk '/^filename:/ { filename = $2 } /^alias:/ { printf "%s %s\n", filename,$2 }' | sort | uniq -D -f 1 | uniq -u | less
which will list the few drivers which match conflicting ids (none of which are USB device drivers).
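All of those extra fields end up encoded in the device's modalias string, which is what the kernel matches driver aliases against. A sketch of pulling the vendor/product pair back out of one (the value below is fabricated; real ones can be read from /sys/bus/usb/devices/*/modalias):

```shell
# Fabricated modalias in the usb:vXXXXpXXXX... format used by the kernel.
alias='usb:v0123pABCDd0100dc00dsc00dp00ic08isc06ip50in00'
printf '%s\n' "$alias" |
    sed -E 's/^usb:v([0-9A-F]{4})p([0-9A-F]{4}).*/vendor=\1 product=\2/'
```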
| Do vendor id and product id alone determine the driver used for a USB device? |
1,406,268,808,000 |
I'm not sure if I'm thinking about this the right way (and please correct me if I'm wrong), but the following is my understanding of ftrace.
In /sys/kernel/debug/tracing, there are the following files:
set_ftrace_filter
which will only trace the functions listed inside,
set_ftrace_notrace
which will only trace the functions NOT listed inside, and
set_ftrace_pid
which will only trace the processes with the pid inside.
My question is: is there a way to configure it so that ftrace will only trace processes that DO NOT have a certain pid (or process name)?
Analogy:
set_ftrace_filter : set_ftrace_notrace :: set_ftrace_pid : x
Does x exist, and if so, how do I use it?
For instance, if I wanted to trace all processes except the one with pid 48, is there some way to put something meaning not 48 into set_ftrace_pid?
I've been reading the documentation and searching the web, but I can't find either the way to achieve this or whether this is possible.
Why I'm doing this: I have a program that's tracing kernel-level system calls, but I want to write the program's pid (and the pids of its children, if necessary later) to a filter so that they aren't included with the trace data. When reading the trace, I could check the pid as I read each trace record and decide whether to use that record or not, but I would prefer not to add this overhead for every record that is read if there's a way to avoid it.
Thank you for your time!
|
I figured out how to do what I was describing, but it was a bit counter-intuitive, so I'm posting the answer here for people who might hit this page when searching (tl;dr at bottom). As far as I know, there is no blanket way to filter processes with a certain PID out of ftrace as easily as it is to tell it to ONLY consider processes with a certain PID. In my case, though, I only care about raw system calls (sys_enter), and I found out how to exclude records with certain PIDs from those. Here is how:
The ftrace directory is:
/sys/kernel/debug/tracing/
Inside, there is a directory called "events." From here, you can see all the things that ftrace can trace, but for my case, I go into "raw_syscalls."
Within "raw_syscalls", the two subdirectories are sys_enter and sys_exit.
Within sys_enter (and sys_exit, for that matter), there are the following files:
enable
filter
format
id
trigger
"filter" is the one we care most about right now, but format has useful information regarding the fields of an entry produced by ftrace when sys_enter is enabled:
name: sys_enter
ID: 17
format:
field:unsigned short common_type; offset:0; size:2; signed:0;
field:unsigned char common_flags; offset:2; size:1; signed:0;
field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
field:int common_pid; offset:4; size:4; signed:1;
field:long id; offset:8; size:8; signed:1;
field:unsigned long args[6]; offset:16; size:48; signed:0;
Here, we care about common_pid.
If you want your trace to omit records from a process with PID n, you would edit
/sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/filter
To read:
common_pid != n
If the program you're trying to ignore while tracing has multiple threads or multiple processes, you just use the && operator. Say you want to omit processes with PIDs n, o, and p, you would edit the file so that it reads:
common_pid != n && common_pid != o && common_pid != p
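Typing that expression out by hand gets tedious for more than a couple of PIDs; a small helper function (my own sketch, not part of ftrace) can generate it from a PID list:

```shell
# build_filter PID...: print an expression excluding every PID given,
# suitable for writing into .../sys_enter/filter.
build_filter() {
    expr=""
    for pid in "$@"; do
        [ -n "$expr" ] && expr="$expr && "
        expr="${expr}common_pid != $pid"
    done
    printf '%s\n' "$expr"
}

build_filter 48 49 53
```

Redirecting its output into the filter file (as root) then installs the expression in one step.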
To clear a filter, you just write "0" to the file:
echo "0" > /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/filter
...would do the trick.
enable has to contain "1" for the event you're tracing, as does tracing_on in the ftrace directory. Writing "0" turns tracing of that event (or all tracing, in the case of tracing_on) off.
Writing to these files requires root permissions.
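If you end up scripting this, the only fiddly part is building the filter expression from a list of PIDs. A minimal sketch (the helper name is mine and the PIDs are placeholders; the actual write to the filter file still needs root):

```shell
# Build "common_pid != a && common_pid != b ..." from a list of PIDs.
build_pid_filter() {
    local expr="" pid
    for pid in "$@"; do
        expr="${expr:+$expr && }common_pid != $pid"
    done
    printf '%s\n' "$expr"
}

build_pid_filter 48 49 53
# → common_pid != 48 && common_pid != 49 && common_pid != 53

# Then, as root:
#   build_pid_filter 48 49 53 > /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/filter
```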
That's about all I can think of. Thanks to anyone who read/voted on this and I hope my answer helps someone. If anyone knows a way that makes the way I did it look stupid, feel free to call me out.
tl;dr: to filter out records from process 48, write:
common_pid != 48
...to
/sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/filter
Filter multiple PIDs (eg. 48, 49, 53, 58) by writing this instead:
common_pid != 48 && common_pid != 49 && common_pid != 53 && common_pid != 58
Replace "events/raw_syscalls/sys_enter" with your desired event and replace my numbers with whatever PIDs you want to ignore.
| filter out certain processes and/or pids in ftrace? |
1,406,268,808,000 |
I want to be able to run a command that prints exactly what FAT version/subtype a partition is formatted with (FAT12/FAT16/FAT32/VFAT/exFAT).
Some people suggest the following command:
# stat -f -c %T /boot/efi
msdos
or
# df -T | grep boot
/dev/sda2 vfat 262144 67916 194228 26% /boot/efi
Here is what stat prints for exFAT:
# stat -f -c %T /media/a1ex/7B57-DCAA/
fuseblk
These outputs look confusing, don't they?
|
vfat is only to represent that it's a FAT partition, according to the partition table and fstab. fdisk -l will tell you the same thing as df -T or mount.
I wouldn't use stat, I would use file /dev/sda2 or parted /dev/sda -l to get a better idea.
Side note: fuseblk is used for auto-mounted media. There is a clear difference between the /boot/efi and the /media/... example you showed.
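As a side note on the FAT12/FAT16/FAT32 distinction itself: the variant is formally determined by the count of data clusters, not by any label. A sketch of the rule from Microsoft's FAT specification (the function name is mine; in practice the cluster count is derived from the BPB fields in the boot sector):

```shell
# FAT variant from the count of data clusters, per the Microsoft FAT
# specification: fewer than 4085 clusters is FAT12, fewer than 65525
# is FAT16, anything else is FAT32.
fat_type() {
    if   [ "$1" -lt 4085 ];  then echo FAT12
    elif [ "$1" -lt 65525 ]; then echo FAT16
    else                          echo FAT32
    fi
}

fat_type 2847     # → FAT12
fat_type 65000    # → FAT16
fat_type 1000000  # → FAT32
```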
| How to determine file system type reliably under Linux? |
1,406,268,808,000 |
I am using CentOS 6.6 (x86_64).
I am trying to install the most stable MongoDB version available,
but I am stuck with this error (which might seem like a duplicate, but none of the previous answers worked for me):
[root@localhost home]# sudo yum install -y mongodb-org
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Install Process
Loading mirror speeds from cached hostfile
* base: ftp.iitm.ac.in
* extras: ftp.iitm.ac.in
* updates: centos.01link.hk
http://repo.mongodb.org/yum/redhat/%24releaserver/mongodb-org/3.0/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: mongodb-org-3.0. Please verify its path and try again
My repo:
vim /etc/yum.repos.d/mongodb-org-3.0.repo
[mongodb-org-3.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releaserver/mongodb-org/3.0/x86_64/
gpgcheck=0
enabled=1
tried
yum clean all
yum check
yum erase apf
yum erase upgrade
also tried
sudo sed -i 's/https/http/g' /etc/yum.repos.d/mongodb-org-3.0.repo
my yum.conf
[root@localhost home]# cat /etc/yum.conf
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5
bugtracker_url=http://bugs.centos.org/set_project.php?project_id=19&ref=http://bugs.centos.org/bug_report_page.php?category=yum
distroverpkg=centos-release
# This is the default, if you make this bigger yum won't see if the metadata
# is newer on the remote and so you'll "gain" the bandwidth of not having to
# download the new metadata and "pay" for it by yum not having correct
# information.
# It is esp. important, to have correct metadata, for distributions like
# Fedora which don't keep old packages around. If you don't like this checking
# interupting your command line usage, it's much better to have something
# manually check the metadata once an hour (yum-updatesd will do this).
# metadata_expire=90m
# PUT YOUR REPOS HERE OR IN separate files named file.repo
# in /etc/yum.repos.d
[root@localhost home]#
Please help me figure this out!
Also, I have set SELinux to permissive.
After fixing the errors that sim pointed out, I am getting the following error:
[root@localhost Hubatrix]# yum clean all
Loaded plugins: fastestmirror, refresh-packagekit, security
Cleaning repos: base extras mongodb-org-3.0 updates
Cleaning up Everything
Cleaning up list of fastest mirrors
[root@localhost Hubatrix]# cat /etc/yum.repos.d/mongodb-org-3.0.repo
[mongodb-org-3.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.0/x86_64/repodata/repomd.xml
gpgcheck=0
enabled=1
[root@localhost Hubatrix]# sudo yum install -y mongodb-org
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Install Process
Determining fastest mirrors
* base: centos.excellmedia.net
* extras: centos.excellmedia.net
* updates: centos.excellmedia.net
base | 3.7 kB 00:00
base/primary_db | 4.6 MB 01:21
extras | 3.4 kB 00:00
extras/primary_db | 31 kB 00:00
https://repo.mongodb.org/yum/redhat/6/mongodb-org/3.0/x86_64/repodata/repomd.xml/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: mongodb-org-3.0. Please verify its path and try again
|
The error is pretty clear from yum:
http://repo.mongodb.org/yum/redhat/%24releaserver/mongodb-org/3.0/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
There isn't a file at the other end of that URL for yum to download, hence the 404. Put that URL in your browser and start to navigate to see what files are actually there.
This is the correct URL when I browse their repo:
http://repo.mongodb.org/yum/redhat/6/mongodb-org/3.0/x86_64/repodata/repomd.xml
I suspect they moved things but didn't regenerate the repomd.xml file. You can work around the issue by downloading the packages manually and then using yum install .. to install things.
Typo
But I think there's a typo in your repo file:
baseurl=https://repo.mongodb.org/yum/redhat/$releaserver/mongodb-org/3.0/x86_64/
Should be this:
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.0/x86_64/
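The fix is a one-character sed substitution. Here it is demonstrated on a scratch file first, so you can verify the pattern before touching /etc/yum.repos.d (the helper name is mine):

```shell
# Replace the misspelled $releaserver variable with $releasever.
fix_releasever_typo() {
    sed -i 's/\$releaserver/$releasever/g' "$1"
}

# Demo on a throwaway copy:
tmp=$(mktemp)
printf 'baseurl=https://repo.mongodb.org/yum/redhat/$releaserver/mongodb-org/3.0/x86_64/\n' > "$tmp"
fix_releasever_typo "$tmp"
cat "$tmp"   # baseurl now contains $releasever
rm -f "$tmp"
```

Run the same substitution against the real repo file (as root), then yum clean all before retrying.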
| Yum error while installing MongoDB on CentOS? |
1,406,268,808,000 |
This question explains how to find out what is part of the cache. However, with the fincore executable you have to pass the filename to check whether it is part of the cache.
Is there a tool or method to find all the entries that are part of the cached memory without passing the filenames?
PS: We are running this on an embedded system, where looping over all files and passing each to fincore is too time- and memory-consuming. Hence, I'm looking for other methods.
|
I don't know of any place where the kernel exposes the filenames associated with the blocks that it has cached. According to this answer
https://stackoverflow.com/a/4941371
The best you could probably do even with a custom kernel module would be to get a list of inodes and devices. From there you would still likely need to walk the filesystem looking for those files.
You may then ask "But, how does fincore know about the files I've listed?" Or you might not, but I found the method pretty clever, so here it is. The fincore tool works by doing the following:
calling mmap(2) on the given file (https://code.google.com/p/linux-ftools/source/browse/linux-fincore.c#260)
calling mincore(2) on the memory region returned by mmap (https://code.google.com/p/linux-ftools/source/browse/linux-fincore.c#279)
The mincore system call tells you whether the given pages of memory are in core memory (ie, would not cause a page fault when accessed). Since mmap lazily loads the mapped file, and we haven't read any of the mapped region yet, any pages that would not cause a page fault must otherwise be part of our cache.
| List all files that are present in the cache |
1,406,268,808,000 |
Is there a way to replace password lock screen in Linux (Mint Debian Edition) with a pin lock screen? Like the one found in Windows 8 for example.
It's annoying/inconvenient to have to input 16 character-long passwords every time I lock my computer, and insecure to decrease the password length to a pin-friendly length.
Clarification:
Pin: 4 character-long password that's used only in the lock screen. i.e. the regular user password to execute functions is still the same throughout the OS (for security).
|
You can do this via PAM configuration. For example, if you use XScreenSaver, you'd edit /etc/pam.d/xscreensaver and change the @include common-auth line.
Rather than repeat all the details, I'll point you to my answer to Set sudo password differently from login one. The procedure is almost exactly the same, except that you'll be editing the PAM config for your screensaver, instead of sudo.
Then you can set your PIN as your screensaver password.
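For illustration, the resulting PAM stack might look roughly like this. This assumes the libpam-pwdfile module is installed and that /etc/screensaver.passwd holds a crypt(3) hash of the PIN; both the module choice and the file path are examples, not defaults:

```
# /etc/pam.d/xscreensaver -- sketch, not a drop-in file
auth    sufficient   pam_pwdfile.so pwdfile=/etc/screensaver.passwd
auth    include      common-auth
```

With sufficient, the short PIN unlocks the screensaver, while the full account password (via common-auth) still works as a fallback and everywhere else in the OS.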
| Pin Lock Screen |
1,406,268,808,000 |
I am currently using openSUSE 13.1 KDE. The big reason why I like openSUSE is YaST.
YaST does so much and makes many parts of life easier. YaST not only graphically allows me to add/remove/manage repositories, and packages. It allows me to manage my firewall, kernel, services, groups, sudo and a lot more from a GUI. My favorite is that YaST allows me to set up Apache Virtual Hosts with a few clicks of the mouse on my local desktop (I am a web developer).
Now I can and know how to manage most of these things in terminal, but sometimes I like the GUI.
Are there any alternatives to YaST out there? Either distro agnostic or specific to a distro (any linux distro). I just want to see what else is out there.
|
Why not to depend on YaST
There is nothing that does what YaST does for non-SUSE distros. There are little tools here and there but nothing as comprehensive. It's a blessing and a curse. People that come to depend on YaST miss out on how things under the hood actually work.
I would take the time to actually "learn" how things work rather than looking for another crutch. I'm not saying this to be mean, I used to use YaST in my day job and appreciate what it provides but it's a crutch.
Alternatives
1. Yast4Debian
If you're truly motivated I did come across this project which appears to be on hold but might be a good code base for you to pick up if you're truly looking for developing something like YaST for other distros.
YaST4Debian
2. YaST in Ruby
Also it looks like the upcoming version of YaST for SuSE 13.1 was ported to a Ruby implementation, so it might be easier to port thanks to this effort.
Coming soon: openSUSE 13.1 with YaST in Ruby
openSUSE: Porting YaST to Ruby
excerpt
Why did you want to port YaST to Ruby?
YaST was developed in YCP — a custom, simple, inflexible language. For a long time, many YaST developers felt that it slowed them down. It didn’t support many useful concepts like OOP or exception handling, code written in it was hard to test, there were some annoying features (like a tendency to be “robust”, which really means hiding errors). However, original YCP developers moved on to other projects and there wasn’t anyone willing to step in and improve the language.
It was obvious that the only way out of this situation is to change the implementation to some other widely used language (most people were thinking about scripting languages, like Ruby or Python, which offer great flexibility and shorter code compared to e.g. C++ or Java). Such a change would mean we wouldn’t need to maintain our own custom language. It would also allow us to use many third-party libraries and make contributing to the project much easier for outsiders. People wouldn’t have to learn a whole new language just because of YaST.
Changing the implementation language of such a big codebase as YaST is a huge effort, so it’s no wonder that developers mostly only talked about it — for years. It required someone external to the team (David) to decide that’s talking isn’t enough and we should just do it :-)
How were the results?
Good :-) We translated 96 YaST modules in total and currently there is no YCP code used in YaST except few obscure places like examples in the documentation (which need to be manually rewritten to reflect current best practices). YCP is also still used as a serialization format for some data files and for communicating between YaST components, but this does not affect development and we will probably get rid of that too over time.
the YaST portal on the openSUSE wiki
developer info here
3. Yast for Enterprise Linux (RHEL + Unbreakable)
Oracle has this hosted project which looks to be a modified version of YaST that's specifically for use on RHEL + Unbreakable Linux. I would assume that you could then use it for CentOS and possibly Fedora too.
Project: Yast
I'm not sure of its status but might be worth a look. Though it's likely developed in the original YaST codebase, so you might want to take a look at the Ruby implementation first.
| Something like YaST for non SUSE distros |
1,406,268,808,000 |
What I want to do is, to monitor a directory (not recursive, just one) for new files created and append those files to one single big file as they are being written.
The number of files that are being written is huge, could reach as much as 50,000.
By using inotifywait, I am monitoring the directory like:
inotifywait -m -e create ~/folder | awk '($2=="CREATE"){print $3}' > ~/output.file
So I am storing names of new files created in ~/output.file and then using a for loop
for FILE in `cat ~/output.file`
do
cat $FILE >> ~/test.out
done
It works fine, if the rate at which a file is being written (created) in ~/folder is like 1 file per second.
But the requirement is large, and the rate at which the files are being created is very high, like 500 files per minute (or even more).
I checked the number of files in ~/folder after the process is complete, but it does not match the inotifywait output. There is a difference of about 10–15 files; it varies.
Also, the loop
for FILE in `cat ~/output.file`
do
done
doesn't process all the files in ~/output.file as they are being written.
Can anyone please suggest an elegant solution to this problem?
|
No need to post-process the output... use inotifywait options --format and --outfile
If I run:
inotifywait -m --format '%f' -e create /home/don/folder/ --outfile /home/don/output.file
then open another tab, cd to ~/folder and run:
time seq -w 00001 50000 | parallel touch {}
real 1m44.841s
user 3m22.042s
sys 1m34.001s
(so I get much more than 500 files per minute) everything works fine and output.file contains all the 50000 file names that I just created.
Once the process has finished writing the files to disk you can append their content to your test.out (assuming you are always in ~/folder):
xargs < /home/don/output.file cat >> test.out
Or use read if you want to process files as they are created. So, while in ~/folder you could run:
inotifywait -m --format '%f' -e create ~/folder | while IFS= read -r file; do cat -- "$file" >> ~/test.out; done
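To see the append step in isolation, here is a self-contained illustration in a throwaway directory (file names are made up; -d '\n' is the GNU xargs way of making the list newline-delimited, which is safer than unquoted word splitting):

```shell
# Simulate the inotifywait log and the append step.
dir=$(mktemp -d)
cd "$dir"
for i in 1 2 3; do printf 'data %s\n' "$i" > "file$i"; done
printf '%s\n' file1 file2 file3 > output.file  # what inotifywait would log
xargs -d '\n' cat < output.file >> test.out    # append each file's contents
cat test.out
# → data 1
#   data 2
#   data 3
cd - >/dev/null && rm -rf "$dir"
```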
| Inotifywait for large number of files in a directory |
1,353,086,939,000 |
If I list /proc/<pid>/fd I see a number of entries for sockets. These entries have timestamps. At first I thought they were when the socket was created. But it doesn't always appear to be the case.
What does this timestamp mean?
|
It looks like the entries in /proc/x/fd are instantiated the first time you access them (via an lstat(2) or any other system call that involves them), and that's where the time comes from.
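You can check this yourself: open a descriptor, wait, and only then stat its /proc entry. The timestamp tracks the first access rather than the open (a quick sketch; fd 9 is arbitrary):

```shell
exec 9</dev/null               # open fd 9 now
t_open=$(date +%s)
sleep 2                        # leave the /proc entry untouched
t_entry=$(stat -c %Y "/proc/$$/fd/9")
echo $(( t_entry - t_open ))   # >= 2: stamped at first access, not at open
exec 9<&-
```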
| Timestamp of socket in /proc/<pid>/fd |
1,353,086,939,000 |
I have multiple users on a server. They upload and download their files through FTP. Sometimes some heavy transfer causes high load on the server. I am wondering, if there is any way to limit the ftp speed to avoid high load.
Any help would be much appreciated.
|
I found a way to limit ftp speed:
In the /etc/proftpd.conf insert this line:
TransferRate RETR,STOR,APPE,STOU 2000
This will limit the FTP transfer rate to 2000 KB/s, i.e. roughly 2 megabytes per second (the TransferRate value is in kilobytes per second).
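The directive takes a list of FTP commands, so you can also limit each direction independently, for example (the values are in kilobytes per second and are just placeholders):

```
# Downloads (RETR) at 1000 KB/s, uploads (STOR/APPE/STOU) at 500 KB/s
TransferRate RETR 1000
TransferRate STOR,APPE,STOU 500
```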
After changing the file you should restart the proftpd service:
/etc/init.d/proftpd restart
| How to limit ftp speed |
1,353,086,939,000 |
extract from syslog:
CRON[pid]: (user) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
My CPU has been stuck at 99% for a few hours now, and I'm assuming it's because of this. Would anyone happen to know what this is, how it started and how to stop it?
EDIT: I tried top -n1 and I see this in return multiple times:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
PID user 20 0 0 0 0 Z 99.9 0.0 0:00.00 fuser <defunct>
this line repeats about 8 times.
EDIT2:
uname -a:
user SMP Tue Feb 14 13:27:41 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
lsb_release -a:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 11.10
Release: 11.10
Codename: code
EDIT 3:
After reboot, the system went back to the same 99% cpu usage and the same top -n1 result.
|
Found the answer here: http://www.flynsarmy.com/2011/11/fuser-using-100-cpu-in-ubuntu-11-10/
in /etc/cron.d/php5 on Ubuntu 11.10:
Replace
09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete
With
09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete
| CPU stuck at 99% for a few hours: figuring out logs |
1,353,086,939,000 |
Here are the flags from /proc/cpuinfo:
fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36
clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm
constant_tsc arch_perfmon pebs bts nopl xtopology nonstop_tsc aperfmperf
pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid
sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida
arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
I clearly have a pclmulqdq flag, but I'm not sure if it means PCLMUL instruction set support. How can I find what the flag means, or what flag PCLMUL corresponds to?
|
From the information available at Wikipedia and Intel, I'd assume that yes.
From the Wikipedia entry:
PCLMULQDQ Performs a carry-less multiplication of two 64-bit integers
which matches the flag you have.
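If you want to script the check, match the flag as a whole word so that, for example, sse4 does not match sse4_1. A sketch using a shortened flags string (in practice you would read the flags line from /proc/cpuinfo instead of a variable):

```shell
# Whole-word test for a CPU flag in a flags string.
has_cpu_flag() {    # has_cpu_flag FLAGS NAME
    printf '%s\n' "$1" | grep -qw -- "$2"
}

flags="pni pclmulqdq dtes64 monitor sse4_1 sse4_2 aes avx"
has_cpu_flag "$flags" pclmulqdq && echo "PCLMULQDQ supported"
has_cpu_flag "$flags" sse4a     || echo "sse4a not present"
```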
| Do I have PCLMUL instruction set support? |
1,353,086,939,000 |
Sorry - I don't remember the exact name. I know there is mechanism to patch the kernel at runtime by loading modules without need of the reboot as long as the structures involved are not affected. It is used by servers for security patches and recently by Ubuntu & Fedora.
What is the name of mechanism
Is there any how-to for hand-compiled kernels
Is it possible to automatically check if the change x.y.z.a -> x.y.z.a+1 changed any structure or not
|
I think you are looking for Ksplice. I haven't really followed the technology so I'm not sure how freely available the how-to information is but they certainly have freely available support for some Fedora and Ubuntu versions.
| Patching Linux kernel on-line (i.e. without rebooting) |
1,353,086,939,000 |
I have only a single on-board sound card, a Realtek ALC298, and I have no need for advanced sound configurations. Just a working sound system to listen to YouTube videos, watch movies, etc. So far I've followed many online articles. To summarize all of what I've tried:
Figure out if channel(s) are muted. I used alsamixer and also checked pavucontrol, both of which show no muted channels. I repeated this step when I was on the 3rd step (read below), and new channels did show up from time to time, but ultimately no sound.
Figure out if it's ALSA or just a PulseAudio issue. So I used aplay -l:
**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC298 Analog [ALC298 Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 7: HDMI 1 [HDMI 1]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 8: HDMI 2 [HDMI 2]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 9: HDMI 3 [HDMI 3]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 10: HDMI 4 [HDMI 4]
Subdevices: 1/1
Subdevice #0: subdevice #0
From there I played a PCM-formatted wav file with aplay -D plughw:0,0 test.wav, which gave:
Playing WAVE 'test.wav' : Signed 32 bit Little Endian, Rate 44100 Hz, Stereo
But nothing! No sound anywhere, speakers or headphones. I concluded that it's an ALSA problem and not a PulseAudio issue, but I do have a doubt, as the PulseAudio daemon was running throughout this step. As an interesting side note, when I was doing this step the GNOME sound settings showed the sound bars moving as if something was playing :D
I found an article on the kernel website about HDA audio and the kernel's ability to dynamically reconfigure the audio codec without having to reboot the machine. I managed to find and use the hdajackretask utility, which is part of the alsa-tools repo, and it provided me with a GUI. This utility writes the pin modifications to the user_pin_configs file (FYI, I verified this manually after reboot). However, I could not figure out the right combination of pin reassignments. The following are the pins that can be reassigned:
0x12
0x13
0x14
0x17
0x18
0x19
0x1a
0x1d
0x1e
0x1f
0x21
My idea here was basically to use the ALC269 model, as I saw an interesting patch file when googling. The link is for the Raspberry Pi, but I figured it was worth a shot, seeing that ALC269 is a supported kernel HDA audio model. Although this did not change anything, perhaps someone can benefit from it.
Any help is appreciated here. This is way beyond my Linux skills.
PS: Manjaro, linux56, although all distributions have the same issue with this sound card. I've installed almost every distro in the past few months hoping sound would work.
Edit 1
Added a pastebin of alsa-info.sh for more information.
|
Good news! A very smart Arch user by the name of ronincoder discovered a fix for the headphone jack. I worked with ronincoder to make a kernel patch [1] and our patch made it into the 5.7 kernel release! It was also applied to the 5.4 LTS kernel. I booted both 5.7.2 and 5.4.46 and the headphone jack audio is loud and clear. :)
Does it work for you? It should if you have a Samsung Notebook 9 Pro NP930SBE-K01US or NP930MBE-K04US (ronincoder's is the former, mine is the latter). You can check your laptop model by running alsa_info.sh and looking at "Board Name". The Realtek ALC298 codec in the NP930SBE-K01US and NP930MBE-K04US identifies itself with "Subsystem Id" 0x144dc169 and 0x144dc176, respectively. If snd_hda_intel sees either of these ids it implements the fix.
What about the speakers? I reported the no-sound-on-internal-speakers issue on the kernel bugzilla [2]. Linux sound maintainer Jaroslav Kysela speculates that there may be some amplifiers connected to the HDA codec which are not initialized by the BIOS, and are thus not active in Linux. He suggests dumping the codec communication for the Windows driver using QEMU. We could then parse the dump and replay the communication in Linux using Early Patching [3] or writing another kernel patch. It's been a month since Jaroslav made this suggestion and I've made some progress but I still don't have a good dump. Please join the discussion on the kernel bugzilla if you'd like to help me. ^^
[1] For reference, our patch made it into Linus' tree as commit 14425f1f521f (ALSA: hda/realtek: Add quirk for Samsung Notebook).
[2] https://bugzilla.kernel.org/show_bug.cgi?id=207423
[3] https://www.kernel.org/doc/html/v4.17/sound/hd-audio/notes.html#early-patching
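For anyone who wants to experiment with the Early Patching route from [3] before a kernel fix lands: you write a small firmware file and point the driver at it with options snd-hda-intel patch=<file> in /etc/modprobe.d. The layout is roughly as below; the codec and subsystem ids must match your own alsa-info output, and the pin number/value shown are placeholders, not a known-good configuration:

```
[codec]
0x10ec0298 0x144dc176 0

[pincfg]
0x19 0x03a11020
```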
| ALSA, PulseAudio and Intel HDA PCH with no sound |
1,353,086,939,000 |
How do I force "machinectl shell" or systemd-run to ask for the password in the terminal instead of a dialog window?
I can run a command as root using:
machinectl shell --uid=root --setenv='DISPLAY=:1.0' --setenv=SHELL=/bin/bash .host /bin/bash -lc 'startxfce4'
but it asks for the password using a dialog window.
I want the same behavior as sudo (sudo asks for the password in the terminal, so I can script it easily).
One way I found is using ssh, like this:
ssh -t MyActualNormalUser@localhost
then run the same command as above:
machinectl shell --uid=root --setenv='DISPLAY=:1.0' --setenv=SHELL=/bin/bash .host /bin/bash -lc 'thunar'
now machinectl asks for the password in the terminal instead of the GUI dialog window!
How can I achieve the same result without using ssh? Is it possible to force machinectl/pkexec to ask for the password in the terminal?
Why not use sudo? sudo does not create a new session for the command I run, while machinectl runs a totally separate session, which makes scripting easier. And, as I read, machinectl/pkexec are the su/sudo replacements, if I'm not wrong...
|
Run a command as another user
To run something as another user we have several methods:
machinectl: this creates a separate session
ssh: this creates a separate session
systemd-run: this does not create a separate session, but creates a separate service unit that can be controlled too, much like a session.
For example, when I run loginctl session-status I get this error: Could not get properties: Caller does not belong to any known session, because there is no session ID.
pkexec: this does not create a separate session
sudo: this does not create a separate session
How to pass the password in the terminal (not the GUI)
we can use:
pkexec: this needs pkttyagent
machinectl: this asks for the password using a GUI; to use the tty for the password we need pkexec/sudo or ssh
systemd-run: this asks for the password using a GUI; to use the tty for the password we need pkexec/sudo or ssh
sudo: sudo has to be replaced by pkexec
ssh: this will need the root password, or we need to use pkexec/sudo or ssh user@localhost
Conclusion:
only machinectl and ssh gave me a separate session; systemd-run is not bad either, but it is for scripts, not for creating sessions.
And to gain root we can use pkexec.
machinectl
timeout 3s sshpass -e pkttyagent -p $(echo $$) &
pkexec machinectl shell --uid=root --setenv="DISPLAY=:1.0" --setenv=SHELL=/bin/bash .host /bin/bash -lc "startxfce4"
ssh
timeout 3s sshpass -e pkttyagent -p $(echo $$) &
pkexec ssh -t root@localhost "bash -lc 'export DISPLAY=:1.0 ; startxfce4'"
systemd-run
timeout 3s sshpass -e pkttyagent -p $(echo $$) &
pkexec systemd-run --pty --pipe --wait --collect --service-type=exec --uid=root bash -lc "export DISPLAY=:1.0 ; export SHELL=/bin/bash ; startxfce4"
pkttyagent: needed to force pkexec to ask for the password in the terminal instead of the GUI dialog
timeout 3s: needed because pkttyagent will not exit on its own.
| sudo equivalent in systemd |
1,353,086,939,000 |
I've seen a few other questions where they show you how to connect to a network using bash, but I haven't seen anything where you connect to a captive portal network from the command line using Linux.
Is there a way to log in to a captive portal without being in graphical mode/having a window manager?
|
Since the underlying OS layers do not speak WISPr and do not run a program to deal with captive portals, to connect to a captive portal from the command line you only need a browser or a script.
One of the possible solutions is using lynx, a text mode browser.
It will work with most captive portals, and will allow you to enter your login and password to authenticate with the captive portal. I am not sure it is WISPr-aware (i.e. for the few rare portals where WISPr is mandatory).
In the past there were also bash scripts floating around for FON; they no longer work nowadays. See https://gist.github.com/cusspvz/3ab1ea9110f4ef87f0d2e1cd134aca67 or this one: https://gist.github.com/itay-grudev/d3d4eb0dc4e239d96c84
A good clue for how to write such a script can be seen here, in Python. However, you will have to adapt it to your specific needs.
https://github.com/Palakis/fortilogin
However for the majority of portals out there, lynx is fine.
See the related question Captive portal using Apache
To get an idea of the WISPr tags I am talking about, see Getting WISPr tags from a FON authentication portal
For others reading this question: to be able to test a browser like Chrome, Firefox or lynx on a Mac authenticating to a portal, you need to disable CNA. See the related Disabling CNA in MacOS
P.S. With the notable exception of major telecoms and some wireless vendors like Ruckus (and a couple of ready-made captive portals like pfSense and CoovaChilli), many (re)implementations of captive portals only implement the captive/redirection part and do not implement WISPr.
Captive portals being dealt with automagically by Apple, Windows, Android and iOS only adds to the confusion of many people who do not know how to handle captive portals on less complex systems, because they have that nice layer of abstraction on the more complex ones.
To deal with captive portals on systems that do not detect them, you need to open a browser and hit reload/try to open a web page, to get presented with a page for accepting the provider's clauses/ToS, and/or to get authenticated.
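One building block for scripting any of this is detecting whether you are behind a portal at all. The common trick is to fetch a URL with a known response and see whether it was intercepted. A sketch; the classifier helper is my own naming, and the probe URL is one of several public endpoints:

```shell
# A captive portal typically rewrites the expected body or answers
# with a redirect to its login page instead of the known response.
classify_probe() {    # classify_probe HTTP_STATUS BODY
    if [ "$1" = 200 ] && [ "$2" = success ]; then
        echo open
    else
        echo captive-or-offline
    fi
}

# Real probe (needs network, so commented out here):
#   body=$(curl -s http://detectportal.firefox.com/success.txt)
#   code=$(curl -s -o /dev/null -w '%{http_code}' http://detectportal.firefox.com/success.txt)
#   classify_probe "$code" "$body"

classify_probe 200 success                # → open
classify_probe 302 '<html>login</html>'   # → captive-or-offline
```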
| How use a captive portal when in text mode? |
1,353,086,939,000 |
I have a OSX machine where sort runs GNU sort from coreutils 8.26 (installed from Homebrew), and a Linux machine where sort runs GNU sort from coreutils 8.25.
On the Mac:
mac$ echo -e "{1\n2" | sort
2
{1
While on Linux:
linux$ echo -e "{1\n2" | sort
{1
2
I'm aware that sort depends on the locale. I ran locale on the Linux machine, prepended each line of output with export and ran the resulting lines on the OSX machine before running (in the same terminal) the sort command again, which gave the same output as before.
I noticed, however, that running locale on the Mac doesn't show all of the lines which appear on Linux, and I'm not sure if this is related.
The locale on Linux:
linux$ locale
LANG=en_CA.UTF-8
LANGUAGE=en_CA:en
LC_CTYPE="en_CA.UTF-8"
LC_NUMERIC="en_CA.UTF-8"
LC_TIME="en_CA.UTF-8"
LC_COLLATE="en_CA.UTF-8"
LC_MONETARY="en_CA.UTF-8"
LC_MESSAGES="en_CA.UTF-8"
LC_PAPER="en_CA.UTF-8"
LC_NAME="en_CA.UTF-8"
LC_ADDRESS="en_CA.UTF-8"
LC_TELEPHONE="en_CA.UTF-8"
LC_MEASUREMENT="en_CA.UTF-8"
LC_IDENTIFICATION="en_CA.UTF-8"
LC_ALL=en_CA.UTF-8
And locale on OSX:
mac$ locale
LANG="en_CA.UTF-8"
LC_COLLATE="en_CA.UTF-8"
LC_CTYPE="en_CA.UTF-8"
LC_MESSAGES="en_CA.UTF-8"
LC_MONETARY="en_CA.UTF-8"
LC_NUMERIC="en_CA.UTF-8"
LC_TIME="en_CA.UTF-8"
LC_ALL="en_CA.UTF-8"
I've found that if I set LC_ALL=C on both machines, they both sort 2 before {1. But if I set LC_ALL=en_CA.UTF-8 on both machines I have the differing output as above. Same if I set LC_ALL=en_CA.utf8 on both machines. (locale -a lists en_CA.utf8 on the Linux machine but en_CA.UTF-8 on the OSX machine.)
Any idea what is going on here?
|
I did some digging on the same problem the other day, so let me share a technical answer.
On macOS, /usr/share/locale/en_US.UTF-8/LC_COLLATE (or en_CA.UTF-8, same thing) is a symlink to /usr/share/locale/la_LN.US-ASCII/LC_COLLATE, which is generated from la_LN.US-ASCII.src with colldef. Here's the entirety of la_LN.US-ASCII.src:
# ASCII
#
# $FreeBSD: src/share/colldef/la_LN.US-ASCII.src,v 1.2 1999/08/28 00:59:47 peter Exp $
#
order \
\x00;...;\xff
You can verify that the binary LC_COLLATE file is indeed generated from la_LN.US-ASCII.src by verifying checksums:
$ colldef -o /dev/stdout usr-share-locale.tproj/colldef/la_LN.US-ASCII.src | sha256sum
9ec9b40c837860a43eb3435d7a9cc8235e66a1a72463d11e7f750500cabb5b78 -
$ sha256sum </usr/share/locale/en_US.UTF-8/LC_COLLATE
9ec9b40c837860a43eb3435d7a9cc8235e66a1a72463d11e7f750500cabb5b78 -
The ruleset is easily understandable: just compare the byte values one by one. So the collation rules for en_US.UTF-8 are the same as the POSIX locale (aka C locale). { is 0x7B, 2 is 0x32, so { comes after 2.
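You can reproduce the pure byte-order behavior on any platform by pinning the locale, which is also the standard fix when you need identical sort output across machines:

```shell
printf '{1\n2\n' | LC_ALL=C sort
# → 2
#   {1
# ('2' is 0x32, '{' is 0x7B, and C-locale sort compares raw bytes)
```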
This ruleset is an artifact of FreeBSD 5, synced into Mac OS X 10.3 Panther. See the colldef directory in the FreeBSD 5.0.0 source tree. It has not changed on OS X / macOS since.
On Linux, locale programs and data are part of glibc. See glibc localedata/locales tree, or /usr/share/i18n/locales on Debian/Ubuntu. If you inspect /usr/share/i18n/locales/en_US, you'll see that it pulls in iso14651_t1_common for LC_COLLATE rules. So it follows ISO 14651 rules for collation.
There are more details in the blog post: https://blog.zhimingwang.org/macos-lc_collate-hunt.
| Why does Gnu sort sort differently on my OSX machine and Linux machine? |
1,353,086,939,000 |
I know you can change a process niceness with setpriority or nice or renice.
However, does Linux automatically adjust/change a process niceness without user input?
I have a process for which I use setpriority in C, like so:
setpriority(PRIO_PROCESS, 0, -1)
When the process is running, I can see its niceness value is now -1 by running htop.
While investigating a crash on a remote machine, the output of htop was provided to me. I noticed that the niceness value for this process had changed on one instance to 0 and on another instance to 6. I'd like to know if this was changed by the kernel or if the only way to change this value is by having a user or script deliberately make the change.
|
To my knowledge, the Linux kernel does not change the niceness of a process, and I fail to see why it would, since it doesn't need to do so in order to lower a process's priority. The niceness is information given to the kernel, telling it how nice that process is willing to be. The kernel scheduler is free to take this information into account however it wants in order to change the priority of a process; it doesn't need to change the niceness value itself.
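The inheritance behaviour is easy to check from a shell; the kernel never edits the value behind your back:

```shell
# A child launched at nice 5 reports 5: niceness is inherited from the
# parent and only changed by explicit nice/renice/setpriority calls.
nice -n 5 sh -c 'nice'
# prints 5
```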
On the other hand, in user land, there are daemons like AND (the auto nice daemon) whose task is to renice processes according to rules set up by the admin. Do you have such a daemon installed on your server?
However, the AND daemon does not renice processes owned by root, and since you set a priority of -1 with setpriority(), I assume this is the case here. Therefore, the only reason I see for that change in niceness is user interaction.
That said, since you are using htop, it is possible that the process has been reniced inadvertently by pressing the ] key or the F8 key.
| Does Linux Change a Process Niceness Automatically? |
1,353,086,939,000 |
$ setserial /dev/ttyUSB0 -G
Cannot get serial info: Inappropriate ioctl for device
What does this error mean? stty works fine:
$ stty -F /dev/ttyUSB0
speed 9600 baud; line = 0;
eof = ^A; min = 1; time = 0;
-brkint -icrnl -imaxbel
-opost -onlcr
-isig -icanon -iexten -echo -echoe -echok -echoctl -echoke
|
This means that the driver does not support the IOCTL that setserial is using:
setserial gets the information via an ioctl() call. In case the driver
for your device does not support TIOCGSERIAL, the "invalid argument" is
returned.
(Debian bug report)
I think stty should be able to perform any configuration you need for a USB-Serial device.
| setserial: Cannot get serial info: Inappropriate ioctl for device |
1,353,086,939,000 |
Consider you've been informed about a bad sector like this:
[48792.329933] Add. Sense: Unrecovered read error - auto reallocate failed
[48792.329936] sd 0:0:0:0: [sda] CDB:
[48792.329938] Read(10): ...
[48792.329949] end_request: I/O error, dev sda, sector 1545882485
[48792.329968] md/raid1:md126: sda: unrecoverable I/O read error
for block 1544848128
[48792.330018] md: md126: recovery interrupted.
How do I find out which file might include this sector? How to map a sector to file? Or how to find out if it just maps to free filesystem space?
The mapping process should be able to deal with the usual storage stack.
For example, in the above example, the stack looks like this:
/dev/sda+sdb -> Linux MD RAID 1 -> LVM PV -> LVM VG -> LVM LV -> XFS
But, of course, it could even look like this:
/dev/sda+sdb -> Linux MD RAID 1 -> DM_CRYPT -> LVM PV -> LVM VG -> LVM LV -> XFS
|
The traditional way is to copy all files elsewhere and see which one triggers a read error. Of course, this does not answer the question at all if the error is hidden by the redundancy of the RAID layer.
Apart from that I only know the manual approach. Which is way too bothersome to actually go through with, and if there is a tool that does this magic for you, I haven't heard of it yet, and I'm not sure if more generic tools (like blktrace) would help in that regard.
For the filesystem, you can use filefrag or hdparm --fibmap to determine block ranges of all files. Some filesystems offer tools to make the lookup in the other direction (e.g. debugfs icheck) but I don't know of a syscall that does the same, so there seems to be no generic interface for block->file lookups.
For LVM, you can use lvs -o +devices to see where each LV is stored; you also need to know the pvs -o +pe_start,vg_extent_size for Physical Extent offset/sizes. It may actually be more readable in the vgcfgbackup. This should allow you to translate the filesystem addresses to block addresses in each PV.
For LUKS, you can see the offset in cryptsetup luksDump.
For mdadm, you can see the offset in mdadm --examine. If the RAID level is something other than 1, you will also need to do some math, and more specifically, you need to know the RAID layout in order to understand which address on the md device may translate to which block of which RAID member device.
Finally you will need to take partition offsets into account, unless you were using the disks directly without any partitioning.
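Once the offsets for each layer are known, the translation is plain integer arithmetic. A sketch for the simplest stack, a filesystem directly on one partition, with assumed offsets (the partition start, sector size and block size here are illustrative, not taken from the question):

```shell
# Translate an absolute disk sector from dmesg into a filesystem block number.
DISK_SECTOR=1545882485   # sector reported by the kernel log
PART_START=2048          # partition start sector (from fdisk -l) -- assumed
SECTOR_SIZE=512          # logical sector size of the disk
FS_BLOCK_SIZE=4096       # filesystem block size (xfs_info / tune2fs -l)
echo $(( (DISK_SECTOR - PART_START) * SECTOR_SIZE / FS_BLOCK_SIZE ))
# -> 193235054
```

The resulting block number is what tools like debugfs icheck (on ext*) expect; each extra layer (md, LUKS, LVM) adds its own offset that must be subtracted first.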
| How to find out which file is affected by a bad sector? |
1,353,086,939,000 |
On Linux: Normally pseudo terminals are allocated one after the other.
Today I realized that even after a reboot of my laptop the first opened
terminal window (which was always pts/0 earlier) suddenly became pts/5.
This was weird and made me curious. I wanted to find out which process is occupying the device /dev/pts/0 and had no luck using common tools like who and lsof or even ps as suggested in the comment:
pf@pfmaster-P170EM:pts/6 /var/log 1115> ps auxww | grep pts/0
pf 7042 0.0 0.0 17208 964 pts/6 S+ 12:32 0:00 grep --color=auto pts/0
What am I missing here? Possibly infected by a rootkit?
|
If you have fuser installed and have the permission to use sudo:
for i in $(sudo fuser /dev/pts/0); do
ps -o pid= -o command= -p $i
done
eg:
24622 /usr/bin/python /usr/bin/terminator
24633 ksh93 -o vi
| Which process is occupying a certain pseudo terminal pts/X? |
1,353,086,939,000 |
I would like to know how I can run two ongoing processes at the same time in Linux/bash. Basically, I have a Node web server and an MJPG-Streamer server. I want to run both these processes at once, but they are ongoing processes. I heard about running them as background processes, but I want them to have the same priority as a foreground process.
|
When you say priority, you probably mean the nice-level of the process. To quote Wikipedia:
nice is a program found on Unix and Unix-like operating systems such
as Linux. It directly maps to a kernel call of the same name. nice is
used to invoke a utility or shell script with a particular priority,
thus giving the process more or less CPU time than other processes. A
niceness of −20 is the highest priority and 19 or 20 is the lowest
priority. The default niceness for processes is inherited from its
parent process, usually 0.
Running a process in the background does not affect its nice level. It is entirely the same as when you run it in the foreground.
So you can easily run your application/process in the background by invoking it with a trailing '&'-sign:
my-server &
You can also send a foreground process to the background by pressing Ctrl+Z (which pauses execution) and then running bg.
You can list running background-tasks with the command jobs.
To get it back to the foreground you must find out its job-ID with the jobs-command, and run fg [job-ID] (for example: fg 1)
Background tasks will send all their output to your shell. If you don't want to see their output, you'll need to redirect it to /dev/null:
my-server 1>/dev/null &
...which will redirect normal output into the void. Errors will still be visible.
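A minimal sketch of the usual pattern, with sleep standing in for the two real servers (node and mjpg_streamer), both at normal foreground priority:

```shell
# Start both long-running services in the background, then block until
# both exit, so the script behaves like a foreground supervisor.
sleep 1 & pid1=$!          # stand-in for: node server.js &
sleep 1 & pid2=$!          # stand-in for: mjpg_streamer ... &
wait "$pid1" "$pid2"       # wait for both children
echo "both finished"
```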
| How do I run two ongoing processes at once in linux/bash? |
1,353,086,939,000 |
I decided to encrypt my root partition with LUKS+LVM.
My ThinkPad setup:
Samsung 830 128GB SSD
750GB HDD
Core 2 Duo 2,5 GHz P9500
8GB RAM
But the more I read, the less I understand about the two following subjects:
1a. The cipher
I was going to use SHA1 instead of 2/512 (as some suggest), because of that quote from cryptsetup FAQ:
5.20 LUKS is broken! It uses SHA-1!
No, it is not. SHA-1 is (academically) broken for finding collisions, but not for using it in a key-derivation function. And that collision vulnerability is for non-iterated use only. And you need the hash-value in verbatim.
This basically means that if you already have a slot-key, and you have set the PBKDF2 iteration count to 1 (it is > 10'000 normally), you could (maybe) derive a different passphrase that gives you the same slot-key. But if you have the slot-key, you can already unlock the key-slot and get the master key, breaking everything. So basically, this SHA-1 vulnerability allows you to open a LUKS container with high effort when you already have it open.
The real problem here is people that do not understand crypto and claim things are broken just because some mechanism is used that has been broken for a specific different use. The way the mechanism is used matters very much. A hash that is broken for one use can be completely secure for other uses and here it is.
Which I read as "there is no point of using anything other than SHA-1". But then some people tell me, that it's not exactly like that. So I no longer know what to think.
1b.
Also, I could not find any information on whether the cipher has any influence on disk read/write/seek performance once the disk is unlocked and the system logged into.
So does the complexity of the cipher affect only the "performance" on password entering stage, or also during normal use of the system?
2. The algorithm
I have been reading on this since couple of days, but the more I read, the more confused I get. Everything I read says that AES is the fastest, and Serpent is the slowest. But not according to my laptop:
$ cryptsetup benchmark
Tests are approximate using memory only (no storage IO).
PBKDF2-sha1 344926 iterations per second
PBKDF2-sha256 198593 iterations per second
PBKDF2-sha512 129007 iterations per second
PBKDF2-ripemd160 271933 iterations per second
PBKDF2-whirlpool 134295 iterations per second
# Algorithm | Key | Encryption | Decryption
aes-cbc 128b 149.8 MiB/s 147.9 MiB/s
serpent-cbc 128b 51.0 MiB/s 196.4 MiB/s
twofish-cbc 128b 127.6 MiB/s 152.5 MiB/s
aes-cbc 256b 114.3 MiB/s 113.8 MiB/s
serpent-cbc 256b 51.2 MiB/s 198.9 MiB/s
twofish-cbc 256b 129.8 MiB/s 167.5 MiB/s
aes-xts 256b 153.3 MiB/s 150.6 MiB/s
serpent-xts 256b 176.4 MiB/s 184.1 MiB/s
twofish-xts 256b 160.8 MiB/s 159.8 MiB/s
aes-xts 512b 115.4 MiB/s 112.1 MiB/s
serpent-xts 512b 178.6 MiB/s 184.2 MiB/s
twofish-xts 512b 160.7 MiB/s 158.9 MiB/s
So it appears that Serpent's not only the fastest, but on top of that it is the fastest with the most complex key.
Shouldn't it be the other way around? Am I reading it wrong, or something?
|
1a - It really doesn't matter all that much. Whichever hash you use for the key derivation function, LUKS makes sure it is computationally expensive: it simply iterates until 1 second of real time has passed.
1b - The key derivation method has no influence on runtime performance; the cipher itself does. cryptsetup benchmark shows you as much.
2 - AES is the fastest if your CPU is modern enough to support AES-NI instructions (hardware acceleration for AES). If you go with serpent now you may not be able to utilize the AES-NI of your next laptop.
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1 1165084 iterations per second
PBKDF2-sha256 781353 iterations per second
PBKDF2-sha512 588426 iterations per second
PBKDF2-ripemd160 726160 iterations per second
PBKDF2-whirlpool 261882 iterations per second
# Algorithm | Key | Encryption | Decryption
aes-cbc 128b 692.9 MiB/s 3091.3 MiB/s
serpent-cbc 128b 94.6 MiB/s 308.6 MiB/s
twofish-cbc 128b 195.2 MiB/s 378.7 MiB/s
aes-cbc 256b 519.5 MiB/s 2374.0 MiB/s
serpent-cbc 256b 96.5 MiB/s 311.3 MiB/s
twofish-cbc 256b 197.9 MiB/s 378.0 MiB/s
aes-xts 256b 2630.6 MiB/s 2714.8 MiB/s
serpent-xts 256b 310.4 MiB/s 303.8 MiB/s
twofish-xts 256b 367.4 MiB/s 376.6 MiB/s
aes-xts 512b 2048.6 MiB/s 2076.1 MiB/s
serpent-xts 512b 317.0 MiB/s 304.2 MiB/s
twofish-xts 512b 368.7 MiB/s 377.0 MiB/s
Keep in mind this benchmark does not use storage so you should verify these results with whatever storage and filesystem you are actually going to use.
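One crude way to follow that advice is a synced dd write to the storage in question (a sketch only: the temp-file target and the 8 MiB size are arbitrary, and for real numbers you would write to the actual dm-crypt mapped device):

```shell
# Rough sequential-write sanity check against real storage.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=8 conv=fsync   # dd reports MB/s on stderr
wc -c < "$tmp"                                       # 8388608 bytes written
rm -f "$tmp"
```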
| Trying to understand LUKS encryption |
1,353,086,939,000 |
I want to block font substitution in specific apps on Linux, but my research indicates that it might be controlled only at the system level, probably with fontconfig. I have found some discussion of how to direct fontconfig to substitute particular fonts, but nothing on how to competely turn off the feature.
The best answer would be how to disable glyph fallback for individual apps, but doing it system wide would be better than nothing.
|
There appears to be absolutely no way to disable font fallback. If you have fontconfig installed, the font substitution mechanism will always be active.
I have discovered two narrow ways to limit fallback. First, for applications that use Pango, you can set Pango's fallback attribute and compile a special version that doesn't use fallback fonts. (Edit: Someone suggested to me that it may also be possible to do this by modifying the GtkTextLayout and GtkTextView within an app.)
Second, you can restrict the fontconfig search path to a directory with just the font(s) you want.
| How to block glyph fallback on Linux? |
1,353,086,939,000 |
I don't know about the console font format. For a normal TrueType font, I could use gnome-font-viewer to preview it, but what about a console font? Without switching to another tty and using the setfont command, is there a way to view it in X?
|
I don't think there's any widely-used tool. Try psfedit. There's also NAFE which lets you convert between console fonts and an ASCII pixel representation, or Cse to edit a font from the console. I haven't used any of these, so I'm not particularly recommending them, just mentioning their existence.
| What tool can preview console font? |
1,353,086,939,000 |
I am trying to achieve something similar to this:
https://superuser.com/questions/67659/linux-share-keyboard-over-network
The difference is that I need the remote keyboard to be usable separate from my local keyboard. The method described in the link seems to pipe the events into an existing device file. I need the remote keyboard to show as a physical (slave) device when I run xinput list
Why do I need this? I am trying to play a two player game but I don't have an external USB keyboard, so I want to pipe the keypresses from the remote computer to a fake device (so I can assign one device per player).
|
I found a project called netevent on GitHub which does exactly what I need. It makes local devices available to a remote computer.
I was able to forward the mouse, but not the keyboard due to compatibility issues.
Technically, this answers my question of how to share the keyboard over the network and have it appear as a separate device.
| Share keyboard over network as separate device? |
1,353,086,939,000 |
I am looking for a way to profile a single process including time spent for CPU, I/O, memory usage over time and optionally system calls.
I already know callgrind, which offers some basic profiling features, but it requires debugging information and lacks most of the other information mentioned.
I know strace -c, which provides a summary of all system calls and their required CPU time.
I know several IO-related tools like (io)top, iostat and vmstat, but all of them lack detailed statistics about a single process. There is also /proc/$PID/io providing some IO statistics about a single process, but I would have to read it at fixed intervals in order to gather IO information over time.
I know pidstat, which provides CPU load, IO statistics and memory utilization, but no system calls, only at coarse granularity and not over time.
One could of course combine several of the described tools to gather this information over time, but the result lacks granularity and thus misses important information. What I am looking for is a single tool providing all (or at least most) of the mentioned information, ideally over time. Does such a tool exist?
|
Meanwhile I wrote my own program - audria - capable of monitoring the resource usage of one or more processes, including current/average CPU usage, virtual memory usage, IO load and other information.
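For anyone without audria at hand, the polling approach it automates can be sketched directly against /proc (Linux only; three samples at a one-second interval, watching the current shell):

```shell
# Sample a process's CPU and I/O counters from /proc at a fixed interval.
pid=$$                                  # the process to profile
for sample in 1 2 3; do
    # utime+stime in clock ticks: fields 14 and 15 of /proc/<pid>/stat
    awk '{print "cpu_ticks=" $14 + $15}' "/proc/$pid/stat"
    # per-process I/O counters, if the kernel exposes them
    [ -r "/proc/$pid/io" ] && grep -E '^(read|write)_bytes' "/proc/$pid/io"
    sleep 1
done
```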
| Detailed Per-Process Profiling |
1,353,086,939,000 |
Is there an easy way in bash to flush out the standard input?
I have a script that is commonly run, and at one point in the script read is used to get input from the user. The problem is that most users run this script by copying and pasting the command line from web-based documentation. They frequently include some trailing whitespace, or worse, some of the text following the sample command. I want to adjust the script to simply get rid of the extra junk before displaying the prompt.
|
This thread on nonblocking I/O in bash might help.
It suggests using stty and dd.
Or you could use the bash read builtin with the -t 0 option.
# do your stuff
# discard rest of input before exiting
while read -t 0 notused; do
read input
echo "ignoring $input"
done
If you only want to do it if the user is at a terminal, try this:
# if we are at a terminal, discard rest of input before exiting
if test -t 0; then
while read -t 0 notused; do
read input
echo "ignoring $input"
done
fi
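For a quick non-interactive check of the draining idea, the loop can be driven with a here-document standing in for the pasted junk (bash's fractional -t timeout, available since bash 4, is assumed):

```shell
# Drain everything pending on stdin, then continue to the real prompt.
bash -c '
    while IFS= read -r -t 0.01 _junk; do :; done
    echo "stdin drained, safe to prompt now"
' <<'EOF'
trailing junk pasted from the docs
more junk
EOF
```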
| Bash flush standard input before a read |
1,353,086,939,000 |
Sometimes you need to unmount a filesystem or detach a loop device but it is busy because of open file descriptors, perhaps because of a smb server process.
To force the unmount, you can kill the offending process (or try kill -SIGTERM), but that would close the smb connection (even though some of the files it has open do not need to be closed).
A hacky way to force a process to close a given file descriptor is described here using gdb to call close(fd).
This seems dangerous, however. What if the closed descriptor is recycled? The process might use the old stored descriptor not realizing it now refers to a totally different file.
I have an idea, but don't know what kind of flaws it has: using gdb, open /dev/null with O_WRONLY (edit: a comment suggested O_PATH as a better alternative), then dup2 to close the offending file descriptor and reuse its descriptor for /dev/null. This way any reads or writes to the file descriptor will fail.
Like this:
sudo gdb -p 234532
(gdb) set $dummy_fd = open("/dev/null", 0x200000) // O_PATH
(gdb) p dup2($dummy_fd, offending_fd)
(gdb) p close($dummy_fd)
(gdb) detach
(gdb) quit
What could go wrong?
|
Fiddling with a process with gdb is almost never safe though may be
necessary if there's some emergency and the process needs to stay open
and all the risks and code involved is understood.
Most often I would simply terminate the process, though some cases may
be different and could depend on the environment, who owns the
relevant systems and process involved, what the process is doing,
whether there is documentation on "okay to kill it" or "no, contact
so-and-so first", etc. These details may need to be worked out in a
post-mortem meeting once the dust settles. If there is a planned
migration it would be good in advance to check whether any processes
have problematic file descriptors open so those can be dealt with in a
non-emergency setting (cron jobs or other scheduled tasks that run
only in the wee hours when migrations may be done are easily missed if
you check only during daytime hours).
Write-only versus Read versus Read-Write
Your idea to reopen the file descriptor O_WRONLY is problematic as not
all file descriptors are write-only. John Viega and Matt Messier take a
more nuanced approach in the "Secure Programming Cookbook for C and C++"
book and handle standard input differently than standard out and
standard error (p. 25, "Managing File Descriptors Safely"):
static int open_devnull(int fd) {
FILE *f = 0;
if (!fd) f = freopen(_PATH_DEVNULL, "rb", stdin);
else if (fd == 1) f = freopen(_PATH_DEVNULL, "wb", stdout);
else if (fd == 2) f = freopen(_PATH_DEVNULL, "wb", stderr);
return (f && fileno(f) == fd);
}
In the gdb case the descriptor (or also FILE * handle) would need to
be checked whether it is read-only or read-write or write-only and an
appropriate replacement opened on /dev/null. If not, a once read-only
handle that is now write-only will cause needless errors should the
process attempt to read from that.
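On Linux you can check a descriptor's access mode from /proc before deciding how to reopen it; a small sketch using the current shell:

```shell
# The low two bits of "flags" encode O_RDONLY (0), O_WRONLY (1), O_RDWR (2).
exec 3</dev/null                      # example: a read-only descriptor
grep '^flags' "/proc/$$/fdinfo/3"     # e.g. "flags: 0100000" -> O_RDONLY
exec 3<&-                             # close it again
```

For a foreign process you would read /proc/<pid>/fdinfo/<fd> instead, which needs the appropriate permissions.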
What Could Go Wrong?
How exactly a process behaves when its file descriptors (and likely also
FILE * handles) are fiddled behind the scenes will depend on the
process and will vary from "no big deal" should that descriptor never be
used to "nightmare mode" where there is now a corrupt file somewhere due
to unflushed data, no file-was-properly-closed indicator, or some other
unanticipated problem.
For FILE * handles the addition of a fflush(3) call before closing
the handle may help, or may cause double buffering or some other issue;
this is one of the several hazards of making random calls in gdb
without knowing exactly what the source code does and expects. Software
may also have additional layers of complexity built on top of fd
descriptors or the FILE * handles that may also need to be dealt with.
Monkey patching the code could turn into a monkey wrench easily enough.
Summary
Sending a process a standard terminate signal should give it a chance
to properly close out resources, same as when a system shuts down
normally. Fiddling with a process with gdb will likely not properly
close things out, and could make the situation very much worse.
| Safest way to force close a file descriptor |
1,353,086,939,000 |
Netfilter connection tracking is designed to identify some packets as "RELATED" to a conntrack entry.
I'm looking to find the full details of TCP and UDP conntrack entries, with respect to ICMP and ICMPv6 error packets.
Specific to IPv6 firewalling, RFC 4890 clearly describes the ICMPv6 packets that shouldn't be dropped
http://www.ietf.org/rfc/rfc4890.txt
4.3.1. Traffic That Must Not Be Dropped
Error messages that are essential to the establishment and maintenance
of communications:
Destination Unreachable (Type 1) - All codes
Packet Too Big (Type 2)
Time Exceeded (Type 3) - Code 0 only
Parameter Problem (Type 4) - Codes 1 and 2 only
Appendix A.4 suggests some more specific checks that could be performed on Parameter Problem messages if a firewall has the
necessary packet inspection capabilities.
Connectivity checking messages:
Echo Request (Type 128)
Echo Response (Type 129)
For Teredo tunneling [RFC4380] to IPv6 nodes on the site to be possible, it is essential that the connectivity checking messages are
allowed through the firewall. It has been common practice in IPv4
networks to drop Echo Request messages in firewalls to minimize the
risk of scanning attacks on the protected network. As discussed in
Section 3.2, the risks from port scanning in an IPv6 network are much
less severe, and it is not necessary to filter IPv6 Echo Request
messages.
4.3.2. Traffic That Normally Should Not Be Dropped
Error messages other than those listed in Section 4.3.1:
Time Exceeded (Type 3) - Code 1
Parameter Problem (Type 4) - Code 0
In the case of a linux home router, is the following rule sufficient to protect the WAN interface, while letting through RFC 4890 ICMPv6 packets? (ip6tables-save format)
*filter
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
Addendum:
of course, one needs other rules for NDP and DHCP-PD:
-A INPUT -s fe80::/10 -d fe80::/10 -i wanif -p ipv6-icmp -j ACCEPT
-A INPUT -s fe80::/10 -d fe80::/10 -i wanif -p udp -m state --state NEW -m udp --sport 547 --dport 546 -j ACCEPT
In other terms, can I safely get rid of the following rules and still comply with RFC 4890, keeping only the "RELATED" rule first?
-A INPUT -i wanif -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
-A INPUT -i wanif -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
-A INPUT -i wanif -p icmpv6 --icmpv6-type ttl-exceeded -j ACCEPT
-A INPUT -i wanif -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
|
I don't know the answer, but you can find out yourself.
Use these rules (creates an empty chain "NOOP" for accounting purposes):
*filter
...
:NOOP - [0:0]
...
-A INPUT -i wanif -p icmpv6 --icmpv6-type destination-unreachable -j NOOP
-A INPUT -i wanif -p icmpv6 --icmpv6-type packet-too-big -j NOOP
-A INPUT -i wanif -p icmpv6 --icmpv6-type ttl-exceeded -j NOOP
-A INPUT -i wanif -p icmpv6 --icmpv6-type parameter-problem -j NOOP
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i wanif -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
-A INPUT -i wanif -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
-A INPUT -i wanif -p icmpv6 --icmpv6-type ttl-exceeded -j ACCEPT
-A INPUT -i wanif -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
...
Then sometimes later, use ip6tables-save -c to see the counters for the above rules. If the counters are > 0 for the NOOP rules above the "RELATED" line but 0 for the ACCEPT rules below, you know the "RELATED" match has taken care of accepting them. If the counter for some NOOP rule is 0, then you can't tell yet for that particular icmpv6 type whether RELATED does it or not. If some ACCEPT line has its counter > 0, then you do need that explicit rule.
| netfilter TCP/UDP conntrack RELATED state with ICMP / ICMPv6 |
1,353,086,939,000 |
Why is this a binary multi-megabyte blob /etc/udev/hwdb.bin and why under /etc?
Should I store it with etckeeper?
|
man hwdb:
Hardware Database Files
-- snipping unnecessary documentation details for this answer ---
The content of all hwdb files is read by systemd-hwdb(8) and compiled
to a binary database located at /etc/udev/hwdb.bin, or alternatively
/usr/lib/udev/hwdb.bin if you want to ship the compiled database in an
immutable image. During runtime, only the binary database is used.
man systemd-hwdb:
systemd-hwdb [options] update
Update the binary database.
You don't need to put this file under /etc version control: it is a generated artifact, as long as you figure out when your specific distro runs systemd-hwdb. Search for any systemd units that could be regenerating this file at boot or on a specific runtime trigger/action.
And, it's up to the distribution to choose if it will store this binary at /etc/udev or /usr/lib/udev under the name hwdb.bin.
| Why is this a binary multi-megabyte blob `/etc/udev/hwdb.bin` under `/etc`? |
1,353,086,939,000 |
I'm using mint 18.1 and searching a way to resize of opened window and center it by hotkey (resize to default value that set up).
In cinnamon keyboard preferences I found in shortcuts a way to set window's position at center, but didn't find anything about set default window size (in other preferences sections).
Is there a way to do such stuff?
|
Install Devilspie:
sudo apt-get update && sudo apt-get install devilspie
Devilspie is a non-gui utility that lets you make applications start in
specified workspaces, in specified sizes and placements, minimized or
maximized and much more based on simple config files.
Create a Devilspie configuration directory for the current user and then create a configuration file for devilspie, for example for gnome-terminal:
mkdir ~/.devilspie && gedit ~/.devilspie/gnome-terminal.ds
Add the desired contents, for example:
(if (is (application_name) "Terminal")
(begin
(geometry "800x400")
(center)
)
)
Then assign a custom keyboard shortcut in Cinnamon:
Go to main menu and search for app "Keyboard" and run it.
Choose the "Shortcuts" tab, choose Category "Custom Shortcuts" in the left and click "Add custom shortcut"
In the command window, enter "devilspie"
Click Add, then under Keyboard bindings choose your desired shortcut (for example, I chose Ctrl+Alt+D) and close this window.
Now after pressing the shortcut, all open Terminal windows will change the size to desired size and will be centered.
Study the Devilspie documentation for other possibilities and usage.
| cinnamon resize window to default size by hotkey |
1,353,086,939,000 |
I'm currently using Debian Testing (stretch) after a hard drive crash on my laptop, but I'm facing a weird issue with it. The laptop (Acer 5830TG) has a non-removable three-cell 6000mAh Li-Ion battery, current capacity only 335mAh due to wear, which does not permit charging until battery voltage drops below 10.9 V. Previously the laptop had Debian Testing Jessie, Fedora 21 and Slax Live, but none of those shut down automatically on low battery (even voltage below 10.8 V). The latest Debian is shutting down if the battery level is below 10%, and currently I'm facing frequent short term power cuts.
So what is wrong with that?
Some power saving udev/systemd/dbus rules?
Any kind of new kernel feature to avoid battery over-discharge?
Or system misconfiguration?
Points to note
I got battery voltage/capacity, etc., from /sys/class/power_supply/BAT0
not any kind of hardware issue;
tested with Slax, Lubuntu, Tiny Core live USB
not any kind of desktop or display manager issue; I do not use any display manager and log out of the X session when there is no AC power.
|
I solved the problem myself: it's the UPower daemon, which is started automatically by dbus-daemon as soon as I start any X session. Closing the X session does not stop the upower daemon, however. So log out of the X session and run
sudo service upower stop
and problem solved.
| Undesired shutdown on low battery - Debian Testing |
1,353,086,939,000 |
I am trying to forward all outgoing traffic from port 80 to port 8080 using iptables and I tried the following rule, though it did not work:
iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-port 8080
Also, for incoming traffic I need a rule that forwards the port on the same host, without acting as a proxy.
|
You will need that in the nat table, e.g. (iptables-restore format):
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
COMMIT
Or you can run these commands directly, forwarding 80 to 8080 and so on:
Incoming on 80 to 8080
iptables -t nat -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
Outgoing on 80 to 8080 (locally generated traffic traverses the OUTPUT chain, and it is REDIRECT, not SNAT, that rewrites the destination port)
iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 8080
Note: I haven't tested this.
| forward outgoing traffic port using iptables |
1,353,086,939,000 |
I have DVB-T USB dongle plugged into my Linux server (GUI-less). It works correctly, but I want to stream TV programs from server to my PC. For this I use Kaffeine that way:
ssh -X -p 666 -i /home/maciek/.ssh/id_rsa media@media env LANG=pl_PL.UTF-8 /usr/bin/kaffeine
As you can see, ssh works on port 666 and starts kaffeine on the server but displays the result on my PC. Nice, but the problem is audio redirection. Is there any way to redirect audio together with video and keyboard/mouse?
|
X11 has two neat aspects: it's a de facto standard for display on Linux, and it's network-transparent. There is unfortunately no such de facto standard for sound. There are sound servers which do exactly what you want, but unlike X, which works out of the box, sound servers tend to require a little setup.
JACK and Pulseaudio are the two choices that I recommend investigating. Pulseaudio is the default sound system on Ubuntu, which gives it an edge in terms of using on Ubuntu and in terms of tutorials available. JACK prides itself on its low latency, which is important when watching a movie.
You'll need to do three things:
Set up Kaffeine for JACK or Pulseaudio output, and indicate a port (say 5551).
On your desktop computer, set up the sound server to listen on a port (say 5552; it can be the same as on your soundless server or not). The Arch wiki has howtos for JACK and Pulseaudio. This Ask Ubuntu question also has hints about PA. There is a guide for JACK2 on the JACK site.
Set up SSH to forward connections from the server to the client: -R 5551:localhost:5552
| Ssh audio redirection |
1,353,086,939,000 |
I have 1 BlueTooth adapter on my laptop. Using the command line, I want to be able to enable / disable it.
This would be the same functionality that is achieved using GUI -> Bluetooth settings -> Bluetooth ON | OFF.
|
sudo rfkill unblock bluetooth
However, it seems to time out sporadically, so it needs some work...
| Using Command Line (Linux) how do I enable the Bluetooth Adapter? |
1,353,086,939,000 |
I use the following scp syntax in order to transfer a lot of files from a Red Hat 5 Linux machine to a Windows machine (under the Temp directory).
An SSH server is already installed on the Windows machine. I use this line in my shell scripts:
sshpass -p '$password' /usr/bin/scp -o StrictHostKeyChecking=no $FILE [email protected]:'D:/Temp'
In most cases files transfer successfully, but sometimes scp seems to get stuck during the transfer. Connectivity appears OK (ping works, etc.).
I get the following error from scp (after a long time):
ssh_exchange_identification: read: Connection reset by peer
Why is scp unstable and getting stuck, and what's the solution for this problem?
What are other good alternatives to scp? (Consider that I need 100% stability.)
|
Hardware
I wouldn't be that suspicious of scp. If it's working some of the time this sounds much more like a hardware issue with either your:
network card (linux or windows host)
wiring
switch/router
I would perform some benchmarking to eliminate these items first. You can see these U&L Q&A's for starters:
How To Diagnose Faulty (onboard) Network Adapter
Linux network troubleshooting and debugging
Software
Debugging scp & ssh
You can add -v switches to both of these commands to get more verbose output. For example:
# generate sample data
$ dd if=/dev/zero of=10MB.testfile bs=1k count=10k
10240+0 records in
10240+0 records out
10485760 bytes (10 MB) copied, 0.0422862 s, 248 MB/s
$ ls -l 10MB.testfile
-rw-rw---- 1 saml saml 10485760 Jul 29 17:09 10MB.testfile
# test copy 10MB file
$ scp -v 10MB.testfile remoteserver:~
Executing: program /usr/bin/ssh host removeserver, user (unspecified), command scp -v -t -- ~
OpenSSH_5.5p1, OpenSSL 1.0.0e-fips 6 Sep 2011
debug1: Reading configuration data /home/saml/.ssh/config
debug1: Applying options for *
debug1: Applying options for removeserver
debug1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: auto-mux: Trying existing master
Control socket connect(/home/saml/.ssh/[email protected]:22): Connection refused
debug1: Connecting to 192.168.1.200 [192.168.1.200] port 22.
debug1: Connection established.
debug1: identity file /home/saml/.ssh/id_dsa type 2
debug1: identity file /home/saml/.ssh/id_dsa-cert type -1
debug1: identity file /home/saml/.ssh/qm-dev-servers type 1
debug1: identity file /home/saml/.ssh/qm-dev-servers-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
debug1: match: OpenSSH_4.3 pat OpenSSH_4*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.5
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '192.168.1.200' is known and matches the RSA host key.
debug1: Found key in /home/saml/.ssh/known_hosts:30
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug1: Next authentication method: publickey
debug1: Offering public key: /home/saml/.ssh/id_dsa
debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug1: Offering public key: /home/saml/.ssh/qm-dev-servers
debug1: Server accepts key: pkalg ssh-rsa blen 279
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug1: setting up multiplex master socket
ControlSocket /home/saml/.ssh/[email protected]:22 already exists, disabling multiplexing
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env XMODIFIERS = @im=none
debug1: Sending env LANG = en_US.utf8
debug1: Sending command: scp -v -t -- ~
Sending file modes: C0660 10485760 10MB.testfile
Sink: C0660 10485760 10MB.testfile
10MB.testfile                 100%   10MB   3.3MB/s   00:03
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: channel 0: free: client-session, nchannels 1
debug1: fd 0 clearing O_NONBLOCK
debug1: fd 1 clearing O_NONBLOCK
Transferred: sent 10499080, received 4936 bytes, in 4.0 seconds
Bytes per second: sent 2610912.6, received 1227.5
debug1: Exit status 0
You can add additional -v switches to get more verbose output. For example:
$ scp -vvv ...
Windows Firewall
In researching this a bit more I came across this workaround, which would back up @Gilles' notion that this may be a firewall issue. The solution was to disable stateful inspection on the Windows side that's running the sshd service, using the following command (as an administrator):
% netsh advfirewall set global statefulftp disable
References
Strange Problem: Connection Reset By Peer
| scp stuck when trying to copy files from Linux to windows |
1,367,304,176,000 |
I recently came across a Linux feature I have never seen before, where pressing the PrntScr button on the keyboard prints a physical piece of paper with the contents of my console.
I really need to find out how to disable this. It is driving me crazy.
I followed a guide on creating a custom keymap, and I tried remapping it to Esc and loading my custom keymap instead, but it didn't seem to work. By disabling, I mean I would preferably like the key to not send any input at all, and ideally I would like to allow CUPS to continue running.
What exactly controls this behavior? And are there any specific man pages I can read about this?
EDIT: A little bit of additional info I should have given: I launch Openbox after logging into a TTY rather than using a DM. I am looking for a solution that would disable printing even if I were on a TTY, since PrntScr prints from a TTY as well.
|
You should be able to disable PrntScr on the console with a custom keymap. On archlinux the procedure is as follows (it should be similar for other distros):
cd /usr/share/kbd/keymaps/i386/qwerty
copy your default keymap to a new file: cp us.map.gz personal.map.gz
gunzip the new map file: gunzip personal.map.gz
edit personal.map using your favorite editor:
switch to a tty, run showkey and press PrntScr to get the key code. On my system it outputs:
keycode 99 press
keycode 99 release
so PrntScr code is 99.
Add
keycode 99 = nul
to personal.map
gzip the map file: gzip personal.map then run loadkeys personal to load the custom keymap then hit PrntScr to test the new keymap.
make it permanent by (creating if not present and) editing /etc/vconsole.conf: replace KEYMAP=us with KEYMAP=personal.
reboot
The above works only on console, you will have to disable PrntScr also in X.
One way to do that is to comment it out in your X keycodes file (the one corresponding to your keyboard - linux uses /usr/share/X11/xkb/keycodes/evdev). Key code is <PRSC>, just comment it out (add // in front of it) e.g. replacing
<PRSC> = 107;
with
// <PRSC> = 107;
completely disables PrntScr.
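The comment-out edit can also be scripted. Here is a self-contained sketch using a stand-in copy of the file rather than the real /usr/share/X11/xkb/keycodes/evdev (the keycode and line are the ones from the answer):

```shell
# Comment out the <PRSC> line in a copy of the evdev keycodes file.
f=$(mktemp)
printf '\t<PRSC> = 107;\n' > "$f"   # stand-in for the real keycodes file
sed -i 's|<PRSC> = 107;|// <PRSC> = 107;|' "$f"
cat "$f"
```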
| Fully Disable PrntScr Key |
1,367,304,176,000 |
I have a 4 port bridge:
root@Linux-Switch:~# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.000024cd2cb0 no eth0
eth1
eth2
eth3
My goal is to limit the upload speed of the eth2 interface. (eth0 is the uplink interface to the upstream switch). I've been trying to do this via tc and iptables.
# tried in both the filter table and mangle table
iptables -A FORWARD -t mangle -m physdev --physdev-in eth2 -j MARK --set-mark 5
tc qdisc add dev eth0 root handle 1:0 htb default 2
tc class add dev eth0 parent 1:0 classid 1:1 htb rate 1mbit ceil 1mbit
tc class add dev eth0 parent 1:0 classid 1:2 htb rate 5mbit ceil 5mbit
tc filter add dev eth0 parent 1:0 handle 5 fw flowid 1:1
I can see that the iptables rule is matching-
root@Linux-Switch:~# iptables -vL -t mangle
...
Chain FORWARD (policy ACCEPT 107K packets, 96M bytes)
pkts bytes target prot opt in out source destination
38269 11M MARK all -- any any anywhere anywhere PHYSDEV match --physdev-in eth2 MARK set 0x5
...
root@Linux-Switch:~#
But the tc config is not reading the fw mark; all traffic in port eth2 is being limited to the 5Mb default, not the 1Mb I'm attempting to configure.
root@Linux-Switch:~# tc -s class show dev eth0
class htb 1:1 root prio 0 rate 1000Kbit ceil 1000Kbit burst 100Kb cburst 100Kb
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
lended: 0 borrowed: 0 giants: 0
tokens: 200000 ctokens: 200000
class htb 1:2 root prio 0 rate 5000Kbit ceil 5000Kbit burst 100Kb cburst 100Kb
Sent 11465766 bytes 39161 pkt (dropped 0, overlimits 0 requeues 0)
rate 6744bit 3pps backlog 0b 0p requeues 0
lended: 39161 borrowed: 0 giants: 0
tokens: 2454400 ctokens: 2454400
root@Linux-Switch:~#
What am I doing wrong?
|
I figured it out: I had to specify a 'protocol' in the filter. I couldn't find much documentation on this; all the examples I could find specified the protocol as 'ip', but since this is a switch, I thought I'd try 'all' and it worked!
tc qdisc add dev eth0 root handle 1:0 htb default 2
tc class add dev eth0 parent 1:0 classid 1:1 htb rate 1mbit ceil 1mbit
tc class add dev eth0 parent 1:0 classid 1:2 htb rate 5mbit ceil 5mbit
tc filter add dev eth0 parent 1:0 protocol all handle 5 fw flowid 1:1
| tc on bridge port |
1,367,304,176,000 |
I have a new requirement to purge MySQL dump files that are older than 30 days. The files use a naming convention of "all-mysql-YYYYMMDD-HHMM.dump". The files are located on a SAN-mounted file system, so restoration is not an issue, but drive space is unfortunately limited and fills up quickly, so it requires frequent human intervention.
Example of files names
all-mysql-20130324-2330.dump
all-mysql-20130325-2330.dump
all-mysql-20130326-2330.dump
My first thought was to use "find" inside a batch script with -mtime +30, however, the modification times cannot be guaranteed and some of the older archives could evade the purge date :)
I created the following BASH script, but I was hoping there was a cleaner way to perform this operation.
#!/bin/bash
STARTING_DIR=$(pwd)
FILE_PREFIX=all-mysql-
BACKUP_DIR=/opt/backup/mysql/dumps
ARCHIVE_WINDOW_DAYS=30
cd $BACKUP_DIR
# Create YYYYMMDD datestamp for Today - $ARCHIVE_WINDOW_DAYS
ARCHIVE_WINDOW_IN_SECS=$(echo "$(date +%s) - (${ARCHIVE_WINDOW_DAYS} * 86400)" | bc)
PURGE_BEFORE_DATE=$(date -d @${ARCHIVE_WINDOW_IN_SECS} +%Y%m%d)
for backup_file in $FILE_PREFIX*
do
# Trim prefix, time portion of date stamp, and file extension
# from $backup_file to allow numeric comparison against YYYYMMDD
backup_trim_tmp=${backup_file#${FILE_PREFIX}}
backup_trimmed=${backup_trim_tmp%-****.dump}
if [ ${PURGE_BEFORE_DATE} -gt ${backup_trimmed} ]
then
rm $backup_file
fi
done
cd $STARTING_DIR
|
Another way to delete all except the last 30 files:
rm $(ls -r | tail -n +31)
Or here is a shorter version of the script in the original post:
cd /opt/backup/mysql/dumps
d=$(date -d @$(($(date +%s)-30*86400)) +%Y%m%d)
for f in all-mysql-*; do
[[ ${f#all-mysql-} < $d ]] && rm $f
done
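For comparison, here is a self-contained sketch of the same filename-based purge. The demo files and the fixed cutoff are made up for illustration; in real use the cutoff would come from date as above:

```shell
#!/bin/bash
# Demo: purge dump files whose embedded YYYYMMDD stamp is older than a
# cutoff, judging by the filename only (so mtime does not matter).
dir=$(mktemp -d)
touch "$dir/all-mysql-20130101-2330.dump" "$dir/all-mysql-20130325-2330.dump"
cutoff=20130301          # in real use: cutoff=$(date -d '30 days ago' +%Y%m%d)
for f in "$dir"/all-mysql-*.dump; do
  base=${f##*/}          # drop the directory part
  stamp=${base#all-mysql-}
  stamp=${stamp%%-*}     # keep only the YYYYMMDD portion
  if [ "$stamp" -lt "$cutoff" ]; then
    rm -- "$f"
  fi
done
ls "$dir"                # only the newer dump remains
```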
| Cleaner way to delete files on Linux which include a datestamp as part of file name |
1,367,304,176,000 |
I'm trying to make udev stop mounting one of my devices at boot time, and I've created a rule in /etc/udev/rules.d/ called 1-myblacklist.rules. All the rule does is match the device by kernel identifier (i.e. sdb) and set the attribute OPTION to "ignore_device".
udevadm test /sys/block/sdb
shows that my rules file is parsed as the first entry, but all subsequent rules still get applied. And the partitions on the drive still show up on my desktop (XFCE).
|
I just wanted to post the solution to this problem, in-case somebody else is faced with a similar challenge.
Adding the following rules file did the trick:
/etc/udev/rules.d/90-hide-partitions.rules
KERNEL=="sda2",ENV{UDISKS_PRESENTATION_HIDE}="1"
KERNEL=="sda3",ENV{UDISKS_PRESENTATION_HIDE}="1"
| Making udev ignore certain devices during boot |
1,367,304,176,000 |
Is there any way of configuring keyboard shortcuts in the Linux virtual console?
For example, if I go to tty1 then press the key combination Ctrl+Alt+H, I would like the script /usr/bin/hello.sh to be executed.
Ideally, the shortcut would be available even before logging in (in which case it would be executed with the privileges of a user that I specify). I don't mind modifying the kernel either, if that's the only way of accomplishing this. Also, it doesn't have to be a shell script, it can also be a normal ELF binary or even a kernel module making system calls.
Example use cases
I'm in the console and browsing the web with something like links and I want to turn down the screen brightness. I press Fn+End, which happens to be the brightness down key and produces a single keycode, and a program runs which reduces the brightness by writing something in a /sys file.
I'm in a console text editor and listening to some music in the background that's being played by mpd. I press the ⏯ (play/pause) key, which again produces a single keycode, and that has the effect of executing a program which sends a signal to mpd to pause the current song.
Solution
Following dirkt's idea of using /dev/input, I created evd (event daemon) for this purpose. The application can be started in the background and will take over the keyboard wherever you are, including before login and within X.
|
Partial answer (because it's just an outline, and untested):
Write a daemon which listens to whatever /dev/input device corresponds to your main keyboard (there are symlinks, look at them). Start that daemon as the user you specify, using whatever init system you have (systemd, sysv, whatever).
The daemon processes key events as defined in input-event-codes.h (or look at the source code of evtest). It has a state machine that recognizes your desired key sequences, and spawns whatever process you specify when such a sequence is complete.
This should be available before you log in, and will always execute as the same user, no matter which user is logged in at the virtual console. It will also execute under X, again as the same user.
Alternative, if you want to execute something in a shell: Use tmux or a similar program which can bind key sequences to actions. I suppose it should also be possible to automatically start tmux and attach to a new session whenever you log in at a virtual console, but I haven't looked into that.
This won't work before log in, but will also work in graphical terminal emulators that have keyboard focus, and will execute the script as the user which is logged in.
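For the tmux route, a binding in ~/.tmux.conf could look like this (a sketch; the key M-h is arbitrary and the script path is just the example from the question):

```
# run the script directly, without the prefix key
bind-key -n M-h run-shell '/usr/bin/hello.sh'
```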
| Keyboard shortcuts in the virtual terminal |
1,367,304,176,000 |
To enable a serial console on Linux, one uses getty (most usually, its variant agetty). This binary takes as an argument, among others, the value to initialize the TERM variable with.
On Debian, with Sys V init, the default was vt100. With systemd, the default used to be vt102, and nowadays it's vt220.
After playing a bit with QEMU virtual machines and virt-viewer, as well as virsh console command, I noticed some things:
With vt100, ls --color displays colors, but vim's syntax highlighting doesn't work
with vt102 or vt220, neither of them displays colors
Only with TERM variable set to linux, do both ls and vim use colors
So I guess that independently of the actual "color support", each application looks at the TERM variable and acts accordingly, which would explain the differences noted above.
After reading the Serial Console HOWTO, I understand that the value of the TERM variable should depend on the actual model of physical terminal which would be connected to the serial port, according to its capabilities.
Note that, according to Lennart Poettering's blog, TERM should be set to linux only with real virtual terminals (as opposed to serial ones). On the other hand, Arch Linux' Wiki doesn't seem to mind (see the /etc/inittab lines it proposes).
So my questions are:
In a general case, what happens if the TERM variable is set to linux on a console connected to a less-capable terminal, like a DEC VT100, VT102 or VT220, or some RS-232 software terminal emulators like minicom or termite ?
More realistically (in my particular case), is it OK to set the TERM variable to linux in a "virtual" serial console on a QEMU VM, to which I will connect through virt-viewer or virsh console ?
|
The TERM setting tells the application program what capabilities the terminal it is communicating with has, and how to utilize these capabilities (typically via a library like ncurses). In plain English: it tells what control sequences (escape sequences) the program should send to move the cursor around the screen, to change the text color, to erase a region of the screen, what sequences the function keys transmit, etc. Some of these capabilities may be missing, like color support.
Most of the terminal types in use today are somehow related to the "grand daddy" of "glass ttys", the DEC VT100. This is why terminal types are mostly interchangeable, so setting the wrong type typically results in a mostly working setup, but with some glitches.
So, to answer the questions "which should I use" and "what happens if I use the wrong setting"? Some control sequences may be mismatched, i.e. the program sends cursor movement sequences that differ from the ones the terminal emulator expects. Or color support is missing. (By the way, the original VT100 definitely did not support color...) The correct setting should be provided by the terminal emulator documentation, but there is no harm in experimenting to see which setting works best. It's OK to use "linux" if it works for you.
| Suitables TERM variable values for a serial console |
1,367,304,176,000 |
I've learned that the firmware subsystem uses udevd to copy firmware to the created sysfs 'data' entry.
But how does this work in the case of a built-in driver module, where udevd hasn't started yet?
I'm using a 3.14 kernel.
TIA!
|
I read through the kernel sources, especially drivers/base/firmware_class.c, and discovered that
CONFIG_FW_LOADER_USER_HELPER
would activate the udev firmware loading variant (obviously only usable for loadable modules when udev is running). But as mentioned on LKML this seems to be an obsolete method.
Furthermore firmware required by built-in modules is loaded from initramfs by fw_get_filesystem_firmware() through a kernel_read(), to be precise.
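In .config terms, the modern direct-loading path corresponds to something like the following fragment (a sketch; option names as of kernels around 3.14):

```
CONFIG_FW_LOADER=y
# firmware is read directly from the filesystem/initramfs;
# the obsolete udev fallback stays disabled:
# CONFIG_FW_LOADER_USER_HELPER is not set
```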
| How does linux load firmware for built-in driver modules [duplicate] |
1,367,304,176,000 |
Without programs open, my computer uses about 512 MB of memory. Yesterday, I had nothing open, yet 2 GB of memory in use (used - cache = 2153):
total used free shared buffers cached
Mem: 3261 2875 386 30 199 523
-/+ buffers/cache: 2153 1108
Swap: 8187 0 8187
Top showed no processes taking this up:
top - 23:10:38 up 1 day, 14:35, 3 users, load average: 0,31, 0,94, 1,29
Tasks: 172 total, 3 running, 169 sleeping, 0 stopped, 0 zombie
%Cpu(s): 6,5 us, 4,2 sy, 0,0 ni, 89,1 id, 0,1 wa, 0,0 hi, 0,1 si, 0,0 st
KiB Mem: 3340164 total, 2937728 used, 402436 free, 201484 buffers
KiB Swap: 8384444 total, 180 used, 8384264 free. 531636 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2520 halfgaar 20 0 3869744 173620 38568 S 1,6 5,2 52:20.03 plasma-desktop
1535 root 20 0 246420 108512 40420 S 2,0 3,2 22:36.65 Xorg
2665 halfgaar 20 0 1354660 50624 15116 R 0,0 1,5 0:10.08 krunner
2513 halfgaar 20 0 2966468 48564 19280 S 0,0 1,5 0:34.62 kwin
2306 halfgaar 20 0 1329360 41448 12488 S 0,0 1,2 0:09.80 kded4
2675 halfgaar 20 0 796712 37360 13804 S 0,0 1,1 0:04.23 kmix
2619 halfgaar 20 0 649136 34160 14204 S 0,0 1,0 0:00.95 akonadi_mailfil
2629 halfgaar 20 0 621348 33860 13876 S 0,0 1,0 0:00.88 akonadi_sendlat
2562 halfgaar 20 0 1242180 33212 2504 S 0,2 1,0 3:20.05 mysqld
2611 halfgaar 20 0 649132 33048 14140 S 0,0 1,0 0:01.29 akonadi_archive
18552 halfgaar 20 0 508376 32948 24108 S 2,6 1,0 0:02.23 konsole
2645 halfgaar 20 0 506340 32204 8796 S 0,0 1,0 0:05.13 mintUpdate
2626 halfgaar 20 0 552648 31768 14152 S 0,0 1,0 0:00.93 akonadi_notes_a
2430 halfgaar 20 0 556864 30052 9484 S 0,0 0,9 0:10.57 ksmserver
2546 halfgaar 20 0 866520 28528 12584 S 0,0 0,9 0:04.34 knotify4
2302 halfgaar 20 0 382404 26896 10112 S 0,0 0,8 0:01.17 kdeinit4
2304 halfgaar 20 0 387792 23516 4892 S 0,0 0,7 0:00.55 klauncher
2648 halfgaar 20 0 541576 22824 13864 S 0,0 0,7 0:01.36 polkit-kde-auth
2623 halfgaar 20 0 390412 19216 13712 S 0,0 0,6 0:00.79 akonadi_newmail
2615 halfgaar 20 0 340388 18200 13276 S 0,0 0,5 0:00.75 akonadi_maildis
2621 halfgaar 20 0 303972 17884 13272 S 0,0 0,5 0:00.70 akonadi_migrati
2612 halfgaar 20 0 306052 17856 13188 S 0,0 0,5 0:00.71 akonadi_followu
2606 halfgaar 20 0 327700 16772 12600 S 0,0 0,5 0:00.53 akonadi_agent_l
2613 halfgaar 20 0 321704 16740 12576 S 0,0 0,5 0:00.52 akonadi_agent_l
2614 halfgaar 20 0 327680 16560 12420 S 0,0 0,5 0:00.54 akonadi_agent_l
2325 halfgaar 20 0 735344 14928 10116 S 0,0 0,4 0:04.63 kactivitymanage
2313 halfgaar 20 0 282096 14832 9488 S 0,0 0,4 0:00.74 kglobalaccel
2554 halfgaar 20 0 276912 14472 10148 S 0,0 0,4 0:02.04 kuiserver
Just to try, I dropped caches:
echo 3 > /proc/sys/vm/drop_caches
And memory usage dropped:
total used free shared buffers cached
Mem: 3261 850 2411 30 1 79
-/+ buffers/cache: 770 2491
Swap: 8187 0 8187
How can this be? Why is cache being stored in a way the kernel thinks it's not cache? Can it be the cache of my ecryptfs encrypted home dir? I did just run a backup of that, so a lot of files and meta data on it were cached.
Linux Mint 17.1
Kernel 3.13.0-37
|
Writing a 1 to drop_caches only drops the (data) cache. The 3 also drops the dentry cache, or cache of names of files on the disk. If you had recently been working with directories containing many small files, that would account for it.
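As a side note, the used-minus-buffers/cache arithmetic from the question can be reproduced from free's output. In this sketch the numbers are the KiB figures from the top output above (the shared column is made up to fill out the layout), and the field order is that of procps free:

```shell
# "real" application memory = used - buffers - cached (procps `free` fields)
free_output='             total       used       free     shared    buffers     cached
Mem:       3340164    2937728     402436      30000     201484     531636'
printf '%s\n' "$free_output" | awk '/^Mem:/ {print $3 - $6 - $7}'
# prints 2204608 (KiB), i.e. roughly the 2153 MB seen in the question
```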
| What memory is not used by processes and freed by `echo 3 > /proc/sys/vm/drop_caches`? |
1,367,304,176,000 |
What I am looking for is a free and open source solution. If the distro I use matters, it is Open SUSE. VLC supports only WMV1&2.
|
Look up DLNA. I don't know which packages on openSUSE provide it, but it's your best bet. Under Ubuntu, DLNA is provided by the package Rygel (although there is a plug-in for Rhythmbox called Coherence).
| How can I setup Apache on Linux to stream WMV-HD to Xbox 360? |
1,367,304,176,000 |
Today I tried to install Linux Mint to test my programs on it. When I run Linux Mint from my .iso file I get this:
So I don't know what I should do. My PC has an Intel Core i3 (x64) and a Radeon R7 240, and it works fine with everything. Linux Mint 18.3 Cinnamon, 32-bit.
|
Seems like a window-resizing bug; I just reloaded it and made the window full-sized.
| Linux Mint corrupted display, on first run in VirtualBox [closed] |
1,367,304,176,000 |
I'm working on software that builds Pacman packages (which are basically tarballs with some special metadata files). The test suite builds some packages, then compares the resulting package to a recorded expected result.
One of the fields in the metadata recorded in the package is the installed size of the package, determined by running du -s --apparent-size on the root directory before tar'ing it.
All of this works perfectly fine on my local Arch Linux boxes where I develop. The packages, including their installed size in bytes (not even kilobytes, bytes!), are reproduced exactly every time I run the test.
Now I've also enabled this test on Travis, where it runs (as far as I understand from the Travis docs) on an Ubuntu-12.04-based container. There, the test passes most of the time. Most of the time. Sometimes, it calculates installed sizes that are off by 80-99%.
Here is an example of a test that fails: https://travis-ci.org/holocm/holo/builds/89326780 (The test just before that succeeded.) One of the relevant diffs is
@@ -37,7 +37,7 @@
pkgdesc = my foo bar package
url =
packager = Unknown Packager
- size = 37728
+ size = 1464
arch = any
license = custom:none
replaces = foo-bar<2.1
The puzzling thing about this is that it only happens some of the time, with no apparent pattern. The test arranges the same files as it always does, runs du -s --apparent-size on the resulting tree, and arrives at a completely wrong result. I have tried to reproduce this on a Ubuntu 12.04 VM, and while I have seen it appear there once or twice, I could not see any patterns emerge there either that would help me reproduce the problem.
Maybe someone here has an idea what could cause this issue?
EDIT: Oh, there is one pattern that I observed, actually. du runs once for each testcase. When it fails for the first testcase, it will fail for all testcases in this run.
|
Well, I've been prompted to put this as an answer by @derobert
The problem you have is AUFS.... check the problems associated with it, check the reasons it's not in the latest kernels, check it's "stability", check it's "POSIX completeness". – Hvisage Jan 24 at 20:55
| Why is `du --apparent-size` sometimes off by more than 90%? |
1,367,304,176,000 |
I have used GNU/Linux on systems from 4 MB RAM to 512 GB RAM. When
they start swapping, most of the time you can still log in and kill
off the offending process - you just have to be 100-1000 times more
patient.
On my new 32 GB system that has changed: It blocks when it starts
swapping. Sometimes with full disk activity but other times with no
disk activity.
To examine what might be the issue I have written this program. The
idea is:
1. Grab 3% of the memory free right now.
2. If that caused swap to increase: stop.
3. Keep the chunk used for 30 seconds by forking off.
4. Goto 1.
#!/usr/bin/perl
sub freekb {
my $free = `free|grep buffers/cache`;
my @a=split / +/,$free;
return $a[3];
}
sub swapkb {
my $swap = `free|grep Swap:`;
my @a=split / +/,$swap;
return $a[2];
}
my $swap = swapkb();
my $lastswap = $swap;
my $free;
while($lastswap >= $swap) {
print "$swap $free\n";
$lastswap = $swap;
$swap = swapkb();
$free = freekb();
my $used_mem = "x"x(1024 * $free * 0.03);
if(not fork()) {
sleep 30;
exit();
}
}
print "Swap increased $swap $lastswap\n";
Running the program forever ought to keep the system at the limit of
swapping, but only grabbing a minimal amount of swap and do that very
slowly (i.e. a few MB at a time at most).
If I run:
forever free | stdbuf -o0 timestamp > freelog
I ought to see swap slowly rising every second. (forever and timestamp
from https://github.com/ole-tange/tangetools).
But that is not the behaviour I see: I see swap increasing in jumps
and that the system is completely blocked during these jumps. Here the
system is blocked for 30 seconds with the swap usage increases with 1
GB:
secs
169.527 Swap: 18440184 154184 18286000
170.531 Swap: 18440184 154184 18286000
200.630 Swap: 18440184 1134240 17305944
210.259 Swap: 18440184 1076228 17363956
Blocked: 21 secs. Swap increase 2000 MB:
307.773 Swap: 18440184 581324 17858860
308.799 Swap: 18440184 597676 17842508
330.103 Swap: 18440184 2503020 15937164
331.106 Swap: 18440184 2502936 15937248
Blocked: 20 secs. Swap increase 2200 MB:
751.283 Swap: 18440184 885288 17554896
752.286 Swap: 18440184 911676 17528508
772.331 Swap: 18440184 3193532 15246652
773.333 Swap: 18440184 1404540 17035644
Blocked: 37 secs. Swap increase 2400 MB:
904.068 Swap: 18440184 613108 17827076
905.072 Swap: 18440184 610368 17829816
942.424 Swap: 18440184 3014668 15425516
942.610 Swap: 18440184 2073580 16366604
This is bad enough, but what is even worse is that the system sometimes
stops responding altogether, even if I wait for hours. I have the
feeling it is related to the swapping issue, but I cannot tell for sure.
My first idea was to tweak /proc/sys/vm/swappiness from 60 to 0 or
100, just to see if that had any effect at all. 0 did not have an
effect, but 100 did cause the problem to arise less often.
How can I prevent the system from blocking for such a long time?
Why does it decide to swapout 1-3 GB when less than 10 MB would suffice?
System info:
$ uname -a
Linux aspire 3.8.0-32-generic #47-Ubuntu SMP Tue Oct 1 22:35:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Edit:
I tested if the problem is due to 32 GB RAM by removing 24 GB and trying with only 8 GB - I see the same behaviour.
I can also reproduce the swapping behaviour (though not the freezing) by installing GNU/Linux Mint 15 in VirtualBox.
I cannot reproduce the problem on my 8 GB laptop: The script above runs beautifully for hours and hours - swapping out a few megabytes, but never a full gigabyte. So I compared all the variables in /proc/sys/vm/* on both systems: They are exactly the same. This leads me to believe the problem is elsewhere. The laptop runs a different kernel:
Linux hk 3.2.0-55-generic #85-Ubuntu SMP Wed Oct 2 12:29:27 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Maybe something in the VM system changed from 3.2.0 to 3.8.0?
|
The problem disappeared after upgrading to:
Linux aspire 3.16.0-31-lowlatency #43~14.04.1-Ubuntu SMP PREEMPT Tue Mar 10 20:41:36 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
It is not certain that it was this kernel upgrade that fixed it.
| GNU/Linux swapping blocks system |
1,365,539,480,000 |
I am using Fedora 16 on my Dell N4110. I recently upgraded the kernel from 3.2 to 3.3. Contrary to the official claim, my system still drains the battery like hell. It only provides 1:30 to 2 hrs of battery life under normal stress, as before, whereas Windows provides 3+ hrs under similar stress.
Below are some screenshots from powertop, stats on the services running on my box, and a few lines from grub.cfg.
Overview
Idle stats
Frequency stats
Device stats
tunable
services
/etc/init.d/ceph: ceph conf /etc/ceph/ceph.conf not found; system is not configured.
dc_client.service - SYSV: Distcache is a Distributed SSL Session Cache Client Proxy.
Loaded: loaded (/etc/rc.d/init.d/dc_client)
Active: inactive (dead)
CGroup: name=systemd:/system/dc_client.service
dc_server.service - SYSV: Distcache is a Distributed SSL Session Cache server.
Loaded: loaded (/etc/rc.d/init.d/dc_server)
Active: inactive (dead)
CGroup: name=systemd:/system/dc_server.service
# Generated by ebtables-save v1.0 on Sat Apr 21 09:35:32 NPT 2012
*nat
:PREROUTING ACCEPT
:OUTPUT ACCEPT
:POSTROUTING ACCEPT
httpd.service - The Apache HTTP Server (prefork MPM)
Loaded: loaded (/lib/systemd/system/httpd.service; disabled)
Active: inactive (dead)
CGroup: name=systemd:/system/httpd.service
No active sessions
iscsid.service - LSB: Starts and stops login iSCSI daemon.
Loaded: loaded (/etc/rc.d/init.d/iscsid)
Active: active (running) since Sat, 21 Apr 2012 08:11:58 +0545; 1h 23min ago
Process: 1011 ExecStart=/etc/rc.d/init.d/iscsid start (code=exited, status=0/SUCCESS)
Main PID: 1069 (iscsid)
CGroup: name=systemd:/system/iscsid.service
├ 1056 iscsiuio
├ 1068 iscsid
└ 1069 iscsid
libvirtd.service - LSB: daemon for libvirt virtualization API
Loaded: loaded (/etc/rc.d/init.d/libvirtd)
Active: active (running) since Sat, 21 Apr 2012 08:11:58 +0545; 1h 23min ago
Process: 1086 ExecStart=/etc/rc.d/init.d/libvirtd start (code=exited, status=0/SUCCESS)
Main PID: 1111 (libvirtd)
CGroup: name=systemd:/system/libvirtd.service
├ 1111 libvirtd --daemon
└ 1183 /usr/sbin/dnsmasq --strict-order --bind-interfaces...
started
No open transaction
netconsole module not loaded
Configured devices:
lo Auto_ADW-4401 Auto_PROLiNK_H5004N Auto_korky p4p1
Currently active devices:
lo p4p1 virbr0
radvd.service - router advertisement daemon for IPv6
Loaded: loaded (/lib/systemd/system/radvd.service; disabled)
Active: inactive (dead)
CGroup: name=systemd:/system/radvd.service
sandbox is running
svnserve.service - LSB: start and stop the svnserve daemon
Loaded: loaded (/etc/rc.d/init.d/svnserve)
Active: inactive (dead)
CGroup: name=systemd:/system/svnserve.service
grub.cfg
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Fedora (3.3.1-5.fc16.x86_64)' --class fedora --class gnu-linux --class gnu --class os {
load_video
set gfxpayload=keep
insmod gzio
insmod part_msdos
insmod ext2
set root='(hd0,msdos6)'
search --no-floppy --fs-uuid --set=root 2260640d-2901-49e4-b14f-bf9addb04eb7
echo 'Loading Fedora (3.3.1-5.fc16.x86_64)'
linux /vmlinuz-3.3.1-5.fc16.x86_64 root=/dev/mapper/vg_machine-lv_root ro pcie_aspm=force i915.i915_enable_rc6=1 i915.i915_enable_fbc=1 rd.lvm.lv=vg_machine/lv_root rd.md=0 rd.dm=0 KEYTABLE=us quiet SYSFONT=latarcyrheb-sun16 rhgb rd.luks=0 rd.lvm.lv=vg_machine/lv_swap LANG=en_US.UTF-8
echo 'Loading initial ramdisk ...'
initrd /initramfs-3.3.1-5.fc16.x86_64.img
}
menuentry 'Fedora (3.3.1-3.fc16.x86_64)' --class fedora --class gnu-linux --class gnu --class os {
load_video
set gfxpayload=keep
insmod gzio
insmod part_msdos
insmod ext2
set root='(hd0,msdos6)'
search --no-floppy --fs-uuid --set=root 2260640d-2901-49e4-b14f-bf9addb04eb7
echo 'Loading Fedora (3.3.1-3.fc16.x86_64)'
linux /vmlinuz-3.3.1-3.fc16.x86_64 root=/dev/mapper/vg_machine-lv_root ro pcie_aspm=force i915.i915_enable_rc6=1 i915.i915_enable_fbc=1 rd.lvm.lv=vg_machine/lv_root rd.md=0 rd.dm=0 KEYTABLE=us quiet SYSFONT=latarcyrheb-sun16 rhgb rd.luks=0 rd.lvm.lv=vg_machine/lv_swap LANG=en_US.UTF-8
echo 'Loading initial ramdisk ...'
initrd /initramfs-3.3.1-3.fc16.x86_64.img
}
Is this normal? Are there still problems with power consumption in 3.3?
Is there any way to report this problem to the official kernel developers?
|
The problem is gone with newer versions of the Linux kernel :). I have not seen the power regression since Ubuntu 14.
| Linux kernel 3.3 power regression |
1,365,539,480,000 |
The status of UBIFS on top of MLC NAND in Linux has never been exactly perfect. And while the corresponding entry has since been removed from the FAQ, support for UBIFS on top of MLC NAND has now been officially declared unsupported:
ubi: Reject MLC NAND
Full thread on patchwork.kernel.org:
https://patchwork.kernel.org/patch/10256063/
So I am now looking for a long term filesystem replacement for a MLC NAND as found on a MIPS Creator CI20:
CI20_Hardware: ROM/NAND
This is a Samsung K9GBG08UOA NAND flash and it does not appear that there is a way to put this device in SLC mode.
It seems that jffs2 is also not an alternative:
jffs2: do not support the MLC nand
Is there any other alternative filesystem (possibly with comparable performance) ?
|
So it seems that two options are possible: git revert b5094b7f135be and then wait for more work on MLC NAND support.
The fact that MLC NANDs are not supported by UBI is not necessarily
definitive. I have a branch with all the work we've done to add MLC
support to UBI 2. If you have time to invest in it, feel free to
take over this work.
Anyway, the decision to remove this driver is not mine, and this patch
allows me to at least compile-test this driver.
Something to try out:
ext4 atop the MTD block layer
| Linux: Alternative to UBIFS on MLC NAND |
1,365,539,480,000 |
I have a directory on my Debian system. The directory is:
root@debian:/3/20150626# stat 00
File: `00'
Size: 6 Blocks: 0 IO Block: 4096 directory
Device: fe00h/65024d Inode: 4392587948 Links: 3
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2015-06-25 20:00:00.086150791 -0400
Modify: 2015-07-07 12:39:04.174903234 -0400
Change: 2015-07-07 12:39:04.174903234 -0400
Birth: -
The directory is empty:
root@debian:/3/20150626# ls -al 00
total 0
drwxr-xr-x 3 root root 6 Jul 7 12:39 .
drwxr-xr-x 3 root root 23 Jul 7 12:56 ..
But my system doesn't think so:
root@debian:/3/20150626# rm -rf 00
rm: cannot remove `00': Directory not empty
I don't know why this would happen nor am I able to find a way to move forward. Can anyone provide assistance?
None of the previous questions that I could locate solved this specific issue. But, to address some of the questions I've seen asked on similar posts:
a.) The folder was created by a running process, which has created many folders before and these folders have been removed many times before. This specific one is stuck in limbo.
b.) There should not be anything written to this directory now. I have checked many times and the ls -al output always returns nothing.
c.) I have checked lsof and there is nothing open for this directory:
root@debian:/3/20150626# lsof 00
root@debian:/3/20150626#
d.) rm is not aliased to anything else. It's pretty close to stock Debian...nothing special done with any of the core Bash programs such as rm, etc.
e.) Renaming is permitted but still unable to delete:
root@debian:/3/20150626# mv 00 delete_me
root@debian:/3/20150626# ls -al
total 0
drwxr-xr-x 3 root root 30 Jul 7 13:45 .
drwxr-xr-x 7 root root 105 Jul 7 12:57 ..
drwxr-xr-x 3 root root 6 Jul 7 12:39 delete_me
root@debian:/3/20150626# rm -rf delete_me
rm: cannot remove `delete_me': Directory not empty
root@debian:/3/20150626# ls -al delete_me/
total 0
drwxr-xr-x 3 root root 6 Jul 7 12:39 .
drwxr-xr-x 3 root root 30 Jul 7 13:45 ..
**Note, hereafter referring to as "delete_me" since I renamed it and I'm just going to go with the flow.
f.) This is the only directory that is returned when I run find on it.
root@debian:/3/20150626# find / -type d -name delete_me
/3/20150626/delete_me
root@debian:/3/20150626# find delete_me
delete_me
g.) lsattr shows nothing:
root@debian:/3/20150626# lsattr
---------------- ./delete_me
|
Found the answer. Something was wrong with the linkage, as @JeffSchaller suggested. The solution is to run xfs_check to see that the links were incorrect, then xfs_repair to fix them.
1. Run mount to view the device name. Mine is /dev/mapper/vg3-lv3
2. umount /3
3. xfs_check /dev/mapper/vg3-lv3, which returned the following:
link count mismatch for inode 4392587948 (name ?), nlink 3, counted 2
link count mismatch for inode 12983188890 (name ?), nlink 1, counted 2
4. xfs_repair /dev/mapper/vg3-lv3, which indicated that the links were corrected:
resetting inode 4392587948 nlinks from 3 to 2
resetting inode 12983188890 nlinks from 1 to 2
Turns out I had another inode that was linked incorrectly.
Thanks for all the help but using the black magic of xfs_repair, my problem is solved.
| sudo rm -rf returns "cannot remove directory" on empty directory owned by root |
1,365,539,480,000 |
I am connected with SSH to a machine on which I don't have root access. To install something I uploaded libraries from my machine and put them in the ~/lib directory of the remote host.
Now, for almost any command I run, I get the error below (example is for ls) or a Segmentation fault (core dumped) message.
ls: relocation error: /lib/libpthread.so.0: symbol __getrlimit, version
GLIBC_PRIVATE not defined in file libc.so.6 with link time reference
The only commands I have been successful running are cd and pwd until now. I can pretty much find files in a directory by using TAB to autocomplete ls, so I can move through directories.
uname -r also returns the Segmentation fault (core dumped) message, so I'm not sure what kernel version I'm using.
|
Since you can log in, nothing major is broken; presumably your shell’s startup scripts add ~/lib to LD_LIBRARY_PATH, and that, along with the bad libraries in ~/lib, is what causes the issues you’re seeing.
To fix this, run
unset LD_LIBRARY_PATH
This will allow you to run rm, vim etc. to remove the troublesome libraries and edit your startup scripts if appropriate.
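Until the startup scripts are edited, individual commands can also be run with the variable cleared for just that one invocation, using env -u (coreutils); a minimal sketch:

```shell
# Clear LD_LIBRARY_PATH for a single command, leaving the session untouched
env -u LD_LIBRARY_PATH ls -l
```

This avoids relying on the broken libraries while you clean up ~/lib.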
| Almost no commands working - relocation error: symbol __getrlimit, version GLIBC_PRIVATE not defined in libc.so.6 |
1,365,539,480,000 |
Is it possible for the ping command in Linux (CentOS) to send 0 bytes? In Windows one can define this using the -l argument.
Command tried:
ping localhost -s 0
PING localhost (127.0.0.1) 0(28) bytes of data.
8 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64
8 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64
^C
--- localhost ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
man ping
-s packetsize
Specifies the number of data bytes to be sent. The default is
56, which translates into 64 ICMP data bytes when combined with
the 8 bytes of ICMP header data.
Edit 1: adding the Windows output of ping, just in case someone needs it
ping 127.0.0.1 -l 0
Pinging 127.0.0.1 with 0 bytes of data:
Reply from 127.0.0.1: bytes=0 time<1ms TTL=128
Reply from 127.0.0.1: bytes=0 time<1ms TTL=128
Reply from 127.0.0.1: bytes=0 time<1ms TTL=128
Ping statistics for 127.0.0.1:
Packets: Sent = 3, Received = 3, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
ping 127.0.0.1
Pinging 127.0.0.1 with 32 bytes of data:
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Ping statistics for 127.0.0.1:
Packets: Sent = 3, Received = 3, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
|
A ping cannot be 0 bytes on Linux, Windows or any other platform that claims to be able to send pings. At the very least the packet must contain an IP header and a non-malformed no-trick-playing ping will also include an ICMP header, which is 8 bytes long.
It is possible that Windows differs in how it outputs the bytes received. Linux tells you the size of the ICMP portion of the packet (8 bytes for the ICMP header plus any ICMP data present). Windows may instead print the number of ICMP payload data bytes, so that while it tells you "0", those 8 ICMP header bytes are still there. To truly have 0 ICMP bytes, the packet would have to be a raw IP header and no longer an ICMP ping request. The point is, even if Windows tells you the ping packet is 0 bytes long, it isn't.
The minimum size of an ICMP echo request or echo reply packet is 28 bytes:
20 byte IP header,
4 byte ICMP header,
4 byte echo request/reply header data,
0 bytes of ICMP payload data.
When ping on linux prints:
8 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64
Those 8 bytes are the 4 byte ICMP header and the 4 byte ICMP echo reply header data and reflect an ICMP payload data size of 0 bytes.
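The size arithmetic above can be spelled out explicitly; a trivial sketch:

```shell
# Minimum sizes for an ICMP echo packet with no payload
ip_hdr=20; icmp_hdr=4; echo_hdr=4; payload=0
echo "on the wire: $((ip_hdr + icmp_hdr + echo_hdr + payload)) bytes"  # 28
echo "ping prints: $((icmp_hdr + echo_hdr + payload)) bytes"           # 8
```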
| ping to send 0 bytes |
1,365,539,480,000 |
Say I want execute cmd on all *.cpp and *.hpp files that contain the word FOO.
As far as just finding those files goes, I know I can do,
find /path/to/dir -name '*.[hc]pp' -exec grep -l 'FOO' {} +
but what is the proper way to extend the processing so that I can execute, say, cmd on each of those files?
I know I could do -exec bash -c '...' and write the "if file content contains FOO, then run cmd on the file" logic in the ..., but that feels like a cannon to shoot a fly.
|
-exec … \; is also a test, it succeeds iff the command inside returns exit status 0. -exec … + can process many pathnames with one command and it always succeeds, so it's not a useful test.
grep -q plays nicely with -exec … \; because it returns exit status 0 when there is a match (even if an error was detected), 1 otherwise.
Turn your -exec grep … + into -exec grep -q … \; to test files one by one, so you can add another -exec that will conditionally run your desired command:
find /path/to/dir -name '*.[hc]pp' -exec grep -q 'FOO' {} \; -exec …
In general, the fact -exec … \; is a test allows you to build custom tests where you can test for virtually anything; especially because you can run sh -c and therefore use pipelines, variables you can manipulate, shell conditionals and such to implement a test (but mind this: Is it possible to use find -exec sh -c safely?).
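A concrete sketch of the two-step -exec, with wc -l standing in for the hypothetical cmd:

```shell
# For each *.cpp/*.hpp file, run grep -q as a test; only files containing
# FOO reach the second -exec, which runs the real command on them.
find /path/to/dir -name '*.[hc]pp' -exec grep -q 'FOO' {} \; -exec wc -l {} \;
```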
| How to execute a command on all files whose names match a pattern and whose contents match a pattern? |
1,365,539,480,000 |
I am customizing a Linux system with kernel 6.4.0, and I have noticed a strange issue. When I execute top > a.txt and then open another window and execute cat a.txt, I find that the output of cat a.txt contains the results of multiple top refreshes. I don't have this problem on other Linux systems, where the cat command displays the result of the top command only once. Why is that?
# top > a.txt
# cat a.txt
# ls /usr/bin/top -ltr
lrwxrwxrwx 1 1000 1000 17 Aug 23 02:34 /usr/bin/top -> ../../bin/busybox
add top type:
# type top
top is /usr/bin/top
|
Some top implementations (e.g. on OpenBSD, but presumably on some Linux systems too) detect whether their output is going to the terminal or to something that isn't a terminal (a pipe or a file) and change their behaviour accordingly. If the output goes to a terminal, the utility runs in interactive mode, accepting input from the user and regularly updating its display. When the output goes elsewhere, top runs in non-interactive mode and will, by default, just output one "screen" of results and then quit.
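The detection described above is usually just an isatty() check on the output file descriptor; the shell exposes the same test as [ -t 1 ]:

```shell
# Prints "interactive" when stdout is a terminal, "batch" when redirected
if [ -t 1 ]; then echo interactive; else echo batch; fi
```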
The top implementation included in Busybox does not do this and will continue to output screen updates even when output goes into a file or pipe. Each update will be preceded by a control character sequence that clears the screen, so cat should show you the final screen update (unless the terminal is not interpreting the control characters), while if you open the file in an editor or use less to view its content, you may see several consecutive screen updates. This also seems to be the case for the procps-ng variant of top sometimes found on Linux systems.
If you want Busybox's top only to output a single screen of output to a file, use
top -n 1 >file
As to why this is so, it just comes down to the developers of the various top implementations having implemented the tool differently. If you want further information about the decisions taken while implementing the top utility in Busybox, you may want to ask in a Busybox-specific forum or mailing list.
| When redirecting top to a file, why does cat command on that file display output of multiple top attempts? |
1,365,539,480,000 |
I know that I can use nmap to see which ports are open on a specific machine.
But what I need is a way to get this from the host side itself.
Currently, if I use nmap on one of my machines to check the other one, I get for example:
smb:~# nmap 192.168.1.4
PORT STATE SERVICE
25/tcp open smtp
80/tcp open http
113/tcp closed ident
143/tcp open imap
443/tcp open https
465/tcp open smtps
587/tcp open submission
993/tcp open imaps
Is there a way to do this on the host itself, not from a remote machine to a specific host?
I know that I can do
nmap localhost
But that is not what I want to do, as I will be putting the command into a script that goes through all the machines.
EDIT:
Using this approach, nmap showed 22 5000 5001 5432 6002 7103 7106 7201 9200, but the lsof command showed me 22 5000 5001 5432 5601 6002 7102 7103 7104 7105 7106 7107 7108 7109 7110 7111 7112 7201 7210 11211 27017
|
On Linux, you can use:
ss -ltu
or
netstat -ltu
To list the listening TCP and UDP ports.
Add the -n option (for either ss or netstat) if you want to disable the translation from port number and IP address to service and host name.
Add the -p option to see the processes (if any, some ports may be bound by the kernel like for NFS) which are listening (if you don't have superuser privileges, that will only give that information for processes running in your name).
That would list the ports where an application is listening on (for UDP, that has a socket bound to it). Note that some may only listen on a given address only (IPv4 and/or IPv6), which will show in the output of ss/netstat (0.0.0.0 means listen on any IPv4 address, [::] on any IPv6 address). Even then that doesn't mean that a given other host on the network may contact the system on that port and that address as any firewall, including the host firewall may block or mask/redirect the incoming connections on that port based on more or less complex rules (like only allow connections from this or that host, this or that source port, at this or that time and only up to this or that times per minutes, etc).
For the host firewall configuration, you can look at the output of iptables-save.
Also note that if a process or processes is/are listening on a TCP socket but not accepting connections there, once the number of pending incoming connection gets bigger than the maximum backlog, connections will no longer be accepted, and from a remote host, it will show as if the port was blocked. Watch the Recv-Q column in the output of ss/netstat to spot those situations (where incoming connections are not being accepted and fill up a queue).
| A way to find open ports on a host machine |
1,365,539,480,000 |
I'm using Debian 8. How do I get my external IP address from a command line? I thought the below command would do the job ...
myuser@myserver:~ $ /sbin/ifconfig $1 | grep "inet\|inet6" | awk -F' ' '{print $2}' | awk '{print $1}'
addr:192.168.0.114
addr:
addr:127.0.0.1
addr:
but as you can see, it is only revealing the IP address of the machine in the LAN. I'm interested in knowing its IP for the whole world.
|
You mean whatever routable IP your dsl/cable modem/etc. router has?
You need to either query that device OR ask an outside server what IP it sees when you connect to it. The easiest way of doing that is to search google for "what is my ip" and like the calculation searches, it will tell you in the first search result. If you want to do it from the command line, you'll need to check the output of some script out there that will echo out the information. The dynamic dns service dyndns.org has one that you can use - try this command
wget http://checkip.dyndns.org -O -
You should get something like
HTTP request sent, awaiting response... 200 OK
Length: 105 [text/html]
Saving to: ‘STDOUT’
- 0%[ ] 0 --.-KB/s <html><head><title>Current IP Check</title></head><body>Current IP Address: 192.168.1.199</body></html>
- 100%[===================>] 105 --.-KB/s in 0s
2017-09-20 14:16:00 (15.4 MB/s) - written to stdout [105/105]
I've changed the IP in mine to a generic non-routable and bolded it for you.
If you want just the IP, you'll need to parse it out of there - quick and dirty, but it works for me. And I'm 100% sure there is a better safer way of doing it...
wget http://checkip.dyndns.org -O - | grep IP | cut -f 2- -d : | cut -f 1 -d \<
Which will give you just
192.168.1.199
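A slightly more robust way to extract the address from that page is grep -o with an IPv4-shaped pattern (the page layout is still an assumption that may change):

```shell
wget -qO- http://checkip.dyndns.org | grep -oE '[0-9]+(\.[0-9]+){3}'
```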
| How do I get my IP address from the command line? [duplicate] |
1,365,539,480,000 |
I tried doing ls [a-z][a-z], but it doesn't seem to be working.
|
With bash, set the glob settings so that missing matches don't trigger an error:
shopt -u failglob # avoid failure report (and discarding the whole line).
shopt -s nullglob # remove (erase) non-matching globs.
ls ?c c?
Question-mark is a glob character representing a single character. Since you want two-character filenames, one of them has to be a c, and so it's either the first character or the last character.
With shopt -s dotglob this would also surface a file named .c.
If there are no matching files, setting these shell options causes all of the arguments to be removed, resulting in a bare ls -- listing anything/everything by default.
Use this, instead:
shopt -s nullglob ## drop any missing globs
set -- ?c c? ## populate the $@ array with (any) matches
if [ $# -gt 0 ] ## if there are some, list them
ls -d "$@"
fi
| How do I display file names that contain two characters and one of them is c? |
1,365,539,480,000 |
$ sudo su
# dd if=/dev/zero of=./myext.img bs=1024 count=100
.
.
.
# modprobe loop
# losetup --find --show myext.img
/dev/loop0
# mkfs -t myext /dev/loop0
.
.
.
# mkdir mnt
# mount /dev/loop0 ./mnt
# cd mnt
# ls -al
total 17
drwxr-xr-x 3 root root 1024 Jul 21 02:22 .
drwxr-xr-x 11 shisui shisui 4096 Jul 21 02:22 ..
drwx------ 2 root root 12288 Jul 21 02:22 lost+found
(I cut out some of the output of some commands.) My first question is: why isn't mnt showing up in the ls -al output? All I see is root. I cd'd into mnt, so I expected to see it in the ls -al output.
But then what is the third link?
Finally, are all the link numbers in this ls -al output hard links? Or does this link count also include symbolic links?
|
You don’t see mnt in the ls -al output because you’re inside mnt; it is represented by .
There’s another hard link to ., lost+found/..; this explains the count of 3 links to the directory:
. which points to the directory itself;
.. which also points to the directory, because it’s the root directory in the file system (see Why does a new directory have a hard link count of 2 before anything is added to it?);
lost+found/.., which points back to the root directory (again, in the file system, so mnt here).
The link counts shown by ls -l count hard links only; symlinks aren’t included.
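The same counting can be reproduced on any directory that contains one subdirectory: its entry in the parent, its own ., and the subdirectory's .. give exactly three links (GNU stat):

```shell
mkdir -p demo/sub
stat -c '%h' demo   # link count: 3
rm -r demo
```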
| Why does this new directory have a link count of 3? |
1,365,539,480,000 |
Is there a way to view multiple terminals at the same time without running an Xorg session?
I have a really low-profile machine that could be great for some basic stuff, but it has horrible GPU support in terms of drivers and computing power.
|
Check out tmux and/or screen. A comparison of the two programs which satisfy essentially the same needs can be found on the tmux FAQ.
A very good blog post for getting started with tmux is at Hawk Host: TMUX the terminal multiplexer part 1 and part 2.
If you want to know more about tmux's versatility, there's a nice book/e-book that covers a lot of ground at a leisurely pace: tmux: Productive Mouse-Free Development by Brian P. Hogan.
| Multiple terminals at once without an X server |
1,365,539,480,000 |
We have so many options for window managers, shells, desktop environments, distros, and kernel architectures under Linux, but why, after (maybe) 20 years, do we only have the X.org server (including its predecessor) as the bottom layer of the GUI?
I know about XFree86 and Y, but most of them are stuck. Is it so hard to create a new (i.e. modern) one? Or is there any other reason we are stuck with X.org?
|
There are several other implementations of X11, but none of them have all the features & driver support that X.org has.
There are also some framebuffer-based solutions like DirectFB and whatever Android uses.
And recently work has been ongoing on Project Wayland, which maybe one day might (partially) replace X11.
| linux x.org alternative |
1,365,539,480,000 |
How can I execute a command only if a certain file exceeds a defined size? The whole thing should ultimately run as a one-liner in crontab.
Pseudocode:
* * * * * find /cache/myfile.csv -size +5G && echo "file is > 5GB"
|
To use the file size as a precondition you can use stat or find:
[ -n "$(find /cache/myfile.csv -prune -size +5G 2>/dev/null)" ] && echo "file is > 5GB"
Or if the target command (echo, here) is short, put it into the exec part of find:
find /cache/myfile.csv -prune -size +5G -exec echo "file is > 5GB" \;
The -prune is in case myfile.csv might be a file of type directory, to prevent find from descending into it.
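A stat-based equivalent of the same precondition (GNU stat; the 5G threshold computed in bytes):

```shell
size=$(stat -c %s /cache/myfile.csv 2>/dev/null || echo 0)
[ "$size" -gt $((5 * 1024 * 1024 * 1024)) ] && echo "file is > 5GB"
```

The `|| echo 0` makes a missing file count as size 0, so the command simply does nothing in that case.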
| How to run a command only if a specific file has a certain size |
1,365,539,480,000 |
What steps do you take to make a vanilla Ubuntu system run faster and use less memory? I'm using Ubuntu as the OS for my general purpose PC, but it's on slightly older hardware and I want to get as much out of it as I can. Short of leaner distros, what things do you do to make it run a bit faster for basic web browsing and word processing?
|
Many Distributions offer what is called a Just Enough Operating System, or JeOS. How you go about installing these varies from distro to distro.
Under Debian based distributions, such as Ubuntu, if you use a Server Install ISO, you can install the JeOS by pressing F4 on the first menu screen to pick "Minimal installation".
Many distributions also provide Netinstall or USB boot install media that, because of limited resources, provide very stripped-down base systems to be built upon.
| How do you free up resources in Ubuntu? |
1,365,539,480,000 |
I need to enable the case insensitive filesystem feature (casefold) on ext4 of a Debian 11 server with a backported 6.1 linux kernel with the required options compiled in.
The server has a swap partition of 2GB and a big ext4 partition for the filesystem, which it also boots from. I only have ssh access as root and cannot access the physical/virtual host itself, so I don't have access to (virtual) USB sticks or CD-ROM media.
What is the fastest way to enable the casefold feature? tune2fs doesn't want to do it because the fileystem is mounted.
Idea: Drop the swap, install a small rescue system in it, reboot into said rescue system, change the filesystem options of the root partition, reboot into the live partition and restore the swap. For this to work however I need to prepare an extra linux system just to do the tune2fs command needed.
Is there a better way? Any rescue systems I can already use and preconfigure for the required network settings after a reboot?
|
I like your approach; it's clean in that it doesn't require modification of the data on your main system.
And, yes, I think that if you want to run tune2fs then by a large margin, the easiest solution is to run that from a running Linux, so that there's no real way around having to run it when the main file system isn't mounted.
I don't think your network setup is of any significance – you know exactly what you want your system to do; preconfiguring network to give you an SSH shell into it is going be harder than just running tune2fs … /dev/disk/by-partuuid/… in a script that's autonomously executed (and which then moves on to do what is needed to boot your normal system).
Now, two options:
1. Your Debian currently boots using an initrd containing an initramfs (I expect it does).
2. It doesn't.
In the first case, modifying that initrd generation process to just include the necessary tune2fs invocation, generate a new initrd, booting with that, is probably the easiest. Mind you, initrds are really what you want to avoid building: custom fully-fledged Linux systems (which just happen to be Linux distro's ways to initialize the system before mounting the root file system and continuing the main boot process). It's just that debian already builds these for you, anyways :)
I must admit it's been a decade (or more) since I did something like that for a debianoid Linux, so I'm not terribly much of a help on how; check out debian's (sadly seemingly a bit sparse/outdated) documentation on it, and see what you have in /etc/mkinitrd.
In the second case, your approach seems sensible.
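Whichever route is taken, the actual command is tune2fs -O casefold on the unmounted filesystem; it can be rehearsed safely on a throwaway file-backed image first (assumes e2fsprogs new enough, >= 1.45, to know the feature):

```shell
# Practice run on a scratch image -- no root needed for a file-backed fs
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=64 status=none
mkfs.ext4 -q -F /tmp/scratch.img
tune2fs -O casefold /tmp/scratch.img
dumpe2fs -h /tmp/scratch.img 2>/dev/null | grep -i 'features'
```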
| How to change the casefold ext4 filesystem option of the root partition, if I only have ssh access |
1,365,539,480,000 |
After installing PyCharm on Pop!_OS (by extracting the download), there is no easy way to run the program.
I have probably installed it in my Documents folder. Not sure what the convention is.
To run PyCharm I need to go to the folder pycharm-community-2019.2.4/bin, open terminal and run
./pycharm.sh
Any way to make my life easier?
|
You can use the main menu: there is Tools -> Create Desktop Entry. It might require root permissions.
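If that menu item is missing, the equivalent desktop entry can be written by hand; a sketch in which the Exec/Icon paths are assumptions based on the install location described in the question:

```shell
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/pycharm.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=PyCharm Community
Exec=/home/USER/Documents/pycharm-community-2019.2.4/bin/pycharm.sh
Icon=/home/USER/Documents/pycharm-community-2019.2.4/bin/pycharm.png
Terminal=false
Categories=Development;IDE;
EOF
```

After this, PyCharm should appear in the application launcher (per the freedesktop Desktop Entry convention).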
| PyCharm has no shortcut or launcher |
1,365,539,480,000 |
Both the hier(7) and file-hierarchy(7) man pages claim to describe the conventional file system hierarchy. However, there are some differences between them. For example, hier(7) describes /opt and /var/crash, but file-hierarchy(7) does not. What are the differences between these two descriptions? Which one do real Linux systems use?
|
The hier manual page has a long history that dates back to Unix Seventh Edition in 1979. The one in Linux operating systems is not the original Unix one, but a clone.
At the turn of the century, FreeBSD people documented existing long-standing practice, namely that system administrators adjust stuff for their own systems, and that a good system administrator changes that manual page to match the local adjustments.
Of course, Linux operating systems are notoriously bad when it comes to doco. The hier manual page is rarely fully adjusted to the actual operating system by the distribution maintainers, if it is adjusted at all. Debian, for example, does not patch it at all, and simply provides the underlying generic hier manual page from Michael Kerrisk's Linux Manpages Project as-is.
(The BSDs have a generally much stronger tradition of the people who are making changes to the operating system including changes to its doco in what they do. Their doco is better as a result. But it is itself still woefully outdated in some areas. For example: The FreeBSD manual for the ul command has been missing large parts of the tool since 2.9BSD.)
So Lennart Poettering wrote his own manual page for systemd, file-hierarchy, in 2014. As you can see, despite its claim it really is not "more minimal" than the hier page. For starters, it documents a whole load of additional things about user home directories.
Thus there are two different manual pages from two different sets of people, none of whom are the distribution maintainers themselves, who actually decide this stuff.
The simple truth is that real Linux-based operating systems adhere to neither. There are distribution variations from vanilla systemd that don't get patched into the file-hierarchy page by the distribution maintainers; and as mentioned the hier page often does not get locally patched either.
They do not adhere to the Linux Filesystem Hierarchy Standard moreover. Several operating systems purposefully deviate from it, and a few of them document this. A few Linux operating systems intentionally do not reference it at all, such as GoboLinux. As you can see from the further reading, Arch Linux used to reference it but has since dropped it.
(I have a strong suspicion, albeit that I have done no rigorous survey, that Arch Linux dropping the FHS is the tipping point, and that adherence to the FHS is the exception rather than the norm for Linux operating systems now.)
For many Linux operating systems there simply is not a single manual page for this. The actual operating system will be an admixture of hier, file-hierarchy, the Linux Filesystem Hierarchy Standard, and individual operating system norms with varying degrees of documentation.
Further reading
Jonathan de Boyne Pollard (2016). "Gazetteer". nosh Guide. Softwares.
Binh Nguyen (2004-07-30). Linux Filesystem Hierarchy. Version 0.65. The Linux Documentation Project.
https://wiki.archlinux.org/index.php/Frequently_asked_questions#Does_Arch_follow_the_Linux_Foundation.27s_Filesystem_Hierarchy_Standard_.28FHS.29.3F
https://netarky.com/programming/arch_linux/Arch_Linux_directory_structure.html
https://wiki.gentoo.org/wiki/Complete_Handbook/Users_and_the_Linux_file_system#Linux_file_system_hierarchy
https://www.suse.com/support/kb/doc/?id=7004448
https://sta.li/filesystem/
Daniel J. Bernstein. The root directory. cr.yp.to.
| What's the difference between 'hier(7)' and 'file-hierarchy(7)' man pages? |
1,365,539,480,000 |
sudo su - will elevate any user (a sudoer) to root privileges.
su - anotheruser will switch to user environment of the target user, with target user privileges
What does sudo su - username mean?
|
Just repeating both @dr01 and @OneK's answers because they are both missing some fine details:
su - username - Asks the system to start a new login session for the specified user. The system will require the password for the user "username" (even if its the same as the current user).
sudo su - username will do the same, but first ask the system to be elevated to super user mode, after which su will not ask for "username"'s password because a super user is allowed to change into any other user without knowing their password. That being said, sudo in itself enforces security by checking the /etc/sudoers file to make sure the current user is allowed to gain super user permissions, and possibly verifying the current user's password.
I would also like to comment that to gain a super user login session, please use sudo -i (or sudo -s) as sudo su - is just silly: it's asking sudo to give super user permissions to su so that su can start a login shell for the super user - when sudo can achieve the same result better by itself.
| su - user Vs sudo su - user |
1,365,539,480,000 |
This syntax prints "linux" when variable equals "no":
[[ $LINUX_CONF = no ]] && echo "linux"
How would I use regular expressions (or similar) in order to make the comparison case insensitive?
|
Standard sh
No need to use that ksh-style [[...]] command, you can use the standard sh case construct here:
case $LINUX_CONF in
([Nn][Oo]) echo linux;;
(*) echo not linux;;
esac
Or naming each possible case individually:
case $LINUX_CONF in
(No | nO | NO | no) echo linux;;
(*) echo not linux;;
esac
bash
For a bash-specific way to do case-insensitive matching, you can do:
shopt -s nocasematch
[[ $LINUX_CONF = no ]] && echo linux
Or:
[[ ${LINUX_CONF,,} = no ]] && echo linux
(where ${VAR,,} is the syntax to convert a string to lower case).
You can also force a variable to be converted to lowercase upon assignment with:
typeset -l LINUX_CONF
That also comes from ksh and is also supported by bash and zsh.
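As a quick illustration (run under bash; conf is just a stand-in variable name), typeset -l lowercases any value assigned, so the comparison becomes effectively case-insensitive:

```shell
bash -c '
  typeset -l conf      # values assigned to "conf" are lowercased
  conf="No"
  [[ $conf = no ]] && echo linux
'
```

This prints linux regardless of the capitalisation of the assigned value.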
More variants with other shells:
zsh
set -o nocasematch
[[ $LINUX_CONF = no ]] && echo linux
(same as in bash).
set -o extendedglob
[[ $LINUX_CONF = (#i)no ]] && echo linux
(less dangerous than making all matches case insensitive)
[[ ${(L)LINUX_CONF} = no ]] && echo linux
[[ $LINUX_CONF:l = no ]] && echo linux
(convert to lowercase operators)
set -o rematchpcre
[[ $LINUX_CONF =~ '^(?i)no\z' ]]
(PCRE syntax)
ksh93
[[ $LINUX_CONF = ~(i)no ]]
or
[[ $LINUX_CONF = ~(i:no) ]]
Note that all approaches above other than [nN][oO] to do case insensitive matching depend on the user's locale. Not all people around the world agree on what the uppercase version of a given letter is, even for ASCII ones.
In practice for the ASCII ones, at least on GNU systems, the deviations from the English rules seem to be limited to the i and I letters and whether the dot is there or not on the uppercase or lowercase version.
What that means is that [[ ${VAR,,} = oui ]] is not guaranteed to match on OUI in every locale (even when the bug in current versions of bash is fixed).
| bash - case-insensitive matching of variable |
1,365,539,480,000 |
I have a CSV file like
a.csv
"1,2,3,4,9"
"1,2,3,6,24"
"1,2,6,8,28"
"1,2,4,6,30"
I want something like
b.csv
1,2,3,4,9
1,2,3,6,24
1,2,6,8,28
1,2,4,6,30
I tried awk '{split($0,a,"\""); but it did not help. Any help is appreciated.
|
Use gsub() function for global substitution
$ awk '{gsub(/"/,"")};1' input.csv
1,2,3,4,9
1,2,3,6,24
1,2,6,8,28
1,2,4,6,30
To send output to new file use > shell operator:
awk '{gsub(/"/,"")};1' input.csv > output.csv
Your split-into-array approach can also be made to work, although it's not necessary:
$ awk '{split($0,a,/"/); print a[2]}' input.csv
1,2,3,4,9
1,2,3,6,24
1,2,6,8,28
1,2,4,6,30
Note that in this particular question the general pattern is that quotes appear only at the beginning and end of each line, which means we can also treat the quote as a field separator, where field 1 is null, field 2 is 1,2,3,4,9, and field 3 is also null. Thus, we can do:
awk -F '"' '{print $2}' input.csv
And we can also take out substring of the whole line:
awk '{print substr($0,2,length()-2)}' input.csv
Speaking of stripping first and last characters, there's a whole post on stackoverflow about that with other tools such as sed and POSIX shell.
| how to remove the double quotes in a csv [duplicate] |
1,365,539,480,000 |
Some of my jobs are getting killed by the OS for some reason. I need to investigate why this is happening. The jobs that I run don't show any error messages in their own logs, which probably indicates the OS killed them. Nobody else has access to the server. I'm aware of the OOM killer; are there any other process killers? Where would I find logs for these things?
|
The OOM killer is currently the only thing in the kernel that kills processes automatically.
dmesg
and /var/log/messages should show oom kills.
If the process can handle that signal, it could log at least the kill.
Normally memory hogs get killed. Perhaps more swap space can help you, if the memory is only getting allocated but is not really needed.
Else: Get more RAM.
| what process killers does linux have? [closed] |
1,365,539,480,000 |
I want to list all the physical volume associated with logical volume.
I know lvdisplay, pvscan, pvdisplay -m could do the job. but I don't want to use these commands. Is there any other way to do it without using lvm2 package commands?
Any thoughts on comparing the major and minor numbers of devices?
|
There are two possibilities:
If you accept dmsetup as a non-lvm package command (on openSUSE there is a separate package device-mapper) then you can do this:
dmsetup table "${vg_name}-${lv_name}"
Or you do this:
start cmd: # ls -l /dev/mapper/linux-rootfs
lrwxrwxrwx 1 root root 7 27. Jun 21:34 /dev/mapper/linux-rootfs -> ../dm-0
start cmd: # ls /sys/block/dm-0/slaves/
sda9
| list the devices associated with logical volumes without using lvm2 package commands |
1,365,539,480,000 |
On my Linux machine, it isn't clear to me why if I do the following then I don't get only the version string ("1.5.0_32").
# java -version | grep version | awk '{print $NF}'
java version "1.5.0_32"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_32-b05)
Java HotSpot(TM) Server VM (build 1.5.0_32-b05, mixed mode)
Why don't grep or awk work?
Just to show that grep and awk work on other example
# echo ' java version "1.5.0_32" ' | grep version | awk '{print $NF}'
"1.5.0_32"
|
Try like this:
java -version 2>&1 | grep version | awk '{print $NF}'
Looks like the output is going to stderr.
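A minimal demonstration of why the redirection matters; demo here is a stand-in for any program (like java) that writes its output to stderr:

```shell
# "demo" is a hypothetical function, not part of java
demo() { echo "to stdout"; echo "to stderr" >&2; }

demo 2>/dev/null | grep stderr   # no match: the pipe carries only stdout
demo 2>&1        | grep stderr   # matches: stderr is merged into stdout first
```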
Also, grep is not needed:
java -version 2>&1 | awk '/version/{print $NF}'
| Output of `java -version` not matched by grep or awk |
1,365,539,480,000 |
Given a zip file zipfile.zip, we know that it contains a file called text.txt.
Is there a way to read the content of text.txt without unzipping zipfile.zip?
|
You can dump the file directly to stdout, for example. You are still technically unzipping it, but not to disk:
$ unzip -p zipfile.zip text.txt
For example, to count the lines you could do this:
$ unzip -p zipfile.zip text.txt | wc -l
The -c option is similar, but will write the name of each extracted file right before the contents.
| Read file content in a zip file without unzipping? |
1,554,251,819,000 |
I would like use a program like tail to follow a file as it's being written to, but not display the most recent lines.
For instance, when following a new file, no text will be displayed while the file is less than 30 lines. After more than 30 lines are written to the file, lines will be written to the screen starting at line 1.
So as lines 31-40 are written to the file, lines 1-10 will be written to the screen.
If there is no easy way to do this with tail, maybe there's a way to write to a new file a prior line from the first file each time the first file is extended by a line, and then tail that new file...
|
Maybe buffer with awk:
tail -n +0 -f some/file | awk '{b[NR] = $0} NR > 30 {print b[NR-30]; delete b[NR-30]} END {for (i = NR - 29; i <= NR; i++) print b[i]}'
The awk code, expanded:
{
b[NR] = $0 # save the current line in a buffer array
}
NR > 30 { # once we have more than 30 lines
print b[NR-30]; # print the line from 30 lines ago
delete b[NR-30]; # and delete it
}
END { # once the pipe closes, print the rest
for (i = NR - 29; i <= NR; i++)
print b[i]
}
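One way to sanity-check the buffering logic is to feed it a finite input instead of a growing file: the whole input should come out unchanged, merely delayed by 30 lines (a quick check, not part of the original pipeline):

```shell
# Lines 1-30 are buffered silently; lines 31-40 flush lines 1-10;
# the END block flushes the remaining 30 buffered lines.
tmp=$(mktemp)
seq 40 | awk '{b[NR] = $0}
    NR > 30 {print b[NR-30]; delete b[NR-30]}
    END {for (i = NR - 29; i <= NR; i++) print b[i]}' > "$tmp"
seq 40 | cmp -s - "$tmp" && echo "buffer logic OK"
rm -f "$tmp"
```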
| Using "tail" to follow a file without displaying the most recent lines |
1,554,251,819,000 |
Is there a way to know which partition you actually booted from?
fdisk -l reveals a "Boot" column that I definitely don't have on my NVME. Is this just legacy information?
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 1126399 1124352 549M b W95 FAT32
/dev/sda2 1126400 975688107 974561708 464.7G 7 HPFS/NTFS/exFAT
/dev/sda3 975689728 976769023 1079296 527M 27 Hidden NTFS WinRE
...
Device Start End Sectors Size Type
/dev/nvme0n1p1 616448 2458216447 2457600000 1.1T Linux filesystem
/dev/nvme0n1p2 2458216448 3907024031 1448807584 690.8G Linux filesystem
/dev/nvme0n1p3 2048 616447 614400 300M EFI System
Partition table entries are not in disk order.
Considering lsblk shows that /boot/efi is mounted I'm 90% sure that it's using my nvme drive, I just wanted to confirm that's true even though there's no boot indicator from fdisk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 549M 0 part
├─sda2 8:2 0 464.7G 0 part
└─sda3 8:3 0 527M 0 part
sdb 8:16 0 1.8T 0 disk
├─sdb1 8:17 0 99M 0 part
├─sdb2 8:18 0 16M 0 part
└─sdb3 8:19 0 1.8T 0 part
nvme0n1 259:0 0 1.8T 0 disk
├─nvme0n1p1 259:1 0 1.1T 0 part
├─nvme0n1p2 259:2 0 690.8G 0 part /
└─nvme0n1p3 259:3 0 300M 0 part /boot/efi
I also noticed Disklabel type is dos for /dev/sda and gpt for /dev/nvme0n1 if that factors in.
|
Since your system apparently boots in UEFI style, the answer to the titular question is:
Run efibootmgr -v as root, see the four-digit ID on the BootCurrent: line (usually the first line of output), then look at the corresponding BootNNNN line to find both the PARTUUID of the partition to boot from, and the filename containing the actual boot manager/loader used.
Then run lsblk -o +PARTUUID to see the partition-unique UUIDs embedded in the GPT partition table. Find the UUID you saw on the BootNNNN line of the efibootmgr -v output, and you'll know the partition.
(On MBR-partitioned disks, there is no real partition UUID, and so a shorter combination of a disk signature number and a partition number is displayed in place of a real partition UUID.)
The Disklabel type is definitely a factor here: it indicates your sda uses classic MBR partitioning and boot sequence, while your nvme0n1 uses GPT partitioning and UEFI-style booting.
While the GPT partition table can store a boot flag that is essentially the same as the Boot flag field in the fdisk -l output of a MBR-partitioned disk, booting MBR-style from a GPT-partitioned disk is expected to be a rare corner case, and so fdisk -l will not include it. The native UEFI-style way will not use such a flag at all, since it's now the system firmware's job to know both the name of the bootloader file and the PARTUUID of the partition to load it from.
But if such a legacy flag is enabled on a GPT partition, using the i command (= print information about a partition) of a modern Linux fdisk will show it, by the presence of a LegacyBIOSBootable keyword on the Attrs: line of output.
To actually toggle such a flag, you would have to use the experts-only extra commands of a GPT-aware Linux fdisk: first x, then A to toggle the flag.
If you just want a list the partition table with the UEFI partition flags included, you can use fdisk -x /dev/nvme0n1. Be advised that the output is quite a bit wider than the traditional fdisk -l output.
If you are booting using the classic MBR/BIOS style, then the answer to the title question is "you don't, really." There is no ubiquitous standard way for BIOS-style firmware to tell the OS which device was actually used to boot the system. This was a long-standing problem on all OSs and OS installers on systems using legacy BIOS-style boot.
If the /sys/firmware/edd directory exists, it may contain information that allows the boot disk to be identified, based on the order in which the BIOS saw the disks. By convention, the current boot disk is moved to the first hard disk position (also known as "disk 0x80") in the BIOS disk list, and most BIOS-based bootloaders rely on this fact.
So if /sys/firmware/edd/int13_dev80 exists, and the bootloader has not switched the BIOS int13 IDs of the disks around (GRUB can do so, if you have a custom dual/multi-boot configuration that requires swapping disk IDs), then the information within may be useful to identify the actual boot disk used by the firmware.
Unfortunately the BIOS extension required to have this information available was not as widespread as it could have been, and not always completely and correctly implemented even when it was present. I've seen a lot of systems with no EDD info available, some systems with incomplete EDD info, and even one system in which querying the EDD info caused the boot to hang.
(Apparently the EDD info interface was designed by Dell, so if you mostly work with Dell systems, you may have better luck than me.)
| How do I tell which partition I booted from? |
1,554,251,819,000 |
This GRUB Quiet Splash says:
The splash (which eventually ends up in your /boot/grub/grub.cfg )
causes the splash screen to be shown.
At the same time you want the boot process to be quiet, as otherwise
all kinds of messages would disrupt that splash screen.
Although specified in GRUB these are kernel parameters influencing the
loading of the kernel or its modules, not something that changes GRUB
behaviour.
However, I have not found splash on https://www.kernel.org/doc/html/v5.0/admin-guide/kernel-parameters.html, but AFAIK it works on modern distros which are kernel 5+ based. Why?
|
If you specify a boot option that the kernel does not recognize, it does not cause an error: the unknown boot parameter will have no effect to the kernel, other than being listed in /proc/cmdline. Then initramfs scripts or other userspace programs can look for it and use it to modify their behavior.
The unknown boot parameters are also passed to the init process, whichever it may be (whether SysVinit, systemd or something else). In fact, this is how important troubleshooting/recovery boot options work, like single to boot a SysVinit system to single-user mode, or systemd.unit=emergency.target for the closest equivalent on a system with systemd.
If your distribution uses user-space boot splash software like plymouth, the kernel just "passes through" any splash/nosplash boot option to /proc/cmdline, and plymouth in initramfs will check for it.
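You can check for yourself which options your kernel was booted with, and whether splash is among them (a sketch; requires a Linux system with /proc mounted):

```shell
# Print the boot options exactly as the kernel received them
cat /proc/cmdline
if grep -qw splash /proc/cmdline; then
    echo "splash is on the kernel command line"
else
    echo "splash is not on the kernel command line"
fi
```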
Your distribution may have other troubleshooting/recovery functions implemented as extra boot options by the initramfs generator package. In Debian/Ubuntu and related distributions, see man 7 initramfs-tools for a list of boot options specific to initramfs files created by the initramfs-tools package; in modern RedHat/Fedora, see man dracut.
| Why splash is not in kernel parameters list but works? |
1,554,251,819,000 |
I'm constantly having the situation where I want to correlate the output of lsblk, which prints devices in a tree using /dev/sdXY names, with the drives' /dev/disk/by-id/ names.
|
The by-id names consist of the drive model together with the serial number, both of which lsblk can be instructed to list:
lsblk -o name,model,serial
The output of this command will look something like this:
NAME MODEL SERIAL
sda SAMSUNG HD203WI S1UYJ1VZ500792
├─sda1
└─sda9
sdb ST500DM002-1BD14 W2APGFP8
├─sdb1
└─sdb9
sdc ST500DM002-1BD14 W2APGFS0
├─sdc1
└─sdc9
For posterity here's also a longer command with some commonly used columns:
sudo lsblk -o name,size,fstype,label,model,serial,mountpoint
The output of which could be:
NAME SIZE FSTYPE LABEL MODEL SERIAL MOUNTPOINT
sda 1,8T zfs_member SAMSUNG HD203WI S1UYJ1VZ500792
├─sda1 1,8T zfs_member storage /home
└─sda9 8M zfs_member
sdb 465,8G btrfs ST500DM002-1BD14 W2APGFP8
├─sdb1 465,8G btrfs
└─sdb9 8M btrfs
sdc 465,8G btrfs ST500DM002-1BD14 W2APGFS0
├─sdc1 465,8G btrfs rpool /
└─sdc9 8M btrfs
| Make lsblk list devices by-id |
1,554,251,819,000 |
I am working on Linux Ubuntu, and I want a bash script whose output is the time 7 hours ahead of my server time.
My server time:
Mon Jul 23 23:00:00 2017
What I want to achieve:
Mon Jul 24 06:00:00 2017
I have tried this one in my bash script:
#!/bin/bash
let var=$(date +%H)*3600+$(date +%M)*60+$(date +%S)
seven=25200
time=$(($var+$seven))
date=$(date --date='TZ="UTC+7"' "+%Y-%m-%d")
hours=$(date -d@$time -u +%H:%M:%S)
echo "$date" "$hours"
the output was:
2017-07-23
06:00:00
The hours works, but the date still matches the server date. Is there another way to solve this?
|
Taking your question literally, if you just want to get a date string for 7 hours later than the current time in the current zone, that's easy:
date -d "7 hours" "+%Y-%m-%d %H:%M:%S"
If what you're really wanting to do is pull the local date/time in some other timezone, though, then you'd be better off following the advice in some of the other answers.
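For reference, if the goal really is the wall-clock time of a fixed UTC+7 zone rather than "now plus 7 hours", setting the TZ environment variable for a single date invocation is a common approach:

```shell
# Etc/GMT-7 means UTC+7 (the sign in POSIX Etc/GMT zone names is inverted)
TZ="Etc/GMT-7" date "+%Y-%m-%d %H:%M:%S"
```

On a server whose local zone is already UTC the two approaches happen to agree; otherwise they give different results.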
| How to change time zone in a bash script? |
1,554,251,819,000 |
I've added myself into the sudoers users list by using the command
root@debian:/home/oshirowanen#adduser oshirowanen sudo
If I try to run that command again,
root@debian:/home/oshirowanen# adduser oshirowanen sudo
The user `oshirowanen' is already a member of `sudo'.
root@debian:/home/oshirowanen#
All looks good so far.
When I then exit the root user and try to install/remove/search something using my own account, it doesn't work and complains that I am not a sudoer... For example
root@debian:/home/oshirowanen# exit
exit
oshirowanen@debian:~$ sudo aptitude search ice
[sudo] password for oshirowanen:
oshirowanen is not in the sudoers file. This incident will be reported.
oshirowanen@debian:~$
Why is this happening?
This is what I get from visudo
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
# Host alias specification
# User alias specification
# Cmnd alias specification
# User privilege specification
root ALL=(ALL:ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
# See sudoers(5) for more information on "#include" directives:
#includedir /etc/sudoers.d
|
You need to log in again after adding yourself to a group to get the correct privileges.
To verify with two shells:
alice $ sudo adduser test
alice $ su - test
alice $ sudo adduser test sudo
test $ sudo ls
test is not in the sudoers file. [...]
test $ exit
alice $ su - test
test $ sudo ls
examples.desktop
To clarify, any shells which were opened before the user was added to the sudo group do not have the new privileges.
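You can see the difference from the shell: id with a username argument reads the group database directly, while a bare id reports the (possibly stale) credentials of the current session:

```shell
id -nG "$(id -un)"   # membership as recorded in the group database
id -nG               # groups effective in the current session
```

If the two lists differ for your user, logging out and back in brings the session up to date.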
| How to add self to sudoers list? |
1,554,251,819,000 |
I am on a closed network (i.e. no connectivity to the internet).
I have a bourne shell script that asks for the user to enter a regular expression for use with grep -P.
Generally speaking, I like to do some form of input validation.
Is there a way to test a string variable to see if it is a (valid) regex?
(Copying things from the internet onto my system can be done, but it takes forever and is a PITA -- thus I am looking for way to do it natively.)
|
No, but with some tools it's not hard to test whether a regex compiles or not.
For example, with grep: echo | grep -P '[' - the exit code, $?, will be 2, indicating an error occurred (and for this example, grep will print "grep: missing terminating ] for character class" to stderr - you can redirect stderr to /dev/null if you only want the exit code).
An exit code of 1 indicates that the regex compiled OK but didn't match the input.
These exit codes are specific to GNU grep. Other tools, if they even have such a capability, will probably have different exit codes, and different ways of indicating specific kinds of errors.
Note that this is not even remotely close to telling you whether a regex will correctly match what you want it to (and not match what you don't want it to).
In short, try it and test the exit code. And know your tools.
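Put together, a small validation helper for such a script might look like this (a sketch assuming GNU grep built with PCRE support; is_valid_pcre is a hypothetical helper name, and -e protects patterns that begin with a dash):

```shell
is_valid_pcre() {
    printf '' | grep -qP -e "$1" 2>/dev/null
    [ $? -lt 2 ]    # exit 0/1: the regex compiled; exit 2: compile error
}

is_valid_pcre 'a+' && echo "a+ is valid"
is_valid_pcre '['  || echo "[ is invalid"
```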
| Does Bourne Shell have a regex validator? |
1,554,251,819,000 |
I would like to check whether a Linux machine supports io_uring. How can this be done?
Is there some kernel file that describes support for this, or do all Linux 5.1+ kernels have support?
|
io_uring doesn’t expose any user-visible features, e.g. as a sysctl; it only exposes new system calls. It is available since kernel 5.1, but support for it can be compiled out, and it might be backported to older kernels in some systems.
The safest way to check for support is therefore to check whether the io_uring system calls are available. If you have /proc/kallsyms, you can look there:
grep io_uring_setup /proc/kallsyms
Another way to check for the system call is to attempt a safe but malformed call, and check whether the resulting error is ENOSYS, for example:
#include <errno.h>
#include <linux/io_uring.h>
#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>
int main(int argc, char **argv) {
if (syscall(__NR_io_uring_register, 0, IORING_UNREGISTER_BUFFERS, NULL, 0) && errno == ENOSYS) {
// No io_uring
} else {
// io_uring
}
}
On a kernel supporting io_uring, the available operations vary as new features are introduced with new kernel versions; to determine the supported operations, use io_uring_get_probe.
| How to tell if a Linux machine supports io_uring? |
1,554,251,819,000 |
I have this output.
[root@linux ~]# cat /tmp/file.txt
virt-top time 11:25:14 Host foo.example.com x86_64 32/32CPU 1200MHz 65501MB
ID S RDRQ WRRQ RXBY TXBY %CPU %MEM TIME NAME
1 R 0 0 0 0 0.0 0.0 96:02:53 instance-0000036f
2 R 0 0 0 0 0.0 0.0 95:44:07 instance-00000372
virt-top time 11:25:17 Host foo.example.com x86_64 32/32CPU 1200MHz 65501MB
ID S RDRQ WRRQ RXBY TXBY %CPU %MEM TIME NAME
1 R 0 0 0 0 0.6 12.0 96:02:53 instance-0000036f
2 R 0 0 0 0 0.2 12.0 95:44:08 instance-00000372
You can see it has two blocks and I want to extract the last block (the first block has all CPUs at zero, which I don't care about). In short, I want to extract the following last lines (note: sometimes I have more than two instance-* lines, otherwise I could just use "tail -n 2"):
1 R 0 0 0 0 0.6 12.0 96:02:53 instance-0000036f
2 R 0 0 0 0 0.2 12.0 95:44:08 instance-00000372
I have tried sed/awk/grep in all possible ways but did not get close to the desired result.
|
This feels a bit silly, but:
$ tac file.txt |sed -e '/^virt-top/q' |tac
virt-top time 11:25:17 Host foo.example.com x86_64 32/32CPU 1200MHz 65501MB
ID S RDRQ WRRQ RXBY TXBY %CPU %MEM TIME NAME
1 R 0 0 0 0 0.6 12.0 96:02:53 instance-0000036f
2 R 0 0 0 0 0.2 12.0 95:44:08 instance-00000372
GNU tac reverses the file (many non-GNU systems have tail -r instead), the sed picks lines until the first that starts with virt-top. You can add sed 1,2d or tail -n +3 to remove the headers.
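For example, with a reduced version of the input piped straight through (sample data inlined with printf just for the demonstration), tail -n +3 strips the two header lines:

```shell
printf '%s\n' 'virt-top time 11:25:14 Host foo' 'ID S RDRQ' '1 R old' \
              'virt-top time 11:25:17 Host foo' 'ID S RDRQ' '1 R new' '2 R new' |
    tac | sed -e '/^virt-top/q' | tac | tail -n +3
```

This prints only the two data lines of the last block, "1 R new" and "2 R new".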
Or in awk:
$ awk '/^virt-top/ { a = "" } { a = a $0 ORS } END {printf "%s", a}' file.txt
virt-top time 11:25:17 Host foo.example.com x86_64 32/32CPU 1200MHz 65501MB
ID S RDRQ WRRQ RXBY TXBY %CPU %MEM TIME NAME
1 R 0 0 0 0 0.6 12.0 96:02:53 instance-0000036f
2 R 0 0 0 0 0.2 12.0 95:44:08 instance-00000372
It just collects all the lines to a variable, and clears that variable on a line starting with virt-top.
If the file is very large, the tac+sed solution is bound to be faster since it only needs to read the tail end of the file while the awk solution reads the full file from the top.
| extract lines from bottom until regex match |
1,554,251,819,000 |
Ex:
Input file
A<0>
A<1>
A_D2<2>
A_D2<3>
A<4>
A_D2<6>
A<9>
A_D2<10>
A<13>
Desired Output:
A<0>
A<1>
A_D2<2>
A_D2<3>
A<4>
-----
A_D2<6>
-----
-----
A<9>
A_D2<10>
-----
-----
A<13>
Just care about the number in the angle bracket.
If the number is not continuous ,then add some symbol (or just add newline) until the number continue agian.
In this case, number 5,7,8,11 and 12 are missing.
Can anyone solve this problem by using awk or sed (even grep) command?
I am a beginner in Linux. Please explain the details of the whole command line.
|
Using grep or sed for doing this would not be recommended as grep can't count and sed is really difficult to do any kind of arithmetics in (it would have to be regular expression-based counting, a non-starter for most people except for the dedicated).
$ awk -F '[<>]' '{ while ($2 >= ++nr) print "---"; print }' file
A<0>
A<1>
A_D2<2>
A_D2<3>
A<4>
---
A_D2<6>
---
---
A<9>
A_D2<10>
---
---
A<13>
The awk code assumes that 0 should be the first number, and then maintains the wanted line number for the current line in the variable nr. If a number is read from the input that requires one or several lines to be inserted, this is done by the while loop (which also increments the nr variable).
The number in <...> is parsed out by specifying that < and > should be used as field delimiters. The number is then in $2 (the 2nd field).
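With a minimal input you can watch the counter at work:

```shell
# 2 and 3 are missing between <1> and <4>, so two filler lines appear
printf 'A<0>\nA<1>\nA<4>\n' |
    awk -F '[<>]' '{ while ($2 >= ++nr) print "---"; print }'
```

This prints A<0>, A<1>, two "---" lines for the missing 2 and 3, and then A<4>.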
| How to add some symbol (or just add newline) if the numbers in the text are not continuous |
1,554,251,819,000 |
Do all different Linux distributions have the same command lines? What I want to know is the same command line works for all kinds of Linux distributions (CentOS, Fedora, Ubuntu, etc.) or whether they all have different command lines?
|
I'm choosing to interpret this question as a question about the portability of commands and shells across various Linux distributions. A "command line" could mean both "a command written at the shell prompt" and "the shell itself". Hopefully this answer addresses both these interpretations of "command line".
Most Unix systems provide the same basic utilities for working at the shell prompt. These utilities work largely in the same way across systems since they are standardised. Also, the syntax used for writing shell commands is standardised (loops, redirections, pipes, background processes, variable assignments, quoting etc.) The standard is called POSIX and may be found here (see the "Shell & Utilities" section).
On most Unices (especially on Linux for some reason), the standard utilities have been extended with extra functionality, but the functionality described by the POSIX standard should be implemented. If a standard utility does not conform to the POSIX standard, you should probably file a bug report about this.
In particular, the shell itself is often extended to give a more convenient interactive experience, or to be able to provide more advanced shell programming facilities. The shell, being an application like any other, comes in various flavours (implementations) and bash is the most popular on Linux systems (but it's also available as the default shell on e.g. macOS and may be installed on any Unix). The zsh and ksh shells are also popular and provide different sets of extensions, but all should at least be able to do largely what the POSIX standard says using a common syntax (except when using extensions such as special types of arrays and fancier forms of filename pattern matching etc. although some of this happens to be fairly similar between shells too).
As for non-standard tools, such as tools for doing some specific task that is not covered by the POSIX standard (such as talking to a database or adjusting the brightness level of a monitor), or that are specific to a particular Linux distribution (maybe for doing package management), to a version of a particular Linux distribution, or to a particular hardware architecture etc., the portability of the command would depend on the correct variant and version of the tool being installed on a system that supports using that tool.
Across various Linux distributions, the assortment of available tools and utilities is fairly homogenous, and portability is generally good (with the caveat that distribution and architecture specific tools may be different or missing). When looking at using and writing scripts that should work on other types of Unix systems, it becomes more important to know about what extensions are particular to the GNU/Linux variation of tools and utilities, and what can be expected to work on "a generic POSIX/Unix system".
| Do all different Linux distributions have the same command lines? [closed] |
1,554,251,819,000 |
I have a file of genomic data that is approximately 5 million lines long and should have only the characters A, T, C, and G in it. The problem is, I know how large the file should be, but it's slightly larger than that. Which means, something went wrong in an analysis, or there are lines that contain something other than genomic data.
Is there a way to find any line that has something other than an A, T, C, or G? Due to the nature of the file, any other letter, spaces, numbers, symbols shouldn't be present. I've gone through searching symbol by symbol, so I was hoping there would be an easier way.
|
First of all, you definitely do not want to open the file in an editor (it's much too large to edit that way).
Instead, if you just want to identify whether the file contains anything other than A, T, C and G, you may do that with
grep '[^ATCG]' filename
This would return all lines that contain anything other than those four characters.
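For example, with -n added so that offending lines are reported with their line numbers (sample data inlined for illustration):

```shell
# Lines 3 and 4 contain characters outside A, T, C, G
printf 'ATCG\nGATTACA\nAT-CG\nacgt\n' | grep -n '[^ATCG]'
```

This reports line 3 (the dash) and line 4 (lowercase letters, which are also outside the set).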
If you would want to delete these characters from the file, you may do so with
tr -c -d 'ATCG\n' <filename >newfilename
(if this is the correct way to "correct" the file or not, I don't know)
This would remove all characters in the file that are not one of the four, and it would also retain newlines (\n). The edited file would be written to newfilename.
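A quick demonstration on a throwaway string (not real genomic data):

```shell
# The space and the stray characters are deleted; newlines survive
printf 'AT CG\nA7TC-G\n' | tr -c -d 'ATCG\n'
```

Both lines come out as ATCG.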
If it's a systematic error that has added something to the file, then this could possibly be corrected by sed or awk, but we don't yet know what your data looks like.
If you have the file open in vi or vim, then the command
/[^ATCG]
will find the next character in the editing buffer that is not a A, T, C or G.
And :%s/[^ATCG]//g will remove them all.
| Find any line in VI that has something other than ATCG |
1,554,251,819,000 |
Is there a way to split a single line into multiple lines with 3 columns?
New line characters are missing at the end of all the lines in the file.
I tried using awk, but it is splitting each column as one row instead of 3 columns in each row.
awk '{ gsub(",", "\n") } 6' filename
where filename's content looks like:
A,B,C,D,E,F,G,H,I,J,K,L,M,N,O
Desired output has 3 columns in each line:
A,B,C
D,E,F
G,H,I
J,K,L
M,N,O
|
Using awk
$ awk -v RS='[,\n]' '{a=$0;getline b; getline c; print a,b,c}' OFS=, filename
A,B,C
D,E,F
G,H,I
J,K,L
M,N,O
How it works
-v RS='[,\n]'
This tells awk to use any occurrence of either a comma or a newline as a record separator.
a=$0; getline b; getline c
This tells awk to save the current line in variable a, the next line in variable b, and the next line after that in variable c.
print a,b,c
This tells awk to print a, b, and c
OFS=,
This tells awk to use a comma as the field separator on output.
Using tr and paste
$ tr , '\n' <filename | paste -d, - - -
A,B,C
D,E,F
G,H,I
J,K,L
M,N,O
How it works
tr , '\n' <filename
This reads from filename while converting all commas to newlines.
paste -d, - - -
This paste to read three lines from stdin (one for each -) and paste them together, each separated by a comma (-d,).
Alternate awk
$ awk -v RS='[,\n]' '{printf "%s%s",$0,(NR%3?",":"\n")}' filename
A,B,C
D,E,F
G,H,I
J,K,L
M,N,O
How it works
-v RS='[,\n]'
This tells awk to use any occurrence of either a comma or a newline as a record separator.
printf "%s%s",$0,(NR%3?",":"\n")
This tells awk to print the current line followed by either a comma or a newline depending on the value of the current line number, NR, modulo 3.
| Split single line into multiple lines, Newline character missing for all the lines in input file [duplicate] |
1,554,251,819,000 |
I saw many place introduce screen to run background job stably even log out. They use
screen -dmS name
According to screen -h, this option means
-dmS name Start as daemon: Screen session in detached mode.
What is daemon? I don't understand.
I found that if I simply type screen, I can enter automatically into a screen. After I run some command, and press Ctrl+a d, and then log off. The job is still running fine. So is this simple approach OK? Do I really need -dmS to make background job stable?
Let me try to give a summary:
Anything run in screen survives logging out (but you should detach the screen, not quit it, when you log out), no matter what options you have passed to screen.
-dmS is just an option convenient for submitting jobs in the background noninteractively. That is:
screen -dmS nameOfScreen command
|
You would only use -dm if you want to run a command in a screen session and not enter it interactively
-S is just to give the session a usable name so you can reconnect to it again easily later
If you want to use it interactively and don't want to give it a human readable name, you can omit all of those arguments safely.
For example, if you just want to start up screen to run the command, say, /path/to/longTime and you don't want to watch it run you could do it either as
screen -dmS longSession /path/to/longTime
or you could do
screen -S longSession
$ /path/to/longTime
Ctrl+a d
Both would accomplish the same thing, but one is both easier to script and a bit less typing.
| Do I really need -dmS option in screen to run background job stably even log out? |
1,554,251,819,000 |
initramfs archives on Linux can consist of a series of concatenated, gzipped cpio files.
Given such an archive, how can one extract all the embedded archives, as opposed to only the first one?
The following is an example of a pattern which, while it appears to have potential to work, extracts only the first archive:
while gunzip -c | cpio -i; do :; done <input.cgz
I've also tried the skipcpio helper from dracut to move the file pointer past the first cpio image, but the following results in a corrupt stream (not at the correct point in the input) being sent to cpio:
# this isn't ideal -- presumably would need to rerun with an extra skipcpio in the pipeline
# ...until all files in the archive have been reached.
gunzip -c <input.cgz | skipcpio /dev/stdin | cpio -i
|
Debian with the amd64-microcode / intel-microcode packages installed seems to use some kind of mess: an uncompressed cpio archive containing the CPU microcode followed by a gzip-compressed cpio archive with the actual initrd contents. The only way I've ever been able to extract it is by using binwalk (apt install binwalk), which can both correctly list the structure:
binwalk /path/to/initrd
example output:
host ~ # binwalk /boot/initrd.img-5.10.0-15-amd64
DECIMAL HEXADECIMAL DESCRIPTION
--------------------------------------------------------------------------------
0 0x0 ASCII cpio archive (SVR4 with no CRC), file name: "kernel", file name length: "0x00000007", file size: "0x00000000"
120 0x78 ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86", file name length: "0x0000000B", file size: "0x00000000"
244 0xF4 ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode", file name length: "0x00000015", file size: "0x00000000"
376 0x178 ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode/.enuineIntel.align.0123456789abc", file name length: "0x00000036", file size: "0x00000000"
540 0x21C ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode/GenuineIntel.bin", file name length: "0x00000026", file size: "0x00455C00"
4546224 0x455EB0 ASCII cpio archive (SVR4 with no CRC), file name: "TRAILER!!!", file name length: "0x0000000B", file size: "0x00000000"
4546560 0x456000 gzip compressed data, has original file name: "mkinitramfs-MAIN_dTZaRk", from Unix, last modified: 2022-06-14 14:02:57
37332712 0x239A6E8 MySQL ISAM compressed data file Version 9
and extract the separate parts:
binwalk -e /path/to/initrd
example output:
host ~ # binwalk -e /boot/initrd.img-5.10.0-15-amd64
DECIMAL HEXADECIMAL DESCRIPTION
--------------------------------------------------------------------------------
0 0x0 ASCII cpio archive (SVR4 with no CRC), file name: "kernel", file name length: "0x00000007", file size: "0x00000000"
120 0x78 ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86", file name length: "0x0000000B", file size: "0x00000000"
244 0xF4 ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode", file name length: "0x00000015", file size: "0x00000000"
376 0x178 ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode/.enuineIntel.align.0123456789abc", file name length: "0x00000036", file size: "0x00000000"
540 0x21C ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode/GenuineIntel.bin", file name length: "0x00000026", file size: "0x00455C00"
4546224 0x455EB0 ASCII cpio archive (SVR4 with no CRC), file name: "TRAILER!!!", file name length: "0x0000000B", file size: "0x00000000"
4546560 0x456000 gzip compressed data, has original file name: "mkinitramfs-MAIN_dTZaRk", from Unix, last modified: 2022-06-14 14:02:57
37332712 0x239A6E8 MySQL ISAM compressed data file Version 9
This'll give you the separate parts in separate files, and now you can finally extract the proper cpio archive:
host ~ # ls -l _initrd.img-5.10.0-15-amd64.extracted
insgesamt 187M
drwxr-xr-x 3 root root 4,0K 14. Jun 17:53 cpio-root/
-rw-r--r-- 1 root root 114M 14. Jun 17:53 mkinitramfs-MAIN_dTZaRk
-rw-r--r-- 1 root root 39M 14. Jun 17:53 0.cpio
-rw-r--r-- 1 root root 35M 14. Jun 17:53 mkinitramfs-MAIN_dTZaRk.gz
host ~/_initrd.img-5.10.0-15-amd64.extracted # mkdir extracted
host ~/_initrd.img-5.10.0-15-amd64.extracted # cd extracted
host ~/_initrd.img-5.10.0-15-amd64.extracted/extracted # cat ../mkinitramfs-MAIN_dTZaRk | cpio -idmv --no-absolute-filenames
[...]
host ~/_initrd.img-5.10.0-15-amd64.extracted/extracted # ll
insgesamt 28K
lrwxrwxrwx 1 root root 7 14. Jun 17:55 bin -> usr/bin/
drwxr-xr-x 3 root root 4,0K 14. Jun 17:55 conf/
drwxr-xr-x 7 root root 4,0K 14. Jun 17:55 etc/
lrwxrwxrwx 1 root root 7 14. Jun 17:55 lib -> usr/lib/
lrwxrwxrwx 1 root root 9 14. Jun 17:55 lib32 -> usr/lib32/
lrwxrwxrwx 1 root root 9 14. Jun 17:55 lib64 -> usr/lib64/
lrwxrwxrwx 1 root root 10 14. Jun 17:55 libx32 -> usr/libx32/
drwxr-xr-x 2 root root 4,0K 14. Jun 16:02 run/
lrwxrwxrwx 1 root root 8 14. Jun 17:55 sbin -> usr/sbin/
drwxr-xr-x 8 root root 4,0K 14. Jun 17:55 scripts/
drwxr-xr-x 8 root root 4,0K 14. Jun 17:55 usr/
-rwxr-xr-x 1 root root 6,2K 14. Jan 2021 init*
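Once binwalk has reported the decimal offset of the gzip member (4546560 in the listing above), the compressed archive can also be carved out by hand. A sketch, assuming your image path and offset differ:

```shell
# Skip past the uncompressed microcode cpio and unpack the gzip'd one.
# binwalk's DECIMAL column is a 0-based offset; tail -c +N is 1-based,
# hence the +1.
offset=4546560   # <- substitute the offset binwalk printed for your image
mkdir -p extracted && cd extracted
tail -c +"$((offset + 1))" /boot/initrd.img-5.10.0-15-amd64 \
    | zcat | cpio -idmv --no-absolute-filenames
```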
| Extracting concatenated cpio archives |
1,554,251,819,000 |
I just compiled a new kernel and asked myself: What decides during the compilation process which kernel modules are built in the kernel statically?
I then deleted /lib/modules, rebooted, and found that my system works fine, so it appears all essential modules are statically built into the kernel.
Without /lib/modules, the kernel loads 22 modules; with the directory present, it loads 67.
|
You do this as part of the configuration process, usually when you run make config, make menuconfig or similar. You can set the module as built-in (marked as *), or modularised (marked as M).
You can see examples of this in a screenshot of make menuconfig, from here:
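To check after the fact which way a given driver was built, you can inspect the saved kernel configuration. A sketch, assuming your distribution installs a /boot/config-* file (CONFIG_SATA_AHCI is just an example symbol):

```shell
# =y means compiled statically into the kernel image,
# =m means built as a loadable module under /lib/modules.
grep -E '^CONFIG_SATA_AHCI=' /boot/config-"$(uname -r)"
```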
| What decides which kernel modules are built in the kernel statically during compilation? |