| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,375,634,929,000 |
I have a computer with 8G RAM and a 128G SSD. I don't plan hibernating. What swap size would you recommend? Would you change any swappiness?
In the near future I'll compile programs (or even kernels), run some virtual machines (leaving at least 5G free for the system), and maybe occasionally play some games.
|
You should be fine with just 2 or 4 GB of swap, or none at all (since you don't plan on hibernating).
An often-quoted rule of thumb says that the swap partition should be twice the size of the RAM. This rule made sense on older systems to cope with the limited amount of RAM; nowadays your system, unless on heavy load, won't swap at all.
It mostly depends whether you're going to do a memory-intensive use of your machine; if this is the case, you might want to increase the amount of RAM instead.
Note that a SSD is subject to more wear and tear than a hard disk, and is limited by a number of rewrite cycles. This makes it not optimal to host a swap partition.
Edit: Also see this question: Linux, SSD and swap
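If you do want a little swap as a safety valve, a sketch like the following keeps it cheap to change later (the values are illustrative, and the commented-out operations need root):

```shell
#!/bin/sh
# Current swappiness (the kernel default is usually 60):
cat /proc/sys/vm/swappiness

# As root, make the kernel swap only under real memory pressure:
#   sysctl vm.swappiness=10
#   echo 'vm.swappiness=10' >> /etc/sysctl.conf   # persist across reboots

# A swap file is easy to add or resize later if 2-4 GB turns out wrong:
#   fallocate -l 2G /swapfile && chmod 600 /swapfile
#   mkswap /swapfile && swapon /swapfile
```

Using a swap file rather than a partition also sidesteps repartitioning the SSD if you change your mind.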
| 8G RAM and SSD - how big should the swap be? [closed] |
1,375,634,929,000 |
I'm planning a backup strategy based on rsnapshot.
I want to do a full system backup excluding files and directories that would be useless for the restore to have a working system again. I already excluded:
# System:
exclude /dev/*
exclude /proc/*
exclude /sys/*
exclude /tmp/*
exclude /run/*
exclude /mnt/*
exclude /media/*
exclude /lost+found
# Application:
exclude /*.pyc
exclude /*.pyo
I wonder which other entries I can add to the exclude list without compromising the restored system. Talking about a "generic" Linux system, can you suggest further glob extensions, temporary directories, caches, etc. I can exclude safely?
|
First off, you should read up a little on rsync's include/exclude syntax. I get the feeling that what you want to do is better done using ** globs than * globs. (** expands to any number of entries, whereas * expands only to a single entry possibly matching multiple directory entries. The details are in man rsync under Include/Exclude Pattern Rules.)
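The distinction is easy to see with bash's globstar option, which behaves analogously (this illustrates the general * vs ** difference, not rsync's exact matcher):

```shell
#!/bin/sh
# Build a small tree with .pyc files at two depths:
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b/c"
touch "$tmp/top.pyc" "$tmp/a/b/c/deep.pyc"

# * stops at directory boundaries; ** crosses them:
(cd "$tmp" && bash -c 'shopt -s globstar; printf "%s\n" *.pyc')    # top.pyc only
(cd "$tmp" && bash -c 'shopt -s globstar; printf "%s\n" **/*.pyc') # both files

rm -rf "$tmp"
```

So an exclude of /*.pyc only catches .pyc files in the root directory, while /**.pyc catches them anywhere in the tree.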
That said, if you want to be able to restore the system to a known working state from the backup with a minimum of hassle, you should be careful with excluding files or directories. I use rsnapshot myself and have actually taken the opposite approach: include everything except for a few carefully selected directories.
So my rsnapshot.conf actually states (with tabs to make rsnapshot's configuration file parser happy):
interval backup NNN # pick your poison
one_fs 0
exclude /backup/**
exclude /dev/**
exclude /proc/**
exclude /run/**
exclude /sys/**
exclude /tmp/**
backup / ./
and very little else. Yes, it means I might copy a bit more than what is strictly needed, but it ensures that anything not intended as ephemeral is copied. Because rsnapshot uses rsync's hardlink-to-deduplicate behavior, the only real cost to this is during the first run; after that, assuming you have a reasonably sized (compared to your total data set size) backup target location, it takes very little extra in either time or disk space. I exclude the contents of /backup because that's where I mount the backup target file system; not excluding it would lead to copying the backup into itself. However, for simplicity if I ever need to restore onto bare metal, I want to keep the mount point!
In my case I also cannot reasonably use one_fs 1; I run ZFS with currently ~40 file systems. Listing all of those explicitly would be a maintenance nightmare and make working with ZFS file systems a lot more involved than it needs to be.
Pretty much anything you want to exclude above and beyond the above is going to depend on the distribution, anyway, so it's virtually impossible to give a generic answer. That said, you're likely to find some candidates under /var.
| Entries I can safely exclude when doing backups |
1,375,634,929,000 |
Here's my situation: I have a Raspberry Pi with Raspbian installed on it. I also have RetroArch installed and a simple USB gamepad hooked up. Everything works fine, but I wanted to set it up so that pressing a key combination (e.g. L1+L2+R1+R2) would gracefully exit the emulator, so that I don't have to keep a keyboard around. RetroArch's default key to gracefully exit is ESC, and I can't remap it to a key combination due to a limitation in RetroArch (I could, however, remap it to a single gamepad key).
So I was wondering if there were any utilities out there that could listen to the gamepad's button presses and, when a certain combination is pressed, perform an action (sending the ESC key to the emulator). Or is there an easier way to achieve what I want and I'm just being silly?
EDIT: Now that I think about it, it would also be nice if I could have a different key combination execute a bash script that starts the emulator so that I could start it without a keyboard as well.
|
This looks like a common problem with RetroPie/Emulation station.
They address it in the RetroPie-Setup Wiki:
https://github.com/petrockblog/RetroPie-Setup/wiki/EmulationStation#my-emulator-wont-close-through-my-gamepad
It should just require editing your RetroArch config file to add a line:
input_exit_emulator_btn = "6"
Where "6" is the gamepad button identifier.
If you want to make it work with a key combination, you can instead add the following lines: (from http://forum.themaister.net/viewtopic.php?pid=1065#p1065)
input_enable_hotkey_btn = 1
input_exit_emulator_btn = 2
This makes it so that you need to press button 1 to "unlock" hotkeys, and press 2 at the same time to quit.
| Are there any command line utilities that can capture joystick button presses? |
1,375,634,929,000 |
In a python script, I am creating a bunch of symbolic links chained together.
example: link1->link2->link3->.......->somefile.txt
I was wondering how you can change the max number of symlinks to be greater than 20?
|
On Linux (3.5 at least), it's hardcoded to 40 (see follow_link() in fs/namei.c). Note that this is the number of links followed when resolving all the components of a path; you can only change it by recompiling the kernel.
$ ln -s . 0
$ n=0; repeat 50 ln -s $((n++)) $n
$ ls -LdF 39
39/
$ ls -LdF 40
ls: cannot access 40: Too many levels of symbolic links
$ ls -LdF 20/18 10/10/10/6
10/10/10/6/ 20/18/
$ ls -LdF 20/19 10/10/10/7
ls: cannot access 20/19: Too many levels of symbolic links
ls: cannot access 10/10/10/7: Too many levels of symbolic links
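Another way to reproduce the limit is to build a chain of symlinks one deep at a time; on Linux the 41st link in a chain fails with ELOOP (assuming the default MAXSYMLINKS of 40):

```shell
#!/bin/sh
# Build file <- link1 <- link2 <- ... <- link41, then read through the chain.
tmp=$(mktemp -d)
cd "$tmp"
echo hello > file
ln -s file link1
i=1
while [ "$i" -lt 41 ]; do
    ln -s "link$i" "link$((i + 1))"
    i=$((i + 1))
done
cat link40          # 40 links followed: prints "hello"
cat link41 || true  # 41 links: "Too many levels of symbolic links"
cd /
rm -rf "$tmp"
```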
| How do you increase MAXSYMLINKS |
1,375,634,929,000 |
I know that many of the same programs run flawlessly on top of both kernels. I know that historically, the two kernels came from different origins. I know philosophically too that they stood for different things. My question is, today, in 2011, what makes a Unix kernel different from a Linux one, and vice versa?
|
There is no unique thing named "the Unix kernel". There are multiple descendants of the original Unix kernel source code trunk that forked branches from it at different stages and that have evolved separately according to their own needs.
The mainstream ones these days are found in operating systems built either from System V source code (AIX, HP-UX, Solaris) or from BSD source code (OpenBSD, FreeBSD, and Mac OS X).
All of these kernels have their particular strengths and weaknesses, just like Linux and other "from scratch" Unix-like kernels (MINIX, GNU Hurd, ...).
Here is a non-exhaustive list of the areas where differences can be observed, in no particular order:
CPU architecture support
Availability of drivers
File systems supported
Virtualization capabilities
Scheduling features, (alternate scheduling classes, real-time, ...)
Modularity
Observability
Tunability
Reliability
Performance
Scalability
API stability between versions
Open/Close source, license used
Security (eg: privilege granularity)
Memory management
| What are the main differences between Unix and Linux kernels today? |
1,375,634,929,000 |
In stream processing and queuing, we have this notion of backpressure, which is that if a producer process is going faster than a consumer process, we should have a mechanism for slowing down the producer to avoid exceeding available memory/storage (without dropping messages, which may or may not be acceptable).
I've been curious for a little while now about whether stdio can be used to exert backpressure in this sense on a producer Unix process (e.g. foo in foo | bar). It would seem that even if writes to stdout were blocked when a buffer reached capacity, it would still be necessary for the producer process to Do The Right Thing (TM) and not accumulate data in memory waiting to be written to stdout. A single-threaded, blocking program would seem to pass the test, but an asynchronous program may have to have its own internal buffering and backpressure mechanism in order not to explode with data waiting to be written.
So; to what extent is this possible and what are the particulars?
|
A pipe has a limited buffer size. If the producer goes ahead of the consumer, the data progressively fills the pipe's buffer. If the buffer is filled up, the write call in the producer blocks until there is room. So backpressure is built into the system.
The size of the buffer is at least 512 bytes on any POSIX-compliant system, and often larger and potentially configurable on modern unices. See How big is the pipe buffer? for more details.
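A quick sketch of this in action: a producer writes 1 MiB into a pipe whose reader does nothing for a second. The writer simply blocks whenever the buffer fills, and every byte still arrives:

```shell
#!/bin/sh
# The producer (dd) runs flat out, but its write() blocks whenever the pipe
# buffer is full, so the slow consumer still receives all the data.
bytes=$(dd if=/dev/zero bs=1024 count=1024 2>/dev/null | { sleep 1; wc -c; } | tr -d ' ')
echo "$bytes"   # 1048576
```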
| Can writing to stdout place backpressure on a process? |
1,375,634,929,000 |
DNF is already in Fedora, which is a test bed for RHEL/CentOS, so why is CentOS 7 still using yum?
|
CentOS 7 corresponds to Red Hat Enterprise Linux 7, released in June 2014. dnf was made the replacement for yum in Fedora 23, released in November 2015. There's no provision for making fundamental changes to a release like that. Changes made within a release are incremental, perhaps adding features, but never removing existing features and replacing them by others.
A future release of CentOS likely will have dnf unless Red Hat chooses some other tool by the time Red Hat releases version 8, etc.
Further reading:
Life Cycle and Update Policies (Red Hat)
Red Hat Enterprise Linux Life Cycle
During the three-year Production Phase, qualified Critical and Important Security errata advisories (RHSAs) and Urgent and Selected High Priority Bug Fix errata advisories (RHBAs) may be released, as an update to the Red Hat Enterprise Linux Atomic Host image, as they become available. Other errata advisories may be delivered as appropriate.
If available, new or improved hardware enablement and select enhanced software functionality may be provided at the discretion of Red Hat, as an updated image. Updated Red Hat Enterprise Linux Atomic Host images are cumulative and include the contents of previously released updates.
The Red Hat Enterprise Linux life cycle phases are designed to reduce the level of change within each major release over time and make release availability and content more predictable.
What is CentOS Linux? (FAQ)
CentOS conforms fully with Red Hat, Inc's redistribution policies and aims to be functionally compatible with Red Hat Enterprise Linux. CentOS mainly changes packages to remove trademarked vendor branding and artwork.
| When is CentOS going to replace yum with dnf? [closed] |
1,375,634,929,000 |
I wanted to source a file in a docker container running Ubuntu without going inside the container.
I used to:
docker exec -it CONTAINER_ID bash
source FILE
Now I wanted to do:
docker exec -it CONTAINER_ID source FILE
and was surprised that the error pops up:
exec: "source": executable file not found in $PATH
True enough, I realized that source does not seem to be your standard command, as I cannot locate it via which source. ls behaves nicely.
What kind of thing is this source command anyway, and how can I execute it via docker exec -it?
|
source is not an executable; it is a bash shell built-in command that executes the content of the file passed as an argument.
You should run source like this:
docker exec -it CONTAINER_ID bash -c 'source FILE'
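You can confirm the builtin status with type, which (unlike which) knows about shell builtins:

```shell
#!/bin/sh
# `type` reports how the shell would interpret a name:
bash -c 'type source'   # prints: source is a shell builtin
bash -c 'type ls'       # prints a path such as /bin/ls -- a real executable
```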
| How to run `source` with `docker exec`? |
1,375,634,929,000 |
Sounds strange, but I have a shell script that gets triggered by udev rules to mount an attached USB device into the system file tree. The script runs when a USB device is attached to the system, so the rules seem to be fine. I monitor the script's progress through syslog, and it also goes fine; even the mount command returns zero, and it says:
root[1023]: mount: /dev/sda1 mounted on /media/partitionlabel.
But in the end the device is not mounted: it is not listed in /etc/mtab, /proc/mounts, findmnt or mount output, and if I run umount on the device, it also says the device is not mounted.
However, if I run the script manually as root from a terminal, it works perfectly and the device gets mounted, but not when it is run by udev.
I've added an 8-second sleep at the start of the script to make sure it's not a timing problem, and also removed the number from the rules file name so udevd would put the new rules at the bottom of the rules queue and the script would run after the other system rules, but with no success.
The syslog:
(right after the device is attached)
kernel: usb 1-1.2: new high-speed USB device number 12 using dwc_otg
kernel: usb 1-1.2: New USB device found, idVendor=058f, idProduct=6387
kernel: usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
kernel: usb 1-1.2: Product: Mass Storage
kernel: usb 1-1.2: Manufacturer: Generic
kernel: usb 1-1.2: SerialNumber: 24DCF568
kernel: usb-storage 1-1.2:1.0: USB Mass Storage device detected
kernel: scsi host6: usb-storage 1-1.2:1.0
kernel: scsi 6:0:0:0: Direct-Access Generic Flash Disk 8.07 PQ: 0 ANSI: 4
kernel: sd 6:0:0:0: [sda] 1968128 512-byte logical blocks: (1.00 GB/961 MiB)
kernel: sd 6:0:0:0: [sda] Write Protect is off
kernel: sd 6:0:0:0: [sda] Mode Sense: 23 00 00 00
kernel: sd 6:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
kernel: sda: sda1
kernel: sda: p1 size 1968126 extends beyond EOD, enabling native capacity
kernel: sda: sda1
kernel: sda: p1 size 1968126 extends beyond EOD, truncated
kernel: sd 6:0:0:0: [sda] Attached SCSI removable disk
root[1004]: /usr/local/sbin/udev-auto-mount.sh - status: started to automount sda1
root[1019]: /usr/local/sbin/udev-auto-mount.sh - status: Device Label is partitionlabel and Filesystem is vfat.
root[1021]: /usr/local/sbin/udev-auto-mount.sh - status: mounting the device sda1 by filesystem vfat to /media/partitionlabel.
root[1023]: mount: /dev/sda1 mounted on /media/partitionlabel.
root[1024]: /usr/local/sbin/udev-auto-mount.sh status: mount command proceed for vfat, retval is 0
root[1025]: /usr/local/sbin/udev-auto-mount.sh - status: succeed!
Configs:
/etc/udev/rules.d/local-rules:
The defined rule in udev is:
# /etc/udev/rules.d/local-rules
ENV{ID_BUS}=="usb", ACTION=="add", ENV{DEVTYPE}=="partition", \
RUN+="/usr/local/sbin/udev-automounter.sh %k $ENV{ID_FS_LABEL_ENC}"
udev-auto-mount.sh
The script is started by another script, which is defined in the udev rule.
It's straightforward: it makes the mount point directory and mounts the USB device on the mount point using its file system type and some regular options. I've added the -v option to the mount command to be more verbose, and also redirected all output to syslog so I can see how it runs, but it doesn't say much.
#!/bin/sh
## /usr/local/sbin/udev-auto-mount.sh
##
logger -s "$0 - status: started to automount ${1}"
DEVICE=$1
sleep 8
#...
#...
# Checking inputs, getting filesystem type (ID_FS_TYPE), partition label
# (ID_FS_LABEL) and ...
mkdir "/media/${ID_FS_LABEL}"
logger -s "$0 - status: mounting the device ${DEVICE} by filesystem ${ID_FS_TYPE} to /media/${ID_FS_LABEL}."
case $ID_FS_TYPE in
vfat) mount -v -t vfat -o sync,noatime,nosuid,nodev /dev/${DEVICE} "/media/${ID_FS_LABEL}" 2>&1 | logger
let retVal=$?
logger -s "$0 status: mount command proceed for vfat, retval is ${retVal}"
;;
*) mount -v -t auto -o sync,noatime /dev/${DEVICE} "/media/${ID_FS_LABEL}"
;;
esac
if [ ${retVal} -eq 0 ]; then
logger -s "$0 - status: succeed!"
exit 0
else
logger -s "$0 Error: unable to mount the device ${DEVICE}, retval is ${retVal}"
rmdir "/media/${ID_FS_LABEL}"
fi
exit 0
Maybe this helps:
Sometimes, after the script fails to mount the USB device, when I detach the device, errors like these appear in syslog:
kernel: usb 1-1.2: USB disconnect, device number 11
systemd-udevd[143]: error: /dev/sda: No such file or directory
systemd-udevd[977]: inotify_add_watch(7, /dev/sda, 10) failed: No such file or directory
Edit:
This is the mount version:
$ mount -V
mount from util-linux 2.27.1 (libmount 2.27.0: assert, debug)
|
Finally found the answer here.
Actually the problem came from systemd-udevd, which succeeded the original udev. systemd-udevd creates its own mirror of the root file system; when the udev rule mounts the device, it gets mounted and is accessible from:
/proc/{PID of systemd-udevd service}/root/{path to mount point}
but it is not visible from the main root filesystem /.
The Arch Linux wiki (here) suggests:
Warning: To mount removable drives, do not call mount from udev rules. In case of FUSE filesystems, you will get Transport endpoint not connected errors. Instead, you could use udisks that handles automount correctly or to make mount work inside udev rules, copy /usr/lib/systemd/system/systemd-udevd.service to /etc/systemd/system/systemd-udevd.service and replace MountFlags=slave to MountFlags=shared.[3] Keep in mind though that udev is not intended to invoke long-running processes.
Solution:
I copied /usr/lib/systemd/system/systemd-udevd.service to the /etc/systemd/system/ directory and replaced MountFlags=slave with MountFlags=shared, then restarted the system, and now everything works fine.
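Instead of copying the whole unit file, a drop-in override achieves the same thing and survives package upgrades of the original unit (a sketch; run as root, and note the same caveat about long-running processes in udev applies):

```shell
# Create a drop-in that overrides only the MountFlags setting:
mkdir -p /etc/systemd/system/systemd-udevd.service.d
cat > /etc/systemd/system/systemd-udevd.service.d/mountflags.conf <<'EOF'
[Service]
MountFlags=shared
EOF
systemctl daemon-reload
systemctl restart systemd-udevd
```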
| Linux - mount command returns zero/0 but not working |
1,375,634,929,000 |
What is the difference between likely and unlikely calls in the kernel? While searching through the kernel source I found these statements:
# define likely(x) __builtin_expect(!!(x), 1)
# define unlikely(x) __builtin_expect(!!(x), 0)
Could somebody shed some light on it?
|
They are compiler hints for GCC. They're used in conditionals to tell the compiler whether a branch is likely to be taken or not. This can help the compiler lay out the code in a way that's optimal for the most frequent outcome.
They are used like this:
if (likely(some_condition)) {
// the compiler will try and make the code layout optimal for the case
// where some_condition is true, i.e. where this block is run
most_likely_action();
} else {
// this block is less frequently used
corner_case();
}
It should be used with great care (i.e. based on actual branch profiling results). A wrong hint can degrade performance (obviously).
Some examples of how the code can be optimized are easily found by searching for GCC __builtin_expect. This blog post gcc optimisation: __builtin_expect for example details a disassembly with it.
The kind of optimizations that can be done is very processor-specific. The general idea is that often, processors will run code faster if it does not branch/jump all over the place. The more linear it is, and the more predictable the branches are, the faster it will run. (This is especially true for processors with deep pipelines for example.)
So the compiler will emit the code such that the most likely branch will not involve a jump if that's what the target CPU prefers, for instance.
| What is the difference between likely and unlikely calls in Kernel? |
1,375,634,929,000 |
I've got a few processes with a known name that all write to files in a single directory. I'd like to log the number of disk block reads and writes over a period (not just file access) to test whether a parameter change reduces the amount of I/O significantly. I'm currently using iostat -d -p, but that is limited to the whole partition.
|
I realize this is going to sound both simplistic and absurd, but if you have control over the apps in question (maybe in a test environment) you could mount ONLY that directory on a partition of its own; then iostat, etc. would tell you about it and nothing else.
If there are physical drives involved you could fake it up with a loopback mount à la
dd if=/dev/zero of=/bigdisk/LOOPFILE bs=1M count=1024   # 1 GB loopback file
mke2fs -F -j /bigdisk/LOOPFILE
mkdir /tmpcopy
mount -o loop /bigdisk/LOOPFILE /tmpcopy
cp -r -p "$SPECIALDIR2MONITOR/." /tmpcopy
umount /tmpcopy
mount -o loop /bigdisk/LOOPFILE "$SPECIALDIR2MONITOR"
That would not completely remove all competing disk I/O, but
I'm pretty sure iostat's output would be more specific to your need.
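Since the processes are known by name, another option (assuming a kernel with CONFIG_TASK_IO_ACCOUNTING, which is typical) is the per-process accounting in /proc/PID/io, where read_bytes and write_bytes count actual block-device traffic rather than page-cache hits. Here myapp is a placeholder process name:

```shell
#!/bin/sh
# Print block-level bytes read/written by every process named "myapp".
# Sampling twice and subtracting gives the I/O done over an interval.
for pid in $(pgrep myapp || true); do
    awk -v pid="$pid" '/^(read|write)_bytes/ { print pid, $0 }' "/proc/$pid/io"
done

# Every process has such a file; inspect your own shell's counters:
cat /proc/self/io
```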
| How can I monitor disk I/O in a particular directory? |
1,375,634,929,000 |
When I used Windows (a very very long time ago!), and Mac OS X, there was always a central "repository" of fonts shared and accessed by the OS and all programs (the font folder in Windows, and Font Book in Mac OS X).
How are fonts managed in Linux? Is there also a central store for fonts that all programs (the shell with no X, with X, window managers, other GUI software) can use? Or are fonts managed separately? What can I do to efficiently and easily manage my fonts in Linux?
|
There are two mechanisms for fonts in X land: server-side and client-side.
The traditional way to render fonts is for the client to tell the server “render foo at position (x,y) in font F” (where a font specification includes a face, size, encoding and other attributes). Either the X server itself, or a specialized program called a font server, opens the font file to build the description of each glyph. The fonts can be bitmap or vector fonts, but the vector fonts are converted to bitmaps before rendering.
Most modern programs use client-side font rendering, often through xft and fontconfig. A new mechanism was needed because the server-side font rendering didn't support anti-aliasing.
Outside X (i.e. on a VGA console), there are VGA fonts, which are bitmap fonts of specific sizes. But compared to X11, no one uses the VGA console, so not much effort is spent on them.
In practice, you'll want to configure fonts in two ways:
For older-style programs: the font directories are listed via FontPath directives in xorg.conf and can be manipulated with xset fp commands by the user running X. If you install new fonts, you may need to run mkfontdir.
For newer-style programs, including all Gtk (Gnome, etc.) and Qt (KDE, etc.) programs: fonts are in the directories indicated by <dir> directives in /etc/fonts/fonts.conf, ~/.fonts.conf and a few other places. See the fontconfig documentation for more information. If you install new fonts, you may need to run fc-cache.
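For the fontconfig side, installing a font for a single user is typically just a copy plus a cache refresh. This is an illustrative command sequence: MyFont.ttf is a placeholder file name, and the per-user directory may also be ~/.fonts on older setups:

```shell
# Per-user install: drop the file in place and rebuild the cache.
mkdir -p ~/.local/share/fonts
cp MyFont.ttf ~/.local/share/fonts/     # MyFont.ttf is a placeholder
fc-cache -f ~/.local/share/fonts        # refresh fontconfig's cache
fc-list | grep -i myfont                # verify fontconfig now sees it
```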
| How does Linux manage fonts? |
1,375,634,929,000 |
I used to use yum to list all installed packages:
yum list installed
Now I need to list top level packages only i.e. if a package is a dependency of another, it should not be shown. GNOME's Add/Remove Software can do this but I need the output in a terminal.
I checked yum manual but I could not find a switch for that.
|
You can use the tool package-cleanup, which is part of yum-utils. Besides finding packages which are not available from the current yum repositories, finding packages with broken dependencies, pruning old kernels, and finding duplicate packages, it can also find packages on which no other packages depend. Those are called leaves. Do
yum install yum-utils
to get package-cleanup and
package-cleanup --leaves --all
to get a list of all leaves.
| List installed, top-level packages in Fedora |
1,375,634,929,000 |
What was the first file system in Linux distribution/s?
The "parallel" to FAT in Windows in the early nineties?
I want to know the name so I can read about it; perhaps it was the same one used in Unix (the Unix file system).
|
Linux initially used the MINIX file system, and very early distributions relied on that. The extended file system quickly took over, followed by Ext2 and xiafs (which was never much developed, and ultimately disappeared in favour of Ext2).
It was also possible to run Linux on FAT, using umsdos.
| What was the first file system in Linux distribution/s? |
1,375,634,929,000 |
The pernicious USB-stick stall problem - LWN.net, November 2013.
Artem S. Tashkinov recently encountered a problem that will be familiar to at least some LWN readers. Plug a slow storage device (a USB stick, say, or a media player) into a Linux machine and write a lot of data to it. The entire system proceeds to just hang, possibly for minutes.
This time around, though, Artem made an interesting observation: the system would stall when running with a 64-bit kernel, but no such problem was experienced when using a 32-bit kernel on the same hardware.
The article explains that with a 64-bit kernel, the dirty page cache (writeback cache) was allowed to grow to 20% of memory by default. With a 32-bit kernel, it was effectively limited to ~180MB.
Linus suggested limiting it to ~180MB on 64-bit as well; however, current Linux (v4.18) does not do this. Compare Linus's suggested patch to the current function in Linux 4.18. The biggest argument against such changes came from Dave Chinner. He pointed out that reducing buffering too much would cause filesystems to suffer from fragmentation. He also explained that "for streaming IO we typically need at least 5s of cached dirty data to even out delays."
I am confused. Why did the USB-stick stall cause the entire system to hang?
I am confused because I read an earlier article describing code merged in 2011 (Linux 3.2). It shows the kernel should have been controlling the dirty page cache on a per-device basis:
No-I/O dirty throttling - LWN.net, 2011
That is where Fengguang's patch set comes in. He is attempting to create a control loop capable of determining how many pages each process should be allowed to dirty at any given time. Processes exceeding their limit are simply put to sleep for a while to allow the writeback system to catch up with them.
[...]
The goal of the system is to keep the number of dirty pages at the setpoint; if things get out of line, increasing amounts of force will be applied to bring things back to where they should be.
[...]
This ratio cannot really be calculated, though, without taking the backing device (BDI) into account. A process may be dirtying pages stored on a given BDI, and the system may have a surfeit of dirty pages at the moment, but the wisdom of throttling that process depends also on how many dirty pages exist for that BDI. [...] A BDI with few dirty pages can clear its backlog quickly, so it can probably afford to have a few more, even if the system is somewhat more dirty than one might like. So the patch set tweaks the calculated pos_ratio for a specific BDI using a complicated formula looking at how far that specific BDI is from its own setpoint and its observed bandwidth. The end result is a modified pos_ratio describing whether the system should be dirtying more or fewer pages backed by the given BDI, and by how much.
Per-device control was added even earlier than this: Smarter write throttling, 2007 LWN.net. [PATCH 0/23] per device dirty throttling -v10. It was merged in Linux version 2.6.24.
|
The 2013 article is wrong
A mistake in LWN? Are you sure?
Long queues in I/O devices, created by "background" writeback
Limitations of "no-I/O dirty throttling"?
Genuine reports of "USB-stick stall" problems
The dirty limit was calculated incorrectly [2014]
Huge page allocations blocking on IO [2011]
"Dirty pages reaching the end of the LRU"? [pre-2013]
1. The 2013 article is wrong
The "USB-stick stall" article gives you a very misleading impression. It misrepresents both the original report, and the series of responses.
Artem did not report the entire system hanging when it flushed cached writes to a USB stick. His original report only complained that running the command "sync" could take up to "dozens of minutes". This distinction is made explicit in a response by Linus Torvalds:
It's actually really easy to reproduce by just taking your average
USB key and trying to write to it. I just did it with a random ISO
image, and it's painful. And it's not that it's painful for doing
most other things in the background, but if you just happen to run
anything that does "sync" (and it happens in scripts), the thing just
comes to a screeching halt. For minutes.
2. A mistake in LWN? Are you sure?
Jon Corbet had fifteen years of experience, reporting Linux kernel development on a weekly basis. I expected the article was at least close to getting it right, in some sense. So I want to process the two different records, and look out for detailed points where they agree or disagree.
I read all of the original discussion, using the archives at lore.kernel.org. I think the messages are pretty clear.
I am 100% certain the article misinterprets the discussion. In comments underneath the article, at least two readers repeated the false claim in their own words, and no-one corrected them. The article continues this confusion in the third paragraph:
All that data clogs up the I/O queues, possibly delaying other operations. And, as soon as somebody calls sync(), things stop until that entire queue is written.
This could be confusion from Linus saying "the thing just comes to a screeching halt". "The thing" refers to "anything that does sync". But Corbet writes as if "the thing" meant "the entire system".
As per Linus, this is a real-world problem. But the vast majority of "things" do not call into the system-wide sync() operation.[1]
Why might Corbet confuse this with "the entire system"? I guess there have been a number of problems, and after a while it gets hard to keep them all separate in your head :-). And although LWN has described the development of per-device (and per-process) dirty throttling, in general I suppose there is not much written about such details. A lot of documents only describe the global dirty limit settings.
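The global settings in question are all visible under /proc/sys/vm, and the per-BDI knobs added by the per-device throttling work live under /sys/class/bdi (a read-only sketch; the commented-out write needs root, and the maj:min value is a placeholder):

```shell
#!/bin/sh
# Global dirty-writeback knobs. The ratios are percentages of available
# memory; a non-zero *_bytes value takes precedence over its ratio.
grep -H '' /proc/sys/vm/dirty_ratio \
           /proc/sys/vm/dirty_background_ratio \
           /proc/sys/vm/dirty_bytes \
           /proc/sys/vm/dirty_background_bytes

# Per-device (per-BDI) caps, e.g. to limit a slow USB stick harder:
#   echo 5 > /sys/class/bdi/<maj:min>/max_ratio
ls /sys/class/bdi
```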
3. Long queues in I/O devices, created by "background" writeback
Artem posted a second report in the thread, where "the server almost stalls and other IO requests take a lot more time to complete".
This second report does not match claims about a USB-stick hang. It happened after creating a 10GB file on an internal disk. This is a different problem.
The report did not confirm whether this could be improved by changing the dirty limits. And there is a more recent analysis of cases like this. There is a significant problem when it clogs up the I/O queue of your main disk. You can suffer long delays on a disk that you constantly rely on, to load program code on-demand, save documents and app data using write() + fsync(), etc.
Toward less-annoying background writeback -- LWN.net, 2016
When the memory-management code decides to write a range of dirty data, the result is an I/O request submitted to the block subsystem. That request may spend some time in the I/O scheduler, but it is eventually dispatched to the driver for the destination device.
The problem is that, if there is a lot of dirty data to write, there may end up being vast numbers (as in thousands) of requests queued for the device. Even a reasonably fast drive can take some time to work through that many requests. If some other activity (clicking a link in a web browser, say, or launching an application) generates I/O requests on the same block device, those requests go to the back of that long queue and may not be serviced for some time. If multiple, synchronous requests are generated — page faults from a newly launched application, for example — each of those requests may, in turn, have to pass through this long queue. That is the point where things appear to just stop.
[...]
Most block drivers also maintain queues of their own internally. Those lower-level queues can be especially problematic since, by the time a request gets there, it is no longer subject to the I/O scheduler's control (if there is an I/O scheduler at all).
The patches were merged to improve this in late 2016 (Linux 4.10). This code is referred to as "writeback throttling" or WBT. Searching the web for wbt_lat_usec also finds a few more stories about this. (The initial doc writes about wb_lat_usec, but it is out of date). Be aware that writeback throttling does not work with the CFQ or BFQ I/O schedulers. CFQ has been popular as a default I/O scheduler, including in default kernel builds up until Linux v4.20. CFQ is removed in kernel v5.0.
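As a sketch of how to check this on a given machine (the sysfs file only exists on kernels with WBT support, and not for devices using CFQ/BFQ):

```shell
# Print the writeback-throttling latency target (microseconds) per disk.
# A value of 0 disables WBT; the file is simply absent where WBT does not apply.
for f in /sys/block/*/queue/wbt_lat_usec; do
    [ -e "$f" ] || continue
    printf '%s: %s\n' "$f" "$(cat "$f")"
done
```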
There were tests to illustrate the problem (and prototype solution) on both an SSD (which looked like NVMe) and a "regular hard drive". The hard drive was "not quite as bad as deeper queue depth devices, where we have hugely bursty IO".
I'm not sure about the "thousands" of queued requests, but there are at least NVMe devices which can queue hundreds of requests. Most SATA hard drives allow 32 requests to be queued ("NCQ"). Of course the hard drive would take longer to complete each request.
4. Limitations of "no-I/O dirty throttling"?
"No-I/O dirty throttling" is quite a complex engineered system. It has also been tweaked over time. I am sure there were, and still are, some limitations inside this code.
The LWN writeup, code/patch comments, and the slides from the detailed presentation show that a large number of scenarios have been considered. This includes the notorious slow USB stick v.s. fast main drive. The test cases include the phrase "1000 concurrent dd's" (i.e. sequential writers).
So far, I do not know how to demonstrate and reproduce any limitation inside the dirty throttling code.
I have seen several descriptions of problem fixes which were outside of the dirty throttling code. The most recent fix I found was in 2014 - see subsequent sections. In the thread that LWN is reporting on, we learn:
In last few releases problems like this were
caused by problems in reclaim which got fed up by seeing lots of dirty
/ under writeback pages and ended up stuck waiting for IO to finish.
[...] The systemtap script caught those type of areas and I
believe they are fixed up.
Mel Gorman also said there were some "outstanding issues".
There are still problems though. If all dirty pages were backed by a slow device then dirty limiting is still eventually going to cause stalls in dirty page balancing [...]
This passage was the only thing I could find in the reported discussion thread, that comes anywhere near backing up the LWN interpretation. I wish I understood what it was referring to :-(. Or how to demonstrate it, and why it did not seem to come up as a significant issue in the tests that Artem and Linus ran.
5. Genuine reports of "USB-stick stall" problems
Although neither Artem nor Linus reported a "USB-stick stall" that affected the whole system, we can find several reports of this elsewhere. This includes reports in recent years - well after the last known fix.
I do not know what the difference is. Maybe their test conditions were different in some way, or maybe there are some new problem(s) created in the kernel since 2013...
https://utcc.utoronto.ca/~cks/space/blog/linux/USBDrivesKillMyPerformance / https://utcc.utoronto.ca/~cks/space/blog/linux/FixingUSBDriveResponsiveness [2017]
Why is my PC freezing while I'm copying a file to a pendrive? [January 2014]
System lags when doing large R/W operations on external disks [2018]
Linux GUI becomes very unresponsive when doing heavy disk I/O - what to tune? [2019]
6. The dirty limit was calculated incorrectly [2014]
There was an interesting fix in January 2014 (applied in kernel v3.14). In the question, we said the default limit was set to 20% of memory. Actually, it is set to 20% of the memory which is available for dirty page cache. For example, the kernel buffers data sent over TCP/IP network sockets. The socket buffers cannot be dropped and replaced with dirty page cache :-).
The problem was that the kernel was counting swappable memory, as if it could swap data out in favour of dirty page cache. Although this is possible in theory, the kernel is strongly biased to avoid swapping, and prefer dropping page cache instead. This problem was illustrated by - guess what - a test involving writing to a slow USB stick, and noticing that it caused stalls across the entire system :-).
See Re: [patch 0/2] mm: reduce reclaim stalls with heavy anon and dirty cache
The fix is that dirty_ratio is now treated as a proportion of file cache only.
According to the kernel developer who suffered the problem, "the trigger conditions seem quite plausible - high anon memory usage w/ heavy buffered IO and swap configured - and it's highly likely that this is happening in the wild." So this might account for some user reports around 2013 or earlier.
7. Huge page allocations blocking on IO [2011]
This was another issue: Huge pages, slow drives, and long delays (LWN.net, November 2011). This issue with huge pages should now be fixed.
Also, despite what the article says, I think most current Linux PCs do not really use huge pages. This might be changing starting with Debian 10. However, even as Debian 10 starts allocating huge pages where possible, it seems clear to me that it will not impose any delays, unless you change another setting called defrag to "always".
8. "Dirty pages reaching the end of the LRU" [pre-2013]
I have not looked into this, but I found it interesting:
mgorman 2011: This is a new type of USB-related stall because it is due to synchronous compaction writing where as in the past the big problem was dirty
pages reaching the end of the LRU and being written by reclaim.
mgorman 2013: The work in that general area dealt with
such problems as dirty pages reaching the end of the LRU (excessive CPU
usage)
If these are two different "reaching the end of the LRU" problems, then the first one sounds like it could be very bad. It sounds like when a dirty page become the least recently used page, any attempt to allocate memory would be delayed, until that dirty page finished being written.
Whatever it means, he says the problem is now fixed.
[1] One exception: for a while, the Debian package manager dpkg used sync() to improve performance. This was removed, because of the exact problem that sync() could take an extremely long time. They switched to an approach using sync_file_range() on Linux. See Ubuntu bug #624877, comment 62.
Part of a previous attempt at answering this question - this should mostly be redundant:
I think we can explain both of Artem's reports as being consistent with the "No-I/O dirty throttling" code.
The dirty throttling code aims to allow each backing device a fair share of the "total write-back cache", "that relates to its current average writeout speed in relation to the other devices". This phrasing is from the documentation of /sys/class/bdi/.[2]
In the simplest case, only one backing device is being written to. In that case, the device's fair share is 100%. write() calls are throttled to control the overall writeback cache, and keep it at a "setpoint".
Writes start being throttled half-way between dirty_background_ratio - the point that initiates background writeout - and dirty_ratio - the hard limit on the writeback cache. By default, these are 10% and 20% of available memory.
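These thresholds are ordinary sysctls; reading them is harmless (changing them requires root). The values shown in the comments are the usual defaults, not guaranteed:

```shell
# Background writeout starts at dirty_background_ratio percent of
# available memory; write() calls start being throttled on the way
# up to dirty_ratio percent.
cat /proc/sys/vm/dirty_background_ratio   # typically 10
cat /proc/sys/vm/dirty_ratio              # typically 20
```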
For example, you could still fill up to 15% writing to your main disk only. You could have gigabytes of cached writes, according to how much RAM you have. At that point, write() calls will start being throttled to match the writeback speed - but that's not a problem. I expect the hang problems are for read() and fsync() calls, which get stuck behind large amounts of unrelated IO. This is the specific problem addressed by the "writeback throttling" code. Some of the WBT patch submissions include problem descriptions, showing the horrific delays this causes.
Similarly, you could fill up the 15% entirely with writes to a USB stick. Further write()s to the USB will be throttled. But the main disk would not be using any of its fair share. If you start calling write() on your main filesystem then it will not be throttled, or will at least be delayed much less. And I think the USB write()s would be throttled even more, to bring the two writers into balance.
I expect the overall writeback cache could temporarily rise above the setpoint. In some more demanding cases, you can hit the hard limit on overall writeback cache. The hard limit defaults to 20% of available memory; the configuration option is dirty_ratio / dirty_bytes. Perhaps you can hit this because a device can slow down (perhaps because of a more random I/O pattern), and dirty throttling does not recognize the change in speed immediately.
[2] You might notice this document suggests you can manually limit the proportion of writeback cache, that can be used for a specific partition/filesystem. The setting is called /sys/class/bdi/*/max_ratio. Be aware that "if the device you'd like to limit is the only one which is currently written to, the limiting doesn't have a big effect."
| Why were "USB-stick stall" problems reported in 2013? Why wasn't this problem solved by the existing "No-I/O dirty throttling" code? |
1,375,634,929,000 |
I'm studying about virtual networking.
I saw the youtube video that makes tap interfaces and adds them to Open Virtual Switch.
From here, I don't know what tap interfaces are.
What's different between normal interface like eth0 and tap interface?
Is tap interface just virtual L2 interface to add it to OVS?
If it is right, what's the purpose of tap interface w/o attaching it to OVS?
|
Open vSwitch is a virtual switch. It works by attaching to several Ethernet devices in raw packet/Ethernet mode. It switches Ethernet frames between those Ethernet devices by reading/writing raw Ethernet frames to/from those network interfaces.
This is nice if you want to switch between real Ethernet devices. If you want to connect a VM to your Open vSwitch instance, you need to attach Open vSwitch to a virtual Ethernet device representing your connection to this VM: writing a packet to this virtual network interface should send the Ethernet frame to the VM, and packets sent by the VM should be delivered to this virtual network interface.
TAP network interfaces are designed for this. They represent virtual Ethernet devices. A TAP network interface is managed by some user process:
when an Ethernet frame is sent to the network interface, the user process receives this Ethernet frame;
the user process can send Ethernet frames to this network interface.
This is often used for:
VPNs (such as OpenVPN): when an Ethernet frame is sent to the TAP network interface, the VPN process receives it and forwards it through a tunnel. Conversely, when the user process receives an Ethernet frame from the tunnel, it forwards it to the TAP interface;
virtual machines: when an Ethernet frame is sent to the TAP interface, the hypervisor/emulator receives it and forwards it to the VM. Conversely, when the VM sends a packet to its interface, the hypervisor/emulator forwards it to the TAP interface.
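As a concrete sketch of creating one by hand (the name tap0 is arbitrary; this needs root or CAP_NET_ADMIN, and the tun kernel module):

```shell
# Create a persistent TAP device that a process owned by $USER may
# attach to (by opening /dev/net/tun), bring it up, inspect it,
# then remove it again.
ip tuntap add dev tap0 mode tap user "$USER"
ip link set tap0 up
ip link show tap0
ip tuntap del dev tap0 mode tap
```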
For Open vSwitch, you typically create a TAP interface which represents your connection to a VM and can then connect this network interface to Open vSwitch.
| what is difference between tap interface and normal interface? |
1,375,634,929,000 |
I'm an undergraduate student and I am looking for a spectrum analyzer (or at least a collection of functions) that will output the frequency range of a sound that is played, as an array.
|
If you just need a library, GStreamer might be what you need.
Otherwise these look pretty good:
Sonic Visualiser
Spek
Spectrum3D
| Linux Audio Spectrum Analyser |
1,375,634,929,000 |
Is there any way to view or manipulate the mount namespace for an arbitrary process?
For example, a docker container is running which has a local mount to an NFS server. It can be seen from inside the container, but on the outside, the host has no knowledge of it. With network namespaces this is doable. e.g. pipework
However, I see nothing about this for mount namespaces. Is there an API or sysfs layer exposed to view these mounts and manipulate or create new ones?
|
Yes. You can look at its /proc/$PID/mountinfo or else you can use the findmnt -N switch - about which findmnt --help says:
-N, --task <tid>
use alternative namespace (/proc/<tid>/mountinfo file)
findmnt also tracks the PROPAGATION flag which is a mountinfo field which reports on exactly this information - which processes share which mounts.
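For example, to see the mount table as some other process sees it (PID 1 here; any PID whose /proc entry you are allowed to read works the same way):

```shell
# findmnt parses /proc/<pid>/mountinfo for you:
findmnt -N 1 | head -n 3
# ...the raw file behind it:
head -n 3 /proc/1/mountinfo
```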
Also, you can always nsenter any type of namespace you like - provided you have the correct permissions, of course.
nsenter --help
Usage:
nsenter [options] <program> [args...]
Options:
-t, --target <pid> target process to get namespaces from
-m, --mount [=<file>] enter mount namespace
-u, --uts [=<file>] enter UTS namespace (hostname etc)
-i, --ipc [=<file>] enter System V IPC namespace
-n, --net [=<file>] enter network namespace
-p, --pid [=<file>] enter pid namespace
-U, --user [=<file>] enter user namespace
-S, --setuid <uid> set uid in user namespace
-G, --setgid <gid> set gid in user namespace
-r, --root [=<dir>] set the root directory
-w, --wd [=<dir>] set the working directory
-F, --no-fork do not fork before exec'ing <program>
-h, --help display this help and exit
-V, --version output version information and exit
For more details see nsenter(1).
| View/manipulate mount namespaces in Linux |
1,375,634,929,000 |
What is the relation and the difference between xattr and chattr? I want to know when I set a chattr attribute in Linux what is happening inside the Linux kernel and inode metadata.
|
The attributes as handled by lsattr/chattr on Linux, some of which can be stored by quite a few file systems (ext2/3/4, reiserfs, JFS, OCFS2, btrfs, XFS, nilfs2, hfsplus...) and even queried over CIFS/SMB (when with POSIX extensions), are flags. Just bits that can be turned on or off to disable or enable an attribute (like immutable or archive...). How they are stored is file system specific, but generally as a 16/32/64 bit record in the inode.
The full list of flags is found on Linux native filesystems (ext2/3/4, btrfs...), though not all of the flags apply to every one of them, and for other non-native FS, Linux tries to map them to equivalent features in the corresponding file system. For instance the simmutable flag as stored by OSX on HFS+ file systems is mapped to the corresponding immutable flag in Linux chattr. Which flag is supported by which file system is hardly documented at all. Often, reading the kernel source code is the only option.
Extended attributes on the other hand, as set with setfattr or attr on Linux, store more than flags. They are attached to a file as well, and are key/value pairs where both key and value can be arbitrary arrays of bytes (though with size limitations on some file systems).
The key can be for instance: system.posix_acl_access or user.rsync.%stat. The system namespace is reserved for the system (you wouldn't change the POSIX ACLs with setfattr, but more with setfacl, POSIX ACLs just happen to be stored as extended attributes at least on some file systems), while the user namespace can be used by applications (here rsync uses it for its --fake-super option, to store information about ownership or permissions when you're not superuser).
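A small round-trip with the attr tools illustrates the key/value nature (this assumes setfattr/getfattr are installed and the filesystem supports user xattrs; the key user.comment is just an example):

```shell
touch demo.txt
# Store an arbitrary byte string under a key in the "user" namespace:
setfattr -n user.comment -v 'an arbitrary byte string' demo.txt
# Read it back:
getfattr -n user.comment demo.txt
# Contrast with the flag-style attributes, which are single bits:
#   chattr +i demo.txt ; lsattr demo.txt
```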
Again, how they are stored is filesystem specific. See WikiPedia for more information.
| Difference between xattr and chattr |
1,375,634,929,000 |
On Linux, editing file_B in vim I want to add line 10-25 from file_A. Example: One has a whole set of HTML-Documents with quite the same header and yes, one could create a template and start from there, but still:
I am looking for a simple [and elegant] way to insert a range of lines from one file and add it to the file I am currently editing. In vim.
The solutions I found so far but are not exactly what I'm looking for, are
inside file_B the command :read file_A will add *the whole file_A* into file_B. Too much
there is copy&paste inside vim – but for this one must have opened file_A
inside file_A :10,25w! file_B will 'send' the given range of lines to file_B. This seems quite close to copy&paste
one can open multiple windows in vim But as I work on an 8inch screen, multiple windows make orientation hard
As I wasn't able to find a comfortable solution I wonder if I just misunderstood something or the ways I found so far are commonly used and taken as the standard way.
|
Best solution I can come up with, is to externally call sed and combine that with :read !
:r !sed -n -e '10,25p' file_A
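Any line-slicing filter works in place of sed here; for instance, with the same 10-25 range and file_A as the source:

```shell
# Lines 10 through 25 of file_A, two equivalent ways.
# From vim:  :r !awk 'NR>=10 && NR<=25' file_A
awk 'NR>=10 && NR<=25' file_A
head -n 25 file_A | tail -n 16    # 25 - 10 + 1 = 16 lines
```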
| vi[m] read range of lines from another file |
1,375,634,929,000 |
[root@datacenteronline ~]# ssh [email protected]
Last login: Wed Apr 17 09:55:45 2013 from 192.168.1.187
[root@localhost ~]# ls /proc/ | grep 2266
[root@localhost ~]# cd /proc/2266
[root@localhost 2266]# ls
attr cpuset limits net root statm
autogroup cwd loginuid numa_maps sched status
auxv environ maps oom_adj schedstat syscall
cgroup exe mem oom_score sessionid task
clear_refs fd mountinfo oom_score_adj smaps wchan
cmdline fdinfo mounts pagemap stack
coredump_filter io mountstats personality stat
[root@localhost 2266]# ls -al /proc/2266
total 0
dr-xr-xr-x 7 apache apache 0 Apr 17 09:45 .
dr-xr-xr-x 266 root root 0 Apr 17 09:11 ..
dr-xr-xr-x 2 apache apache 0 Apr 17 09:45 attr
-rw-r--r-- 1 root root 0 Apr 17 09:45 autogroup
-r-------- 1 root root 0 Apr 17 09:45 auxv
-r--r--r-- 1 root root 0 Apr 17 09:45 cgroup
--w------- 1 root root 0 Apr 17 09:45 clear_refs
-r--r--r-- 1 root root 0 Apr 17 09:45 cmdline
-rw-r--r-- 1 root root 0 Apr 17 09:45 coredump_filter
-r--r--r-- 1 root root 0 Apr 17 09:45 cpuset
lrwxrwxrwx 1 root root 0 Apr 17 09:45 cwd -> /
-r-------- 1 root root 0 Apr 17 09:45 environ
lrwxrwxrwx 1 root root 0 Apr 17 09:45 exe -> /usr/local/apache2/bin/httpd
dr-x------ 2 root root 0 Apr 17 09:45 fd
dr-x------ 2 root root 0 Apr 17 09:45 fdinfo
-r-------- 1 root root 0 Apr 17 09:45 io
-rw------- 1 root root 0 Apr 17 09:45 limits
-rw-r--r-- 1 root root 0 Apr 17 09:45 loginuid
-r--r--r-- 1 root root 0 Apr 17 09:45 maps
-rw------- 1 root root 0 Apr 17 09:45 mem
-r--r--r-- 1 root root 0 Apr 17 09:45 mountinfo
-r--r--r-- 1 root root 0 Apr 17 09:45 mounts
-r-------- 1 root root 0 Apr 17 09:45 mountstats
dr-xr-xr-x 6 apache apache 0 Apr 17 09:45 net
-r--r--r-- 1 root root 0 Apr 17 09:45 numa_maps
-rw-r--r-- 1 root root 0 Apr 17 09:45 oom_adj
-r--r--r-- 1 root root 0 Apr 17 09:45 oom_score
-rw-r--r-- 1 root root 0 Apr 17 09:45 oom_score_adj
-r--r--r-- 1 root root 0 Apr 17 09:45 pagemap
-r--r--r-- 1 root root 0 Apr 17 09:45 personality
lrwxrwxrwx 1 root root 0 Apr 17 09:45 root -> /
-rw-r--r-- 1 root root 0 Apr 17 09:45 sched
-r--r--r-- 1 root root 0 Apr 17 09:45 schedstat
-r--r--r-- 1 root root 0 Apr 17 09:45 sessionid
-r--r--r-- 1 root root 0 Apr 17 09:45 smaps
-r--r--r-- 1 root root 0 Apr 17 09:45 stack
-r--r--r-- 1 root root 0 Apr 17 09:45 stat
-r--r--r-- 1 root root 0 Apr 17 09:45 statm
-r--r--r-- 1 root root 0 Apr 17 09:45 status
-r--r--r-- 1 root root 0 Apr 17 09:45 syscall
dr-xr-xr-x 29 apache apache 0 Apr 17 09:45 task
-r--r--r-- 1 root root 0 Apr 17 09:45 wchan
Could anyone tell me what this is?
|
This is likely to be a thread. In Linux, threads have a different process ID to the other threads in the process. When you look at the PID column in ps, you're actually looking at the thread group ID (TGID), which is common amongst all threads in a process. This is for historical reasons due to the way threads evolved in Linux.
For example, on my system, chromium has a number of threads in a process (multiple processes too):
$ ps -efL | grep chromium
[UID PID PPID LWP C NLWP STIME TTY TIME CMD]
[...]
camh 10927 5182 10927 0 4 11:07 ? 00:00:00 /usr/lib/chromium/chromium ...
camh 10927 5182 10929 0 4 11:07 ? 00:00:00 /usr/lib/chromium/chromium ...
camh 10927 5182 10930 0 4 11:07 ? 00:00:00 /usr/lib/chromium/chromium ...
camh 10927 5182 10933 0 4 11:07 ? 00:00:00 /usr/lib/chromium/chromium ...
The second column is the TGID (although it is labelled as PID) and the fourth column is LWP (light-weight process).
$ ls /proc | grep 10927
10927
$ ls /proc | grep 10929
$ cd /proc/10929
$ head -n 5 status
Name: Chrome_ChildIOT
State: S (sleeping)
Tgid: 10927
Pid: 10929
PPid: 5182
You can see that process 10929 does not show up in /proc, but you can cd to it. If you look in the status file, you'll see that it is part of "process" (thread group) 10927, and from the output above, that process does appear in /proc.
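You can poke at this yourself with any process, even the shell, whose lone thread still shows up under task/:

```shell
# Thread IDs of a process live under /proc/<tgid>/task; each thread
# has its own status file whose Tgid field points back at the main PID.
ls /proc/$$/task
grep -E '^(Name|Tgid|Pid):' /proc/$$/task/$$/status
```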
| A hidden process, what it is? |
1,375,634,929,000 |
Shorter version of question:
Which C header contains the macro that corresponds to the machine architecture of the system (e.g., __x86_64__, __ia64__, __mips__?)
Longer version of question:
I'd like to run the ganglia monitoring tools on a Tilera-based system that runs Linux.
Ganglia doesn't currently have support for Tilera. To get this to work, I need to modify a C function called machine_type_func that returns the machine architecture. The body of this function is determined at compile-time, it looks like this:
g_val_t
machine_type_func ( void )
{
g_val_t val;
#ifdef __i386__
snprintf(val.str, MAX_G_STRING_SIZE, "x86");
#endif
#ifdef __x86_64__
snprintf(val.str, MAX_G_STRING_SIZE, "x86_64");
#endif
...
return val;
}
I need to add the appropriate line for Tilera, but I don't know the name of the macro that specifies a Tilera-based system. I'm guessing this macro is defined in one of the standard Linux headers, but I don't know which one to look in.
|
No header file defines it - those macros are predefined by the compiler. To find out the full list of predefined macros do this:
echo | gcc -E -dM -
Then look through the results for likely macros.
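On the Tilera box itself you could then grep that dump for the architecture marker. A sketch below; note that the exact macro (perhaps __tile__ or __tilegx__) is whatever the compiler actually defines:

```shell
# Dump every predefined macro, keep the architecture-looking ones
echo | gcc -E -dM - | grep -iE '__(x86_64|i386|aarch64|mips|tile)'
```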
| Which header defines the macro that specifies the machine architecture? |
1,375,634,929,000 |
Consider following kern.log snippet:
ata4.00: failed command: WRITE FPDMA QUEUED
ata4.00: cmd 61/00:78:40:1e:6c/04:00:f0:00:00/40 tag 15 ncq 524288 out
res 41/04:00:00:00:00/04:00:00:00:00/00 Emask 0x1 (device error)
ata4.00: status: { DRDY ERR }
ata4.00: error: { ABRT }
ata4: hard resetting link
ata4: nv: skipping hardreset on occupied port
ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata4.00: configured for UDMA/133
ata4: EH complete
How can I identify which hard drive the kernel actually means when it talks about ata4.00?
How can I find the corresponding /dev/sdY device name?
|
You can find the corresponding /dev/sdY device via traversing the /sys tree:
$ find /sys/devices | grep '/ata[0-9]\+/.*/block/s[^/]\+$' \
| sed 's@^.\+/\(ata[0-9]\+\)/.\+/block/\(.\+\)$@\1 => /dev/\2@'
With a more efficient /sys traversal (cf. lsata.sh):
$ echo /sys/class/ata_port/ata*/../../host*/target*/*/block/s* | tr ' ' '\n' \
| awk -F/ '{printf("%s => /dev/%s\n", $5, $NF)}'
Example output from a 2 disk system:
ata1 => /dev/sda
ata2 => /dev/sdb
Then, for reliably identifying the actual hardware you need to map /dev/sdY to the serial number, e.g.:
$ ls /dev/disk/by-id -l | grep 'ata.*sd[a-zA-Z]$'
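As a cross-check, the canonical sysfs path of each disk embeds the ataX port it hangs off (for disks driven by libata, at least):

```shell
# e.g.  sda -> /sys/devices/.../ata1/host0/target0:0:0/0:0:0:0/block/sda
for d in /sys/block/sd*; do
    [ -e "$d" ] || continue
    printf '%s -> %s\n' "${d##*/}" "$(readlink -f "$d")"
done
```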
lsscsi
The lsscsi utility can also be used to derive the mapping:
$ lsscsi | sed 's@^\[\([^:]\+\).\+\(/dev/.\+\)$@\1,\2@' \
| awk -F, '{ printf("ata%d => %s\n", $1+1, $2) }'
Note that the relevant lsscsi enumeration starts from 0 while the ata enumeration starts from 1.
Syslog
If nothing else works one can look at the syslog/journal to derive the mapping.
The /dev/sdY devices are created in the same order as the ataX identifiers are enumerated in the kern.log while ignoring non-disk devices (ATAPI) and not-connected links.
Thus, following command displays the mapping:
$ grep '^May 28 2' /var/log/kern.log.0 | \
grep 'ata[0-9]\+.[0-9][0-9]: ATA-' | \
sed 's/^.*\] ata//' | \
sort -n | sed 's/:.*//' | \
awk ' { a="ata" $1; printf("%10s is /dev/sd%c\n", a, 96+NR); }'
ata1.00 is /dev/sda
ata3.00 is /dev/sdb
ata5.00 is /dev/sdc
ata7.00 is /dev/sdd
ata8.00 is /dev/sde
ata10.00 is /dev/sdf
(Note that ata4 is not displayed because the above log messages are from another system.)
I am using /var/log/kern.log.0 and not /var/log/kern.log because the boot messages are already rotated. I grep for May 28 2 because this was the last boot time and I want to ignore previous messages.
To verify the mapping you can do some checks via looking at the output of:
$ grep '^May 28 2' /var/log/kern.log.0 | \
grep 'ata[0-9]\+.[0-9][0-9]: ATA-'
May 28 20:43:26 hn kernel: [ 1.260488] ata1.00: ATA-7: SAMSUNG SV0802N, max UDMA/100
May 28 20:43:26 hn kernel: [ 1.676400] ata5.00: ATA-5: ST380021A, 3.19, max UDMA/10
[..]
And you can compare this output with hdparm output, e.g.:
$ hdparm -i /dev/sda
/dev/sda:
Model=SAMSUNG SV0802N [..]
(using Kernel 2.6.32-31)
| How to map ataX.0 identifiers in kern.log error messages to actual /dev/sdY devices? |
1,375,634,929,000 |
I have read:
Resize LUKS Volume(s)
Increase the size of a LUKS encrypted partition
Resizing LVM-on-LUKS
And others.
I'm trying to resize from 250 GB to 500 GB. Previously, the partition /dev/sda2, was 250 GB, I've now resized the partition to 500 GB.
However, what about the LUKS device, which resides at /dev/sda2? How do I resize that?
Well, the manual (for cryptsetup) provides us with "resize". However, when I check with cryptsetup status, my device is already 500 GB.
Furthermore, when I check in parted, it (the encrypted device /dev/dm-0 or /dev/mapper/cryptdevice which is a symlink) also shows up as 500 GB.
It appears my encrypted device is already the correct size!
Why then, when I actually mount the encrypted device (/dev/mapper/cryptdevice), does it show up as 250 GB?
Did I miss a step? What else do I need to do?
I've obviously rebooted many times between doing this. I've started from a bootable USB device, executed cryptsetup, rebooted, etc. It still shows up as 250 GB, when I would expect it to be 500 GB.
Note that I never actually resized anything beyond the partition itself. After resizing the partition, cryptsetup and parted started reporting the encrypted volume as also being resized -- but again, when I mount it, it is still only 250 GB.
I don't have LVM on top of this, I just have:
/dev/sda2, which contains a LUKS encrypted file system. I open this with cryptsetup luksOpen /dev/sda2 cryptdevice, etc.
|
You still have to resize the filesystem using the resized block device. The precise method and possible limitations depend on each filesystem.
Here are two examples to resize the filesystems to the whole available size for EXT4 and XFS. An other filesystem will require an other specific command.
An EXT4 filesystem can be enlarged online or offline (and can also be shrunk, but only offline).
resize2fs /dev/mapper/cryptdevice
An XFS filesystem can only be enlarged, and only online (it can't be shrunk back once grown).
The filesystem must be mounted to be grown. The command requires the mountpoint rather than the block device.
mount -t xfs /dev/mapper/cryptdevice /mnt
xfs_growfs /mnt
You actually missed this step twice from the links:
Resize LUKS Volume(s)
in the question:
# Step 5: Resize encrypted volume (Trying to give it some space)
> resize2fs -p /dev/CryptVolumeGroup/root 101G
but in the answer:
Enlarge the rootfs logical volume. No need to un-mount since ext4 can
be enlarged while mounted: lvresize -r -L +100G archvg/home
lvresize -r resizes automatically the underlying filesystem, hence no specific command in the answer. Resizing the filesystem for your specific case not using LVM wasn't present in this answer.
Increase the size of a LUKS encrypted partition
As a final step, the file-system needs to be expanded to the new size.
With the resize2fs(8) command the file-system is extended to the new
size of the LUKS volume.
$ sudo resize2fs /dev/mapper/sdb1_crypt
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/mapper/sdb1_crypt is mounted on /media/gerhard/Daten; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 4
The filesystem on /dev/mapper/sdb1_crypt is now 14647925 (4k) blocks long.
Resizing LVM-on-LUKS
Resizing the encrypted volume
Now we are going to resize the encrypted volume itself. By taking in
account the total size of the logical volume minus some safety space:
# resize2fs -p /dev/CryptVolumeGroup/Home 208G
| How to resize a LUKS device, revisited |
1,375,634,929,000 |
Can anyone help ? I have 2 disks spanning my main partitions. 1 is 460Gb and the other is a 1TB. I would like to remove the 1TB - I would like to use it in another machine.
The volume group isn't using a lot of space anyway, I only have docker with a few containers using that disk and my docker container volumes are on a different physical disk anyway.
If I just remove the disk (physically), it is going to cause problems, right?
Here is some info
pvdisplay
--- Physical volume ---
PV Name /dev/sda3
VG Name ubuntu-vg
PV Size <464.26 GiB / not usable 2.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 118850
Free PE 0
Allocated PE 118850
PV UUID DA7Q8E-zJEz-2FzO-N64t-HtU3-2Z8P-UQydU4
--- Physical volume ---
PV Name /dev/sdb1
VG Name ubuntu-vg
PV Size 931.51 GiB / not usable 4.69 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 238466
Free PE 0
Allocated PE 238466
PV UUID Sp6b1v-nOj2-XXdb-GZYf-1Vej-cfdr-qLB3GU
LVM confuses me a little :-)
Is there not just a simple case of saying,
"remove yourself from the VG and assing anything you are using the remaining group member" ?
Its worth noting that the 1TB was added afterwards, so assume its easier to remove ?
Any help really appreciated
EDIT
Also some more info
df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 1.4M 3.2G 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 1.4T 5.1G 1.3T 1% /
It seems it's using only 1%
also output of lvs
lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
ubuntu-lv ubuntu-vg -wi-ao---- 1.36t
EDIT
pvdisplay -m
--- Physical volume ---
PV Name /dev/sda3
VG Name ubuntu-vg
PV Size <464.26 GiB / not usable 2.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 118850
Free PE 0
Allocated PE 118850
PV UUID DA7Q8E-zJEz-2FzO-N64t-HtU3-2Z8P-UQydU4
--- Physical Segments ---
Physical extent 0 to 118849:
Logical volume /dev/ubuntu-vg/ubuntu-lv
Logical extents 0 to 118849
--- Physical volume ---
PV Name /dev/sdb1
VG Name ubuntu-vg
PV Size 931.51 GiB / not usable 4.69 MiB
Allocatable NO
PE Size 4.00 MiB
Total PE 238466
Free PE 0
Allocated PE 238466
PV UUID Sp6b1v-nOj2-XXdb-GZYf-1Vej-cfdr-qLB3GU
--- Physical Segments ---
Physical extent 0 to 238465:
Logical volume /dev/ubuntu-vg/ubuntu-lv
Logical extents 118850 to 357315
EDIT
Output of
lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
loop0 squashfs /snap/core/9066
loop2 squashfs /snap/core/9289
sda
├─sda1 vfat E6CC-2695 /boot/efi
├─sda2 ext4 0909ad53-d6a7-48c7-b998-ac36c8f629b7 /boot
└─sda3 LVM2_membe DA7Q8E-zJEz-2FzO-N64t-HtU3-2Z8P-UQydU4
└─ubuntu--vg-ubuntu--lv
ext4 b64f2bf4-cd6c-4c21-9009-76faa2627a6b /
sdb
└─sdb1 LVM2_membe Sp6b1v-nOj2-XXdb-GZYf-1Vej-cfdr-qLB3GU
└─ubuntu--vg-ubuntu--lv
ext4 b64f2bf4-cd6c-4c21-9009-76faa2627a6b /
sdc xfs 1a9d0e4e-5cec-49f3-9634-37021f65da38 /gluster/bricks/2
sdc above is a different drive - and not related.
|
Since the filesystem you'll need the disk removed from is your root filesystem, and the filesystem type is ext4, you'll have to boot the system from some live Linux boot media first. Ubuntu Live would probably work just fine for this.
Once booted from the external media, run sudo vgchange -ay ubuntu-vg to activate the volume group so that you'll be able to access the LVs, but don't mount the filesystem: ext2/3/4 filesystems need to be unmounted for shrinking. Then shrink the filesystem to 10G (or whatever size you wish - it can easily be extended again later, even on-line):
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv 10G
Pay attention to the messages output by resize2fs - if it says the filesystem cannot be shrunk that far, specify a bigger size and try again.
This is the only step that needs to be done while booted on the external media; for everything after this point, you can boot the system normally.
At this point, the filesystem should have been shrunk to 10G (or whatever size you specified). The next step is to shrink the LV. It is vitally important that the new size of the LV should be exactly the same or greater than the new size of the filesystem! You don't want to cut off the tail end of the filesystem when shrinking the LV. It's safest to specify a slightly bigger size here:
sudo lvreduce -L 15G /dev/mapper/ubuntu--vg-ubuntu--lv
Now, use pvdisplay or pvs to see if LVM now considers /dev/sdb1 totally free or not. In pvdisplay, the Total PE and Free PE values for sdb1 should be equal - in pvs output, the PFree value should equal PSize respectively. If this is not the case, then it will be time to use pvmove:
sudo pvmove /dev/sdb1
After this, the sdb1 PV should definitely be totally free according to LVM and it can be reduced out of the VG.
sudo vgreduce ubuntu-vg /dev/sdb1
If you wish, you can then remove the LVM signature from the ex-PV:
sudo pvremove /dev/sdb1
But if you are going to overwrite it anyway, you can omit this step.
After these steps, the shrunken filesystem will still be sized at 10G (or whatever you specified) even though the LV might be somewhat bigger than that. To fix that:
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
When extending a filesystem, you don't have to specify a size: the tool will automatically extend the filesystem to match the exact size of the innermost device containing it. In this case, the filesystem will be sized according to the size of the LV.
Later, if you wish to extend the LV+filesystem, you can do it with just two commands:
sudo lvextend -L <new size> /dev/mapper/ubuntu--vg-ubuntu--lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
You can do this even while the filesystem is in use and mounted. Because shrinking a filesystem is harder than extending it, it might be useful to hold some amount of unallocated space in reserve at the LVM level - you will be able to use it at a moment's notice to create new LVs and/or to extend existing LVs in the same VG as needed.
| How to remove a disk from an lvm partition? |
1,375,634,929,000 |
I have found a number of questions on various forums where Mac users complain about locale errors when they log in to Linux systems over SSH which complain that the LC_CTYPE=UTF-8 setting is incorrect.
In some more detail, the shell on MacOS seems to set this value, and then (if you have the option enabled in Terminal, or etc) your local LC_* variables get exported to the remote system when you SSH in.
Linux insists that LC_CTYPE needs to be set to a valid locale (sometimes you can fix this with locale-gen as admin on the Linux system) but UTF-8 is not a locale in the first place.
My primary question is, is this a bug in MacOS? Or is Linux wrong in insisting that the variable needs to be set to a fully specified locale name?
Secondarily, in order to be able to argue which one is correct and why, where is this specified?
Tertiarily, is there something these Mac users (myself included) could or should do differently?
The obvious workaround is to put something like
LC_CTYPE=en_US.UTF-8
in your .bash_profile, but this obviously only solves it for your personal account, and hardcodes a value which may or may not agree with your other locale settings.
|
The basic question is
My primary question is, is this a bug in MacOS? Or is Linux wrong in insisting that the variable needs to be set to a fully specified locale name?
and the POSIX page for environment variables shows the reason why others view the macOS configuration as incorrect:
[XSI] If the locale value has the form:
language[_territory][.codeset]
it refers to an implementation-provided locale, where settings of language, territory, and codeset are implementation-defined.
LC_COLLATE, LC_CTYPE, LC_MESSAGES, LC_MONETARY, LC_NUMERIC, and LC_TIME are defined to accept an additional field @ modifier, which allows the user to select a specific instance of localization data within a single category (for example, for selecting the dictionary as opposed to the character ordering of data). The syntax for these environment variables is thus defined as:
[language[_territory][.codeset][@modifier]]
For example, if a user wanted to interact with the system in French, but required to sort German text files, LANG and LC_COLLATE could be defined as:
LANG=Fr_FR
LC_COLLATE=De_DE
This could be extended to select dictionary collation (say) by use of the @ modifier field; for example:
LC_COLLATE=De_DE@dict
An implementation may support other formats.
If the locale value is not recognized by the implementation, the behavior is unspecified.
That is, they assume that POSIX prescribes a syntax for the locale settings.
An unwary reader would assume that POSIX defines the permissible forms for the environment variables so that the codeset value is optional but does not act as a replacement for the language. But that last "may" opens up a can of worms, in effect blessing this difference in interpretation. Apple can do whatever it wants, if it wants to provide valid locales which don't follow that pattern exactly.
@tripleee suggested that the page on Locale gives better information, but that is almost entirely a discussion of the locale definitions rather than providing guidance for interoperability (i.e., POSIX's ostensible goal).
Neither page addresses differences in the available locale settings (such as ".utf8" versus ".UTF-8"). Those are implementation-dependent, as noted on the POSIX page. That leaves users with the sole solution being to determine for themselves what locale settings are supported on the local and remote systems, and (ssh behavior here) determine how to set those on the remote system "compatibly".
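A practical first step for that determination, on each end of the ssh connection, is simply to list the supported locale names (output varies by system, but glibc always lists at least C and POSIX):

```shell
# List the locale names this system actually supports; a forwarded
# LC_CTYPE must match one of these exactly (note that spellings such
# as en_US.UTF-8 vs en_US.utf8 differ between systems):
supported=$(locale -a)
printf '%s\n' "$supported" | head -n 5
```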
| Valid values for LC_CTYPE? |
1,375,634,929,000 |
When running cat /proc/meminfo, you get these 3 values at the top:
MemTotal: 6291456 kB
MemFree: 4038976 kB
Cached: 1477948 kB
As far as I know, the "Cached" value is disk caches made by the Linux system that will be freed immediately if any application needs more RAM, thus Linux will never run out of memory until both MemFree and Cached are at zero.
Unfortunately, "MemAvailable" is not reported by /proc/meminfo, probably because it is running in a virtual server. (Kernel version is 4.4)
Thus for all practical purposes, the RAM available for applications is MemFree + Cached.
Is that view correct?
|
That view can be very misleading in a number of real-world cases.
The kernel now provides an estimate for available memory, in the MemAvailable field. This value is significantly different from MemFree + Cached.
/proc/meminfo: provide estimated available memory [kernel change description, 2014]
Many load balancing and workload placing programs check /proc/meminfo
to estimate how much free memory is available. They generally do this
by adding up "free" and "cached", which was fine ten years ago, but is
pretty much guaranteed to be wrong today.
It is wrong because Cached
includes memory that is not freeable as page cache, for example shared
memory segments, tmpfs, and ramfs, and it does not include reclaimable
slab memory, which can take up a large fraction of system memory on
mostly idle systems with lots of files.
Currently, the amount of
memory that is available for a new workload, without pushing the
system into swap, can be estimated from MemFree, Active(file),
Inactive(file), and SReclaimable, as well as the "low" watermarks from
/proc/zoneinfo. However, this may change in the future, and user space
really should not be expected to know kernel internals to come up with
an estimate for the amount of free memory. It is more convenient to
provide such an estimate in /proc/meminfo. If things change in the
future, we only have to change it in one place.
...
Documentation/filesystems/proc.txt:
...
MemAvailable: An estimate of how much memory is available for starting new applications, without swapping. Calculated from MemFree, SReclaimable, the size of the file LRU lists, and the low watermarks in each zone.
The estimate takes into account that the system needs some
page cache to function well, and that not all reclaimable
slab will be reclaimable, due to items being in use. The
impact of those factors will vary from system to system.
1. MemAvailable details
As it says above, tmpfs and other Shmem memory cannot be freed, only moved to swap. Cached in /proc/meminfo can be very misleading, due to including this swappable Shmem memory. If you have too many files in a tmpfs, it could be occupying a lot of your memory :-). Shmem can also include some graphics memory allocations, which could be very large.
MemAvailable deliberately does not include swappable memory. Swapping too much can cause long delays. You might even have chosen to run without swap space, or allowed only a relatively limited amount.
I had to double-check how MemAvailable works. At first glance, the code did not seem to mention this distinction.
/*
* Not all the page cache can be freed, otherwise the system will
* start swapping. Assume at least half of the page cache, or the
* low watermark worth of cache, needs to stay.
*/
pagecache = pages[LRU_ACTIVE_FILE] + pages[LRU_INACTIVE_FILE];
pagecache -= min(pagecache / 2, wmark_low);
available += pagecache;
However, I found it correctly treats Shmem as "used" memory. I created several 1GB files in a tmpfs. Each 1GB increase in Shmem reduced MemAvailable by 1GB. So the size of the "file LRU lists" does not include shared memory or any other swappable memory. (I noticed these same page counts are also used in the code that calculates the "dirty limit").
This MemAvailable calculation also assumes that you want to keep at least enough file cache to equal the kernel's "low watermark". Or half of the current cache - whichever is smaller. (It makes the same assumption for reclaimable slabs as well). The kernel's "low watermark" can be tuned, but it is usually around 2% of system RAM. So if you only want a rough estimate, you can ignore this part :-).
When you are running firefox with around 100MB of program code mapped in the page cache, you generally want to keep that 100MB in RAM :-). Otherwise, at best you will suffer delays, at worst the system will spend all its time thrashing between different applications. So MemAvailable is allowing a small percentage of RAM for this. It might not allow enough, or it might be over-generous. "The impact of those factors will vary from system to system".
For many PC workloads, the point about "lots of files" might not be relevant. Even so, I currently have 500MB reclaimable slab memory on my laptop (out of 8GB of RAM). This is due to ext4_inode_cache (over 300K objects). It happened because I recently had to scan the whole filesystem, to find what was using my disk space :-). I used the command du -x / | sort -n, but e.g. Gnome Disk Usage Analyzer would do the same thing.
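As a rough sketch of the difference described above (ignoring the low-watermark term, and using made-up numbers in kB), compare the naive MemFree + Cached sum against one built from the file LRU lists and reclaimable slab:

```python
# Illustrative /proc/meminfo values, in kB (made up for this sketch).
# The naive estimate counts Shmem (e.g. a large tmpfs) as freeable;
# the better estimate uses the file LRU lists and reclaimable slab.
sample = {
    "MemFree":        1000000,
    "Cached":         3000000,  # includes Shmem
    "Shmem":          2000000,  # tmpfs etc. -- only swappable, not freeable
    "Active(file)":    600000,
    "Inactive(file)":  400000,
    "SReclaimable":    500000,
}

naive = sample["MemFree"] + sample["Cached"]
better = (sample["MemFree"]
          + sample["Active(file)"] + sample["Inactive(file)"]
          + sample["SReclaimable"])

print(naive, better)  # 4000000 2500000 -- naive over-counts by 1.5 GB here
```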
2. [edit] Memory in control groups
So-called "Linux containers" are built up from namespaces, cgroups, and various other features according to taste :-). They may provide a convincing enough environment to run something almost like a full Linux system. Hosting services can build containers like this and sell them as "virtual servers" :-).
Hosting servers may also build "virtual servers" using features which are not in mainline Linux. OpenVZ containers pre-date mainline cgroups by two years, and may use "beancounters" to limit memory. So you cannot understand exactly how those memory limits work if you only read documents or ask questions about the mainline Linux kernel. cat /proc/user_beancounters shows current usage and limits. vzubc presents it in a slightly more friendly format. The main page on beancounters documents the row names.
Control groups include the ability to set memory limits on the processes inside them. If you run your application inside such a cgroup, then not all of the system memory will be available to the application :-). So, how can we see the available memory in this case?
The interface for this differs in a number of ways, depending if you use cgroup-v1 or cgroup-v2.
My laptop install uses cgroup-v1. I can run cat /sys/fs/cgroup/memory/memory.stat. The file shows various fields including total_rss, total_cache, total_shmem. shmem, including tmpfs, counts towards the memory limits. I guess you can look at total_rss as an inverse equivalent of MemFree. And there is also the file memory.kmem.usage_in_bytes, representing kernel memory including slabs. (I assume memory.kmem. also includes memory.kmem.tcp. and any future extensions, although this is not documented explicitly). There are not separate counters to view reclaimable slab memory. The document for cgroup-v1 says hitting the memory limits does not trigger reclaim of any slab memory. (The document also has a disclaimer that it is "hopelessly outdated", and that you should check the current source code).
cgroup-v2 is different. I think the root (top-level) cgroup doesn't support memory accounting. cgroup-v2 still has a memory.stat file. All the fields sum over child cgroups, so you don't need to look for total_... fields. There is a file field, which means the same thing cache did. Annoyingly I don't see an overall field like rss inside memory.stat; I guess you would have to add up individual fields. There are separate stats for reclaimable and unreclaimable slab memory; I think a v2 cgroup is designed to reclaim slabs when it starts to run low on memory.
Linux cgroups do not automatically virtualize /proc/meminfo (or any other file in /proc), so that would show the values for the entire machine. This would confuse VPS customers. However it is possible to use namespaces to replace /proc/meminfo with a file faked up by the specific container software. How useful the fake values are, would depend on what that specific software does.
systemd believes cgroup-v1 cannot be securely delegated e.g. to containers. I looked inside a systemd-nspawn container on my cgroup-v1 system. I can see the cgroup it has been placed inside, and the memory accounting on that. On the other hand the contained systemd does not set up the usual per-service cgroups for resource accounting. If memory accounting was not enabled inside this cgroup, I assume the container would not be able to enable it.
I assume if you're inside a cgroup-v2 container, it will look different to the root of a real cgroup-v2 system, and you will be able to see memory accounting for its top-level cgroup. Or if the cgroup you can see does not have memory accounting enabled, hopefully you will be delegated permission so you can enable memory accounting in systemd (or equivalent).
| Is "Cached" memory de-facto free? |
1,375,634,929,000 |
When we say that a process has a controlling terminal, do we mean that the process itself has a controlling terminal, or is it the session that the process belongs to that has a controlling terminal?
I used to think that it is the session that has a controlling terminal, but then I have read the following (from here) which implies that it is the process that has a controlling terminal:
One of the attributes of a process is its controlling terminal. Child
processes created with fork inherit the controlling terminal from
their parent process. In this way, all the processes in a session
inherit the controlling terminal from the session leader. A session
leader that has control of a terminal is called the controlling
process of that terminal.
|
It is indeed the session that has a controlling terminal
The Single UNIX Specification describes the relationship in terms of the controlling terminal being "associated with a session". As it goes on to specify, a controlling terminal has a 1:1 relationship with a session. There is "at most one controlling terminal" associated with a session, and "a controlling terminal is associated with exactly one session".
The FreeBSD Design and Implementation book approaches this slightly differently, but reaches the same place. It is not possible for processes that share the same session to have different controlling terminals, nor is it possible for a single terminal to be the controlling terminal of multiple sessions.
Internally in FreeBSD that is how the data structures actually work. The process structure has a pointer to the pgrp structure representing the process group that the process belongs to, which in turn points to the session structure representing the session that the process group belongs to, which in turn points to the tty structure of the controlling terminal for the session.
Internally in Linux, things are slightly more complex. Each task_struct has a set of pointers to pid structures for its process group ID and session ID; and has another pointer to a per-process signal_struct structure that in turn directly points to the tty structure of the controlling terminal.
Further reading
George V. Neville-Neil, Marshall Kirk McKusick, and Robert N.M. Watson (2014-09-25). "Process Management". The Design and Implementation of the FreeBSD Operating System. Addison-Wesley Professional. ISBN 9780133761832.
Donald Lewine (1991). "Terminal I/O". POSIX Programmers Guide. O'Reilly Media, Inc. ISBN 9780937175736.
Daniel P. Bovet and Marco Cesati (2005). "Processes". Understanding the Linux Kernel: From I/O Ports to Process Management. 3rd edition. O'Reilly Media, Inc. ISBN 9780596554910.
"Definitions". The Open Group Base Specifications. Issue 7. 2016. IEEE 1003.1:2008.
"General Terminal Interface". The Open Group Base Specifications. Issue 7. 2016. IEEE 1003.1:2008.
| Is it the process that has a controlling terminal, or is it the session that has a controlling terminal? |
1,375,634,929,000 |
I read the man pages on both, and they seem to be interchangeable, doing the same job.
So can someone explain when I should use partx, and when kpartx ?
|
partx asks the kernel to probe a given device and re-read the partition table. The kernel is doing the work here.
kpartx creates device mapper entries and so can be used by devices that the kernel does not natively support partitioning, such as multipath device mapper devices ("kpartx" is part of multipath-tools) or files.
| What's the difference between partx and kpartx? |
1,375,634,929,000 |
Do the following paths point to the same disk location?
/home/username
/app/home/username
|
Change directory to each one in turn and look at the output of pwd -P. With the -P flag, pwd will display the physical current working directory with all symbolic links resolved.
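Alternatively, a sketch using readlink -f, which resolves all symbolic links the same way pwd -P does, demonstrated here on made-up paths under /tmp:

```shell
# Set up a symlinked directory pair to demonstrate (paths are made up):
mkdir -p /tmp/lnkdemo/home/username
ln -sfn /tmp/lnkdemo/home /tmp/lnkdemo/app_home

# Resolve each path to its canonical, symlink-free form:
a=$(readlink -f /tmp/lnkdemo/home/username)
b=$(readlink -f /tmp/lnkdemo/app_home/username)

if [ "$a" = "$b" ]; then
    echo "same location: $a"
else
    echo "different locations"
fi
```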
| How to find out whether two directories point to the same location |
1,375,634,929,000 |
Out of curiosity, I would like to know: is there a way to find out the source of a mounted partition?
For example, output of df -h is:
/dev/loop1 3M 3M 0 100% /media/loop
From this output, I know a loop device of 3M is mounted at /media/loop, but I have no clue to determine the exact location of the /dev/loop1 device.
root@SHW:~# mount -o loop /home/SHW/Downloads/TinyCore-current.iso /mnt/loop
mount: block device /home/SHW/Downloads/TinyCore-current.iso is write-protected, mounting read-only
root@SHW:~# tail -n1 /proc/mounts
/dev/loop1 /mnt/loop iso9660 ro,relatime 0 0
How do I find out the absolute path of the file behind /dev/loop1 if I don't know who mounted those partitions? (In this case the path is /home/SHW/Downloads/TinyCore-current.iso.)
|
Use losetup's --list option:
$ losetup --list /dev/loop0
NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0 0 0 0 0 /tmp/backing-file
If you only want the file, use the -O option to pick the column:
$ losetup --list --noheadings -O BACK-FILE /dev/loop0
/tmp/backing-file
This option is part of recent versions of util-linux. Earlier versions support only the -a option, which lists all active devices in a harder-to-process format:
$ losetup -a
/dev/loop0: []: (/tmp/backing-file)
Either way, it's not overly onerous to process however you want.
| Source path of loop-device |
1,375,634,929,000 |
On many different machines, when I come across a lockup, I often use Alt+SysRq+REISUB to reboot without too much losses. But I often noticed that although REISU commands work even when I just hold Alt+SysRq and enter them without releasing Alt+SysRq, the last one, B, seems to be very "lazy": I have to repeat it many times, and in fact it doesn't trigger until I do the cycle "press Alt+SysRq, type B, release Alt+SysRq" multiple times (and not always the same number of times).
At first occurrences of this problem I thought it's the kernel which has locked too hard that it doesn't "see" my B command, but when I realized that multiple repetitions of it do allow me to trigger reboot, it now seems that it's something general. Even on a working system (be it Debian, Ubuntu, LFS etc.), I can easily reproduce this. In fact, I can even load the kernel with init=/bin/bash and reproduce this from this bash prompt.
Looking at serial console output, I see all the feedback on REISU, but only one feedback print on multiple B commands — when the kernel finally is convinced to do a reboot.
Why is this? Is it some kernel feature which prevents unintended reset, or maybe it's just a bug (quite strange one)?
Note that I'm using plain keyboard with no Fn or multimedia keys, so this question isn't a duplicate of this one.
|
The problem is not in software, it's in hardware.
Keyboard keys are not independent: there're about 100 keys, but only about 26 wires going into keyboard's internal controller:
(Image from dreamstime.com)
This means that not all keys can be detected when simultaneously pressed. Because RAlt is much closer to SysRq than LAlt, I always use it to free one hand for entering chars. But it appears that on most (all?) PC keyboards RAlt+SysRq+B doesn't send scancode of B!*
Why do I then finally get the reboot? It's simple: when I'm very annoyed that I can't reboot the machine with this semi-working command, I press the combo many times without too much attention, sometimes mixing presses and releases of keys — and it appears that SysRq isn't a normal modifier for linux, like e.g. Alt is: the magic-SysRq mode is active even after I release SysRq but still hold Alt. Then the sequence which appears to work is:
Press RAlt
Press SysRq
Release SysRq
Press B
See reboot
For LAlt things appear much simpler: the keyboard is able to detect B when LAlt+SysRq is held, so there's no problem, but I never noticed it before because I always used RAlt.
Funnily, it appears that this issue is long known, and the workaround is that same which I've discovered empirically. From kernel source tree, Documentation/sysrq.txt (emphasis mine):
On x86 - You press the key combo 'ALT-SysRq-<command key>'. Note - Some
keyboards may not have a key labeled 'SysRq'. The 'SysRq' key is
also known as the 'Print Screen' key. Also some keyboards cannot
handle so many keys being pressed at the same time, so you might
have better luck with "press Alt", "press SysRq", "release SysRq",
"press <command key>", release everything.
So, looks like this trick is an official recommendation, not an unreliable side effect of implementation.
*I've actually checked this with a simple DOS program which reports every scan code the i8042 gives on each IRQ1
| Why does Alt+SysRq+B not always trigger while REISU do? |
1,375,634,929,000 |
Possible Duplicate:
Why are shared libraries executable?
Why do the .so files in /lib have permission 755 and not 644 in Linux? That seems strange
According to http://www.gossamer-threads.com/lists/gentoo/user/231943, it seems this was needed for old glibc. On my embedded system, /lib/libc.so.6 works even when its permission is 644.
|
I suspect it's probably just for historical reasons.
BlueBomber's answer is probably historically correct, but it's not actually necessary for shared objects to be executable.
On my Ubuntu system, they aren't. Of the 30 /lib/*.so* and 600 /usr/lib/*.so* files, only one has execute permissions, and that's probably just a glitch.
Execute permission permits a file to be executed via one of the exec*() functions; shared object files contain executable code, but they're not executed in that way.
On the other hand, on a CentOS 5.7 system I have access to, those files are executable; the same is true on a SPARC Solaris 9 system. (It would be interesting to try turning off executable permissions on some of those files to see if it breaks anything, but I'm not able to do so.)
(What Linux distribution are you using?)
UPDATE:
This answer to this question shows an example (HP-UX) of a system where the execute bit is actually required. That doesn't appear to be the case on Linux, where some distributions set the execute bit (probably out of historical inertia) and others don't. Or maybe some Linuxes actually require it.
Another data point: On my Ubuntu system, I just tried creating my own shared object file. The generated "libfoo.so" file was created with execute permissions, but if I manually chmod -x it, the program that uses it still works.
In any case, setting execute permission on *.so files is Mostly Harmless (and certainly less annoying that, say, setting execute permission on source files).
UPDATE 2:
As fwyzard points out in a comment, some *.so files can actually be executed. For example, on my current system, executing /lib/x86_64-linux-gnu/libc-2.27.so prints version information for the GNU C library. /lib/x86_64-linux-gnu/libpthread-2.27.so behaves similarly.
| Why are .so files executable? [duplicate] |
1,375,634,929,000 |
From what I gather, the idea of a "trash can" is of Windows descent, and was to make a user's life easier. However, when I go to delete a file, I don't hit delete unless I know I don't need it and will never need it ever again. Period. I'm currently running OpenSuse and the trash can is a confusing "feature" (as I can't seem to find WHERE it is) that sometimes even creates cute little directories on my flash drives for trash.
Basically put, I don't like the trash can idea. Is there a way I can "turn it off"? I'm assuming it's a filesystem thing, so it might be harder to do then I predict. Basically, I would like to perform a rm -rf on the file that is selected (-r in case it is a directory). Is this at all possible?
|
Under KDE you can hit shift+del to directly delete selected files (or directories). Or you can press shift while chosing 'move to trash ...' in the context menu, which has the same effect.
IIRC this also works under Windows.
Probably there is some trash-properties dialog under KDE to globally disable the trash feature. It is possible to configure it in Dolphin, but perhaps there is also a more general solution in KDE available.
| How to disable the trash can in KDE |
1,375,634,929,000 |
I have been having an overheating issue which makes my laptop shut down immediately. Is there any way to monitor the temperature from the sensor and scale down the CPU frequency to avoid that problem? Is there any existing software or shell script that can handle that job?
|
You should have a look at cpufreq-set and cpufreq-info. On Debian and derived distros they are in the cpufrequtils package. For example, on an old laptop with a bad fan that I use as a file server at home I have made these settings:
sudo cpufreq-set -c 0 -g ondemand -u 800000
sudo cpufreq-set -c 1 -g ondemand -u 800000
| Overheating results in system shutdown |
1,639,359,725,000 |
Linux documents the default buffer size for tcp, but not for AF_UNIX ("local") sockets. The value can be read (or written) at runtime.
cat /proc/sys/net/core/[rw]mem_default
Is this value always set the same across different Linux kernels, or is there a range of possible values it could be?
|
The default is not configurable, but it is different between 32-bit and 64-bit Linux. The value appears to be written so as to allow 256 packets of 256 bytes each, accounting for the different per-packet overhead (structs with 32-bit vs. 64-bit pointers or integers).
On 64-bit Linux 4.14.18: 212992 bytes
On 32-bit Linux 4.4.92: 163840 bytes
The default buffer sizes are the same for both the read and write buffers. The per-packet overhead is a combination of struct sk_buff and struct skb_shared_info, so it depends on the exact size of these structures (rounded up slightly for alignment). E.g. in the 64-bit kernel above, the overhead is 576 bytes per packet.
http://elixir.free-electrons.com/linux/v4.5/source/net/core/sock.c#L265
/* Take into consideration the size of the struct sk_buff overhead in the
* determination of these values, since that is non-constant across
* platforms. This makes socket queueing behavior and performance
* not depend upon such differences.
*/
#define _SK_MEM_PACKETS 256
#define _SK_MEM_OVERHEAD SKB_TRUESIZE(256)
#define SK_WMEM_MAX (_SK_MEM_OVERHEAD * _SK_MEM_PACKETS)
#define SK_RMEM_MAX (_SK_MEM_OVERHEAD * _SK_MEM_PACKETS)
/* Run time adjustable parameters. */
__u32 sysctl_wmem_max __read_mostly = SK_WMEM_MAX;
EXPORT_SYMBOL(sysctl_wmem_max);
__u32 sysctl_rmem_max __read_mostly = SK_RMEM_MAX;
EXPORT_SYMBOL(sysctl_rmem_max);
__u32 sysctl_wmem_default __read_mostly = SK_WMEM_MAX;
__u32 sysctl_rmem_default __read_mostly = SK_RMEM_MAX;
Interestingly, if you set a non-default socket buffer size, Linux doubles it to provide for the overheads. This means that if you send smaller packets (e.g. less than the 576 bytes above), you won't be able to fit as many bytes of user data in the buffer, as you had specified for its size.
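A quick way to observe both the defaults and the doubling behaviour just mentioned from user space — a sketch; the exact numbers depend on kernel version, architecture, and sysctl settings:

```python
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)

# Default buffer sizes (wmem_default / rmem_default on Linux):
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(sndbuf, rcvbuf)  # e.g. 212992 212992 on 64-bit Linux

# On Linux, a size you set explicitly is doubled to cover overheads:
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 65536)
doubled = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(doubled)  # 131072 on Linux
s.close()
```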
| What values may Linux use for the default unix socket buffer size? |
1,639,359,725,000 |
How can I prevent Chrome from taking more than, for example, 4GB of RAM? From time to time it decides to take something like 7GB (with 8GB RAM total) and makes my computer unusable.
Any help is appreciated.
PS: I didn't even have more than 10 tabs open.
Edit: maybe I did... something like 15. Anyway, I want Chrome to freeze or shut down, not the whole system.
|
I believe you would want to use something like cgroups to limit resource usage for an individual process.
So you might want to do something like this except with
cgcreate -g memory,cpu:chromegroup
cgset -r memory.limit_in_bytes=4G chromegroup
to create chromegroup and restrict the group's memory usage to 4 GB
cgclassify -g memory,cpu:chromegroup $(pidof chrome)
to move the current chrome processes into the group and restrict their memory usage to the set limit
or just launch chrome within the group like
cgexec -g memory,cpu:chromegroup chrome
However, it's pretty insane that chrome is using that much memory in the first place. Try purging and reinstalling (or recompiling) first to see if that doesn't fix the issue, because it really should not be using that much memory to begin with; this solution is only a band-aid over the real problem.
| Chrome eats all RAM and freezes system |
1,639,359,725,000 |
Hopefully this question is not too generic. I am very new to shell scripting and I come from a computer architecture/non-scripting programming background. I have noticed that the scripts at my work are rarely written with a sub-shell wrapped around the entire script. In the scripts I am writing, I add such a sub-shell whenever I can, since it keeps my script from messing with other scripts that call it (just in case). Is this not common practice because of some overhead associated with the approach? I am having a hard time finding this online.
Example:
#!/bin/bash
( #Start of subshell
echo "Some stuff here"
) #End of subshell
|
Subshells do have overhead.
On my system, the minimal fork-exec cost (when you run a program from disk and the file isn't cold) is about 2ms, and the minimal forking cost is about 1ms.
With subshells, you pay only the forking cost, as no file needs to be exec'd. If the number of subshells is kept reasonably low, 1ms is quite negligible in human-facing programs. I believe humans can't notice anything that happens faster than 50ms (and that's how long it tends to take for modern scripting language interpreters to even start; I'm talking python and ruby in rvm here, with the newest nodejs taking around 100ms).
However, it does add up in loops, and then you might want to replace, for example, the rather common backtick or $() pattern (where you return something from a function by printing it to stdout for the parent shell to capture) with bashisms like printf -v (or use a fast external program to process the whole batch).
The bash-completion package specifically avoids this subshell cost by
returning values via passed variable names, using a technique described at http://fvue.nl/wiki/Bash:_Passing_variables_by_reference
Comparing
time for((i=0;i<10000;i++)); do echo "$(echo hello)"; done >/dev/null
with
time for((i=0;i<10000;i++)); do echo hello; done >/dev/null
should give you a good estimate of what your systems fork-ing overhead is.
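The pass-by-reference trick referenced above can be sketched like this (a simplified sketch assuming bash; the function and variable names are illustrative, not taken from the bash-completion package):

```shell
#!/usr/bin/env bash
# Pass the result back through a caller-named variable -- no $(...) subshell,
# hence no fork per call.
to_upper() {
    # $1 = input string, $2 = name of the variable to store the result in
    printf -v "$2" '%s' "${1^^}"
}
to_upper "hello" result
echo "$result"    # prints HELLO
```

In a tight loop this avoids one fork per iteration, which is exactly the cost the timing comparison above measures.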
| What is the overhead of using subshells? |
1,639,359,725,000 |
I have accumulated, from my time using Windows a good quantity of held-over filesystem copies and archives of system and data drives. I am trying to distill them down to the usable parts while discarding everything that is likely to be valueless.
From watching a bunch of said files scroll by while copying, once again, from a holding drive to a work drive I think I've got a starter list of "good" and "useless" files started, but I was wondering if there is any authoritative kind of list of files (coming from a previously Windows environment) that should be discarded immediately as unuseful?
Winners: (I know this list would likely turn into a mess if any effort were made to make it comprehensive, so these aren't what I'm looking for, unless they would likely be surrounded by crap that might get them destroyed inadvertently) (edit: If the ONLY way is a super-comprehensive white-list-based method, so be it. I'd prefer if that weren't the case, but beggars can't be choosers... most of the time.)
*.tar.*, *.rar, *.zip
*.mp(e)g, *.avi, *.mkv, *.wmv, *.asf
Losers: (These are what I'm really looking for)
*.exe, *.bat, *.dll, *.com, *.lnk
I also know there will be exceptions. Like installer .exe files, used to install something in Wine. For purposes of this question, this concern isn't one. All the files in question are copies (of copies, possibly of even more copies), so the installers I really want to keep are somewhere nice, safe, and probably write-protected.
|
Probably the simplest way to weed out the trash would be by the created or last-modified date (you might need to experiment to determine which one's better) - just use the date the system was installed as a starting point.
According to Pareto principle that simple filter will probably get you 80% of the effect you are seeking.
(Of course, you may, or even should, combine this one with the black & white lists you have started to assemble.)
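That date filter can be sketched with GNU find's -newermt test (the cut-off date and the file names below are assumptions for the demo; substitute your real mount point and the old system's actual install date):

```shell
# Demo on a scratch directory; in practice point find at the old drive's mount.
mkdir -p demo
touch -d '2009-01-01' demo/old-system-file   # pretend OS-era file
touch demo/recent-photo.jpg                  # pretend user data
# Keep candidates: files modified after the (assumed) install date.
find demo -type f -newermt '2010-06-15'
# Probable cruft: everything last modified before it.
find demo -type f ! -newermt '2010-06-15'
```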
| What Windows-related files are valueless to *nix users? |
1,639,359,725,000 |
I did find this book,
but I was wondering whether there's a newer book on the market, because this one was released in 2004. I'm sure a lot has changed in the past 7 years?
I need a book that helps me understand linux, instead of just giving me console recipes to reach certain goals. I need an overview picture of all its important components and how they work together, without going too much into kernel internals.
Any help would be appreciated on finding an appropriate book for an administrator who is interested in this wonderful OS called Linux. I'm not a newbie to Linux, I just want to get the bigger picture, without turning myself into a kernel hacker. I already know what the Linux kernel is and does, I just want to know how it is leveraged in the larger scheme of things, and how everything that sits on top of it is organized, and the reason why certain things are organized the way they are.
|
A lot has changed in 7 years. However Unix systems are accretive; their history pervades their entire current structure. Given that, focusing just on Linux, IMHO, narrows your focus too quickly. You'll learn much about Linux in the course of understanding Unix and Unix-like systems.
Since you noted you're looking for material from the viewpoint of an administrator I'll start by recommending Essential System Administration (Frisch) along with Practical Unix and Internet Security (Garfinkel & Spafford). Although these books are circa 2002/3 they're still extremely useful and highly practical in orientation (and your question reads as if getting oriented is what you want). (Garfinkel/Spafford) still has the best explanation of file ownership and permissions I've ever read (Linux and BSD) and (Frisch) is comprehensive in scope (even covers AIX in good detail). Combine those with Unix and Linux System Administration Handbook (Nemeth), which is current, and you'll get a solid, practical grounding in things Unix/Linux.
Another book that I found highly useful was The Art of Unix Programming (Raymond). The title, to me, is misleading; (Raymond) focuses on the philosophy of the Unix systems and how and why things are organized (again, that pervasive history as noted above). I had more than a few "Aha!" moments when I read it.
Finally, if you've settled on a Debian-based distribution than I second the recommendation of The Debian System (Krafft). That along with a copy of the Debian Policy and you'll be able to understand why Ubuntu and various Debian spin-offs organize things the way they do.
| Book on how Linux works from an administrator's point of view? [closed] |
1,639,359,725,000 |
I have installed Arch Linux on a 40GB HDD on a ga-g4mt-s2p1 motherboard (Intel Core 2 multi-core, 2 GB of RAM).
I have made 4 Partitions:
/boot 100Mib
Swap 4Gib
/ 20Gib
/home The rest of the disk
It runs well without any problems, but when I try the hard drive on an older motherboard, a p4p800-mx (Pentium 4, 512 MB of RAM),
the booting stops at:
loading linux
loading initial ramdisk
Edit: Before the Grub menu I have this message
CMOS Settings Wrong
CMOS Date/Time Not Set
Press F1 to Run Setup
Press F2 to load default value and continue
|
I solved it by changing my version from x86_64 to i686
In the installation menu, there are two choices one for x86_64 and one for i686. My problem was with x86_64 but when I reinstalled it choosing i686, it worked fine.
The CMOS problem was solved by changing the CMOS battery.
| Booting stops at Loading initial ramdisk |
1,639,359,725,000 |
In FreeBSD 4.9 it was very easy to accomplish with just a single command like
jail [-u username] path hostname ip-number command
if path was /, you were running just the same program as usual but all its network communication was restricted to use only the given IP-address as the source. Sometimes it's very handy.
Now in Linux there's LXC, which does look very similar to FreeBSD's jail (or Solaris' zones) — can you think of similar way to execute a program?
|
Starting the process inside a network namespace that can only see the desired IP address can accomplish something similar. For instance, suppose I only wanted localhost available to a particular program.
First, I create the network namespace:
ip netns add limitednet
Namespaces have a loopback interface by default, so next I just need to bring it up:
sudo ip netns exec limitednet ip link set lo up
Now, I can run a program using ip netns exec limitednet and it will only be able to see the loopback interface:
sudo ip netns exec limitednet ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
If I wanted to limit it to an address other than localhost, I could add other interfaces into the namespace using:
ip link set DEVICE_NAME netns NAMESPACE
I'd have to experiment a bit more to figure out how to add a single IP address into a namespace in the case where an interface might have more than one IP address.
The LWN article on namespaces is also helpful.
| Linux: Is there handy way to exec a program binding it to IP-address of choice? |
1,639,359,725,000 |
I have a sparse file, in which only some blocks are allocated:
~% du -h --apparent-size example
100K example
~% du -h example
52K example
I would like to know which blocks of the file are actually allocated. Is there a system call or kernel interface that could be used to get a list of either the allocations, or the holes of file?
Simply checking for a long enough string of zeros (the approach used by GNU cp, rsync, etc) does not work correctly:
~% cp example example1
~% du -h example1
32K example1
It detected other sequences of zeros that were actually allocated.
|
There is a similar question on SO. The currently accepted answer by @ephemient suggests using an ioctl called fiemap which is documented in linux/Documentation/filesystems/fiemap.txt. Quoting from that file:
The fiemap ioctl is an efficient method for userspace to get file
extent mappings. Instead of block-by-block mapping (such as bmap),
fiemap returns a list of extents.
Sounds like this is the kind of information you're looking for. Support by filesystems is again optional:
File systems wishing to support fiemap must implement a ->fiemap
callback on their inode_operations structure.
Support for the SEEK_DATA and SEEK_HOLE arguments to lseek you mentioned from Solaris was added in Linux 3.1 according to the man page, so you might use that as well. The fiemap ioctl appears to be older, so it might be more portable across different Linux versions for now, whereas lseek might be more portable across operating systems if Solaris has the same.
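For a quick look from the shell, filefrag (from e2fsprogs) exposes the same fiemap data; a sketch on a scratch file (the exact allocation pattern depends on the filesystem):

```shell
# Build a 100 KiB file that is all hole except a few bytes in the middle.
truncate -s 100K sparse.img
printf 'data' | dd of=sparse.img bs=1 seek=51200 conv=notrunc status=none
stat -c 'apparent: %s bytes, allocated: %b blocks' sparse.img
# Per-extent map via the fiemap ioctl, if filefrag is installed and the
# filesystem implements the ->fiemap callback:
command -v filefrag >/dev/null && filefrag -v sparse.img || true
```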
| Detailed sparse file information on Linux |
1,639,359,725,000 |
I have been using Linux after almost 5 years and observed that boot process has been almost abstracted. I mean, not much is visible to the user what is going on behind the scenes (Due to splash screens etc). Now, this might be good for the end users but not for the geek :)
I want to bring back the verboseness of old times. Here is what I have done:
I have been able to get rid of some of it by removing the "splash" and "quiet" parameters from the command line. However, I still cannot see the services being started one by one (like the ones in init.d).
I assume it's because of the init daemon being replaced by upstart. Are there some config files which I can tweak to bring back the verboseness of what is going on?
Also, as soon as the login screen comes, it erases the boot log history. Is there a way to disable that ?
Note: I know I can do that by simply switching the distro to Arch or Slackware. But I don't want to do that.
|
You can pass --verbose on the kernel command line (replacing quiet splash) to make upstart more verbose. See Upstart debugging.
You can put console output in the global configuration file /etc/init.conf so that every job has its stdout and stderr connected to the console (by default, they're connected to /dev/null). (I'm not sure whether this in fact works; /etc/init.conf is not actually documented, I haven't tested if it's read in this way and this thread is not conclusive. Please test and report.) This directive can go into individual jobs' descriptions (/etc/init/*.conf) if you want to be selective (some already have it).
| Removing abstraction from Ubuntu boot process |
1,639,359,725,000 |
There used to be a kernel config option called sched_user or similar under cgroups. This allowed (to my knowledge) all users to fairly share system resources. In 2.6.35 it is not available. Is there a way I can configure my system to automatically share io/cpu/memory resources between all users (including root?). I have never set up a cgroup before, is there a good tutorial for doing so? Thank you very much.
|
The kernel documentation provides a general coverage of cgroups with examples.
The cgroups-bin package (which depends on libcgroup1) already provided by the distribution should be fine.
Configuration is done by editing the following two files:
/etc/cgconfig.conf
Used by libcgroup to define control groups, their parameters and mount points.
/etc/cgrules.conf
Used by libcgroup to define the control groups to which the process belongs to.
Those configuration files already have examples in it, so try adjusting them to your requirements. The man pages cover their configuration quite well.
Afterwards, start the workload manager and rules daemon:
service cgconfig restart
service cgred restart
The workload manager (cgconfig) is responsible for allocating the ressources.
Adding a new process to the manager:
cgexec [-g <controllers>:<path>] command [args]
Adding a already running process to the manager:
cgclassify [-g <controllers>:<path>] <pidlist>
Or automatically over the cgrules.conf file and the CGroup Rules Daemon (cgred), which forces every newly spawned process into the specified group.
Example /etc/cgconfig.conf :
group group1 {
perm {
task {
uid = alice;
gid = alice;
}
admin {
uid = root;
gid = root;
}
}
cpu {
cpu.shares = 500;
}
}
group group2 {
perm {
task {
uid = bob;
gid = bob;
}
admin {
uid = root;
gid = root;
}
}
cpu {
cpu.shares = 500;
}
}
mount {
cpu = /dev/cgroups/cpu;
cpuacct = /dev/cgroups/cpuacct;
}
Example /etc/cgrules.conf :
alice cpu group1/
bob cpu group2/
This will share the CPU ressources about 50-50 between the user 'alice' and 'bob'
| How can I configure cgroups to fairly share resources between users? |
1,639,359,725,000 |
I tried to use one of my Git aliases and accidentally opened the mg editor. I tried a lot of different keys and nothing seemed to close it. I eventually just used ctrlz to send it to the background and then killed it with kill %1.
After that I went to check the manpages https://linux.die.net/man/1/mg, but the only quit related commands don't seem to do much:
C-g' keyboard-quit
C-x C-g' keyboard-quit
I'm probably missing something obvious, but how can it be closed?
|
Ok, so I'm not sure what the ' means here, but pressing ctrlx and ctrlc actually worked:
C-x C-c' save-buffers-kill-emacs
save-buffers-kill-emacs
Offer to save modified buffers and quit mg.
| How to quit the `mg` editor? |
1,639,359,725,000 |
From the manpage for ln:
-d, -F, --directory
allow the superuser to attempt to hard link directories (note: will
probably fail due to system restrictions, even for the superuser)
Are there any filesystem drivers that actually allow this, or is the only option mount --bind <src> <dest>?
Or is this kind of behavior blocked by the kernel before it even gets to the filesystem-specific driver?
NOTE: I'm not actually planning on doing this on any machines, just curious.
|
First a note: the ln command does not have options like -d, -F, --directory, this is a non-portable GNUism.
The feature you are looking for is implemented by the link(1) command.
Back to your original question:
On a typical UNIX system the decision, whether hard links on directories are possible, is made in the filesystem driver.
The Solaris UFS driver supports hard links on directories, the ZFS driver does not.
The reason why UFS on Solaris supports hard links is that AT&T was interested in this feature - UFS from BSD does not support hard linked directories.
The reason why ZFS does not support hardlinked directories is that Jeff Bonwick does not like that feature.
Regarding Linux, I would guess that Linux blocks attempts to created hard links on directories in the upper kernel layers. The reason for this assumption is that Linus Torvalds wrote code for GIT that did shred directories when git clone was called as root on a platform that supports hard linked directories.
Note that a filesystem that supports to create hard linked directories also needs to support unlink(1) to remove non-empty directories as root.
So if we assume that Torvalds knows how Linux works, and if Linux did support hard linked directories, Torvalds should have known that calling unlink(2) on a directory while being root will not return an error but shred that directory. In other words, it is unlikely that Linux permits a file system driver to implement hard linked directories.
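You can watch the rejection happen yourself; with GNU ln on Linux, the attempt fails (even for root) exactly as the manpage warns:

```shell
mkdir -p testdir
# GNU ln's -d asks it to attempt link(2) on a directory; on Linux the
# kernel refuses, so the else branch is the expected outcome here.
if ln -d testdir testlink 2>/dev/null; then
    echo "hard linked (this filesystem allows it)"
else
    echo "refused by the kernel"
fi
```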
| Are there any filesystems for which `ln -d` succeeds? |
1,639,359,725,000 |
Given a distribution and its version, I can find which version of kernel it uses, e.g. for Ubuntu they are listed here, and for currently supported versions of Fedora they are here.
In general, however, I'm interested in a reverse lookup: given kernel version X I'd like to find which distros are still using X or older versions. Is there any easy way to do this, at least for the most popular distributions?
The use case of this is to decide whether I should bother supporting older Linux versions than version X in my software, if the newer one offers some feature I'd like to use.
|
So I'm not sure if you're looking to do this programmatically or not. But the first step you'd need to accomplish this is a database that catalogues all of this sort of information for each distribution and their respective releases.
Luckily… that is exactly what distrowatch.com is.
You can gather this information using their advanced search page, which has a cool feature that allows you to search for distribution releases that include a specific version of a package. In this case, you're interested in the linux package.
Searching for a specific version of that package (which corresponds to the kernel version) will give you a nice list of distributions followed by the releases of that distribution that ship with that package version.
I don't know of any DistroWatch API, so if you need to do this programmatically, you'll probably have to do some html parsing. But the format for the query to generate the results page for a given kernel version would be as follows:
distrowatch.com/search.php?pkg=linux&pkgver=VERSION&distrorange=InAny#pkgsearch
Play around with that, and you might be able to get a nice little tool to do exactly what you're trying to do. If anyone knows of a better way to search DistroWatch's Database, please chime in. It'd be really nice, since they have such a treasure-trove of information.
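A tiny helper to build that query URL (the function name is made up, and the HTML-parsing step is deliberately left out since the page layout may change):

```shell
# Build the DistroWatch package-search URL for a given kernel version.
dw_kernel_query() {
    printf 'https://distrowatch.com/search.php?pkg=linux&pkgver=%s&distrorange=InAny#pkgsearch\n' "$1"
}
dw_kernel_query 3.2
# Then fetch and parse, e.g.:  curl -s "$(dw_kernel_query 3.2)" | <your html parsing>
```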
| How to find out what distros are using particular Linux version? |
1,639,359,725,000 |
Context
I'm automating SD card imaging from an existing dd factory image. The SD card is always connected through an external USB card reader and thus appears in the system as a SCSI block device /dev/sd*.
Currently the syntax of my command is: write-image DEVICE where DEVICE is the SD card block device, eg. /dev/sdd.
Problem
I'm already doing a basic check on DEVICE to verify it is of the form /dev/sd* but this is not enough: I fear the users (production people not used to Linux) make a mistake and specify another seemingly valid device, eg. /dev/sda. You can see the looming catastrophe, especially since my imaging script needs root privileges (not to write the image itself, mind you, but to modify the SD card afterwards, including adjusting a partition's size depending on the SD card's real size)...
Question
I would like to verify that the specified device actually is some USB mass storage (or at the very least a removable device) so that I can protect the system disks from being trashed accidentally. How can I do that?
I found nothing relevant in /proc or on the web, I'm quite at loss now.
|
Have a look under the /sys/ directory. In particular, /sys/block/ contains symlinks to block devices in /sys/devices/.
/sys/block/sdX/removable looks like it will read as 1 for a removable device, and 0 otherwise. This gives you a basic check for removability.
I'm not sure if there's a better way to check if it's a USB device, but
readlink /sys/block/sde will spit out something like ../devices/pci0000:00/0000:00:1d.0/usb6/6-1/6-1.2/6-1.2:1.0/host7/target7:0:0/7:0:0:0/block/sde. Checking if that contains a usb* folder might work as a simple check.
You can get other device details like vendor and model from /sys/block/sdX/device/, which might also come in handy.
| Find out if a specific device is an USB mass storage |
1,639,359,725,000 |
How to know whether a particular patch is applied to the kernel? Especially RT-Preempt Patch.
|
In the case of preempt you can just use uname:
uname -v
#23 SMP PREEMPT RT Fri Oct 16 11:52:29 CET 2012
The string PREEMPT shows that you use a kernel version with the realtime patch.
Some other patches might also change the uname string, so it might also be a help. If this is not the case, you can try to look at your .config. The file can be found in the /boot directory or (if enabled) by using cat /proc/config.gz. Maybe there is also a version in /usr/src/linux (or wherever you put the kernel sources).
If you found the config file you can grep for specific settings and find out if a patch is used.
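A combined check might look like this (a sketch; which config file exists, and the exact uname string, depend on the distribution and kernel version):

```shell
# Return 0 if the running kernel looks like an RT-Preempt kernel.
has_rt_preempt() {
    # 1) the uname version string, as in the example above
    uname -v | grep -q 'PREEMPT RT' && return 0
    # 2) the kernel config, wherever it is exposed
    { zcat /proc/config.gz 2>/dev/null
      cat "/boot/config-$(uname -r)" 2>/dev/null
    } | grep -q '^CONFIG_PREEMPT_RT.*=y'
}
has_rt_preempt && echo "RT-Preempt kernel" || echo "no RT patch detected"
```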
| Checking linux Kernel for RT-Preempt Patch |
1,639,359,725,000 |
I'm doing some capacity planning and I was wondering if there is a formula that I could use to predict (from a memory standpoint) how many TCP connections I could handle on my server. At the moment, I'm only concerned about memory requirements.
Some variables that I think will show up in the formula are:
sysctl's net.ipv4.tcp_wmem (min or default value)
sysctl's net.ipv4.tcp_rmem (min or default value)
the size of the sock, sock_common, proto and other per-socket data structures.
I'm not sure how much of the tcp_wmem and tcp_rmem are actually allocated and when that memory is allocated. At socket creation time? On demand?
|
If you can modify the source code, then use rusage data to measure the RSS and record how many TCP connections are in play at the time of the measurement.
If source code cannot be changed, then use the RSS of the network app as reported by top or ps and get the number of network connections at the time of measurement from lsof -i.
Collect this data every minute while your application moves through peak load, and from that data you can come up with a formula that relates number of connections to RAM usage.
Of course there are a lot more things that you could measure, in particular you might want to measure kernel RAM usage, although tcp data structures should be predictable and calculable in advance. In any case, have a look at this question https://serverfault.com/questions/10852/what-limits-the-maximum-number-of-connections-on-a-linux-server for more information on TCP tuning and how to get a clear view of what is happening in the network stack.
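A minimal per-interval sampler might look like this (a sketch; it reads the socket count from the /proc fd table instead of lsof so no extra tools are needed, and $app_pid is whatever your network application's PID is):

```shell
# Print one line: timestamp, RSS in KiB, and number of open sockets.
sample() {
    local pid=$1 rss_kb conns
    rss_kb=$(ps -o rss= -p "$pid" | tr -d ' ')
    # Each socket fd shows up as a 'socket:[inode]' symlink in /proc/PID/fd.
    conns=$(ls -l "/proc/$pid/fd" 2>/dev/null | grep -c 'socket:' || true)
    echo "$(date +%s) rss_kb=$rss_kb sockets=$conns"
}
sample "$$"
# Continuous collection: while sleep 60; do sample "$app_pid" >> conn-mem.log; done
```

Plotting sockets against rss_kb from the log gives you the per-connection memory slope for the formula.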
| What is the formula to determine how much memory a socket consumes under Linux? |
1,639,359,725,000 |
I'd like to have all my modules built-in, but this fails with iwlagn:
iwlagn 0000:03:00.0: request for firmware file 'iwlwifi-6000-4.ucode' failed.
iwlagn 0000:03:00.0: no suitable firmware found!
The microcode file exists in /lib/firmware and the whole thing works just fine if I compile iwlagn as module. I have no idea where it's looking for the file or what's wrong - any ideas?
|
Have a look at the CONFIG_FIRMWARE_IN_KERNEL, CONFIG_EXTRA_FIRMWARE, and CONFIG_EXTRA_FIRMWARE_DIR configuration options (found at Device Drivers -> Generic Driver Options).
The first option will enable firmware being built into the kernel, the second one should contain the firmware filename (or a space-separated list of names), and the third where to look for the firmware.
So in your example, you would set those options to:
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE='iwlwifi-6000-4.ucode'
CONFIG_EXTRA_FIRMWARE_DIR='/lib/firmware'
A word of advice: Compiling all modules into the kernel is not a good idea. I think I understand your ambition because at some point I was also desperate to do it. The problem with such an approach is that you cannot unload the module once it is built-in - and, unfortunately, especially the wireless drivers tend to be buggy, which leads to a necessity of re-loading their modules. Also, in some cases, a module version of a recent driver will just not work.
| Custom kernel: fails to load firmware when module built-in |
1,639,359,725,000 |
I'm setting my computer (running Debian Buster) up for Hurricane Electric's IPv6 tunnel broker. They provide instructions for several configuration methods, but here's for iproute2, which I've been using for testing purposes:
ip tunnel add he-ipv6 mode sit remote 216.66.22.2 local <local-IP-addr> ttl 255
ip link set he-ipv6 up
ip addr add <IPv6-addr>/64 dev he-ipv6
ip route add ::/0 dev he-ipv6
ip -f inet6 addr
When I do this, a mysterious sit0 interface also appears. Clearly, it's something related to 6in4 tunneling, but I can't find much more about it except that it's somehow special and it exists whenever the sit module is loaded. Out of curiosity, I tried to configure it for the tunnel broker instead of creating a new interface, but it didn't seem to be able to do that.
What is this device?
|
Based on information I've found in this blog post from Red Hat, I think I understand the purpose. (I'm not a Linux networking expert, so if anyone more knowledgeable sees any mistakes, feel free to correct me in the comments or post your own answer.)
First, some background:
A sit device is a type of virtual network device that takes your IPv6 traffic, encapsulates/decapsulates it in IPv4 packets, and sends/receives it over the IPv4 Internet to another host (e.g. your IPv6 tunnel broker). The outer packets have a special protocol number: 41. (This is like a port number, but at the IP layer instead of the TCP/UDP layer.)
Sending is fairly straightforward: The sit device has a local and remote IPv4 address associated with it, and these become the source and destination address in the outer IPv4 header. (The source address can also be "any" or 0.0.0.0, in which case Linux will pick a source address for you, based on the IPv4 interface the encapsulated packet is sent from.)
Receiving is only slightly more complicated: When a protocol 41 packet arrives, Linux needs to determine which sit device it belongs to. It does this by looking at the packet's outer IPv4 source and destination addresses. It compares these against the local and remote addresses for each sit device, and whichever one matches is the device that gets the packet and decapsulates it.
How sit0 comes into play:
You might wonder what happens when Linux receives a protocol 41 packet that doesn't match any of the sit devices (e.g. it came from some random address). In this case, it gets delivered to sit0.
sit0 is a special "fallback" device set up by the sit kernel module to handle just these packets. It has local and remote addresses set to 0.0.0.0 (i.e. "any") and it's not attached to any particular physical device, so it will accept any protocol 41 packet that's not already handled by some other sit device.
Is this useful? Maybe in specific circumstances, but I imagine in the vast majority of cases, you would want to drop such packets. I'm not sure why it's not left up to the administrator whether such a catch-all device should be created. Perhaps there is some historical reason.
| What is this sit0 device? |
1,639,359,725,000 |
Almost every page I've found is about automatically starting Xorg after login without explanation, take ~/.bash_profile for example:
if [[ ! $DISPLAY && $XDG_VTNR -eq 1 ]]; then
exec xinit
fi
I suppose $XDG_VTNR could be a variable for obtaining the current TTY number; however, there is already a command called tty, which can serve the same purpose.
My questions:
What is $XDG_VTNR? Where and when is it being set?
Where can I find the official documentation about this variable?
tty is a built-in command while $XDG_VTNR is provided by Xorg, why people choose to use $XDG_VTNR instead of built-in tty?
|
What is $XDG_VTNR? Where and when is it being set?
It's set by the pam_systemd PAM module, and is only set on machines which are using systemd, which means that you should not rely on it in your scripts, unless you want to make them depend on systemd.
On systems which are using systemd, $XDG_VTNR will be set both in graphical (by lightdm, gdm, etc) and in text-mode sessions (by /bin/login).
Where can I find the official documentation about this variable?
In the pam_systemd(8) manpage.
tty is a built-in command while $XDG_VTNR is provided by Xorg, why people choose to use $XDG_VTNR instead of built-in tty?
1) tty is a standalone program, not a built-in, and $XDG_VTNR is not provided by Xorg.
2) Because they're completely different things. As clearly stated in its manpage, tty(1) will tell you the name of the terminal connected to its standard input, not the name of the virtual terminal your GUI session or such may be running on[1]. Consider this:
$ script -q /dev/null
$ tty
/dev/pts/5
$ script -q /dev/null
$ tty
/dev/pts/6
$ tty </dev/zero
not a tty
[1] for which XDG_VTNR isn't a reliable indicator either.
| What is the environment variable XDG_VTNR? |
1,639,359,725,000 |
Currently working on a project where I'm dealing with an arbitrary group of disks in multiple systems. I've written a suite of software to burn-in these disks. Part of that process was to format the disks. While testing my software, I realized that if at some point during formatting the disks, the process stops/dies, and I want to restart the process, I really don't want to reformat all of the disks in the set, which have already successfully formatted.
I'm running this software from a ramfs with no disks mounted; none of the disks I am working on ever get mounted and they will not be used by my software for anything other than testing, so anything goes on these bad boys. There's no data about which to be concerned.
EDIT:
No, I'm not partitioning.
Yes, ext2 fs.
This is the command I'm using to format:
(/sbin/mke2fs -q -O sparse_super,large_file -m 0 -T largefile -T xfs -FF $drive >> /tmp/mke2fs_drive.log 2>&1 & echo $? > $status_file &)
SOLUTION:
Thanks to Jan's suggestion below:
# lsblk -f /dev/<drv>
I concocted the following shell function, which works as expected.
SOURCE
is_formatted()
{
drive=$1
fs_type=$2
if [[ ! -z $drive ]]
then
if [[ ! -z $fs_type ]]
then
current_fs=$(lsblk -no KNAME,FSTYPE $drive)
if [[ $(echo $current_fs | wc -w) == 1 ]]
then
echo "[INFO] '$drive' is not formatted. Formatting."
return 0
else
current_fs=$(echo $current_fs | awk '{print $2}')
if [[ $current_fs == $fs_type ]]
then
echo "[INFO] '$drive' is formatted with correct fs type. Moving on."
return 1
else
echo "[WARN] '$drive' is formatted, but with wrong fs type '$current_fs'. Formatting."
return 0
fi
fi
else
echo "[WARN] is_formatted() was called without specifying fs_type. Formatting."
return 0
fi
else
echo "[FATAL] is_formatted() was called without specifying a drive. Quitting."
return -1
fi
}
DATA
sdca ext2 46b669fa-0c78-4b37-8fc5-a26368924b8c
sdce ext2 1a375f80-a08c-4889-b759-363841b615b1
sdck ext2 f4f43e8c-a5c6-495f-a731-2fcd6eb6683f
sdcn
sdby ext2 cf276cce-56b1-4027-a795-62ef62d761fa
sdcd ext2 42fdccb8-e9bc-441e-a43a-0b0f8d409c71
sdci ext2 d6e7dc60-286d-41e2-9e1b-a64d42072253
sdbw ext2 c3986491-b83f-4001-a3bd-439feb769d6a
sdch ext2 3e7dba24-e3ec-471a-9fae-3fee91f988bd
sdcq
sdcf ext2 8fd2a6fd-d1ae-449b-ad48-b2f9df997e5f
sdcs
sdco
sdcw ext2 27bf220e-6cb3-4953-bee4-aff27c491721
sdcp ext2 133d9474-e696-49a7-9deb-78d79c246844
sdcx
sdct
sdcu
sdcy
sdcr
sdcv
sdde
sddc ext2 0b22bcf1-97ea-4d97-9ab5-c14a33c71e5c
sddi ext2 3d95fbcb-c669-4eda-8b57-387518ca0b81
sddj
sddb
sdda ext2 204bd088-7c48-4d61-8297-256e94feb264
sdcz
sddk ext2 ed5c8bd8-5168-487f-8fee-4b7c671ef2cb
sddl
sddn
sdds ext2 647d2dea-f71d-4e87-bbe5-30f6424b36c9
sddf ext2 47128162-bcb7-4eab-802d-221e8eb36074
sddo
sddh ext2 b7f41e1a-216d-4580-97e6-f2df917754a8
sddg ext2 39b838e0-f0ae-447c-8876-2d36f9099568
Which yielded:
[INFO] '/dev/sdca' is formatted with correct fs type. Moving on.
[INFO] '/dev/sdce' is formatted with correct fs type. Moving on.
[INFO] '/dev/sdck' is formatted with correct fs type. Moving on.
[INFO] '/dev/sdcn' is not formatted. Formatting.
[INFO] '/dev/sdby' is formatted with correct fs type. Moving on.
[INFO] '/dev/sdcd' is formatted with correct fs type. Moving on.
[INFO] '/dev/sdci' is formatted with correct fs type. Moving on.
[INFO] '/dev/sdbw' is formatted with correct fs type. Moving on.
[INFO] '/dev/sdch' is formatted with correct fs type. Moving on.
[INFO] '/dev/sdcq' is not formatted. Formatting.
[INFO] '/dev/sdcf' is formatted with correct fs type. Moving on.
[INFO] '/dev/sdcs' is not formatted. Formatting.
[INFO] '/dev/sdco' is not formatted. Formatting.
[INFO] '/dev/sdcw' is formatted with correct fs type. Moving on.
[INFO] '/dev/sdcp' is formatted with correct fs type. Moving on.
[INFO] '/dev/sdcx' is not formatted. Formatting.
[INFO] '/dev/sdct' is not formatted. Formatting.
[INFO] '/dev/sdcu' is not formatted. Formatting.
[INFO] '/dev/sdcy' is not formatted. Formatting.
[INFO] '/dev/sdcr' is not formatted. Formatting.
[INFO] '/dev/sdcv' is not formatted. Formatting.
[INFO] '/dev/sdde' is not formatted. Formatting.
[INFO] '/dev/sddc' is formatted with correct fs type. Moving on.
[INFO] '/dev/sddi' is formatted with correct fs type. Moving on.
[INFO] '/dev/sddj' is not formatted. Formatting.
[INFO] '/dev/sddb' is not formatted. Formatting.
[INFO] '/dev/sdda' is formatted with correct fs type. Moving on.
[INFO] '/dev/sdcz' is not formatted. Formatting.
[INFO] '/dev/sddk' is formatted with correct fs type. Moving on.
[INFO] '/dev/sddl' is not formatted. Formatting.
[INFO] '/dev/sddn' is not formatted. Formatting.
[INFO] '/dev/sdds' is formatted with correct fs type. Moving on.
[INFO] '/dev/sddf' is formatted with correct fs type. Moving on.
[INFO] '/dev/sddo' is not formatted. Formatting.
[INFO] '/dev/sddh' is formatted with correct fs type. Moving on.
[INFO] '/dev/sddg' is formatted with correct fs type. Moving on.
Do note that the magic potion was extending Jan's suggestion to simply output what I cared about: lsblk -no KNAME,FSTYPE $drive
|
Depending on how you access the drives, you could use blkid -o list (deprecated) on them and then parse the output.
The command outputs, among other things, a fs_type label column, that shows the filesystem.
blkid -o list has been superseded by lsblk -f.
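As a hedged sketch of how such a check might be scripted, here is a small function built around the lsblk output above; the expected "ext2" fs type and the device names are only examples taken from the log, not fixed requirements.

```shell
# Sketch of the per-drive check; "ext2" matches the example output above,
# and the device names are illustrative.
check_fs() {
    fstype="$1"; drive="$2"
    if [ -z "$fstype" ]; then
        echo "[INFO] '$drive' is not formatted. Formatting."
    elif [ "$fstype" = "ext2" ]; then
        echo "[INFO] '$drive' is formatted with correct fs type. Moving on."
    else
        echo "[WARN] '$drive' has unexpected fs type '$fstype'."
    fi
}

# On a real system the fs type would come from lsblk, e.g.:
#   check_fs "$(lsblk -no FSTYPE /dev/sdca | head -n1)" /dev/sdca
check_fs ext2 /dev/sdca
check_fs ""   /dev/sdcn
```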
| Method to test if disks in system are formatted |
1,639,359,725,000 |
I am trying to do something like
ls -t | head -n 3 | xargs -I {} tar -cf t.tar {}
to archive the 3 last modified files but it ends up running the tar command separately for each of the files and at the end I am left with one tar file containing the last of the 3 files (in whatever order). I know I am not using 'xargs' correctly, but searching did not help; I find examples that do not work either. Even the simpler command
ls | xargs -I {} tar -cf t.tar {}
ends up with a tar file that contains only one of the files in that directory.
|
ls -t | head -n 3 | xargs tar -cf t.tar
Works for me. With -I, xargs runs the command once per input item, and each tar -c invocation overwrites t.tar, so only the last file survives; without -I, all the names are appended to a single tar command. Is there a reason you need the -I flag set?
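A small demonstration of the difference, using throwaway files in a scratch directory (the /tmp path and file names are arbitrary):

```shell
# Hypothetical throwaway files in a scratch directory:
mkdir -p /tmp/xargs-demo && cd /tmp/xargs-demo
touch f1 f2 f3

# With -I, xargs runs tar once per name; each -c overwrites t.tar:
printf '%s\n' f1 f2 f3 | xargs -I {} tar -cf t.tar {}
tar -tf t.tar    # lists only f3

# Without -I, all names go into a single tar invocation:
printf '%s\n' f1 f2 f3 | xargs tar -cf t.tar
tar -tf t.tar    # lists f1 f2 f3
```

If the file list could ever be long enough for xargs to split it into several tar invocations (each overwriting the archive again), GNU tar's `-T -` option, which reads file names from stdin, avoids the problem entirely: `ls -t | head -n 3 | tar -cf t.tar -T -`.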
| passing variables from 'ls' to 'tar' via 'xargs' |
1,639,359,725,000 |
If I understand correctly, the Linux Kernel is licensed under the GPL, which means that if anyone bases anything on it, they have to also license the entire derivative work under the GPL, making anyone free to modify and/or redistribute their derivative work.
For example, all Android releases are based on the LK. Does that not mean that the whole release and all its components also have to be released under the GPL?
For example, most Android releases ship with proprietary components. Does that not violate the GPL? Doesn't the whole derivative work need to be released under the GPL?
With Ubuntu, for example, you have to download MPEG codecs post-installation. I assume that this is because MPEG is proprietary, and that MPEG's license is therefore incompatible with the GPL, so they can't be included in the same release?
How do Android releases get around this?
|
First, you must be clear that Google's Android code and Linux kernel code are separate. Android itself is licensed under Apache License 2.0, which is permissive, and in Wikipedia's words:
The Apache License is permissive in that it does not require a derivative work of the software, or modifications to the original, to be distributed using the same license.
As such, none of the vendors' Android modifications are normally made available.
Second, in case of Linux kernel, being licensed under GPLv2, the code is released to the public, either buried somewhere within the device (less likely) or available to download on some obscure page in the vendor's website dedicated to open-source codes in their products.
However, there is a major caveat to the Linux kernel code made publicly available—proprietary drivers and kernel modules of similar functionality. Linux kernel can load binary blobs distributed under proprietary license, and the source code of such blobs necessary to run your device is naturally not distributed. Bottom line being, even if you do manage to get your hands on the kernel source specific to your device, you won't necessarily be able to use it to compile your own functioning Linux-based OS.
Permissive, in context
What exactly do we mean by a permissive license? From your comments, I think you've taken it to mean the ability to run other permissively or proprietarily licensed software. But that is wrong.
Permissive in this context means how much the license permits you to do as you want with the source code.
GPL is not permissive in the sense that you're legally bound to publish the source of any modification you make to GPL-licensed code, if you distribute it. It does not permit you to take everybody's contribution, make changes to it (regardless of making it better or worse) and hide it away. If you're going to distribute the binary, you have to distribute the source code as well. Since it does not permit you to keep distributed changes private, it is therefore not permissive.
Apache License and the BSD licenses are examples of permissive licenses. In contrast to the strictly non-permissive GPL, they let you make any modification to code licensed under them and keep it to yourself; in other words, they are permissive. That is to say, you can take the Android code, and even if you change it enough to make it unrecognizable, you're free to keep it to yourself. And that is exactly what the Android device vendors do.
| How can semi-proprietary software be based on the Linux Kernel? |
1,639,359,725,000 |
While reading the man page for fdisk I came across this interesting text:
There are several *fdisk programs around. Each has its problems and strengths. Try them in the order
cfdisk, fdisk, sfdisk. (Indeed, cfdisk is a beautiful program that has strict requirements on the parti‐
tion tables it accepts, and produces high quality partition tables. Use it if you can. fdisk is a buggy
program that does fuzzy things - usually it happens to produce reasonable results. Its single advantage is
that it has some support for BSD disk labels and other non-DOS partition tables. Avoid it if you can.
sfdisk is for hackers only -- the user interface is terrible, but it is more correct than fdisk and more
powerful than both fdisk and cfdisk. Moreover, it can be used noninteractively.)
I notice that the option formats are not the same for the two applications:
- melancholy():/$ sudo fdisk -l
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00036f1b
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 39063551 19530752 83 Linux
/dev/sda2 39063552 976771071 468853760 83 Linux
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00085251
Device Boot Start End Blocks Id System
/dev/sdc1 2048 15624191 7811072 82 Linux swap / Solaris
/dev/sdc2 * 15624192 64452607 24414208 83 Linux
/dev/sdc3 64454654 1953523711 944534529 5 Extended
Partition 3 does not start on physical sector boundary.
/dev/sdc5 64454656 1953523711 944534528 83 Linux
- melancholy():/$ sudo cfdisk -l
cfdisk: invalid option -- 'l'
Usage:
Print version:
cfdisk -v
Print partition table:
cfdisk -P {r|s|t} [options] device
Interactive use:
cfdisk [options] device
Options:
-a: Use arrow instead of highlighting;
-z: Start with a zero partition table, instead of reading the pt from disk;
-c C -h H -s S: Override the kernel's idea of the number of cylinders,
the number of heads and the number of sectors/track.
Copyright (C) 1994-2002 Kevin E. Martin & aeb
Is there real merit in switching to using cfdisk instead of fdisk, or is the man page note a historical observation that is no longer valid? Since fdisk works fine for listing the disks info (-l flag) should I use cfdisk only for altering the partitions and partition tables? What are the real strengths and weaknesses of each application?
|
cfdisk is a full-screen, menu-driven (curses-based) application designed to be more friendly to the novice. If you are comfortable with fdisk, then by all means, use it. If you prefer a bit more hand-holding and fewer (advanced) options, use cfdisk. Another good alternative is GNU parted.
| cfdisk or fdisk? |
1,639,359,725,000 |
I keep my digital music and digital photos in directories in a Windows partition, mounted at /media/win_c on my dual-boot box.
I'd like to include those directories—but only those directories—in the locate database.
However, as far as I can make out, updatedb.conf only offers options to exclude directories, not add them.
Of course, I could remove /media from PRUNEPATHS, and then add a whole bunch of subdirectories (/media/win_c/Drivers, /media/win_c/ProgramData...) but this seems a very clunky way of doing it—surely there's a more elegant solution?
(I tried just creating soft links to the Windows directories from an indexed linux partition, but that doesn't seem to help.)
|
There's no option for that in updatedb.conf. You'll have to arrange to pass options to updatedb manually.
With updatedb from GNU findutils, pass --localpaths.
updatedb --localpaths '/ /media/win_c/somewhere/Music /media/win_c/somewhere/Photos'
With updatedb from mlocate, there doesn't appear to be a way to specify multiple roots or exclude a directory from pruning, so I think you're stuck with one database per directory. Set the environment variable LOCATE_PATH to the list of databases:
updatedb --output ~/.media.mlocate.db --database-root /media/win_c/somewhere --prunepaths '/media/win_c/somewhere/Videos'
export LOCATE_PATH="$LOCATE_PATH:$HOME/.media.mlocate.db"
| How to add specific directories to "updatedb" (locate) search path? |
1,639,359,725,000 |
Here's something that kept me wondering for a while:
[15:40:50][/tmp]$ mkdir a
[15:40:52][/tmp]$ strace rmdir a
execve("/usr/bin/rmdir", ["rmdir", "a"], [/* 78 vars */]) = 0
brk(0) = 0x11bb000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff3772c3000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=245801, ...}) = 0
mmap(NULL, 245801, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7ff377286000
close(3) = 0
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p\36\3428<\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=2100672, ...}) = 0
mmap(0x3c38e00000, 3924576, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3c38e00000
mprotect(0x3c38fb4000, 2097152, PROT_NONE) = 0
mmap(0x3c391b4000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b4000) = 0x3c391b4000
mmap(0x3c391ba000, 16992, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x3c391ba000
close(3) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff377285000
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff377283000
arch_prctl(ARCH_SET_FS, 0x7ff377283740) = 0
mprotect(0x609000, 4096, PROT_READ) = 0
mprotect(0x3c391b4000, 16384, PROT_READ) = 0
mprotect(0x3c38c1f000, 4096, PROT_READ) = 0
munmap(0x7ff377286000, 245801) = 0
brk(0) = 0x11bb000
brk(0x11dc000) = 0x11dc000
brk(0) = 0x11dc000
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=106070960, ...}) = 0
mmap(NULL, 106070960, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7ff370d5a000
close(3) = 0
rmdir("a") = 0
close(1) = 0
close(2) = 0
exit_group(0) = ?
+++ exited with 0 +++
[15:40:55][/tmp]$ touch a
[15:41:16][/tmp]$ strace rm a
execve("/usr/bin/rm", ["rm", "a"], [/* 78 vars */]) = 0
brk(0) = 0xfa8000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3b2388a000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=245801, ...}) = 0
mmap(NULL, 245801, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f3b2384d000
close(3) = 0
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p\36\3428<\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=2100672, ...}) = 0
mmap(0x3c38e00000, 3924576, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3c38e00000
mprotect(0x3c38fb4000, 2097152, PROT_NONE) = 0
mmap(0x3c391b4000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b4000) = 0x3c391b4000
mmap(0x3c391ba000, 16992, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x3c391ba000
close(3) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3b2384c000
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3b2384a000
arch_prctl(ARCH_SET_FS, 0x7f3b2384a740) = 0
mprotect(0x60d000, 4096, PROT_READ) = 0
mprotect(0x3c391b4000, 16384, PROT_READ) = 0
mprotect(0x3c38c1f000, 4096, PROT_READ) = 0
munmap(0x7f3b2384d000, 245801) = 0
brk(0) = 0xfa8000
brk(0xfc9000) = 0xfc9000
brk(0) = 0xfc9000
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=106070960, ...}) = 0
mmap(NULL, 106070960, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f3b1d321000
close(3) = 0
ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
newfstatat(AT_FDCWD, "a", {st_mode=S_IFREG|0664, st_size=0, ...}, AT_SYMLINK_NOFOLLOW) = 0
geteuid() = 1000
newfstatat(AT_FDCWD, "a", {st_mode=S_IFREG|0664, st_size=0, ...}, AT_SYMLINK_NOFOLLOW) = 0
faccessat(AT_FDCWD, "a", W_OK) = 0
unlinkat(AT_FDCWD, "a", 0) = 0
lseek(0, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek)
close(0) = 0
close(1) = 0
close(2) = 0
exit_group(0) = ?
+++ exited with 0 +++
Why are there separate system calls for removing a directory and files? Why would these two operations be semantically distinct?
|
Directories are special in the sense that within a directory you can have references to several files and directories, so if you remove the parent directory, all those files lose the reference point from which they can be accessed; the same goes for processes. For such cases, rmdir() has checks that unlink() does not:
If the directory is not empty. A non-empty directory can't be removed until its contents are unlinked/removed.
ENOTEMPTY
pathname contains entries other than . and .. ; or, pathname has
.. as its final component. POSIX.1-2001 also allows EEXIST for
this condition.
If the directory is in use. If a process loses its current directory, it could lead to problems and undefined behavior. It is better to prevent that.
EBUSY pathname is currently in use by the system or some process that
prevents its removal. On Linux this means pathname is currently
used as a mount point or is the root directory of the calling
process.
In the case of unlink() these checks don't exist. In fact, you can delete the name of a file with unlink(), and a process that still has it open can modify it without problems. The file exists as long as an open file descriptor to it exists; it is just inaccessible to new processes (unless you know where to search). This is part of the rainbow-colored-hands magic of the *NIX file systems.
Now, there's unlinkat(), which behaves as either unlink() or rmdir() depending on whether the AT_REMOVEDIR flag is passed, which is what you would expect.
| Why are rmdir and unlink two separate system calls? |
1,639,359,725,000 |
Can someone give me a good reference on how to achieve this, or just tell me how its done? Google isn't really helping me here, since it always tries to give me recommendations on touchpad :/
|
As long as your touchscreen is detected as a boring input device, you could do this with xinput. This tool allows you to define new master pointers (the virtual pointer which resembles one mouse pointer) and to detach and attach slave pointers (the actual hardware devices) from and to it.
So all you need to do is
create a new master pointer
reattach your touchscreen to this new master pointer
This is done similar to this:
Create the new master pointer:
$ xinput create-master touchy
This creates a new master keyboard/pointer pair, where the keyboard is called »touch keyboard« and the pointer »touchy pointer«:
$ xinput
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ SynPS/2 Synaptics TouchPad id=10 [slave pointer (2)]
⎜ ↳ TPPS/2 IBM TrackPoint id=11 [slave pointer (2)]
⎜ ↳ My Cool™ Touchscreen id=14 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]
↳ Sleep Button id=8 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=9 [slave keyboard (3)]
↳ ThinkPad Extra Buttons id=12 [slave keyboard (3)]
↳ HID 046a:0011 id=13 [slave keyboard (3)]
⎡ touchy pointer id=15 [master pointer (16)]
⎜ ↳ touchy XTEST pointer id=17 [slave pointer (15)]
⎣ touchy keyboard id=16 [master keyboard (15)]
↳ touchy XTEST keyboard id=18 [slave keyboard (16)]
Reattach your touch screen slave pointer to the new master
In this example I'll assume »My Cool™ Touchscreen« to be the device to use (id=14):
$ xinput reattach 14 15
This will result in the following:
$ xinput
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ SynPS/2 Synaptics TouchPad id=10 [slave pointer (2)]
⎜ ↳ TPPS/2 IBM TrackPoint id=11 [slave pointer (15)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]
↳ Sleep Button id=8 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=9 [slave keyboard (3)]
↳ ThinkPad Extra Buttons id=12 [slave keyboard (3)]
↳ HID 046a:0011 id=13 [slave keyboard (3)]
⎡ touchy pointer id=15 [master pointer (16)]
⎜ ↳ My Cool™ Touchscreen id=14 [slave pointer (2)]
⎜ ↳ touchy XTEST pointer id=17 [slave pointer (15)]
⎣ touchy keyboard id=16 [master keyboard (15)]
↳ touchy XTEST keyboard id=18 [slave keyboard (16)]
Now your touchscreen should act like an individual pointing device.
Edit: To get rid of the second mouse pointer, which is rather superfluous for a touchscreen, you could use this tool, which utilizes the XInput2 extension to change the pointers individually.
| Touchscreen and mouse as separate inputs? |
1,639,359,725,000 |
I'm trying to see if id -r will print out the UID or username of the user who logged into the system despite any su's or sudo's. I'm interested in doing this so I can keep people a little more accountable and to tailor script functioning accordingly (i.e: they issue a sudo on a script and it pulls information from the logged in user's home directory).
I realize sudo sets SUDO_USER but I don't want to rely on this because it's a variable that can be modified by the user, and it just has the username of the user who issued the most recent sudo (i.e: sudo -i ; sudo -iu randomUser ; echo $SUDO_USER prints out "root" instead of the actual user).
Nothing in the man pages or that I can find online seems to indicate what the proper use of this command is and the obvious permutations aren't working:
[root@ditirlns03 ~]# id -r
id: cannot print only names or real IDs in default format
[root@ditirlns03 ~]# id -r jadavis6
id: cannot print only names or real IDs in default format
[root@ditirlns03 ~]# id -r root
id: cannot print only names or real IDs in default format
At this point, I'm still not sure id -r is going to print out what I want, mostly because I can't figure out how to get it to print out anything at all.
|
-r must be used in conjunction with another option. For example:
$ id -Gr
1000 4 24 27 30 46 109 124
Quoting the man page:
-r, --real
print the real ID instead of the effective ID, with -ugG
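For example (illustrative invocations; note that su and sudo change the real UID as well, so id -ur under sudo still prints 0 and does not recover the originally logged-in user; logname or who am i consult the login records instead):

```shell
id -u      # effective UID
id -ur     # real UID (-r modifying -u)
id -urn    # real UID printed as a name (-r with -u and -n)
id -Gr     # real group IDs (-r modifying -G)
```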
| Proper syntax for "id -r" command |
1,639,359,725,000 |
I log into a linux server (installed as a virtual machine) using a graphical ssh client (securessh). This server runs a tomcat5.5 server where nexus is installed.
When I type commands or delete/copy small files (around 5-6 MB), the shell takes a long time to respond (from 10 seconds to almost a minute). I have tried to run top to see if any processes use a lot of memory/cpu time:
top - 13:34:41 up 86 days, 16:04, 1 user, load average: 2.13, 0.99, 1.94
Tasks: 63 total, 1 running, 62 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.0%us, 1.5%sy, 0.0%ni, 96.2%id, 0.2%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 3896416k total, 3097824k used, 798592k free, 167180k buffers
Swap: 915664k total, 84k used, 915580k free, 2409236k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20436 tomcat55 20 0 359m 217m 13m S 18 5.7 2713:04 jsvc
Only the tomcat55 user uses a significant amount of resources. Based on the above output it seems that this user only uses 5.7% of the memory and 18% of the CPU. Am I misreading top's output? Why is the machine performing so poorly if the CPU and memory are so underutilized?
Edit: I have now tried to run atop and get:
ATOP - repository 2011/09/20 16:08:48 10 seconds elapsed
PRC | sys 0.17s | user 0.03s | #proc 64 | #zombie 0 | #exit 4 |
CPU | sys 2% | user 1% | irq 0% | idle 198% | wait 0% |
cpu | sys 1% | user 1% | irq 0% | idle 98% | cpu001 w 0% |
cpu | sys 0% | user 0% | irq 0% | idle 99% | cpu000 w 0% |
CPL | avg1 0.05 | avg5 0.92 | avg15 1.29 | csw 976 | intr 61 |
MEM | tot 3.7G | free 656.7M | cache 2.4G | buff 170.9M | slab 241.3M |
SWP | tot 894.2M | free 894.1M | | vmcom 781.9M | vmlim 2.7G |
DSK | sda | busy 0% | read 0 | write 9 | avio 0 ms |
NET | transport | tcpi 18 | tcpo 26 | udpi 0 | udpo 0 |
NET | network | ipi 22 | ipo 26 | ipfrw 0 | deliv 22 |
NET | eth1 0% | pcki 34 | pcko 26 | si 2 Kbps | so 11 Kbps |
PID SYSCPU USRCPU VGROW RGROW RDDSK WRDSK ST EXC S CPU CMD 1/1
4687 0.06s 0.02s 0K 0K - - NE 0 E 1% <lsb_release>
4689 0.04s 0.01s 0K 0K - - NE 0 E 1% <apt-cache>
4684 0.04s 0.00s 132K 132K 0K 0K -- - R 0% atop
4673 0.02s 0.00s 0K 0K 0K 0K -- - S 0% sshd
4152 0.01s 0.00s 0K 0K 0K 0K -- - S 0% vmware-guestd
2302 0.00s 0.00s 0K 0K 0K 4K -- - S 0% kjournald
4688 0.00s 0.00s 0K 0K - - NE 0 E 0% <sh>
4686 0.00s 0.00s 0K 0K - - NE 0 E 0% <sh>
If I understand this correctly there are no 'zombies', but still they take up most of the cpu time (it jumps from 199% to 200%). Is this expected behavior?
|
In addition to iostat you should also consider atop ( http://www.atoptool.nl/ ) to identify non-cpu bottlenecks.
| How to determine why a machine is running slowly? |
1,639,359,725,000 |
I compiled a small C program (2 lines of codes) with gcc to try to understand ELF file format.
Doing a readelf -h on the object file, I have in the header :
OS/ABI: UNIX - System V
I am using Fedora, so why isn't it Linux instead ?
Edit: I compiled
int main(){
int x = 0;
x++;
}
with gcc -o main.o -c main.c. My gcc version is
gcc (GCC) 4.5.1 20100924 (Red Hat 4.5.1-4)
|
There are few differences between ELF executables on different platforms. “UNIX - System V” is the common ground; System V is where the ELF format came from. The corresponding numerical value is 0. This value indicates that the executable doesn't use any OS-specific extension. Debian GNU/Linux, at least, configures GCC/binutils to generate executables with this field set to 0 by default.
| Why does readelf show "System V" as my OS instead of Linux? |
1,639,359,725,000 |
I have an Intel wireless card driven by iwlwifi, and I can see the following message in dmesg:
iwlwifi 0000:03:00.0: loaded firmware version 17.168.5.3 build 42301
Given that I know which blob is loaded, how I can find out the version of this blob (.ucode file)?
If you look at the below where the ucode is loaded, it doesn't tell me the version information just that a blob was loaded. But I know Intel versions these.
$ sudo dmesg | grep ucode
[ 26.132487] iwlwifi 0000:03:00.0: firmware: direct-loading firmware iwlwifi-6000g2a-6.ucode
[40428.475015] (NULL device *): firmware: direct-loading firmware iwlwifi-6000g2a-6.ucode
|
The iwlwifi driver loads the microcode file for your wifi adapter at startup. If you want to know the version of the blobs you have on your machine, try Andrew Brampton's script. Run:
## Note the firmware may be stored in `/usr/lib`
./ucode.py /lib/firmware/iwlwifi-*.ucode
And compare the output to your journal (dmesg output).
Note that the script works with python2.
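If you only want the version string without a full header parser, a strings-style scan is usually enough, since iwlwifi firmware files carry a human-readable description near the start of the file. A hedged Python 3 sketch follows; the firmware path in the comment and the synthetic blob are illustrative, not taken from a real file.

```python
import re

def firmware_strings(data: bytes, min_len: int = 8):
    """Return printable ASCII runs from a firmware blob (a poor man's
    `strings`); on iwlwifi .ucode files the version string typically
    appears near the start of the file."""
    pattern = re.compile(rb"[ -~]{%d,}" % min_len)
    return [m.group().decode("ascii") for m in pattern.finditer(data)]

# Hypothetical usage -- the path is an example, not guaranteed to exist:
# with open("/lib/firmware/iwlwifi-6000g2a-6.ucode", "rb") as f:
#     print(firmware_strings(f.read(256))[:1])

# Synthetic demonstration with a made-up header:
blob = b"\x00\x00\x00\x00IWL\n" + b"iwlwifi-6000g2a-6 fw v17.168.5.3\x00" + b"\xff" * 16
print(firmware_strings(blob))
```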
| How can I parse the microcode (ucode) in iwlwifi to get the version numbers? |
1,639,359,725,000 |
My situation is that from time to time a specific process (in this case, it's Thunderbird) doesn't react to user input for a minute or so. I found out using iotop that during this time, it writes quite a lot to the disk, and now I want to find out which file it writes to, but unfortunately iotop gives only stats per process and not per open file(-descriptor).
I know that I can use lsof to find out which files the process has currently open, but of course Thunderbird has a lot of them open, so this is not that helpful. iostat only shows statistics per device.
The problem occurs only randomly and it might take quite some time for it to appear, so I hope I don't have to strace Thunderbird and wade through long logs to find out which file has the most writes.
|
If you attach strace to the process just when it's hung (you can get the pid and queue the command up in advance, in a spare terminal), it'll show the file descriptor of the blocking write.
Trivial example:
$ mkfifo tmp
$ cat /dev/urandom > tmp &
[1] 636226
# this will block on open until someone opens for reading
$ exec 4<tmp
# now it should be blocked trying to write
$ strace -p 636226
Process 636226 attached - interrupt to quit
write(1, "L!\f\335\330\27\374\360\212\244c\326\0\356j\374`\310C\30Z\362W\307\365Rv\244?o\225N"..., 4096 <unfinished ...>
^C
Process 636226 detached
| How to find out which file is currently written by a process |
1,639,359,725,000 |
I have found that when running into an out-of-memory OOM situation, my linux box UI freezes completely for a very long time.
I have set up the magic SysRq key using echo 1 | tee /proc/sys/kernel/sysrq, and when encountering an OOM->UI-unresponsive situation I was able to press Alt-Sysrq-f, which, as the dmesg log showed, causes the OOM killer to terminate/kill a process and thereby resolve the OOM situation.
My question now is: why does Linux become so unresponsive that the GUI freezes, yet not trigger by itself the same OOM killer that I triggered manually via the Alt-Sysrq-f key combination?
Considering that in the OOM "frozen" situation the system is so unresponsive as to not even allow a timely (< 10 sec) response to hitting Ctrl-Alt-F3 (switch to tty3), I would have to assume the kernel must be aware of its unresponsiveness, yet it still did not invoke the OOM killer by itself. Why?
These are some settings that might have an impact on the described behaviour.
$> mount | grep memory
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
$> cat /sys/fs/cgroup/memory/memory.oom_control
oom_kill_disable 0
under_oom 0
oom_kill 0
which, as I understand it, states that OOM killing in the memory cgroup is neither disabled nor currently active (evidently there must be a good reason to be able to have OOM kill active or disabled, or maybe I cannot interpret the output correctly; the under_oom 0 is also somewhat unclear, still)
|
The reason the OOM killer is not automatically called is that the system, albeit completely slowed down and nearly unresponsive when close to out-of-memory, has not actually reached the out-of-memory situation.
Oversimplified, the almost-full RAM contains 3 types of data:
kernel data, that is essential
pages of essential process data (e.g. any data the process created in ram only)
pages of non-essential process data (e.g. data such as the code of executables, for which there is a copy on disk/ in the filesystem, and which while being currently mapped to memory could be "reread" from disk upon usage)
In a memory-starved situation the Linux kernel (as far as I can tell, via the kswapd0 kernel thread), to prevent data loss and functionality loss, cannot throw away 1. and 2., but is at liberty to at least temporarily evict from RAM those mapped-into-memory file pages belonging to processes that are not currently running.
While this behaviour, which involves disk thrashing (constantly throwing away data and rereading it from disk), can be seen as helpful, since it avoids, or at least postpones, the otherwise necessary removal/killing of a process and the freeing-but-also-losing of its memory, it has a high price: performance.
[load pages from disk to ram with code of executable of process 1]
[ run process 1 ]
[evict pages with binary of process 1 from ram]
[load pages from disk to ram with code of executable of process 2]
[ run process 2 ]
[evict pages with binary of process 2 from ram]
[load pages from disk to ram with code of executable of process 3]
[ run process 3 ]
[evict pages with binary of process 3 from ram]
....
[load pages from disk to ram with code of executable of process 1]
[ run process 1 ]
[evict pages with binary of process 1 from ram]
is clearly IO-expensive, and the system is likely to become unresponsive, even though technically it has not yet completely run out of memory.
From a user perspective, however, the system seems hung/frozen, and the resulting unresponsive UI might not really be preferable over simply killing a process (e.g. a browser tab whose memory usage might very well have been the root cause/culprit to begin with).
This is where, as the question indicated, the Magic SysRq trigger to start the OOM killer manually seems great, as the Magic SysRq key is less impacted by the unresponsiveness of the system.
While there might be use-cases where it is important to preserve processes at all (performance) costs, on a desktop it is likely that users would prefer the OOM killer over a frozen UI. There is a patch that claims to exempt clean, mapped, fs-backed file pages from such eviction, in this answer on Stack Overflow.
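Userspace OOM daemons such as earlyoom build on exactly this observation: they intervene while MemAvailable is still nonzero but thrashing is imminent. Below is a rough Python sketch of that check; the threshold is an arbitrary illustration, and the actual kill step is deliberately left out.

```python
# Sketch of what userspace OOM daemons (e.g. earlyoom) do: watch
# MemAvailable and act before the kernel starts thrashing.
def mem_available_kib(meminfo_text: str) -> int:
    """Parse the MemAvailable line out of /proc/meminfo content (kiB)."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1])
    raise ValueError("no MemAvailable line")

def should_intervene(meminfo_text: str, min_kib: int = 200 * 1024) -> bool:
    """True when available memory drops below the (illustrative) floor."""
    return mem_available_kib(meminfo_text) < min_kib

# On a real system one would read the live file:
# with open("/proc/meminfo") as f:
#     if should_intervene(f.read()):
#         ...  # e.g. signal the largest process, much as SysRq-f would
```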
| Why does linux out-of-memory (OOM) killer not run automatically, but works upon sysrq-key? |
1,639,359,725,000 |
I am reading about mount namespaces and see:
in a mount namespace you can mount and unmount filesystems without it affecting the host filesystem. So you can have a totally different set of devices mounted (usually less).
I am trying to understand linux namespaces, and LXC and such, but I don't quite understand what that statement above means.
What I'm trying to understand is how a container (1) can have files like this:
/foo/a.txt
/foo/bar/b.txt
And another container (2) can have files like this:
/foo/a.txt
/foo/x.txt
/foo/bar/b.txt
/foo/bar/y.txt
Where /foo/a.txt and /foo/bar/b.txt on containers (1) and (2) are the same path, but perhaps they have different content:
# container (1)
cat /foo/a.txt #=> Hello from (1)
# container (2)
cat /foo/a.txt #=> Hello from (2)
This would mean that the files on the physical system (which I don't know anything about) are stored in one way, perhaps scattered all around. But then there is a centralized database of "virtual" files in the operating system, like this:
db:
container1:
foo:
a.txt: Hello from a from (1)
bar:
b.txt: Hello from b from (1)
container2:
foo:
a.txt: Hello from a from (2)
x.txt: Hello from x from (2)
bar:
b.txt: Hello from b from (2)
y.txt: Hello from y from (2)
Then there is another OS database for the physical files which might look like this:
drive1:
dir1:
foo:
a.txt
bar:
b.txt
dir2:
foo:
a.txt
x.txt
bar:
b.txt
y.txt
So when you create a file in the container, you are actually creating 2 new records:
1 for the drive-level physical files map
1 for the container-level virtual files map
This is how I imagine it to work. This is how I can see there being a way to (1) present the user (in an LXC container or cgroup (which I don't know much about)) with what feels like a complete "file system", in which they can (2) create their own fully-customizable directory structure (that may have the same named files/directories/paths as a completely different virtual file system), such that (3) the files from multiple different virtual file systems / containers don't override each other.
Wondering if this is how it works, or if not, how it actually works (or an outline of how it works).
|
mount namespaces differ in the arrangement of mounted filesystems.
This is very flexible, because mounts can be bind mounts of a sub-directory within a filesystem.
# unshare --mount # run a shell in a new mount namespace
# mount --bind /usr/bin/ /mnt/
# ls /mnt/cp
/mnt/cp
# exit # exit the shell, and hence the mount namespace
# ls /mnt/cp
ls: cannot access '/mnt/cp': No such file or directory
You can list your current set of mounts with the findmnt command.
In a full container, the root mount is replaced and you work with an entirely separate tree of mounts. This involves some extra details, such as the pivot_root() system call. You probably don't need to know exactly how to do that. Some details are available here: How to perform chroot with Linux namespaces?
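Apart from findmnt, every process can read the mount table of its own mount namespace from /proc/self/mounts; a quick Python sketch (field layout is the standard fstab-like one):

```python
# Each process sees the mount table of its own mount namespace via
# /proc/self/mounts: one line per mount (device, mountpoint, fstype, ...).
mounts = []
with open("/proc/self/mounts") as f:
    for line in f:
        device, mountpoint, fstype = line.split()[:3]
        mounts.append((device, mountpoint, fstype))

# Every namespace has a root mount somewhere:
print(any(mp == "/" for _dev, mp, _fs in mounts))   # True
```

Run the same snippet inside and outside `unshare --mount` (after some bind mounts) and the two lists will differ, which is exactly the isolation described above.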
| Understanding how mount namespaces work in Linux |
1,639,359,725,000 |
Is there a way to prefer 5GHz band to 2.4GHz band (without disabling 2.4GHz altogether) for a specific wifi network (SSID) without setting BSSID or failing that for all networks?
I am using Mint 18.3 with Cinnamon 3.6.7's network manager.
I have a new Dell XPS 13. Kernel is 4.13.0-26-generic.
lspci | grep -i wireless
3a:00.0 Network controller: Qualcomm Atheros QCA6174 802.11ac Wireless Network Adapter (rev 32)
Background:
The routers are in an office so I cannot alter their setup. I can only do things on my laptop. Also it is the initial connection more than roaming that I am thinking of. When I connect to an SSID in the office at my desk, I would like it to prefer 5GHz to 2.4GHz (because the 5GHz signal is stronger), but right now it connects to 2.4GHz by default. Although I can set the BSSID to a specific 5GHz one which works well at my desk, this is not a good solution because if I do move around the building, I have to unset it again to allow roaming to work. I was advised by IT support to choose the prefer 5GHz option in Windows, but I am running Linux.
For Windows:
On right side of taskbar > Right-click on network icon
Choose Open Network and Sharing Center
Select Change adapter settings
Find your Wi-Fi connection (your connections will differ)
Right-click, Properties > Configure > Preferred Band to 5.2GHz
Click OK
|
You are dealing with limitations of the Wifi protocol.
Several corporate wifi vendors like Meru have non-standard technological mitigation measures in place for dealing with that kind of limitation, because leaving that kind of decision to the client is usually not ideal, and even then they are not free of problems.
With some solutions, for instance OpenWRT or other corporate solutions, you can define threshold points/distances where the AP will force a drop of the client device from 5GHz to 2.4GHz; however, a roaming client that has already dropped to the 2.4GHz band and comes back near the AP is not guaranteed to go back to 5GHz without user intervention. Meru controllers/APs have a measure/hack to try to force client devices back to 5GHz in those situations, and it creates a lot of problems.
As for the Dell XPS 13, it has known wifi problems because the USB-C interface is right next to the wifi. Dell has not done anything about it in 3 or 4 generations of the hardware, besides putting aluminum foil between the two devices in later models and deploying an unfortunate firmware patch that decreases the maximum allowed power of the wifi interface in all models.
At most, on the Linux side, you can monitor the wifi interface every couple of minutes on whitelisted SSIDs and, when the connection is in 2.4GHz mode, force a drop of the connection when the quality is better; it is an ugly hack and I would not go down that road.
Often the solution that offers the most control for the client to select between 2.4GHz and 5GHz is having different SSIDs for the two technologies.
Saying that, at home I have both of them with the same SSID, and on the OpenWRT AP side, a threshold of 7 meters where 5GHz clients are forced to drop to the 2.4GHz band (because of walls).
To specify priorities for the initial scanning between 5GHz and 2.4GHz band, you can add to your /etc/wpa_supplicant.conf file the following directive:
scan_freq=5500 5520 ...
or
freq_list=....
However, I actually have no idea of the adequate frequency values for your infrastructure to put in there. Ask your IT team for advice. I would fudge the normal values, putting their active frequencies first.
| Prefer 5GHz to 2.4Ghz band for specific wifi network (SSID) without using BSSID or for all networks? |
1,639,359,725,000 |
I'm trying to understand how network drivers work under Linux. This Q&A showed that the network device in Linux isn't represented by a device file. It states that network drivers work with sockets.
For example, this references how to set up network devices through ioctl calls. ioctl, however, needs a file descriptor; given that there are no device files for network drivers, the only file descriptor that can be passed is the one from a socket.
This brings me to the point of the question. So far it seems like the network interface, which would be a software representation of a physical network card, is actually an inferior object to a socket.
But what is a socket in this abstract sense? Is it just another name for a device file that supports push notifications? I understand TCP sockets in terms of connection points bound by a userspace app to an address:port pair on a network interface. I don't understand a socket as a prerequisite for setting up a network interface.
Can a network interface on Linux (like eth0 listed by ifconfig) exist without a socket?
Does ifconfig or some network manager daemon keep a socket open to allow us to set the network interface options?
|
Let's quickly review device files: In Linux, application programs communicate read and write operations to the kernel through file descriptors. That works great for files, and it turned out that the same API could be used for character devices that produce and consume streams of characters, and block devices that read and write blocks of fixed size at a random access address, just by pretending that these are also files.
But a way was needed to configure those devices (set baud rates etc.), and for that, the ioctl call was invented. It just passes a data structure that's specific to the device and the kind of I/O control used to the kernel, and gets back the results in the same data structure, so it's a very generic extensible API and can be used for lots of things.
Now, how do network operations fit in? A typical network server application wants to bind to some network address, listen on a certain port (e.g. 80 for HTTP, or 22 for ssh), and if a client connects, it wants to send data to and receive data from this client. And the dual operations for the client.
It's not obvious how to fit this in with file operations (though it can be done, see Plan 9), that's why the UNIX designers invented a new API: sockets. You can find details in the section 2 man pages for socket, bind, listen, connect, send and recv. Note that while it is distinct from the file I/O API, the socket call nevertheless also returns a file descriptor. There are numerous tutorials on how to use sockets on the web, google a bit.
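To make that API concrete, here is a minimal echo server/client sketch in Python (whose socket module is a thin wrapper over those C calls); note that the object returned by socket() also carries a file descriptor, as mentioned above:

```python
import socket, threading

# Server socket: bind to an address, listen, accept one client, echo back.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the kernel pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _addr = server.accept()
    conn.sendall(conn.recv(1024))  # echo the client's data back
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Client socket: connect, send, receive.
client = socket.create_connection(("127.0.0.1", port))
fd = client.fileno()               # the socket call also yields a file descriptor
client.sendall(b"hello")
reply = client.recv(1024)
client.close(); t.join(); server.close()

print(reply.decode(), fd >= 0)     # hello True
```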
So far this is all pure UNIX, nobody was talking about network interfaces at the time sockets were invented. And because this API is really old, it is defined for a variety of network protocols beyond the Internet protocol (look at the AF_* constants), though only a few of those are supported in Linux.
But as computers started to get multiple network cards, some abstraction for this was needed. In Linux, that is the network interface (NI). It's not only used for a piece of hardware, but also for various tunnels, and user-application endpoints that serve as tunnels like OpenVPN, etc. As explained, the socket API isn't based on (special) files and is independent of the filesystem. In the same way, network interfaces don't show up in the file system, either. However, the NIs are made available in the /proc and /sys filesystem (as well as other networking tunables).
A NI is simply a kernel abstraction of an endpoint where network packets enter and leave the kernel. Sockets, on the other hand, are used to communicate packets with applications. No socket needs to be involved in the processing of a packet. For example, when forwarding is enabled, a packet may enter on one NI and leave on another.
In that sense, sockets and network interfaces are totally independent.
But there had to be a way to configure NIs, just like you needed a way to configure block and character devices. And since sockets already returned a file descriptor, it was somewhat logical to just allow an ioctl on that file descriptor. That's the netdevice interface you linked.
There are quite a few other abuses of system calls in a similar way, for example for packet filtering, packet capture etc.
All of this has grown piece after piece, and is not particularly logical in many places. If it had been designed all at once, one could probably have made a more orthogonal API.
| What is a generic socket and how does it relate to a network device? |
1,639,359,725,000 |
All, forgive me, I am not familiar with Linux. I am trying to install CentOS in VMware. As I understand it, Linux can only create three kinds of partitions: primary, extended, and logical. For MBR, the maximum combined number of primary and extended partitions is 4, and an unlimited number of logical partitions can be created under the extended partition. (If I am wrong, please correct me. Thanks.)
But as to CentOS, I got the options below when creating the partitions. Compared to the concepts of primary, extended, and logical, I can't understand Standard partition and LVM physical volume, and I don't know the difference between them. What does it mean to create an LVM physical volume? Could anyone please tell me more about it?
Thanks.
|
As I knew, Linux can only create three kinds of partitions. they are primary, extended, and logical
No, that's wrong. What you're describing here is PC old-style “MBR” partitions. This was the standard partition type on PC-type computers (and some others) since the 1980s but these days it's being replaced by GUID partitions. Logical vs primary partition is a hack due to the limitations of this 1980s system which you can ignore if you don't have to deal with older systems.
Using a standard partition system is essential if you have multiple operating systems installed on the same disk. Otherwise, you don't have to. Furthermore, even with multiple operating systems, you can use a single standard partition for Linux, and use Linux's own partitioning system inside it.
LVM is Linux's native partitioning system. It has many advantages over MBR or GUID partitions, in particular the ability to move or even spread partitions between disks (without unmounting anything), and to resize partitions easily. Use LVM for Linux by preference.
LVM achieves its flexibility by combining several levels of abstraction. A physical storage area, typically a PC-style partition, is a physical volume. The space of one or more physical volume makes up a volume group. In a volume group, you create logical volumes, each containing a filesystem (or a swap volume, etc.).
| Create Partition (Standard partition vs LVM physical volume) in CentOS installation |
1,429,617,368,000 |
When looking at the limits of a running process, I see
Max pending signals 15725
What is this?
How can I determine a sensible value for a busy service?
Generally, I can't seem to find a page that explains what each limit is. Some are pretty self-explanatory (max open files), some less so (max msgqueue size).
|
According to the manual page of sigpending:
sigpending() returns the set of signals that are pending for delivery
to the calling thread (i.e., the signals which have been raised while
blocked).
So, it refers to the signals (sigterm, sigkill, sigstop, ...) that are waiting, e.g. until the process comes out of the D (uninterruptible sleep) state. Usually a process is in that state when it is waiting for I/O. That sleep can't be interrupted. Even sigkill (kill -9) can't, and the kernel waits until the process wakes up (the signal is pending for delivery for that long).
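A small Python sketch of exactly the behaviour the manual describes (a signal raised while blocked sits in the pending set until it can be delivered):

```python
import os, signal

# Block SIGUSR1, raise it against ourselves, and observe it in the pending set.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
os.kill(os.getpid(), signal.SIGUSR1)

pending = signal.sigpending()
print(signal.SIGUSR1 in pending)   # True: raised while blocked, not yet delivered

# Set the disposition to ignore before unblocking, so the pending
# signal is discarded instead of terminating the process.
signal.signal(signal.SIGUSR1, signal.SIG_IGN)
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})
```

The "Max pending signals" limit caps how many such queued-but-undelivered signals a user may accumulate.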
For the other unclear values, I would take a look at the manual page of limits.conf.
| What are pending signals? |
1,429,617,368,000 |
I am using BusyBox on a small embedded ARM system. I'm trying to read the "top" output, in particular for the Python process listed. How much real memory is this process using? Also what does VSZ stand for? The system only has 64MB of RAM.
Mem: 41444K used, 20572K free, 0K shrd, 0K buff, 18728K cached
CPU: 3% usr 3% sys 0% nic 92% idle 0% io 0% irq 0% sirq
Load average: 0.00 0.04 0.05 1/112 31667
PID PPID USER STAT VSZ %VSZ %CPU COMMAND
777 775 python S 146m 241% 3% /usr/bin/python -u -- dpdsrv.py
|
VSZ (or VIRT, depending on the version of top) is the amount of memory mapped into the address space of the process. It includes pages backed by the process' executable file and shared libraries, its heap and stack, as well as anything else it has mapped.
In the case of the sample output you show, the virtual size is larger than the amount of physical memory on the system, so necessarily some (most!) of the pages in the process' address space aren't physically present in RAM. That's not a problem: many programs contain large amounts of code and map lots of shared libraries, but they only actually use certain portions of that code, or at least only certain portions at the same time, which allows the kernel to drop the unused portions from memory whenever they are not in use, or even never load them in the first place.
Your version of top doesn't seem to show a RES column, which would tell you how much of the memory in the process' address space is currently resident in RAM.
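If you want the resident figure anyway, /proc/<pid>/status exposes both numbers as VmSize (the VSZ) and VmRSS; a small Python sketch inspecting its own process:

```python
# Parse VmSize and VmRSS out of this process's own /proc/self/status.
fields = {}
with open("/proc/self/status") as f:
    for line in f:
        key, _, value = line.partition(":")
        fields[key] = value.strip()

vsz = int(fields["VmSize"].split()[0])   # kB mapped into the address space
rss = int(fields["VmRSS"].split()[0])    # kB actually resident in RAM
print(vsz, rss, vsz >= rss)              # VSZ is always >= RSS
```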
| How to interpret busybox "top" output? |
1,429,617,368,000 |
I've been wondering for the last few days how exactly this works. We can set kernel runtime parameters using sysctl or echo 1 > /proc/sys/module/exactParameter, but in /sys/module/module/parameters/parameter we can also set values.
Are the parameters for modules in /proc/sys/ related only to those compiled into the kernel, or could there be parameters for Loadable Kernel Modules as well?
LKMs, after being loaded into the running kernel, reveal their parameters in /sys/module/module/parameters/param. Does that mean there are no parameters there for modules compiled into the kernel?
What is the difference between the two directories?
|
There is little relation between /proc/sys and /sys other than the fact that both are kernel interfaces and a coincidence of names.
/proc/sys is an interface to sysctl, which are kernel configuration parameters. Reading or modifying /proc/sys/foo/bar is equivalent to getting or setting the foo.bar sysctl. Sysctl values are organized by semantic categories, they are not intrinsically related to the structure of the kernel. Many sysctl values are settings that are present on every Linux system regardless of what drivers or features are compiled in; some are related to optional features (e.g. certain network protocols) but never to specific hardware devices.
/sys/module is, as the name indicates, an interface to kernel modules. Each directory corresponds to one kernel module. You can read, and sometimes modify, the parameters of the module foo by writing to /sys/module/foo/parameters/*.
Components that are loaded in the kernel read their parameters from the kernel command line. These parameters cannot be set at runtime (at least not through an automatically-generated interface like /sys/module: the component can expose a custom interface for this).
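A quick Python sketch reading both interfaces side by side (kernel.ostype is just an example sysctl; the module list depends on your kernel):

```python
import os

# A sysctl: reading /proc/sys/kernel/ostype is equivalent to `sysctl kernel.ostype`.
with open("/proc/sys/kernel/ostype") as f:
    ostype = f.read().strip()
print(ostype)                            # Linux

# Module interface: one directory per module under /sys/module,
# with tunables (if any) under <module>/parameters/.
mods = sorted(os.listdir("/sys/module"))
print(len(mods), "modules, e.g.", mods[:3])
```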
| /proc/sys vs /sys/modules/mod/parameter |
1,429,617,368,000 |
I have some directories with over 100Gb of data. I'm trying to archive them into smaller volumes i.e. 10Gb each that are independent / standalone.
The problem is that if I use tar + split, it results in multiple tar parts that are not independent. I cannot just extract files from one of the parts unless I cat/combine them all back into a single large file first.
I've also tried using tar -c -L1000M ... to split volumes, but that doesn't work either and there's a problem with long filenames getting truncated.
Tried star as well, but it seems its split volumes are not independent either, while 7zip does not preserve permissions on Unix.
The reason I wish to have independent split archives is for safety purposes: in case one of the split files is corrupted, I can still retrieve data from the other archives. It is also much faster if I wish to extract only specific files/folders, without needing to combine all the archives back into a single large volume.
How best do I achieve this? Thank you.
SOLUTION FOUND
I have found a solution using tar, as suggested by @Haxiel's answer. The answer has been posted below.
Note that there may still be a file or two that is lost if it crosses the boundary of a volume and you don't have the next volume available, but at least the separate volumes can be independently extracted even if the other parts are missing.
|
I have found a solution using tar, as suggested by @Haxiel's answer. The command used is like this:
tar -c -L1G -H posix -f /backup/somearchive.tar -F '/usr/bin/tar-volume.sh' somefolder
-L: Defines the archive size limit, i.e. 1 Gb
-H: Must use posix format, else long filenames are truncated
-F: Volume script is needed to generate sequential archive file names for tar
This command will create a multi-volume archive in the format of somearchive.tar, somearchive.tar-2, somearchive.tar-3...
Below is my tar-volume.sh, adapted from this tutorial.
#!/bin/bash
echo Preparing volume $TAR_VOLUME of $TAR_ARCHIVE
name=`expr $TAR_ARCHIVE : '\(.*\)\(-[0-9]*\)$'`
case $TAR_SUBCOMMAND in
-c) ;;
-d|-x|-t) test -r ${name:-$TAR_ARCHIVE}-$TAR_VOLUME || exit 1
;;
*) exit 1
esac
echo ${name:-$TAR_ARCHIVE}-$TAR_VOLUME >&$TAR_FD
To list the contents of say the 3rd archive volume:
tar -tf /backup/somearchive.tar-3
To extract a specific archive volume:
tar -xf /backup/somearchive.tar-3
Note that if you extract just 1 single volume, there may be incomplete files which were split at the beginning or end of the archive onto another volume. Tar will create a subfolder called GNUFileParts.xxxx/filename which contains the incomplete file(s).
To extract the entire set of volumes in Unix, you'll need to run it through the volume script again:
tar -xf /backup/somearchive.tar -F '/usr/bin/tar-volume.sh'
If you are extracting them in Windows, the tar command cannot properly run the volume script as that requires a bash shell. You'll need to manually feed the volume file names at the command line, by first running this command:
tar -xf somearchive.tar -M
-M indicates that this is a multi-volume archive. When tar finishes extracting the first volume, it'll prompt you to enter the name of the next volume, until all volumes are extracted.
If there are many volumes, you could potentially just type all the volume name sequences first, then copy and paste the entire batch into tar's command line prompt once the first volume has been extracted:
n somearchive.tar-2
n somearchive.tar-3
n somearchive.tar-4
Note the n in front, which is a tar command to indicate that the following parameter is a new volume file name.
There may still be a file or two that may be lost if it crosses the boundary of a volume and you don't have the next volume available, but at least the separate volumes could be independently extracted even if the other parts are missing.
For more information, please refer to the tar documentation.
| Tar Splitting Into Standalone Volumes |
1,429,617,368,000 |
I created an NTFS logical volume on my Linux system for Windows file storage because I want to retain the creation date of my files (I would probably zip them into an archive and then unzip them, though I have no idea if that would work). Does NTFS-3G save the creation date of files on Linux? If so, how do I access it?
Reading this thread, the OP links documentation on NTFS that provides a shell script for finding the creation date. I modified it in an attempt to get the seconds from the hex value, but I believe that I am doing something wrong:
#!/bin/sh
CRTIME=`getfattr -h -e hex -n system.ntfs_times $1 | \
grep '=' | sed -e 's/^.*=\(0x................\).*$/\1/'`
SECONDS=$(($CRTIME / 10000000))
echo `date --date=$SECONDS`
|
From https://github.com/tuxera/ntfs-3g/wiki/Using-Extended-Attributes#file-times,
An NTFS file is qualified by a set of four time stamps “representing the number of 100-nanosecond intervals since January 1, 1601 (UTC)”, though UTC has not been defined for years before 1961 because of unknown variations of the earth rotation.
You'll find even more information in there including:
Newer versions of ntfs-3g expose a ntfs.ntfs_crtime and ntfs.ntfs_crtime_be attribute.
So:
getfattr --only-values -n system.ntfs_crtime_be /some/file |
perl -MPOSIX -0777 -ne '$t = unpack("Q>");
print ctime $t/10000000-11644473600'
See also:
ntfsinfo -F /file/in/ntfs /dev/fs-device
With older ntfs-3g, this should work:
getfattr --only-values -n system.ntfs_times /some/file |
perl -MPOSIX -0777 -ne 'print ctime unpack(Q)/10000000-11644473600'
Or with GNU tools and sub-second precision:
date '+%F %T.%N' -d "@$({ echo 7k
getfattr --only-values -n system.ntfs_times /some/file |
od -A n -N 8 -vt u8; echo '10000000/ 11644473600-p'; } |dc)"
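For illustration, the arithmetic those one-liners perform in Python (the function name is my own):

```python
import datetime

NTFS_EPOCH_OFFSET = 11_644_473_600   # seconds between 1601-01-01 and 1970-01-01 UTC

def ntfs_to_unix(ntfs_time):
    """Convert 100-nanosecond intervals since 1601-01-01 UTC to Unix seconds."""
    return ntfs_time / 10_000_000 - NTFS_EPOCH_OFFSET

# Sanity check: the NTFS timestamp of the Unix epoch itself maps to 0.
t = ntfs_to_unix(116_444_736_000_000_000)
print(datetime.datetime.fromtimestamp(t, datetime.timezone.utc))
# 1970-01-01 00:00:00+00:00
```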
| How do I get the creation date of a file on an NTFS logical volume? |
1,429,617,368,000 |
This is a selection from my dmesg:
rtl8192cu 1-3:1.0 wlx10bef501e1cd: renamed from wlan0
wlx10bef501e1cd: authenticate with 90:94:e4:e7:99:cc
wlx10bef501e1cd: send auth to 90:94:e4:e7:99:cc (try 1/3)
wlx10bef501e1cd: authenticated
wlx10bef501e1cd: aborting authentication with 90:94:e4:e7:99:cc by local choice (Reason: 3=DEAUTH_LEAVING)
Where that particular error is linked to device names being too long a string, as probably emerges by this attempt at connection:
___@DESK:~$ sudo iwconfig wlx10bef501e1cd essid dlink_DIR-506L key s:___{pass}___
Error for wireless request "Set Encode" (8B2A) :
SET failed on device wlx10bef501e1cd ; Invalid argument.
The dongle as in the output of lsusb is a:
Bus 001 Device 002: ID 2001:3308 D-Link Corp. DWA-121 802.11n Wireless N 150 Pico Adapter [Realtek RTL8188CUS]
At the same time, that device (a wifi usb dongle) was used to successfully install Debian as a net install, with the same long string being the id showed by the installation GUI during download of the packages.
I tried to rename the device by creating a rule in /etc/udev/rules.d/70-persistent-net.rules without success (I looked at the output of udevadm info and saved the attempt below, among others, where "1-3" is the third device on bus one).
SUBSYSTEM=="usb",ACTION=="add",DRIVERS=="usb",ATTRS{product}=="802.11n WLAN Adapter",ATTR{dev_id}=="0x0",ATTR{type}=="1",KERNEL=="1-3",NAME="wlan1"
Why is that device being renamed to such a problematic id in the first place? Instead of trying to patch the situation later down the line is there a way I can just avoid wlan0 to be renamed?
After accepting an answer for this question I realized the answers for How can I change the default “ens33” network device to old “eth0” on Fedora 19? include the solution for this problem, but the process described there also include steps that are not strictly required to solve this question.
I'm not sure if this qualifies as a duplicate or not.
|
It's being renamed so that it has a consistent name regardless of the order in which network devices are probed/connected (otherwise, if you had two USB wifi devices, which one is wlan0 and which is wlan1 would potentially change every boot; and when pinning names by MAC address, replacing a NIC suddenly made eth0 become eth1, causing all kinds of failures). More details & rationale can be found at https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/
To disable it, Debian provides two methods. Quoting from /usr/share/doc/udev/README.Debian.gz):
Put "net.ifnames=0" into the kernel command line (e. g. in
/etc/default/grub's GRUB_CMDLINE_LINUX_DEFAULT, then run "update-grub").
Disable the default *.link rules with
"ln -s /dev/null /etc/systemd/network/99-default.link"
and rebuild the initrd with "update-initramfs -u".
The name shouldn't have anything to do with iwconfig failing.
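As a middle ground, instead of disabling predictable names globally, a systemd .link file can rename just this one device. A sketch (the MAC address below is reconstructed from the wlx10bef501e1cd name, which encodes it; verify with ip link before using, and as with the second method above you may need to rebuild the initrd with "update-initramfs -u"):

```ini
# /etc/systemd/network/10-wifi-dongle.link
[Match]
MACAddress=10:be:f5:01:e1:cd

[Link]
Name=wlan1
```

Using wlan1 rather than wlan0 avoids a potential clash with the kernel's own provisional naming.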
| Why is my wlan device being renamed? [duplicate] |
1,429,617,368,000 |
Let's say we are in a blank directory. Then, the following commands:
mkdir dir1
cp -r dir1 dir2
Yield two (blank) directories, dir1 and dir2, where dir2 has been created as a copy of dir1. However, if we do this:
mkdir dir1
mkdir dir2
cp -r dir1 dir2
Then we instead find that dir1 has now been put inside dir2. This means that the exact same cp command behaves differently depending on whether the destination directory exists. If it does, then the cp command is doing the same as this:
mkdir dir1
mkdir dir2
cp -r dir1 dir2/.
This seems extremely counter-intuitive to me. I would have expected that cp -r dir1 dir2 (when dir2 already exists) would remove the existing dir2 (and any contents) and replace it with dir1, since this is the behavior when cp is used for two files. I understand that recursive copies are themselves a bit different because of how directories exist in Linux (and more broadly in Unix-like systems), but I'm looking for some more explanation of why this behavior was chosen. Bonus points if you can point me to a way to ensure cp behaves as I had expected (without having to, say, test for and remove the destination directory beforehand). I tried a few cp options without any luck. And I suppose I'll accept rsync solutions for the sake of others who happen upon this question and don't know that command.
In case this behavior is not universal, I'm on CentOS, using bash.
|
The behaviour you're looking for is a special case:
cp -R [-H|-L|-P] [-fip] source_file... target
[This] form is denoted by two or more operands where the -R option is specified. The cp utility shall copy each file in the file hierarchy rooted in each source_file to a destination path named as follows:
If target exists and names an existing directory, the name of the corresponding destination path for each file in the file hierarchy shall be the concatenation of target, a single <slash> character if target did not end in a <slash>, and the pathname of the file relative to the directory containing source_file.
If target does not exist and two operands are specified, the name of the corresponding destination path for source_file shall be target; the name of the corresponding destination path for all other files in the file hierarchy shall be the concatenation of target, a <slash> character, and the pathname of the file relative to source_file.
It shall be an error if target does not exist and more than two operands are specified ...
Therefore I'd say it's not possible to make cp do what you want.
Since your expected behaviour is "cp -r dir1 dir2 (when dir2 already exists) would remove the existing dir2 (and any contents) and replace it with dir1":
rm -rf dir2 && cp -r dir1 dir2
You don't even need to check if dir2 exists.
The rsync solution would be adding a trailing / to the source so that it doesn't copy dir1 itself into dir2 but copies the content of dir1 to dir2 (it will still keep existing files in dir2):
$ tree dir*
dir1
└── test.txt
dir2
└── test2.txt
0 directories, 2 files
$ rsync -a dir1/ dir2
$ tree dir*
dir1
└── test.txt
dir2
├── test.txt
└── test2.txt
0 directories, 3 files
$ rm -r dir2
$ rsync -a dir1/ dir2
$ tree dir*
dir1
└── test.txt
dir2
└── test.txt
0 directories, 2 files
| Syntactic differences in cp -r and how to overcome them |
1,429,617,368,000 |
I am specifying the path to my command in the file /etc/profile:
export PATH=$PATH:/usr/app/cpn/bin
My command is located in:
$ which ydisplay
/usr/app/cpn/bin/ydisplay
So, when I perform "echo $PATH" the output looks like:
$ echo $PATH
...:/usr/app/cpn/bin
And everything is OK, but when I try to launch my command via SSH I get an error:
$ ssh 127.0.0.1 ydisplay
$ bash: ydisplay: command not found
But my path is still present:
$ ssh 127.0.0.1 echo $PATH
...:/usr/app/cpn/bin
Please explain to me why Bash is unable to find ydisplay during the SSH session and how to properly configure SSH to avoid this issue.
Moreover, if I specify $PATH in the local .bashrc file of the current user, everything works correctly. But I want to modify only one file instead of a separate file for each user. This is why I am asking.
|
tl;dr
Running ssh 127.0.0.1 ydisplay sources ~/.bashrc rather than /etc/profile. Change your path in ~/.bashrc instead.
details
The only time /etc/profile is read is when your shell is a "login shell".
From the Bash Reference Manual:
When bash is invoked as a login shell, ... it first reads and executes commands from the file /etc/profile
But when you run ssh 127.0.0.1 ydisplay, bash is not started as a login shell. Yet it does read a different startup file. The Bash Reference Manual says:
when ... executed by ... sshd. ... it reads and executes commands from ~/.bashrc
So you should put your PATH settings in ~/.bashrc.
On most systems, ~/.bash_profile sources ~/.bashrc, so you can put your settings only in ~/.bashrc rather than putting them in both files.
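For reference, the stanza that typically makes ~/.bash_profile pull in ~/.bashrc looks roughly like this (a sketch; check your distribution's skeleton file):

```sh
# ~/.bash_profile: executed by login shells only
if [ -f ~/.bashrc ]; then
    . ~/.bashrc        # non-login settings (PATH etc.) live here
fi
```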
There's no standard way to change the setting for all users, but most systems have a /etc/bashrc, /etc/bash.bashrc, or similar.
Failing that, set up pam_env and put the PATH setting in /etc/environment.
See also:
What's the conf file reading between login and non-login shell?
Is there a ".bashrc" equivalent file read by all shells?
| Why Bash unable to find command even if $PATH is specified properly? |
1,429,617,368,000 |
I always have my terminal open (Fedora 22), because I do all my work from there. Sometimes I search for some info in a browser or just have fun. After 20-30 minutes of browsing (the browser was not started from the command line) I returned to the terminal and saw something strange; it was in all tabs of the terminal:
Message from syslogd@localhost at Jul 17 23:17:19 ...
kernel:NMI watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [migration/2:21]
Message from syslogd@localhost at Jul 17 23:17:38 ...
kernel:CPU: 2 PID: 21 Comm: migration/2 Not tainted 4.0.7-300.fc22.i686 #1
Message from syslogd@localhost at Jul 17 23:17:39 ...
kernel:Hardware name: LENOVO 20126/123456789, BIOS 5BCN30WW 10/10/2012
Message from syslogd@localhost at Jul 17 23:17:39 ...
kernel:task: f45f0000 ti: f45ec000 task.ti: f45ec000
Message from syslogd@localhost at Jul 17 23:17:39 ...
kernel:Stack:
Message from syslogd@localhost at Jul 17 23:17:40 ...
kernel:Call Trace:
Message from syslogd@localhost at Jul 17 23:17:40 ...
kernel: <IRQ>
Message from syslogd@localhost at Jul 17 23:17:40 ...
kernel:#000<IRQ> #000868>] do_softirq_own_stack+0x28/0x30#0000xc0 [mac80211]#000c80211]#000014#000es iptable_nat nf_conntrack_localhost#000frag_ipv4 nf_nat_ipv4 nf_kernel#000conntrack#000#000#000#000el:#000_mangle iptable_security#000ul 17 23:17:40#000#000hda_codec_realtek snd_hda_codec_#000eneric#000arc4 s#000d_hda_intel#000rtl8192ce s#000d_hda_co#000#000#000#000�#001#000#000-#000#000#000�s#003�09b3e98>] ip_rcv+0x2e8/0x410#000#000#000#000%#000#000#000localhost.localdomain#000videob#025#000#000#000kernel#000Y#0009#000#000#025#000#000#000_MACHINE_ID#000-#000#000#000#006#000#000#000�'g�p&g�#001#000#000#000#000#000#000#000#020#026#000�#001#000#000#000#000#000#000#000#000#000#000#000#025#000#000#000_TRANSPORT#0001#025#000#000#000PRIORITY#0002#000#000-#000#000#000#006#000#000#000�'g�p&g�#001#000#000#000#000#000#000#000Pw#003�#006#000#000#000#000#000#000#000#000#000#000#000-#000#000#0000r#003��'g�p&g�#000#000#000#000#000#000#000#0008r#003� #000#000#000#000#000#000#000#000#000#000#000#025#000#000#0006036995285#000#0005#000#000#000 k#003�045c0c0>]...
and a bit more stuff like these last long line. Laptop didn't behave like something wrong, it was just this log in all tabs of terminal.
What's this???
|
This looks like a bug in the updated kernel, but it may also be related to your laptop's poor battery performance. You can confirm this by checking the ACPI (Advanced Configuration and Power Interface) modules.
When my kernel was updated, I restarted my system and started the new kernel---however it failed to load and the same error messages were sent to the terminal.
I reverted to my old kernel, which still works for me.
I'm not sure, but the newer kernel modules might have enhancements that cannot be supported by the current power source — for example, they might need more power.
Also, my laptop's battery performance has declined severely and it needs to be replaced in my case.
EDIT: (based on Nikos Alexandris's comment)
You may consider replacing your power source; it may have something to do with power management.
| What does "kernel:NMI watchdog: BUG: soft lockup" followed by other errors mean? |
1,429,617,368,000 |
I'm currently trying to make the HSP Bluetooth profile work on a custom board based on the Atmel SAMA5D2. I'm using a custom Linux made with Buildroot 2017.08.
I'm at the point where I try to configure pulseaudio. The pulseaudio package is the one from buildroot and I ticked "start as system daemon".
When the system starts, pulseaudio seems to be running
# ps aux | grep pulse
174 pulse usr/bin/pulseaudio --system --daemonize --disallow-exit --disallow-module-loading
197 root grep pulse
However when I try to communicate with the daemon it fails
# pacmd
No PulseAudio daemon running, or not running as session daemon.
# pacmd info
No PulseAudio daemon running, or not running as session daemon.
# pactl info
Connection failure: Access denied
I realized that the message changes if I export the following environment variable:
# export PULSE_RUNTIME_PATH="/run/pulse"
# pacmd info
Daemon not responding.
# pactl info
Connection failure: Access denied
Concerning the access rights of this folder, here they are:
# ls -la /run/pulse/
total 8
drwx------ 3 root root 120 Jan 2 05:09 .
drwxr-xr-x 6 root root 240 Jan 2 05:09 ..
drwxr-xr-x 3 pulse pulse 60 Jan 2 05:09 .config
-rw------- 1 pulse pulse 16 Jan 2 05:09 .esd_auth
srwxrwxrwx 1 pulse pulse 0 Jan 2 05:09 native
-rw------- 1 pulse pulse 4 Jan 2 05:09 pid
Following this question — Problems with pulseaudio - pavucontrol and pacmd not connecting to pulseaudio — I tried to change the rights on the directory, but it didn't change anything.
# ls -la /run/pulse/
total 8
drwx------ 3 pulse pulse 120 Jan 2 05:09 .
drwxr-xr-x 6 root root 240 Jan 2 05:09 ..
drwxr-xr-x 3 pulse pulse 60 Jan 2 05:09 .config
-rw------- 1 pulse pulse 16 Jan 2 05:09 .esd_auth
srwxrwxrwx 1 pulse pulse 0 Jan 2 05:09 native
-rw------- 1 pulse pulse 4 Jan 2 05:09 pid
Looking at the logs, it seems there are some problems, but I can't say whether this is a big deal or not.
# cat /var/log/messages | grep pulse
Jan 2 05:43:19 buildroot pulseaudio[174]: [pulseaudio] caps.c: Normally all extra capabilities would be dropped now, but that's impossible because PulseAudio was built without capabil.
Jan 2 05:43:19 buildroot pulseaudio[174]: [pulseaudio] main.c: OK, so you are running PA in system mode. Please note that you most likely shouldn't be doing that.
Jan 2 05:43:19 buildroot pulseaudio[174]: [pulseaudio] main.c: If you do it nonetheless then it's your own fault if things don't work as expected.
Jan 2 05:43:19 buildroot pulseaudio[174]: [pulseaudio] main.c: Please read http://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/WhatIsWrongWithSystemWide/ for an exp.
Jan 2 05:43:19 buildroot pulseaudio[174]: [pulseaudio] module.c: module-detect is deprecated: Please use module-udev-detect instead of module-detect!
Jan 2 05:43:19 buildroot pulseaudio[174]: [pulseaudio] alsa-util.c: Disabling timer-based scheduling because high-resolution timers are not available from the kernel.
Jan 2 05:43:20 buildroot pulseaudio[174]: [pulseaudio] authkey.c: Failed to open cookie file '/var/run/pulse/.config/pulse/cookie': No such file or directory
Jan 2 05:43:20 buildroot pulseaudio[174]: [pulseaudio] authkey.c: Failed to load authentication key '/var/run/pulse/.config/pulse/cookie': No such file or directory
Jan 2 05:43:20 buildroot pulseaudio[174]: [pulseaudio] authkey.c: Failed to open cookie file '/var/run/pulse/.pulse-cookie': No such file or directory
Jan 2 05:43:20 buildroot pulseaudio[174]: [pulseaudio] authkey.c: Failed to load authentication key '/var/run/pulse/.pulse-cookie': No such file or directory
Also, I have no choice but to run pulseaudio as root because there is no other user on my target.
EDIT :
After restarting pulseaudio in verbose mode (-vvv), it seems the problem comes from invalid connection data:
# pactl info -vvv
Connection failure: Access denied
# cat /var/log/messages | grep pulse | tail -n 20
Jan 2 06:27:51 buildroot pulseaudio[250]: [pulseaudio] main.c: Daemon startup successful.
Jan 2 06:27:51 buildroot pulseaudio[252]: [pulseaudio] main.c: Daemon startup complete.
Jan 2 06:27:51 buildroot pulseaudio[252]: [pulseaudio] module.c: Unloading "module-detect" (index: #0).
Jan 2 06:27:51 buildroot pulseaudio[252]: [pulseaudio] module.c: Unloaded "module-detect" (index: #0).
Jan 2 06:27:56 buildroot pulseaudio[252]: [pulseaudio] module-suspend-on-idle.c: Sink alsa_output.0.analog-stereo idle for too long, suspending ...
Jan 2 06:27:56 buildroot pulseaudio[252]: [pulseaudio] sink.c: Suspend cause of sink alsa_output.0.analog-stereo is 0x0004, suspending
Jan 2 06:27:56 buildroot pulseaudio[252]: [alsa-sink-CLASSD PCM atmel-classd-hifi-0] alsa-sink.c: Device suspended...
Jan 2 06:27:56 buildroot pulseaudio[252]: [pulseaudio] core.c: Hmm, no streams around, trying to vacuum.
Jan 2 06:28:28 buildroot pulseaudio[252]: [pulseaudio] client.c: Created 0 "Native client (UNIX socket client)"
Jan 2 06:28:28 buildroot pulseaudio[252]: [pulseaudio] protocol-native.c: Protocol version: remote 31, local 31
Jan 2 06:28:28 buildroot pulseaudio[252]: [pulseaudio] protocol-native.c: Got credentials: uid=0 gid=0 success=0
Jan 2 06:28:28 buildroot pulseaudio[252]: [pulseaudio] protocol-native.c: Denied access to client with invalid authentication data.
Jan 2 06:28:28 buildroot pulseaudio[252]: [pulseaudio] client.c: Freed 0 "Native client (UNIX socket client)"
Jan 2 06:28:28 buildroot pulseaudio[252]: [pulseaudio] protocol-native.c: Connection died.
Jan 2 06:28:43 buildroot pulseaudio[252]: [pulseaudio] client.c: Created 1 "Native client (UNIX socket client)"
Jan 2 06:28:43 buildroot pulseaudio[252]: [pulseaudio] protocol-native.c: Protocol version: remote 31, local 31
Jan 2 06:28:43 buildroot pulseaudio[252]: [pulseaudio] protocol-native.c: Got credentials: uid=0 gid=0 success=0
Jan 2 06:28:43 buildroot pulseaudio[252]: [pulseaudio] protocol-native.c: Denied access to client with invalid authentication data.
Jan 2 06:28:43 buildroot pulseaudio[252]: [pulseaudio] client.c: Freed 1 "Native client (UNIX socket client)"
Jan 2 06:28:43 buildroot pulseaudio[252]: [pulseaudio] protocol-native.c: Connection died.
After adding the root user to the pulse group, the problem persists.
EDIT 2 :
Changing the access rights so that members of the pulse group can read, write and execute the files seems to unlock the situation a bit, as I can now communicate with the daemon but not act on it.
# pactl info
Server String: /run/pulse/native
Library Protocol Version: 31
Server Protocol Version: 31
Is Local: yes
Client Index: 1
Tile Size: 65496
User Name: pulse
Host Name: buildroot
Server Name: pulseaudio
Server Version: 9.0
Default Sample Specification: s16le 2ch 44100Hz
Default Channel Map: front-left,front-right
Default Sink: alsa_output.platform-fc048000.classd.analog-stereo
Default Source: alsa_output.platform-fc048000.classd.analog-stereo.monitor
Cookie: 6ae9:b402
# pacmd info
Daemon not responding.
|
In the end, it turned out to be a permissions issue.
I just changed the umask line from umask 077 to umask 007 in /etc/init.d/S50pulseaudio so members of group pulse can access files.
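The effect of the two umask values can be checked directly — a quick sketch (the temporary file names are made up; stat -c is GNU coreutils):

```shell
# files are created with mode 0666 & ~umask:
#   0666 & ~077 = 0600 (owner only), 0666 & ~007 = 0660 (owner and group)
d=$(mktemp -d)
(umask 077; touch "$d/a"); stat -c '%a' "$d/a"    # prints 600: group gets nothing
(umask 007; touch "$d/b"); stat -c '%a' "$d/b"    # prints 660: group can read/write
rm -rf "$d"
```

This is why, with umask 077, the daemon's files were unreadable to other members of the pulse group.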
Content of /etc/init.d/S50pulseaudio:
#!/bin/sh
#
# Starts pulseaudio.
#
start() {
printf "Starting pulseaudio: "
#umask 077
umask 007
/usr/bin/pulseaudio --system --daemonize --disallow-exit --disallow-module-loading -vvv
echo "OK"
}
stop() {
printf "Stopping pulseaudio: "
killall pulseaudio
echo "OK"
}
restart() {
stop
start
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart|reload)
restart
;;
*)
echo "Usage: $0 {start|stop|restart}"
exit 1
esac
exit $?
Regarding pacmd, it's normal that I can't access it, as I'm running pulseaudio system-wide.
| Pulseaudio daemon not running for pacmd |
1,429,617,368,000 |
I am wondering if there is a file system equivalent to a round-robin database, which for a fixed size, ages off the oldest files. It is pretty easy to implement with a simple cron job, which I have, but I assume it is a problem many people have and there is perhaps something better. I wish to set a fixed-size partition, or pool, in which older files are automatically removed, or aged-off, when the pool is full. A type of circular-buffer that would use the space left by the oldest file for the new ones, whilst preserving file integrity.
My cron solution compares disk usage to a threshold and recursively removes the oldest file until disk usage is again under the threshold. It is not perfect because one can't guarantee the threshold is low enough that it isn't overtaken between two cron iterations. It also doesn't maximize the use of the storage space because of the threshold value which tends to be predictive in nature (how much can I fill in one minute, between two iterations of crond). Two shortcomings I am hoping to improve upon.
I am looking for a more elegant solution, akin to how the round-robin database (http://linux.die.net/man/1/rrdtool) handles this transparently, but for file systems.
|
There exist many HSM (Hierarchical Storage Management) systems, mainly aimed at SAN setups. These migrate files from faster disks, to slower disks, to tape as their last access time gets older. You might like to seek out one of these if you have a SAN. Most of the ones I know of are commercially licensed though, such as the IBM Tivoli HSM that we use. You might like to take a look at OHSM though.
If you just want to delete old files, then a simple cron job such as find /data -atime +30 -exec rm {} \; will delete files that have not been accessed in a certain amount of time (but make sure the filesystem is not mounted with the noatime option!) This would be highly risky, though, unless you had a good online backup system.
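A slightly more robust variant of the questioner's cron approach can delete the oldest files one at a time until a limit is met — a sketch (prune_oldest is a made-up helper, GNU find's -printf is assumed; here the limit is a file count, but the same loop works with a df-based disk-usage test):

```shell
prune_oldest() {  # usage: prune_oldest DIR MAX -- keep at most MAX newest files
  dir=$1 max=$2
  while [ "$(find "$dir" -type f | wc -l)" -gt "$max" ]; do
    # oldest file first: sort by mtime (epoch seconds with fraction)
    oldest=$(find "$dir" -type f -printf '%T@ %p\n' | sort -n | head -n1 | cut -d' ' -f2-)
    rm -f -- "$oldest"
  done
}

# demo on a throwaway directory with five files of distinct ages
d=$(mktemp -d)
for i in 1 2 3 4 5; do
  printf x > "$d/f$i"
  touch -d "2024-01-0$i" "$d/f$i"   # give each file an explicit, distinct mtime
done
prune_oldest "$d" 3
ls "$d"                             # the two oldest files (f1, f2) are gone
rm -rf "$d"
```

Run from cron, this trades the predictive threshold for an exact limit, at the cost of one find per deleted file.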
| Linux file system that ages off older files when partition is full |
1,429,617,368,000 |
ksplice is an open source extension of the Linux kernel which allows system administrators to apply security patches to a running kernel without having to reboot the operating system. (From Wikipedia.)
Is there a downside to using ksplice? Does it introduce any kind of instability? If not, why is it not included by default in more Linux distributions?
|
Technically it's very sound. I think the reason distributions don't provide this method of patching yet is:
It does not integrate with the existing update methods (packaging wise)
It adds to the burden of the distro to provide another method of upgrading.
| Is there a downside to ksplice? |
1,429,617,368,000 |
While studying, I came across the security file system, which is mounted on /sys/kernel/security. It seems to operate similarly to the sysfs or proc file systems. The security file system keeps data in memory, not on disk, so writing to a file in securityfs does not actually write to disk; it just updates data in memory.
What I am wondering is: why is this file system named securityfs?
Does it provide any security-enhancing capability?
|
Here are some links regarding securityfs:
A post from the author of securityfs
Article about PipeFS, SockFS, DebugFS, and SecurityFS.
The author states:
This filesystem is meant to be used by security modules, some of which
were otherwise creating their own filesystems.
So I guess the name comes from the Linux Security Modules (LSM).
| What is securityfs? |
1,429,617,368,000 |
I've tried following the Ubuntu hotkeys/media keys troubleshooting guide and /usr/share/doc/udev/README.keymap.txt.gz to make the Fn keys work. After copying the map file and modifying /lib/udev/rules.d/95-keymap.rules I get the correct key names from sudo /lib/udev/keymap -i input/event4, but none of them do anything at all.
How do I make sure that at least wlan and kbdillumup/kbdillumdown work?
$ /lib/udev/findkeyboards
AT keyboard: input/event4
$ cat /sys/class/dmi/id/sys_vendor
SAMSUNG ELECTRONICS CO., LTD.
$ cat /sys/class/dmi/id/product_name
90X3A
samsung-90x3a map file:
0xCE prog1 # Fn+F1 Unknown
0x8D prog3 # Fn+F6 Economy mode
0x97 kbdillumdown # Fn+F7 Keyboard background light down
0x96 kbdillumup # Fn+F8 Keyboard background light up
0xD5 wlan # Fn+F12 Wifi on/off
$ udevadm info --export-db
Update: The information below will be from Arch Linux since I no longer have Ubuntu.
xdotool key XF86KbdBrightnessUp prints nothing, but returns with exit code 0. I'm not sure if that means anything.
acpi_listen prints nothing when pressing Fn+F7/Fn+F8.
|
Somebody finally found the next best thing. To turn off the backlight, run this:
sudo chattr -i /sys/firmware/efi/efivars/KBDBacklitLvl-5af56f53-985c-47d5-920c-f1c531d06852
echo 0700000000 | xxd -plain -revert | sudo tee /sys/firmware/efi/efivars/KBDBacklitLvl-5af56f53-985c-47d5-920c-f1c531d06852
sudo chattr +i /sys/firmware/efi/efivars/KBDBacklitLvl-5af56f53-985c-47d5-920c-f1c531d06852
and then reboot. To set the illumination low, medium or high, replace 0700000000 in the above with 0700000001, 0700000002 or 0700000003, respectively.
| Fix Fn-keys for keyboard illumination on Samsung Notebook 9 Spin (NP940X3L) |
1,429,617,368,000 |
On a Linux machine I have a series of commands that offer numerical values of the state of different sensors.
The call of these commands is something similar to the following:
$ command1
5647
$ command2
76
$ command3
8754
These values change in real time, and every time I want to check the status of one of them, I have to re-launch the command... This doesn't do me any good since I need both hands to manipulate the hardware.
My goal is to make a simple Bash Script that calls these commands and keeps the value updated (in real time asynchronously or refreshing the value every x seconds) like this:
$ ./myScript.sh
command1: x
command2: y
command3: z
command4: v
Where x, y, z and v are the changing values.
Does Bash allow this simply and efficiently, or should I choose to do it in another language, like Python?
UPDATE with more info:
My current script is:
#!/bin/bash
echo "Célula calibrada: " $(npe ?AI1)
echo "Anemómetro: " $(npe ?AI2)
echo "Célula temperatura: " $(npe ?AI3)
echo "Célula temperatura: " $(npe ?AI4)
npe being an example command that returns the numeric value. I expect an output like this:
I get this output with the command watch -n x ./myScript.sh, where x is the refresh interval in seconds. If I edit my script like this:
#!/bin/bash
while sleep 1; do
clear; # added to keep the information in the same line
echo "Célula calibrada: " $(npe ?AI1);
echo "Anemómetro: " $(npe ?AI2);
echo "Célula temperatura: " $(npe ?AI3);
echo "Célula temperatura: " $(npe ?AI4);
done
I get my output with an annoying flicker:
|
It might be tricky to implement a real-time solution in bash.
There are many ways to run a script once every X seconds; for example, you can use watch.
I assume you already have myScript.sh available. Replace X with the number of seconds you need.
watch -n X ./myScript.sh
while sleep X; do ./myScript.sh; done
Update: to emulate watch you might want to clear the screen between iterations. Inside the script it will look like this:
while sleep X; do
clear;
command1;
command2;
done
Add one of the options above to the script itself.
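To avoid the flicker mentioned in the question, you can redraw in place with ANSI escape sequences instead of running clear — a sketch assuming an ANSI-capable terminal (render and the sample values are made up):

```shell
render() {
  printf '\033[H'                            # move cursor home without erasing
  printf 'Célula calibrada: %-8s\n' "$1"
  printf 'Anemómetro:       %-8s\n' "$2"
  printf '\033[J'                            # erase any leftovers below the output
}

# in the real script: while sleep 1; do render "$(npe ?AI1)" "$(npe ?AI2)"; done
render 5647 76
```

Because nothing is erased before the new values are printed, the screen never goes blank between refreshes, which is what causes the flicker with clear.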
| Bash script that shows changing real time values from commands |
1,429,617,368,000 |
How can I print a range of IP addresses on the Linux command line using the seq command? E.g., I need seq to print a range of IPs from 10.0.0.1 to 10.0.0.23. It seems like the periods between the octets cause the number to behave like a floating point value; I am getting an "invalid floating point argument" error. I tried using the -f option — maybe I am not using it correctly — but it still gave me an error. I am trying to do something similar to
seq 10.0.0.2 10.0.0.23
Is there another way to print IP addresses in a range on Linux, other than switching over to Excel?
|
Use a format:
$ seq -f "10.20.30.%g" 40 50
10.20.30.40
10.20.30.41
10.20.30.42
10.20.30.43
10.20.30.44
10.20.30.45
10.20.30.46
10.20.30.47
10.20.30.48
10.20.30.49
10.20.30.50
Unfortunately this is non-obvious as GNU doesn't like to write man pages.
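Another option, in case it helps: printf(1) reapplies its format string until all its arguments are consumed, so it can expand a plain numeric seq into addresses:

```shell
# printf cycles the format over every argument produced by seq
printf '10.20.30.%d\n' $(seq 40 43)
```

This prints 10.20.30.40 through 10.20.30.43, one per line.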
| How to print a range of IP addresses with Linux seq command |
1,429,617,368,000 |
Is there any program or script available to decrypt the Linux shadow file?
|
Passwords on a Linux system are not encrypted; they are hashed, which is a huge difference.
It is not possible to reverse a hash function by definition.
For further information see the Hash Wikipedia entry.
Which hash function is used depends on your system configuration. MD5 and blowfish are common examples of hash functions used.
So the "real" password of a user is never stored on the system.
If you login, the string you enter as the password will be hashed and checked against your /etc/shadow file. If it matches, you obviously entered the correct password.
Anyway, there are still some attack vectors against the password hashes. You could keep a dictionary of popular passwords and try them automatically; there are a lot of dictionaries available on the internet. Another approach would be to try out all possible combinations of characters, which consumes a huge amount of time. This is known as a brute-force attack.
Rainbow tables are another nice attack vector against hashes. The idea behind this concept is to precalculate all possible hashes and then simply look up a hash in the tables to find the corresponding password. There are several distributed computing projects to create such tables; the size depends on the characters used and is usually several TB.
To minimize the risk of such lookup tables, it's common practice and the default behaviour in Unix/Linux to add a so-called "salt" to the password hash: a random value is combined with the password before hashing. You need to save the hash and the salt to be able to check whether an entered value is the correct password. The huge advantage of this method is that an attacker would have to create new lookup tables for each unique salt.
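The mechanics can be sketched with sha512sum — purely an illustration (real shadow entries use crypt(3) schemes such as sha512crypt with many iterations, not a single plain SHA-512 over salt+password; hash_pw is a made-up helper):

```shell
salt=$(od -An -N8 -tx1 /dev/urandom | tr -d ' \n')   # random 8-byte salt, hex-encoded
hash_pw() { printf '%s%s' "$1" "$2" | sha512sum | cut -d' ' -f1; }

stored=$(hash_pw "$salt" "hunter2")                  # what the system would store
[ "$(hash_pw "$salt" "hunter2")" = "$stored" ] && echo "password accepted"
[ "$(hash_pw "$salt" "wrong")"   = "$stored" ] || echo "password rejected"
```

The original password never needs to be stored: only the salt and the digest are kept, and a login attempt is re-hashed and compared.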
A popular tool to execute dictionary or brute force attacks against user passwords of different operating systems is John The Ripper (or JTR).
See the project homepage for more details:
John the Ripper is a fast password
cracker, currently available for many
flavors of Unix, Windows, DOS, BeOS,
and OpenVMS. Its primary purpose is to
detect weak Unix passwords.
| Program for decrypt linux shadow file |
1,429,617,368,000 |
I'd like to try some shellcode, and I want to disable Linux's protections.
I know I could compile using flags, but I know another way exists to disable these protections in general; I just can't remember it.
|
Stack protection is done by the compiler (add some extra data to the stack and stash some away on call, check sanity on return). Can't disable that without recompiling. It's part of the point, really...
| Disable stack protection on Ubuntu for buffer overflow without C compiler flags |
1,429,617,368,000 |
This doesn't necessarily have to be a Linux problem, but I'll ask it here anyway. I'm using a workstation mainly for training deep learning and machine learning models. I run training code on both the CPU and the GPU.
CPU: AMD Ryzen 9 5950X 16-Core Processor
GPU: NVIDIA GeForce RTX 3090
OS: Ubuntu 22.04 LTS
The libraries that I use (PyTorch, XGBoost, LightGBM, etc.) utilize swap memory a lot for data loading. While working on big datasets, swap usage slowly accumulates and exceeds the limit (2GB). When that happens, all of the cores go crazy and the CPU overheats. The workstation shuts itself down a couple of seconds later.
I'm a data scientist and I'm not good with hardware. It took a couple of weeks for me to figure out why my workstation kept shutting itself down. I have to find a way to prevent this since I can't make progress on my own tasks anymore. What are your suggestions?
To give you more details, this wasn't happening 3-4 months ago. It started very recently.
Edit: Added nvidia-smi and sensors outputs while training two models (UNet and YOLOv6) simultaneously.
nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.73.05 Driver Version: 510.73.05 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:0A:00.0 Off | N/A |
|100% 79C P2 338W / 350W | 14171MiB / 24576MiB | 100% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1361 G /usr/lib/xorg/Xorg 56MiB |
| 0 N/A N/A 1568 G /usr/bin/gnome-shell 10MiB |
| 0 N/A N/A 27955 C python 2743MiB |
| 0 N/A N/A 31692 C python 11355MiB |
+-----------------------------------------------------------------------------+
sensors
nvme-pci-0300
Adapter: PCI adapter
Composite: +74.8°C (low = -273.1°C, high = +84.8°C)
(crit = +84.8°C)
Sensor 1: +74.8°C (low = -273.1°C, high = +65261.8°C)
Sensor 2: +74.8°C (low = -273.1°C, high = +65261.8°C)
iwlwifi_1-virtual-0
Adapter: Virtual device
temp1: +57.0°C
k10temp-pci-00c3
Adapter: PCI adapter
Tctl: +87.8°C
Tccd1: +89.2°C
Tccd2: +79.5°C
|
First, absolutely make sure your PSU is powerful enough - instant shutdowns like yours could indicate an issue with it. Maybe replace it. The RTX 3090 can have power spikes up to 500W, which means that, together with your CPU, your PSU must be rated at the very least 850W.
Speaking of your temps.
Your CPU is running close to its rated maximum, which is 90C, which means you'd better improve your case cooling by installing case fans, e.g. 120mm (140mm are better - quieter and more powerful), and probably installing a better cooler on your CPU along with changing the thermal paste - my preferred one is Arctic MX-4 (MX-5 in theory provides better performance but it's a lot more cumbersome to apply).
Installing proper case cooling might prove enough since your GPU is definitely increasing your CPU temps.
Don't forget to update your EFI BIOS as well.
You can also use a software only solution: enter your BIOS and
either decrease your CPU PPT (maximum wattage)
or set the maximum temperature for it, e.g. 85C
Both will result in decreased multithreaded performance, but not by much.
| CPU overheats and PC shuts down when swap is full |
1,429,617,368,000 |
I found some good information about wireless tools in this Q/A. Apparently it was introduced to the Linux kernel in 1997 by Jean Tourrilhes, sponsored by Hewlett-Packard.
Edit: It seems WE (Wireless Extensions) was added to the kernel by Tourrilhes, not the wireless tools themselves. The tools are available on most distros as the primary way to communicate with WE. You can see WE in the kernel at /proc/net/wireless.
The last version released was v29 yet Ubuntu 14 & 16 seem to contain the v30 beta (iwconfig -v).
I'm curious about what happened to this package. Why did the "beta" version 30 become the de facto standard version used?
Did HP stop funding Jean Tourrilhes, so development stopped? Or maybe it was decided that it was stable enough to stop development — but if that was the case, why would 30 still be a beta?
I found this Github page but it seems to be for historical reference only.
Version History
|
Wireless tools is deprecated in favor of iw because the wireless extensions have been deprecated in favor of the new nl80211 interface for wireless devices. The kernel documentation for iw says that.
However, nl80211 is under active development and not all drivers have been migrated to it. Wireless tools is still required for devices that have not been migrated away from wireless extensions.
The reason Ubuntu (and pretty much all distros I know of) provide version 30 beta is that it fixes a critical bug in version 29, which caused iwconfig to fail if there were too many networks in the area, due to a buffer overflow. The Github repo for wireless tools does not show this, but here's the relevant patch from Arch.
| Why did wireless tools version 30 become a permanent beta? |
1,429,617,368,000 |
I know how to send an email from command line (script)
echo "body" | mail -s "subject" [email protected]
Is it possible to send attachments from commandline (script) as well?
I am using heirloom-mailx on Debian Wheezy.
|
The simple way is to use uuencode (part of the sharutils package). Formatting and body text are unavailable — just an email with an attachment and a custom subject.
uuencode /path/to/file file_name.ext | mail -s subject [email protected]
The complex way is to use sendmail and HTML formatting:
v_mailpart="$(uuidgen)/$(hostname)"
echo "To: [email protected]
Subject: subject
Content-Type: multipart/mixed; boundary=\"$v_mailpart\"
MIME-Version: 1.0
This is a multi-part message in MIME format.
--$v_mailpart
Content-Type: text/html
Content-Disposition: inline
<html><body>Message text itself.</body></html>
--$v_mailpart
Content-Transfer-Encoding: base64
Content-Type: application/octet-stream; name=file_name.ext
Content-Disposition: attachment; filename=file_name.ext
`base64 /path/to/file`
--$v_mailpart--" | /usr/sbin/sendmail -t
In the case of several attachments, the last part may be repeated.
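To see what the `base64 /path/to/file` line contributes to the message, here is a quick round trip (the file and its contents are a throwaway example):

```shell
printf 'hello attachment' > /tmp/file.txt
base64 /tmp/file.txt                 # the encoded form that goes into the MIME part
base64 /tmp/file.txt | base64 -d     # decodes back to the original bytes
rm -f /tmp/file.txt
```

The receiving mail client performs the same decode, guided by the Content-Transfer-Encoding: base64 header of that part.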
| mail: send email with attachment from commandline [duplicate] |
1,429,617,368,000 |
I have two similar scripts with different names. One works fine but the other throws an error.
Can anyone please tell me what is the issue?
This is my test.sh scripts which works fine
[nnice@myhost Scripts]$ cat test.sh
#!/bin/bash
function fun {
echo "`hostname`"
}
fun
[nnice@myhost Scripts]$ ./test.sh
myhost.fedora
Here is my other script, demo.sh, but it throws an error:
[nnice@myhost Scripts]$ cat demo.sh
#!/bin/bash
function fun {
echo "`hostname`"
}
fun
[nnice@myhost Scripts]$ ./demo.sh
bash: ./demo.sh: cannot execute: required file not found
Both scripts have the same permissions:
[nnice@myhost Scripts]$ ll test.sh
-rwxr-xr-x. 1 nnice nnice 65 Oct 21 10:47 test.sh
[nnice@myhost Scripts]$ ll demo.sh
-rwxr-xr-x. 1 nnice nnice 58 Oct 21 10:46 demo.sh
|
Your demo.sh script is a DOS text file. Such files have CRLF line endings, and that extra CR (carriage-return) character at the end of the line is causing you issues.
The specific issue it's causing is that the interpreter pathname on the #!-line now refers to something called /bin/bash\r (with the \r symbolising a carriage-return, which is a space-like character, so it's usually not visible). This file is not found, so this is what causes your error message.
To solve this, convert your script from a DOS text file to a Unix text file. If you are editing scripts on Windows, you can probably do this by configuring the Windows text editor to create Unix text files, but you may also use the dos2unix utility, available for most common Unix variants.
$ ./script
bash: ./script: cannot execute: required file not found
$ dos2unix script
$ ./script
harpo.local
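If dos2unix is not installed, GNU sed can strip the carriage returns just as well — a quick sketch that builds a broken CRLF script and repairs it (the file name is made up; sed -i and the \r escape are GNU extensions):

```shell
printf '#!/bin/sh\r\necho hello\r\n' > /tmp/demo.sh   # DOS-style CRLF line endings
sed -i 's/\r$//' /tmp/demo.sh                          # strip the trailing CR on each line
chmod +x /tmp/demo.sh
/tmp/demo.sh                                           # now runs and prints hello
rm -f /tmp/demo.sh
```

Before the sed step, the kernel would look for an interpreter literally named /bin/sh followed by a carriage return, producing the same "required file not found" error.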
Regarding your code: Please never do echo `some-command` or echo $(some-command) to output the output of some-command. Just use the command directly:
#!/bin/sh
fun () {
hostname
}
fun
(Since the script now does not use anything requiring bash, I also shifted to calling the simpler /bin/sh shell.)
| Linux Bash Shell Script Error: cannot execute: required file not found |
1,429,617,368,000 |
I don't know anything about CPUs. I have a 32-bit version of Ubuntu, but I need to install 64-bit applications. I came to know that it is not possible to run 64-bit apps on a 32-bit OS, so I decided to upgrade my OS. But a friend of mine told me to check the CPU specifications before upgrading. I ran this command as suggested on a website.
The lscpu command gives the following details:
Architecture: i686
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 23
Model name: Pentium(R) Dual-Core CPU E5300 @ 2.60GHz
Stepping: 10
CPU MHz: 1315.182
CPU max MHz: 2603.0000
CPU min MHz: 1203.0000
BogoMIPS: 5187.07
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 2048K
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm xsave lahf_lm pti tpr_shadow vnmi flexpriority dtherm
In a word, what does this mean? I want to know whether I can install 64-bit Ubuntu on my PC.
My installed RAM is 2GB. Since my system is more than 10 years old, I expect some expert advice on my CPU status. Should I buy a new PC, or can I stick with my old one?
I already checked this, but I expect something easier:
https://unix.stackexchange.com/a/77724/413713
(I can share any information regarding my hardware; just tell me how to collect it.)
Thanks in advance. Sorry for my bad English.
|
Intel’s summary of your CPU’s features confirms that it supports 64-bit mode, as indicated by
CPU op-mode(s): 32-bit, 64-bit
in lscpu’s output.
This isn’t an Atom CPU either, so the rest of your system is, in all likelihood, capable of supporting a 64-bit operating system.
You can re-install a 64-bit variant of your operating system, or you could use Ubuntu’s multiarch support: install a 64-bit kernel, add the amd64 architecture, and you will then be able to install and run 64-bit software without re-installing everything:
sudo dpkg --add-architecture amd64
sudo apt-get update
sudo apt-get install linux-image-generic:amd64
(followed by a reboot).
| Can I run 64 bit ubuntu on my pc (>10 years old) |
1,429,617,368,000 |
When ltrace is used to trace the system calls, I can see that fork() uses sys_clone() rather than sys_fork(). But I couldn't find where this is defined in the Linux source.
My program is:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int pid;
    pid = fork();
    if (pid == 0)
        printf("\nI am child\n");
    else
        printf("\nI am parent\n");
    return 0;
}
And ltrace output is:
SYS_brk(NULL) = 0x019d0000
SYS_access("/etc/ld.so.nohwcap", 00) = -2
SYS_mmap(0, 8192, 3, 34, 0xffffffff) = 0x7fe3cf84f000
SYS_access("/etc/ld.so.preload", 04) = -2
SYS_open("/etc/ld.so.cache", 0, 01) = 3
SYS_fstat(3, 0x7fff47007890) = 0
SYS_mmap(0, 103967, 1, 2, 3) = 0x7fe3cf835000
SYS_close(3) = 0
SYS_access("/etc/ld.so.nohwcap", 00) = -2
SYS_open("/lib/x86_64-linux-gnu/libc.so.6", 0, 00) = 3
SYS_read(3, "\177ELF\002\001\001", 832) = 832
SYS_fstat(3, 0x7fff470078e0) = 0
SYS_mmap(0, 0x389858, 5, 2050, 3) = 0x7fe3cf2a8000
SYS_mprotect(0x7fe3cf428000, 2097152, 0) = 0
SYS_mmap(0x7fe3cf628000, 20480, 3, 2066, 3) = 0x7fe3cf628000
SYS_mmap(0x7fe3cf62d000, 18520, 3, 50, 0xffffffff) = 0x7fe3cf62d000
SYS_close(3) = 0
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7fe3cf834000
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7fe3cf833000
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7fe3cf832000
SYS_arch_prctl(4098, 0x7fe3cf833700, 0x7fe3cf832000, 34, 0xffffffff) = 0
SYS_mprotect(0x7fe3cf628000, 16384, 1) = 0
SYS_mprotect(0x7fe3cf851000, 4096, 1) = 0
SYS_munmap(0x7fe3cf835000, 103967) = 0
__libc_start_main(0x40054c, 1, 0x7fff47008298, 0x4005a0, 0x400590 <unfinished ...>
fork( <unfinished ...>
SYS_clone(0x1200011, 0, 0, 0x7fe3cf8339d0, 0) = 5967
<... fork resumed> ) = 5967
puts("\nI am parent" <unfinished ...>
SYS_fstat(1, 0x7fff47008060) = 0
SYS_mmap(0, 4096, 3, 34, 0xffffffff
) = 0x7fe3cf84e000
I am child
SYS_write(1, "\n", 1
) = 1
SYS_write(1, "I am parent\n", 12) = -512
--- SIGCHLD (Child exited) ---
SYS_write(1, "I am parent\n", 12I am parent
) = 12
<... puts resumed> ) = 13
SYS_exit_group(13 <no return ...>
+++ exited (status 13) +++
|
The fork() and vfork() wrappers in glibc are implemented via the clone() system call. To better understand the relationship between fork() and clone(), we must consider the relationship between processes and threads in Linux.
Traditionally, fork() would duplicate all the resources owned by the parent process and assign the copy to the child process. This approach incurs considerable overhead, which all might be for nothing if the child immediately calls exec(). In Linux, fork() utilizes copy-on-write pages to delay or altogether avoid copying the data that can be shared between the parent and child processes. Thus, the only overhead that is incurred during a normal fork() is the copying of the parent's page tables and the assignment of a unique process descriptor struct, task_struct, for the child.
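The copy-on-write behaviour can be observed from user space: a write in the child only duplicates the touched page, leaving the parent's data intact. The helper below is a sketch, not part of the original answer, and cow_demo is a hypothetical name:

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that overwrites its copy of `value`, then report what
 * the parent still sees. The child's write forces a private copy of
 * the page, so the parent's data remains 42. */
int cow_demo(void)
{
    int value = 42;
    pid_t pid = fork();

    if (pid == 0) {
        value = 99;   /* copy-on-write: only the child's page changes */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    return value;     /* still 42 in the parent */
}
```

Calling cow_demo() and printing its result should yield 42: the child's assignment never propagates back into the parent's address space.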
Linux also takes an exceptional approach to threads. In Linux, threads are merely ordinary processes which happen to share some resources with other processes. This is a radically different approach to threads compared to other operating systems such as Windows or Solaris, where processes and threads are entirely different kinds of beasts. In Linux, each thread has an ordinary task_struct of its own that just happens to be set up in such a way that it shares certain resources, such as an address space, with the parent process.
The flags parameter of the clone() system call includes a set of flags which indicate which resources, if any, the parent and child processes should share. Processes and threads are both created via clone(), the only difference is the set of flags that is passed to clone().
A normal fork() could be implemented as:
clone(SIGCHLD, 0);
This creates a task which does not share any resources with its parent, and is set to send the SIGCHLD termination signal to the parent when it exits.
In contrast, a task which shares the address space, filesystem resources, file descriptors and signal handlers with the parent, in other words a thread, could be created with:
clone(CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND, 0);
vfork() in turn is implemented via a separate CLONE_VFORK flag, which will cause the parent process to sleep until the child process wakes it via a signal. The child will be the sole thread of execution in the parent's namespace, until it calls exec() or exits. The child is not allowed to write to the memory. The corresponding clone() call could be as follows:
clone(CLONE_VFORK | CLONE_VM | SIGCHLD, 0)
The implementation of sys_clone() is architecture specific, but the bulk of the work happens in kernel_clone() defined in kernel/fork.c. This function calls the static copy_process(), which creates a new process as a copy of the parent, but does not start it yet. copy_process() copies the registers, assigns a PID to the new task, and either duplicates or shares appropriate parts of the process environment as specified by the clone flags. When copy_process() returns, kernel_clone() will wake the newly created process and schedule it to run.
References
kernel/fork.c in Linux v5.19-rc5, 2022-07-03. See line 2606 for kernel_clone(), and line 2727 onward for the definitions of the syscalls fork(), vfork(), clone(), and clone3(), which all more or less just wrap kernel_clone().
| Which file in kernel specifies fork(), vfork()... to use sys_clone() system call |
1,429,617,368,000 |
fstrim requires the Linux block device to be mounted, and it is not very verbose. blkdiscard could tell, but that would require a write operation.
Can I somehow tell whether a block device supports trimming/discarding, without actually trying to trim/discard something on it?
|
You can check the device’s maximum discard sizes, e.g.
$ cat /sys/block/X/queue/discard_max_hw_bytes
(replacing X as appropriate).
If this shows a value greater than 0, the device supports discards:
A discard_max_hw_bytes value of 0 means that the device does not support discard functionality.
The maximum supported discard size is indicated by discard_max_bytes in the same directory; this can be smaller than the hardware-supported value to limit discard latencies (and can be written to to change the limit):
While discard_max_hw_bytes is the hardware limit for the
device, this setting is the software limit. Some devices exhibit
large latencies when large discards are issued, setting this
value lower will make Linux issue smaller discards and
potentially help reduce latencies induced by large discard
operations.
This works on many different block devices, not just disks: loop devices, device mapper devices, etc.
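Building on the same sysfs attribute, here is a small sketch that surveys every block device at once (device names and limits will of course vary per machine):

```shell
# Report, for each block device, whether the hardware advertises
# discard support and up to how many bytes per request.
for f in /sys/block/*/queue/discard_max_hw_bytes; do
    [ -e "$f" ] || continue
    dev=${f#/sys/block/}
    dev=${dev%%/*}
    bytes=$(cat "$f")
    if [ "$bytes" -gt 0 ]; then
        echo "$dev: discard supported (max $bytes bytes per request)"
    else
        echo "$dev: discard not supported"
    fi
done
```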
| How can I tell if a Linux block device is trimmable or not? |
1,521,302,084,000 |
Recently I found in the QNX documentation that it allows setting up message-based IPC between processes on separate physical machines by using a serial device (dev/serX), and it made me wonder:
Is it possible in Linux to create a system-wide special device for a TCP/UDP tunnel? Something like nc's stdin/stdout exposed publicly under /dev/something.
In the end I'd like to be able to write something to such file on one machine and receive it on the other end for example:
#machine1:
echo "Hello" > /dev/somedev
#machine2:
cat < /dev/somedev
I took a look at the nc man page, but I didn't find any option to specify an I/O source/destination other than stdio.
|
socat can do this, and many other things, with anything resembling a "stream".
Something using this basic idea should do it for you:
Machine1$ socat tcp-l:54321,reuseaddr,fork pty,link=/tmp/netchardev,waitslave
Machine2$ socat pty,link=/tmp/netchardev,waitslave tcp:machine1:54321
(adapted from Examples Page)
If you want to encrypt, you could use a variation of ssl-l:54321,reuseaddr,cert=server.pem,cafile=client.crt,fork on machine1, and something like ssl:server-host:1443,cert=client.pem,cafile=server.crt on machine2
(More about socat ssl)
| Is it possible to expose TCP tunnel in Linux as special character device? |
1,521,302,084,000 |
I made an alias ff and sourced it from ~/.zsh/aliases.zsh.
The alias runs fine by itself:
alias ff
ff='firefox --safe-mode'
and it runs as expected.
But when I try to run it under gdb I get:
> gdb ff
GNU gdb (Debian 7.12-6+b1) 7.12.0.20161007-git
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
...
For help, type "help".
Type "apropos word" to search for commands related to "word"...
ff: No such file or directory.
(gdb) quit
I tried using gdb firefox --safe-mode but that wouldn't run.
Can somebody identify what is wrong?
|
Aliases are a feature of the shell. Defining an alias creates a new shell command name. It's recognized only by the shell, and only when it appears as a command name.
For example, if you type
> ff
at a shell prompt, it will invoke your alias, but if you type
> echo ff
the ff is just an argument, not a command. (At least in bash, you can play some tricks if the alias definition ends with a space. See Stéphane Chazelas's answer for a possible solution if you're determined to use shell aliases.)
You typed
> gdb ff
so the shell invoked gdb, passing it the string ff as an argument.
You can pass arguments to the debugged program via the gdb command line, but you have to use the --args option. For example:
> gdb firefox --safe-mode
tries (and fails) to treat --safe-mode as an option to gdb. To run the command with an argument, you can do it manually:
> gdb firefox
...
(gdb) run --safe-mode
or, as thrig's answer reminds me, you can use --args:
> gdb --args firefox --safe-mode
...
(gdb) run
(The first argument following --args is the command name; all remaining arguments are passed to the invoked command.)
It's possible to extract the arguments from a shell alias, but I'd recommend just defining a separate alias:
alias ff='firefox --safe-mode'
alias gdbff='gdb --args firefox --safe-mode'
Or, better, use shell functions, which are much more versatile. The bash manual says:
For almost every purpose, shell functions are preferred over aliases.
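For instance, the aliases above could be written as functions like this (a sketch; defining them doesn't require firefox or gdb to be installed):

```shell
# Unlike aliases, functions are recognized in any command position
# and forward additional arguments cleanly via "$@".
ff() { firefox --safe-mode "$@"; }
gdbff() { gdb --args firefox --safe-mode "$@"; }
```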
| why doesn't gdb like aliases [duplicate] |
1,521,302,084,000 |
A friend and I had a discussion about Linux and Unix today, and my friend kept saying that the first version of the Linux kernel was based on [parts] of the Unix kernel.
This really surprised me because I always thought that the architecture of Linux is just similar to Unix — since the very first version. So, is it really true that the first Linux version was based on parts of Unix?
|
Even though Linux was a system written from scratch, the first version of Linux was very Minix-like (Minix being a "mini-Unix"). It's in Linus' announcement.
Wikipedia provides a short description of Linux history. If you want to know more about this subject, this book is what you need. You'll learn there that Linus Torvalds used Unix man pages in order to know which system calls he had to implement and how they had to work.
| Is it true that the first version of Linux was based on parts of Unix? [duplicate] |
1,521,302,084,000 |
I noticed a problem over the summer after upgrading from Debian 9 stable to Debian 10 testing: PulseAudio no longer recognized my Intel HDA audio devices. At the time I was able to switch to my monitor's audio, connected via nVidia HDMI, so I sidestepped the problem, hoping that a future update would fix it. They haven't. Fast forward a few months: I've rearranged my workspace and now need to get the Intel HDA working again.
Here's what I've looked at so far...
Debian 10 Testing
The kernel sees it:
# dmesg | grep HDA
[ +0.005509] input: HDA Intel PCH Front Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input17
[ +0.000073] input: HDA Intel PCH Rear Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input18
[ +0.000057] input: HDA Intel PCH Line as /devices/pci0000:00/0000:00:1b.0/sound/card0/input19
[ +0.000054] input: HDA Intel PCH Line Out Front as /devices/pci0000:00/0000:00:1b.0/sound/card0/input20
[ +0.000052] input: HDA Intel PCH Line Out Surround as /devices/pci0000:00/0000:00:1b.0/sound/card0/input21
[ +0.000051] input: HDA Intel PCH Line Out CLFE as /devices/pci0000:00/0000:00:1b.0/sound/card0/input22
[ +0.000053] input: HDA Intel PCH Line Out Side as /devices/pci0000:00/0000:00:1b.0/sound/card0/input23
[ +0.000058] input: HDA Intel PCH Front Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input24
[followed by NVidia HDMI audio devices that are recognized]
# lspci -nnk | grep -A2 Audio
00:1b.0 Audio device [0403]: Intel Corporation 7 Series/C216 Chipset Family High Definition Audio Controller [8086:1e20] (rev 04)
Subsystem: Gigabyte Technology Co., Ltd 7 Series/C216 Chipset Family High Definition Audio Controller [1458:a002]
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel
[followed by NVidia HDMI audio devices that are recognized]
ALSA sees it:
# aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: Generic Analog [Generic Analog]
Subdevices: 0/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 1: Generic Digital [Generic Digital]
Subdevices: 1/1
Subdevice #0: subdevice #0
[followed by NVidia HDMI audio devices that are recognized]
# aplay -L | grep PCH
default:CARD=PCH
HDA Intel PCH, Generic Analog
sysdefault:CARD=PCH
HDA Intel PCH, Generic Analog
front:CARD=PCH,DEV=0
HDA Intel PCH, Generic Analog
surround21:CARD=PCH,DEV=0
HDA Intel PCH, Generic Analog
surround40:CARD=PCH,DEV=0
HDA Intel PCH, Generic Analog
surround41:CARD=PCH,DEV=0
HDA Intel PCH, Generic Analog
surround50:CARD=PCH,DEV=0
HDA Intel PCH, Generic Analog
surround51:CARD=PCH,DEV=0
HDA Intel PCH, Generic Analog
surround71:CARD=PCH,DEV=0
HDA Intel PCH, Generic Analog
iec958:CARD=PCH,DEV=0
HDA Intel PCH, Generic Digital
dmix:CARD=PCH,DEV=0
HDA Intel PCH, Generic Analog
dmix:CARD=PCH,DEV=1
HDA Intel PCH, Generic Digital
dsnoop:CARD=PCH,DEV=0
HDA Intel PCH, Generic Analog
dsnoop:CARD=PCH,DEV=1
HDA Intel PCH, Generic Digital
hw:CARD=PCH,DEV=0
HDA Intel PCH, Generic Analog
hw:CARD=PCH,DEV=1
HDA Intel PCH, Generic Digital
plughw:CARD=PCH,DEV=0
HDA Intel PCH, Generic Analog
plughw:CARD=PCH,DEV=1
HDA Intel PCH, Generic Digital
speaker-test plays audio normally, as does aplay -D default:CARD=PCH /usr/share/sounds/alsa/Front_Left.wav
However PulseAudio doesn't see the device at all:
$ pacmd list-sinks
1 sink(s) available.
* index: 0
name: <auto_null>
driver: <module-null-sink.c>
flags: DECIBEL_VOLUME LATENCY DYNAMIC_LATENCY
state: SUSPENDED
suspend cause: IDLE
priority: 1000
volume: front-left: 55705 / 85% / -4.24 dB, front-right: 55705 / 85% / -4.24 dB
balance 0.00
base volume: 65536 / 100% / 0.00 dB
volume steps: 65537
muted: no
current latency: 0.00 ms
max request: 344 KiB
max rewind: 344 KiB
monitor source: 0
sample spec: s16le 2ch 44100Hz
channel map: front-left,front-right
Stereo
used by: 0
linked by: 0
configured latency: 0.00 ms; range is 0.50 .. 2000.00 ms
module: 16
properties:
device.description = "Dummy Output"
device.class = "abstract"
device.icon_name = "audio-card"
When I go to Sound Settings, all I'm seeing is the Dummy Output device. (The nVidia devices are no longer listed here because, in rearranging things, I'm using a different monitor without audio, so there's no HDMI audio device connected currently.)
I've tried clearing out the PulseAudio configuration, thinking I might have some legacy cruft around, via:
rm ~/.pulse/* ~/.config/pulse/*
Debian 9 Stable
I have another partition on this machine which is still running Debian 9 stable, where the Intel HDA works under PulseAudio, and there do appear to be differences in the ALSA drivers compared to Debian 10, so below are the differences I noticed...
# aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC887-VD Analog [ALC887-VD Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 1: ALC887-VD Digital [ALC887-VD Digital]
Subdevices: 0/1
Subdevice #0: subdevice #0
# aplay -L | grep PCH
sysdefault:CARD=PCH
HDA Intel PCH, ALC887-VD Analog
front:CARD=PCH,DEV=0
HDA Intel PCH, ALC887-VD Analog
surround21:CARD=PCH,DEV=0
HDA Intel PCH, ALC887-VD Analog
surround40:CARD=PCH,DEV=0
HDA Intel PCH, ALC887-VD Analog
surround41:CARD=PCH,DEV=0
HDA Intel PCH, ALC887-VD Analog
surround50:CARD=PCH,DEV=0
HDA Intel PCH, ALC887-VD Analog
surround51:CARD=PCH,DEV=0
HDA Intel PCH, ALC887-VD Analog
surround71:CARD=PCH,DEV=0
HDA Intel PCH, ALC887-VD Analog
iec958:CARD=PCH,DEV=0
HDA Intel PCH, ALC887-VD Digital
dmix:CARD=PCH,DEV=0
HDA Intel PCH, ALC887-VD Analog
dmix:CARD=PCH,DEV=1
HDA Intel PCH, ALC887-VD Digital
dsnoop:CARD=PCH,DEV=0
HDA Intel PCH, ALC887-VD Analog
dsnoop:CARD=PCH,DEV=1
HDA Intel PCH, ALC887-VD Digital
hw:CARD=PCH,DEV=0
HDA Intel PCH, ALC887-VD Analog
hw:CARD=PCH,DEV=1
HDA Intel PCH, ALC887-VD Digital
plughw:CARD=PCH,DEV=0
HDA Intel PCH, ALC887-VD Analog
plughw:CARD=PCH,DEV=1
HDA Intel PCH, ALC887-VD Digital
And of course, Debian 9 sees Intel HDA in PulseAudio:
$ pacmd list-sinks
1 sink(s) available.
* index: 2
name: <alsa_output.pci-0000_00_1b.0.iec958-stereo>
driver: <module-alsa-card.c>
flags: HARDWARE HW_MUTE_CTRL DECIBEL_VOLUME LATENCY FLAT_VOLUME DYNAMIC_LATENCY
state: RUNNING
suspend cause:
priority: 9958
volume: front-left: 65536 / 100% / 0.00 dB, front-right: 65536 / 100% / 0.00 dB
balance 0.00
base volume: 65536 / 100% / 0.00 dB
volume steps: 65537
muted: no
current latency: 24.26 ms
max request: 4 KiB
max rewind: 344 KiB
monitor source: 3
sample spec: s16le 2ch 48000Hz
channel map: front-left,front-right
Stereo
used by: 1
linked by: 1
configured latency: 25.00 ms; range is 0.50 .. 1837.50 ms
card: 1 <alsa_card.pci-0000_00_1b.0>
module: 7
properties:
alsa.resolution_bits = "16"
device.api = "alsa"
device.class = "sound"
alsa.class = "generic"
alsa.subclass = "generic-mix"
alsa.name = "ALC887-VD Digital"
alsa.id = "ALC887-VD Digital"
alsa.subdevice = "0"
alsa.subdevice_name = "subdevice #0"
alsa.device = "1"
alsa.card = "0"
alsa.card_name = "HDA Intel PCH"
alsa.long_card_name = "HDA Intel PCH at 0xf5130000 irq 30"
alsa.driver_name = "snd_hda_intel"
device.bus_path = "pci-0000:00:1b.0"
sysfs.path = "/devices/pci0000:00/0000:00:1b.0/sound/card0"
device.bus = "pci"
device.vendor.id = "8086"
device.vendor.name = "Intel Corporation"
device.product.id = "1e20"
device.product.name = "7 Series/C216 Chipset Family High Definition Audio Controller"
device.form_factor = "internal"
device.string = "iec958:0"
device.buffering.buffer_size = "352800"
device.buffering.fragment_size = "176400"
device.access_mode = "mmap+timer"
device.profile.name = "iec958-stereo"
device.profile.description = "Digital Stereo (IEC958)"
device.description = "Built-in Audio Digital Stereo (IEC958)"
alsa.mixer_name = "Realtek ALC887-VD"
alsa.components = "HDA:10ec0887,1458a002,00100302"
module-udev-detect.discovered = "1"
device.icon_name = "audio-card-pci"
ports:
iec958-stereo-output: Digital Output (S/PDIF) (priority 0, latency offset 0 usec, available: unknown)
properties:
active port: <iec958-stereo-output>
So the question is obviously: how do I get the Intel HDA audio working again in PulseAudio with Debian 10? Is this something I can fix from a configuration standpoint or is this a driver issue that needs to be fixed either by the ALSA or PulseAudio maintainers?
|
Should anyone else run into this, here's a workaround to force PulseAudio to use the ALSA device...
First, confirm you know the correct sound card and device you want by playing some audio directly via ALSA:
aplay -D plughw:<CARD#>,<DEVICE#> /usr/share/sounds/alsa/Front_Center.wav
In my case I wanted the optical audio output so based on my aplay -l output as seen in my question above it was:
aplay -D plughw:0,1 /usr/share/sounds/alsa/Front_Center.wav
Make a note of the card and device number and add an entry to /etc/pulse/default.pa (replace 0,1 with what worked for you in the previous step):
load-module module-alsa-sink device=plughw:0,1
I added this line immediately before the .ifexists module-udev-detect.so line in the file (i.e. underneath the ### Load audio drivers statically comment)
Then run the following as the user your desktop session is logged in as (i.e. not as root):
pulseaudio --kill
pulseaudio --start
Then you should be able to open Sound Settings to see and select the card.
At this point, you should have audio playback through PulseAudio working again. (Something I noticed: pacmd list-cards will still not list the card, even though it now works.) Reminder: this is a workaround and not the long-term fix, so be sure to make a note to yourself to undo this at some point in the future to see if it's been fixed properly. But it gets audio working for the time being.
| PulseAudio not recognizing Intel HDA after upgrading to Debian testing (Buster) |