I regularly use cat to view debugging information in the console from my FPGA development board over the serial connection, but I have never had to tell Linux what the baud rate is. How does cat know what the baud rate of the serial connection is?
The stty utility sets or reports terminal I/O characteristics for the device that is its standard input. These characteristics are used when establishing a connection over that particular medium. cat doesn't know the baud rate as such; it simply prints on the screen whatever it receives over that connection.
As an example, stty -F /dev/ttyACM0 reports the current settings, including the baud rate, for the ttyACM0 device.
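The key point is that the baud rate lives in the tty device's termios settings, which any reader of the device (cat included) simply inherits. Here is a minimal Python sketch of reading those settings, using a pseudo-terminal pair so it runs anywhere; a real serial port such as /dev/ttyACM0 would be queried the same way through its file descriptor, given permissions:

```python
import os
import termios

# Open a pseudo-terminal pair; querying a real serial device like
# /dev/ttyACM0 works the same way via its open file descriptor.
master_fd, slave_fd = os.openpty()

# tcgetattr returns [iflag, oflag, cflag, lflag, ispeed, ospeed, cc]
attrs = termios.tcgetattr(slave_fd)
ispeed, ospeed = attrs[4], attrs[5]

# The speeds are termios constants (e.g. termios.B9600), not plain
# integers, which is why stty translates them for display.
print(ispeed, ospeed)

os.close(master_fd)
os.close(slave_fd)
```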
How does cat know the baud rate of the serial port?
I'm using Linux 4.15, and this happens to me many times when RAM usage peaks: the whole OS becomes unresponsive, frozen and useless. The only thing that seems to be working is the disk (main system partition), which is under massive use.
I don't know whether this issue is OS-specific, hardware-specific, or configuration-specific.
Any ideas?
What can make Linux so unresponsive?
Overcommitting available RAM, which causes a large amount of swapping, can definitely do this. Remember that random access I/O on your mechanical HDD requires moving a read/write head, which can only do around 100 seeks per second.
It's usual for Linux to go totally out to lunch if you overcommit RAM "too much". I also have a spinny disk and 8GB RAM. I have had problems with a couple of pieces of software with memory leaks, i.e. their memory usage keeps growing over time and never shrinks, so the only way to control it was to stop the software and then restart it. Based on that experience, I am not very surprised to hear of delays over ten minutes if you are generating 3GB+ of swap.
You won't necessarily see this in every case where you have more than 3GB of swap in use; in theory, the key concept is thrashing. If you are trying to switch between two different working sets and doing so requires swapping 3GB in and out, then at 100MB/s it will take at least 60 seconds even if the I/O pattern can be perfectly optimized. In practice, the I/O pattern will be far from optimal.
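A back-of-envelope sketch of the range involved, using the 100MB/s sequential and 100 seeks/s figures above, and assuming 4 KiB pages (the 3GB working sets are approximated as 3,000MB):

```python
# Best case: both 3 GB working sets stream sequentially at ~100 MB/s.
seq_seconds = 2 * 3_000 / 100    # 3,000 MB out + 3,000 MB in

# Worst case: every 4 KiB page needs its own seek at ~100 seeks/s.
pages = 2 * 3_000 * 1024 // 4    # number of 4 KiB pages in 6 GB
random_seconds = pages / 100

print(seq_seconds)               # 60.0 seconds, best case
print(random_seconds / 3600)     # ~4.3 hours, fully random case
```

Real swap I/O lands somewhere between these two extremes, which is why "a couple of minutes" to "give up and reboot" are both plausible outcomes.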
After the difficulty I had with this, I reformatted my swap space to 2GB (several times smaller than before), so the system would not be able to swap as deeply. You can do this even without messing around with resizing the partition, because mkswap takes an optional size parameter.
The rough balance is between running out of memory and having processes get killed, and having the system hang for so long that you give up and reboot anyway. I don't know if a 4GB swap partition is too large; it might depend on what you're doing. The important thing is to watch out for when the disk starts churning, check your memory usage, and respond accordingly.
Checking the memory usage of multi-process applications is difficult. To see memory usage per process without double-counting shared memory, you can run sudo atop -R, press M and m, and look at the PSIZE column. You can also use smem: smem -t -P firefox will show the PSS of all your Firefox processes, followed by a line with the total PSS. This is the correct approach to measuring the total memory usage of Firefox or Chrome-based browsers. (There are also browser-specific features for showing memory usage, which can break it down by individual tab.)
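PSS (proportional set size) is what makes per-process numbers sum correctly: each shared page is divided among the processes sharing it. A small sketch of reading it directly, assuming a Linux kernel new enough (>= 4.14) to expose /proc/&lt;pid&gt;/smaps_rollup:

```python
import os

def pss_kib(pid):
    """Total proportional set size (PSS) of one process, in KiB.

    Reads /proc/<pid>/smaps_rollup (Linux >= 4.14); returns None
    where the file is unavailable (other OS, vanished process...).
    """
    try:
        with open(f"/proc/{pid}/smaps_rollup") as f:
            for line in f:
                # The rollup file carries one summed "Pss:  N kB" line.
                if line.startswith("Pss:"):
                    return int(line.split()[1])
    except OSError:
        return None

print(pss_kib(os.getpid()))
```

Summing pss_kib over all PIDs of a browser reproduces, roughly, what smem -t -P reports as total PSS.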
What can make Linux unresponsive for minutes when browsing certain websites?
I would like to know how to determine which driver (out of those below) is handling my touchpad:
appletouch.ko.gz,
cyapa.ko.gz,
sermouse.ko.gz,
synaptics_usb.ko.gz,
bcm5974.ko.gz,
psmouse.ko.gz,
synaptics_i2c.ko.gz,
vsxxxaa.ko.gz
It's likely that none of them is. On my system, for example (Fedora 19 on a ThinkPad 410 with a Synaptics touchpad), I have no kernel driver loaded for it either:
$ lsmod|grep -iE "apple|cyapa|sermouse|synap|psmouse|vsxx|bcm"
So then what's taking care of this device? Well, it's actually this kernel module:
$ lsmod|grep -iE "input"
uinput 17672 0
If you want to see more about this module you can use modinfo uinput:
$ modinfo uinput
filename: /lib/modules/3.13.11-100.fc19.x86_64/kernel/drivers/input/misc/uinput.ko
version: 0.3
license: GPL
description: User level driver support for input subsystem
author: Aristeu Sergio Rozanski Filho
alias: devname:uinput
alias: char-major-10-223
...
As it turns out, input devices such as these are often dealt with at a higher level; in this case, the actual driver is implemented at the X11 level.
uinput is a linux kernel module that allows to handle the input subsystem from user land. It can be used to create and to handle input devices from an application. It creates a character device in /dev/input directory. The device is a virtual interface, it doesn't belong to a physical device.
SOURCE: Getting started with uinput: the user level input subsystem
So then where are my touchpad drivers?
They're in X11's subsystem. You can see the devices using the xinput --list command. For example, here are the devices on my ThinkPad laptop:
$ xinput --list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ Logitech USB Receiver id=9 [slave pointer (2)]
⎜ ↳ Logitech USB Receiver id=10 [slave pointer (2)]
⎜ ↳ SynPS/2 Synaptics TouchPad id=12 [slave pointer (2)]
⎜ ↳ TPPS/2 IBM TrackPoint id=13 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]
↳ Sleep Button id=8 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=11 [slave keyboard (3)]
↳ ThinkPad Extra Buttons id=14 [slave keyboard (3)]
Notice that my TouchPad shows up in this list. You can find out additional info about these devices through /proc, for example:
$ cat /proc/bus/input/devices
...
I: Bus=0011 Vendor=0002 Product=0007 Version=01b1
N: Name="SynPS/2 Synaptics TouchPad"
P: Phys=isa0060/serio1/input0
S: Sysfs=/devices/platform/i8042/serio1/input/input5
U: Uniq=
H: Handlers=mouse0 event4
B: PROP=9
B: EV=b
B: KEY=6420 30000 0 0 0 0
B: ABS=260800011000003
...
OK but where's the driver?
Digging deeper: if your system is using a Synaptics touchpad (which, I believe, makes ~90% of all touchpads), you can run locate synaptics | grep xorg, which should reveal the following files:
$ locate synaptics | grep xorg
/usr/lib64/xorg/modules/input/synaptics_drv.so
/usr/share/X11/xorg.conf.d/50-synaptics.conf
/usr/share/doc/xorg-x11-drv-synaptics-1.7.1
/usr/share/doc/xorg-x11-drv-synaptics-1.7.1/COPYING
/usr/share/doc/xorg-x11-drv-synaptics-1.7.1/README
The first result there is the actual driver you're asking about. It gets loaded into X.org via the second file here:
Section "InputClass"
Identifier "touchpad catchall"
Driver "synaptics"
MatchIsTouchpad "on"
MatchDevicePath "/dev/input/event*"
EndSection
And this line:
MatchDevicePath "/dev/input/event*"
is what associates the physical devices with this driver. You're probably asking yourself: how can this guy be so sure? This command shows the device node associated with my Synaptics TouchPad, using id=12 from the xinput --list output I showed earlier:
$ xinput --list-props 12 | grep "Device Node"
Device Node (251): "/dev/input/event4"
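The /proc/bus/input/devices lookup shown above can also be scripted; here is a small sketch pairing each input device name with its kernel handlers:

```python
def input_devices(path="/proc/bus/input/devices"):
    """Parse the kernel's input device list into (name, handlers) pairs."""
    devices, name = [], None
    try:
        with open(path) as f:
            for line in f:
                # N: lines carry the quoted device name...
                if line.startswith("N: Name="):
                    name = line.split("=", 1)[1].strip().strip('"')
                # ...and the following H: line lists its handlers.
                elif line.startswith("H: Handlers=") and name is not None:
                    devices.append((name, line.split("=", 1)[1].split()))
                    name = None
    except OSError:
        pass  # not a Linux /proc, or no permission
    return devices

for name, handlers in input_devices():
    print(name, handlers)
```

On the machine above, this would print a line like "SynPS/2 Synaptics TouchPad ['mouse0', 'event4']", matching the Handlers field quoted earlier.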
Which driver is handling my touchpad?
I use iwlist wlan0 scanning and it gives me a fair amount of data, but one part is missing: the protocol version. By protocol I mean a/b/g/n.
It would be very good to have these commands in a standard distro. I am using OpenWrt.
iwconfig (and its wireless extension API) is deprecated (it's in "maintenance only mode" and "no new features will be added").
Use iw instead. This requires a moderately recent kernel (e.g. >= 3.0) with support for nl80211.
Using iw dev wlan0 scan, you can figure out the protocol used:
If there are Supported rates below 11 Mbps (except 6), there might be 802.11b support (even APs which allow disabling b support will announce those rates but reject b-only clients).
If there are Supported rates or Extended supported rates above 11 Mbps or 6 Mbps, there might be 802.11g support (even APs which are set to require_mode n will announce those rates but reject b/g clients).
If there are HT capabilities, there is some kind of 802.11n support. The specific high-throughput features available depend on whether there is a secondary channel (in that case you are using a 40 MHz channel, so you get 150 Mbps per spatial stream instead of 72.2 Mbps), and on the number of spatial streams supported for TX and RX.
If you are on the bleeding edge and you see VHT capabilities, welcome to the 802.11ac world.
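These rules can be condensed into a small classifier over an already-parsed scan entry; a sketch (parsing iw's text output is left out, and the rate thresholds follow the heuristics above, so the result is a hint, not a guarantee):

```python
def wifi_generations(rates_mbps, has_ht=False, has_vht=False):
    """Guess which 802.11 generations an AP may support from its scan entry.

    rates_mbps: the announced (extended) supported rates. As noted above,
    announced rates are only advisory; an AP can still reject clients.
    """
    gens = set()
    # DSSS/CCK rates (1, 2, 5.5, 11) suggest 802.11b support;
    # 6 and 9 Mbps are OFDM rates, so they are excluded here.
    if any(r <= 11 and r not in (6, 9) for r in rates_mbps):
        gens.add("b")
    # OFDM rates above 11 Mbps suggest 802.11g on 2.4 GHz.
    if any(r > 11 for r in rates_mbps):
        gens.add("g")
    if has_ht:    # "HT capabilities" block present
        gens.add("n")
    if has_vht:   # "VHT capabilities" block present
        gens.add("ac")
    return gens

print(wifi_generations([1, 2, 5.5, 11, 6, 9, 12, 18, 24], has_ht=True))
```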
Linux find WiFi Networks protocol (a/b/g/n) version of all available access points
I've been tuning my Linux kernel for Intel Core 2 Quad (Yorkfield) processors, and I noticed the following messages from dmesg:
[ 0.019526] cpuidle: using governor menu
[ 0.531691] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[ 0.550918] intel_idle: does not run on family 6 model 23
[ 0.554415] tsc: Marking TSC unstable due to TSC halts in idle
PowerTop shows only states C1, C2 and C3 being used for the package and individual cores:
Package | CPU 0
POLL 0.0% | POLL 0.0% 0.1 ms
C1 0.0% | C1 0.0% 0.0 ms
C2 8.2% | C2 9.9% 0.4 ms
C3 84.9% | C3 82.5% 0.9 ms
| CPU 1
| POLL 0.1% 1.6 ms
| C1 0.0% 1.5 ms
| C2 9.6% 0.4 ms
| C3 82.7% 1.0 ms
| CPU 2
| POLL 0.0% 0.1 ms
| C1 0.0% 0.0 ms
| C2 7.2% 0.3 ms
| C3 86.5% 1.0 ms
| CPU 3
| POLL 0.0% 0.1 ms
| C1 0.0% 0.0 ms
| C2 5.9% 0.3 ms
| C3 87.7% 1.0 ms
Curious, I queried sysfs and found that the legacy acpi_idle driver was in use (I expected to see the intel_idle driver):
cat /sys/devices/system/cpu/cpuidle/current_driver
acpi_idle
Looking at the kernel source code, the current intel_idle driver contains a debug message specifically noting that some Intel family 6 models are not supported by the driver:
if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && boot_cpu_data.x86 == 6)
pr_debug("does not run on family %d model %d\n", boot_cpu_data.x86, boot_cpu_data.x86_model);
An earlier revision (November 22, 2010) of intel_idle.c shows anticipated support for Core 2 processors (model 23 actually covers both Core 2 Duo and Quad):
#ifdef FUTURE_USE
case 0x17: /* 23 - Core 2 Duo */
lapic_timer_reliable_states = (1 << 2) | (1 << 1); /* C2, C1 */
#endif
The above code was deleted in a December 2010 commit.
Unfortunately, there is almost no documentation in the source code, so there is no explanation regarding the lack of support for the idle function in these CPUs.
My current kernel configuration is as follows:
CONFIG_SMP=y
CONFIG_MCORE2=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_ACPI_PROCESSOR_IDLE=y
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_GOV_LADDER is not set
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set
CONFIG_INTEL_IDLE=y
My question is as follows:
Is there a specific hardware reason that Core 2 processors are not supported by intel_idle?
Is there a more appropriate way to configure a kernel for optimal CPU idle support for this family of processors (aside from disabling support for intel_idle)?
While researching Core 2 CPU power states ("C-states"), I actually managed to implement support for most of the legacy Intel Core/Core 2 processors. The complete implementation (Linux patch) with all of the background information is documented here.
As I accumulated more information about these processors, it started to become apparent that the C-states supported in the Core 2 model(s) are far more complex than those in both earlier and later processors. These are known as Enhanced C-states (or "CxE"), which involve the package, individual cores and other components on the chipset (e.g., memory). At the time the intel_idle driver was released, the code was not particularly mature and several Core 2 processors had been released that had conflicting C-state support.
Some compelling information on Core 2 Solo/Duo C-state support was found in this article from 2006. This is in relation to support on Windows, however it does indicate the robust hardware C-state support on these processors. The information regarding Kentsfield conflicts with the actual model number, so I believe they are actually referring to a Yorkfield below:
...the quad-core Intel Core 2 Extreme (Kentsfield) processor supports
all five performance and power saving technologies — Enhanced Intel
SpeedStep (EIST), Thermal Monitor 1 (TM1) and Thermal Monitor 2 (TM2),
old On-Demand Clock Modulation (ODCM), as well as Enhanced C States
(CxE). Compared to Intel Pentium 4 and Pentium D 600, 800, and 900
processors, which are characterized only by Enhanced Halt (C1) State,
this function has been expanded in Intel Core 2 processors (as well as
Intel Core Solo/Duo processors) for all possible idle states of a
processor, including Stop Grant (C2), Deep Sleep (C3), and Deeper
Sleep (C4).
This article from 2008 outlines support for per-core C-states on multi-core Intel processors, including Core 2 Duo and Core 2 Quad (additional helpful background reading was found in this white paper from Dell):
A core C-state is a hardware C-state. There are several core idle
states, e.g. CC1 and CC3. As we know, a modern state of the art
processor has multiple cores, such as the recently released Core Duo
T5000/T7000 mobile processors, known as Penryn in some circles. What
we used to think of as a CPU / processor, actually has multiple
general purpose CPUs in side of it. The Intel Core Duo has 2 cores in
the processor chip. The Intel Core-2 Quad has 4 such cores per
processor chip. Each of these cores has its own idle state. This makes
sense as one core might be idle while another is hard at work on a
thread. So a core C-state is the idle state of one of those cores.
I found a 2010 presentation from Intel that provides some additional background about the intel_idle driver, but unfortunately does not explain the lack of support for Core 2:
This EXPERIMENTAL driver supersedes acpi_idle on Intel Atom
Processors, Intel Core i3/i5/i7 Processors and associated Intel Xeon
processors. It does not support the Intel Core2 processor or earlier.
The above presentation does indicate that the intel_idle driver is an implementation of the "menu" CPU governor, which has an impact on Linux kernel configuration (i.e., CONFIG_CPU_IDLE_GOV_LADDER vs. CONFIG_CPU_IDLE_GOV_MENU). The differences between the ladder and menu governors are succinctly described in this answer.
Dell has a helpful article that lists C-state C0 to C6 compatibility:
Modes C1 to C3 work by basically cutting clock signals used inside the
CPU, while modes C4 to C6 work by reducing the CPU voltage. "Enhanced"
modes can do both at the same time.
Mode Name CPUs
C0 Operating State All CPUs
C1 Halt 486DX4 and above
C1E Enhanced Halt All socket LGA775 CPUs
C1E — Turion 64, 65-nm Athlon X2 and Phenom CPUs
C2 Stop Grant 486DX4 and above
C2 Stop Clock Only 486DX4, Pentium, Pentium MMX, K5, K6, K6-2, K6-III
C2E Extended Stop Grant Core 2 Duo and above (Intel only)
C3 Sleep Pentium II, Athlon and above, but not on Core 2 Duo E4000 and E6000
C3 Deep Sleep Pentium II and above, but not on Core 2 Duo E4000 and E6000; Turion 64
C3 AltVID AMD Turion 64
C4 Deeper Sleep Pentium M and above, but not on Core 2 Duo E4000 and E6000 series; AMD Turion 64
C4E/C5 Enhanced Deeper Sleep Core Solo, Core Duo and 45-nm mobile Core 2 Duo only
C6 Deep Power Down 45-nm mobile Core 2 Duo only
From this table (which I later found to be incorrect in some cases), it appears that there were a variety of differences in C-state support with the Core 2 processors (Note that nearly all Core 2 processors are Socket LGA775, except for Core 2 Solo SU3500, which is Socket BGA956 and Merom/Penryn processors. "Intel Core" Solo/Duo processors are one of Socket PBGA479 or PPGA478).
An additional exception to the table was found in this article:
Intel’s Core 2 Duo E8500 supports C-states C2 and C4, while the Core 2
Extreme QX9650 does not.
Interestingly, the QX9650 is a Yorkfield processor (Intel family 6, model 23, stepping 6). For reference, my Q9550S is Intel family 6, model 23 (0x17), stepping 10, which supposedly supports C-state C4 (confirmed through experimentation). Additionally, the Core 2 Solo U3500 has an identical CPUID (family, model, stepping) to the Q9550S but is available in a non-LGA775 socket, which confounds interpretation of the above table.
Clearly, the CPUID must be used at least down to the stepping in order to identify C-state support for this model of processor, and in some cases that may be insufficient (undetermined at this time).
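On a running system, the family/model/stepping triple is easy to read back from /proc/cpuinfo; a small Linux-specific sketch:

```python
def cpu_signature(path="/proc/cpuinfo"):
    """Return (family, model, stepping) for the first listed CPU, or None."""
    wanted = ("cpu family", "model", "stepping")
    fields = {}
    try:
        with open(path) as f:
            for line in f:
                key, _, value = line.partition(":")
                key = key.strip()
                # "model name" does not match "model", so it is skipped.
                if key in wanted and key not in fields:
                    fields[key] = int(value)
                if len(fields) == len(wanted):
                    break
    except (OSError, ValueError):
        return None
    return tuple(fields.get(k) for k in wanted) if fields else None

print(cpu_signature())  # e.g. (6, 23, 10) on the Q9550S discussed here
```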
The method signature for assigning CPU idle information is:
#define ICPU(model, cpu) \
{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, (unsigned long)&cpu }
Where model is enumerated in asm/intel-family.h. Examining this header file, I see that Intel CPUs are assigned 8-bit identifiers that appear to match the Intel family 6 model numbers:
#define INTEL_FAM6_CORE2_PENRYN 0x17
From the above, we have Intel Family 6, Model 23 (0x17) defined as INTEL_FAM6_CORE2_PENRYN. This should be sufficient for defining idle states for most of the Model 23 processors, but could potentially cause issues with QX9650 as noted above.
So, minimally, each group of processors that has a distinct C-state set would need to be defined in this list.
Zagacki and Ponnala, Intel Technology Journal 12(3):219-227, 2008 indicate that Yorkfield processors do indeed support C2 and C4. They also seem to indicate that the ACPI 3.0a specification supports transitions only between C-states C0, C1, C2 and C3, which I presume may also limit the Linux acpi_idle driver to transitions between that limited set of C-states. However, this article indicates that may not always be the case:
Bear in mind that is the ACPI C state, not the processor one, so ACPI
C3 might be HW C6, etc.
Also of note:
Beyond the processor itself, since C4 is a synchronized effort between
major silicon components in the platform, the Intel Q45 Express
Chipset achieves a 28-percent power improvement.
The chipset I'm using is indeed an Intel Q45 Express Chipset.
The Intel documentation on MWAIT states is terse but confirms the BIOS-specific ACPI behavior:
The processor-specific C-states defined in MWAIT extensions can map to
ACPI defined C-state types (C0, C1, C2, C3). The mapping relationship
depends on the definition of a C-state by processor implementation and
is exposed to OSPM by the BIOS using the ACPI defined _CST table.
My interpretation of the above table (combined with a table from Wikipedia, asm/intel-family.h and the above articles) is:
Model 9 0x09 (Pentium M and Celeron M):
Banias: C0, C1, C2, C3, C4
Model 13 0x0D (Pentium M and Celeron M):
Dothan, Stealey: C0, C1, C2, C3, C4
Model 14 0x0E INTEL_FAM6_CORE_YONAH (Enhanced Pentium M, Enhanced Celeron M or Intel Core):
Yonah (Core Solo, Core Duo): C0, C1, C2, C3, C4, C4E/C5
Model 15 0x0F INTEL_FAM6_CORE2_MEROM (some Core 2 and Pentium Dual-Core):
Kentsfield, Merom, Conroe, Allendale (E2xxx/E4xxx and Core 2 Duo E6xxx, T7xxxx/T8xxxx, Core 2 Extreme QX6xxx, Core 2 Quad Q6xxx): C0, C1, C1E, C2, C2E
Model 23 0x17 INTEL_FAM6_CORE2_PENRYN (Core 2):
Merom-L/Penryn-L: ?
Penryn (Core 2 Duo 45-nm mobile): C0, C1, C1E, C2, C2E, C3, C4, C4E/C5, C6
Yorkfield (Core 2 Extreme QX9650): C0, C1, C1E, C2E?, C3
Wolfdale/Yorkfield (Core 2 Quad, C2Q Xeon, Core 2 Duo E5xxx/E7xxx/E8xxx, Pentium Dual-Core E6xxx, Celeron Dual-Core): C0, C1, C1E, C2, C2E, C3, C4
From the amount of diversity in C-state support within just the Core 2 line of processors, it appears that a lack of consistent support for C-states may have been the reason for not attempting to fully support them via the intel_idle driver. I would like to fully complete the above list for the entire Core 2 line.
This is not really a satisfying answer, because it makes me wonder how much unnecessary power is used and excess heat has been (and still is) generated by not fully utilizing the robust power-saving MWAIT C-states on these processors.
Chattopadhyay et al. 2018, Energy Efficient High Performance Processors: Recent Approaches for Designing Green High Performance Computing is worth noting for the specific behavior I'm looking for in the Q45 Express Chipset:
Package C-state (PC0-PC10) - When the compute domains, Core and
Graphics (GPU) are idle, the processor has an opportunity for
additional power savings at uncore and platform levels, for example,
flushing the LLC and power-gating the memory controller and DRAM IO,
and at some state, the whole processor can be turned off while its
state is preserved on always-on power domain.
As a test, I inserted the following at linux/drivers/idle/intel_idle.c line 127:
static struct cpuidle_state conroe_cstates[] = {
{
.name = "C1",
.desc = "MWAIT 0x00",
.flags = MWAIT2flg(0x00),
.exit_latency = 3,
.target_residency = 6,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C1E",
.desc = "MWAIT 0x01",
.flags = MWAIT2flg(0x01),
.exit_latency = 10,
.target_residency = 20,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
// {
// .name = "C2",
// .desc = "MWAIT 0x10",
// .flags = MWAIT2flg(0x10),
// .exit_latency = 20,
// .target_residency = 40,
// .enter = &intel_idle,
// .enter_s2idle = intel_idle_s2idle, },
{
.name = "C2E",
.desc = "MWAIT 0x11",
.flags = MWAIT2flg(0x11),
.exit_latency = 40,
.target_residency = 100,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.enter = NULL }
};
static struct cpuidle_state core2_cstates[] = {
{
.name = "C1",
.desc = "MWAIT 0x00",
.flags = MWAIT2flg(0x00),
.exit_latency = 3,
.target_residency = 6,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C1E",
.desc = "MWAIT 0x01",
.flags = MWAIT2flg(0x01),
.exit_latency = 10,
.target_residency = 20,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C2",
.desc = "MWAIT 0x10",
.flags = MWAIT2flg(0x10),
.exit_latency = 20,
.target_residency = 40,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C2E",
.desc = "MWAIT 0x11",
.flags = MWAIT2flg(0x11),
.exit_latency = 40,
.target_residency = 100,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C3",
.desc = "MWAIT 0x20",
.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
.exit_latency = 85,
.target_residency = 200,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C4",
.desc = "MWAIT 0x30",
.flags = MWAIT2flg(0x30) | CPUIDLE_FLAG_TLB_FLUSHED,
.exit_latency = 100,
.target_residency = 400,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C4E",
.desc = "MWAIT 0x31",
.flags = MWAIT2flg(0x31) | CPUIDLE_FLAG_TLB_FLUSHED,
.exit_latency = 100,
.target_residency = 400,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C6",
.desc = "MWAIT 0x40",
.flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED,
.exit_latency = 200,
.target_residency = 800,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.enter = NULL }
};
at intel_idle.c line 983:
static const struct idle_cpu idle_cpu_conroe = {
.state_table = conroe_cstates,
.disable_promotion_to_c1e = false,
};
static const struct idle_cpu idle_cpu_core2 = {
.state_table = core2_cstates,
.disable_promotion_to_c1e = false,
};
at intel_idle.c line 1073:
ICPU(INTEL_FAM6_CORE2_MEROM, idle_cpu_conroe),
ICPU(INTEL_FAM6_CORE2_PENRYN, idle_cpu_core2),
After a quick compile and reboot of my PXE nodes, dmesg now shows:
[ 0.019845] cpuidle: using governor menu
[ 0.515785] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[ 0.543404] intel_idle: MWAIT substates: 0x22220
[ 0.543405] intel_idle: v0.4.1 model 0x17
[ 0.543413] tsc: Marking TSC unstable due to TSC halts in idle states deeper than C2
[ 0.543680] intel_idle: lapic_timer_reliable_states 0x2
And now PowerTOP is showing:
Package | CPU 0
POLL 2.5% | POLL 0.0% 0.0 ms
C1E 2.9% | C1E 5.0% 22.4 ms
C2 0.4% | C2 0.2% 0.2 ms
C3 2.1% | C3 1.9% 0.5 ms
C4E 89.9% | C4E 92.6% 66.5 ms
| CPU 1
| POLL 10.0% 400.8 ms
| C1E 5.1% 6.4 ms
| C2 0.3% 0.1 ms
| C3 1.4% 0.6 ms
| C4E 76.8% 73.6 ms
| CPU 2
| POLL 0.0% 0.2 ms
| C1E 1.1% 3.7 ms
| C2 0.2% 0.2 ms
| C3 3.9% 1.3 ms
| C4E 93.1% 26.4 ms
| CPU 3
| POLL 0.0% 0.7 ms
| C1E 0.3% 0.3 ms
| C2 1.1% 0.4 ms
| C3 1.1% 0.5 ms
| C4E 97.0% 45.2 ms
I've finally accessed the Enhanced Core 2 C-states, and it looks like there is a measurable drop in power consumption - my meter on 8 nodes appears to be averaging at least 5% lower (with one node still running the old kernel), but I'll try swapping the kernels out again as a test.
An interesting note regarding C4E support - My Yorkfield Q9550S processor appears to support it (or some other sub-state of C4), as evidenced above! This confuses me, because the Intel datasheet on the Core 2 Q9000 processor (section 6.2) only mentions C-states Normal (C0), HALT (C1 = 0x00), Extended HALT (C1E = 0x01), Stop Grant (C2 = 0x10), Extended Stop Grant (C2E = 0x11), Sleep/Deep Sleep (C3 = 0x20) and Deeper Sleep (C4 = 0x30). What is this additional 0x31 state? If I enable state C2, then C4E is used instead of C4. If I disable state C2 (force state C2E) then C4 is used instead of C4E. I suspect this may have something to do with the MWAIT flags, but I haven't yet found documentation for this behavior.
I'm not certain what to make of this: The C1E state appears to be used in lieu of C1, C2 is used in lieu of C2E and C4E is used in lieu of C4. I'm uncertain if C1/C1E, C2/C2E and C4/C4E can be used together with intel_idle or if they are redundant. I found a note in this 2010 presentation by Intel Labs Pittsburgh that indicates the transitions are C0 - C1 - C0 - C1E - C0, and further states:
C1E is only used when all the cores are in C1E
I believe that is to be interpreted as the C1E state is entered on other components (e.g. memory) only when all cores are in the C1E state. I also take this to apply equivalently to the C2/C2E and C4/C4E states (Although C4E is referred to as "C4E/C5" so I'm uncertain if C4E is a sub-state of C4 or if C5 is a sub-state of C4E. Testing seems to indicate C4/C4E is correct). I can force C2E to be used by commenting out the C2 state - however, this causes the C4 state to be used instead of C4E (more work may be required here). Hopefully there aren't any model 15 or model 23 processors that lack state C2E, because those processors would be limited to C1/C1E with the above code.
Also, the flags, latency and residency values could probably stand to be fine-tuned, but just taking educated guesses based on the Nehalem idle values seems to work fine. More reading will be required to make any improvements.
I tested this on a Core 2 Duo E2220 (Allendale), a Pentium Dual-Core E5300 (Wolfdale), Core 2 Duo E7400, Core 2 Duo E8400 (Wolfdale), Core 2 Quad Q9550S (Yorkfield) and Core 2 Extreme QX9650, and I have found no issues beyond the aforementioned preference for states C2/C2E and C4/C4E.
Not covered by this driver modification:
The original Core Solo/Core Duo (Yonah, non Core 2) are family 6, model 14. This is good because they supported the C4E/C5 (Enhanced Deep Sleep) C-states but not the C1E/C2E states and would need their own idle definition.
The only issues that I can think of are:
Core 2 Solo SU3300/SU3500 (Penryn-L) are family 6, model 23 and will be detected by this driver. However, they are not Socket LGA775 so they may not support the C1E Enhanced Halt C-state. Likewise for the Core 2 Solo ULV U2100/U2200 (Merom-L). However, the intel_idle driver appears to choose the appropriate C1/C1E based on hardware support of the sub-states.
Core 2 Extreme QX9650 (Yorkfield) reportedly does not support C-state C2 or C4. I have confirmed this by purchasing a used Optiplex 780 and QX9650 Extreme processor on eBay. The processor supports C-states C1 and C1E. With this driver modification, the CPU idles in state C1E instead of C1, so there is presumably some power savings. I expected to see C-state C3, but it is not present when using this driver so I may need to look into this further.
I managed to find a slide from a 2009 Intel presentation on the transitions between C-states (i.e., Deep Power Down).
In conclusion, it turns out that there was no real reason for the lack of Core 2 support in the intel_idle driver. It is clear now that the original stub code for "Core 2 Duo" only handled C-states C1 and C2, which would have been far less efficient than the acpi_idle driver, which also handles C-state C3. Once I knew where to look, implementing support was easy. The helpful comments and other answers were much appreciated, and if Amazon is listening, you know where to send the check.
This update has been committed to github. I will e-mail a patch to the LKML soon.
Update: I also managed to dig up a Socket T/LGA775 Allendale (Conroe) Core 2 Duo E2220, which is family 6, model 15, so I added support for that as well. This model lacks support for C-state C4, but supports C1/C1E and C2/C2E. This should also work for other Conroe-based chips (E4xxx/E6xxx) and possibly all Kentsfield and Merom (non Merom-L) processors.
Update: I finally found some MWAIT tuning resources. This Power vs. Performance writeup and this Deeper C states and increased latency blog post both contain some useful information on identifying CPU idle latencies. Unfortunately, this only reports those exit latencies that were coded into the kernel (but, interestingly, only those hardware states supported by the processor):
# cd /sys/devices/system/cpu/cpu0/cpuidle
# for state in `ls -d state*` ; do echo c-$state `cat $state/name` `cat $state/latency` ; done
c-state0/ POLL 0
c-state1/ C1 3
c-state2/ C1E 10
c-state3/ C2 20
c-state4/ C2E 40
c-state5/ C3 20
c-state6/ C4 60
c-state7/ C4E 100
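The same sysfs data can be collected without a shell loop; a sketch that degrades gracefully on systems without cpuidle:

```python
import glob
import os

def cpuidle_states(cpu=0):
    """Return [(name, exit_latency_us), ...] for one CPU's cpuidle states."""
    base = f"/sys/devices/system/cpu/cpu{cpu}/cpuidle"
    states = []
    # Sort numerically so state10 doesn't land before state2.
    for d in sorted(glob.glob(os.path.join(base, "state*")),
                    key=lambda p: int(p.rsplit("state", 1)[1])):
        try:
            with open(os.path.join(d, "name")) as f:
                name = f.read().strip()
            with open(os.path.join(d, "latency")) as f:
                latency = int(f.read())
            states.append((name, latency))
        except OSError:
            pass
    return states

print(cpuidle_states())
```

On the patched kernel above this would list POLL, C1, C1E, C2, C2E, C3, C4 and C4E with the latencies shown; keep in mind these are the values coded into the driver, not measured hardware exit latencies.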
Update: An Intel employee recently published an article on intel_idle detailing MWAIT states.
Why are some Intel family 6 CPU models (Core 2, Pentium M) not supported by intel_idle?
Can someone explain the difference between the UUID's reported by blkid and mdadm? On one of our CentOS systems, for example:
[root@server ~]# blkid | grep /dev/md1
/dev/md1: UUID="32cb0a6e-8148-44e9-909d-5b23df045bd1" TYPE="ext4"
[root@server ~]# mdadm --detail /dev/md1 | grep UUID
UUID : f204c558:babf732d:85bd7296:bbfebeea
Why are they different and how would we change the UUID used by mdadm?
I understand we would use tune2fs to change the UUID for the partition (which would change what is returned by blkid) but not sure how to change what mdadm uses.
The first one reports the UUID of the ext4 filesystem on the md block device. It helps the system identify the filesystem uniquely among the filesystems available on the system. It is stored in the structure of the filesystem itself, that is, in the data stored on the md device.
The second one is the UUID of the RAID device. It helps the md subsystem identify that particular RAID device uniquely; in particular, it helps identify all the block devices that belong to the RAID array. It is stored in the metadata of the array (on each member). Array members also have their own UUIDs in the md system; they may additionally have partition UUIDs (stored in the GPT partition table) if they are GPT partitions, be LVM volumes, and so on.
blkid is a bit misleading, as what it returns is the ID of the structure stored on the device (for those kinds of structures it knows about, like most filesystems, LVM members and swap devices). Also note that it's not uncommon to have block devices with structures bearing identical UUIDs (for instance LVM snapshots). And a block device can contain anything, including things whose structure doesn't include a UUID. As for changing the UUID that mdadm reports: stop the array and reassemble it with mdadm --assemble --update=uuid (adding --uuid=<new-uuid> to pick the value), which rewrites the UUID in the array metadata.
So, as an example, you could have a system with 3 drives, with GPT partitioning. Those drives could have a World Wide Name which identifies it uniquely. Let's say the 3 drives are partitioned with one partition each (/dev/sd[abc]1). Each partition will have a GPT UUID stored in the GPT partition table.
If those partitions make up a md RAID5 array. Each will get a md UUID as a RAID member, and the array will get a UUID as md RAID device.
That /dev/md0 can be further partitioned with MSDOS or GPT-type partitioning. For instance, we could have a /dev/md0p1 partition with a GPT UUID (stored in the GPT partition table that is stored in the data of /dev/md0).
That could in turn be a physical volume for LVM. As such it will get a PV UUID. The volume group will also have a VG UUID.
In that volume group, you would create logical volumes, each getting a LV UUID.
On one of those LVs (like /dev/VG/LV), you could make an ext4 filesystem. That filesystem would get an ext4 UUID.
blkid /dev/VG/LV would get you the (ext4) UUID of that filesystem. But as a partition inside the VG volume, it would also get a partition UUID (some partitioning scheme like MSDOS/MBR don't have UUIDs). That volume group is made of members PVs which are themselves other block devices. blkid /dev/md0p1 would give you the PV UUID. It also has a partition UUID in the GPT table on /dev/md0. /dev/md0 itself is made off other block devices. blkid /dev/sda1 will return the raid-member UUID. It also has a partition UUID in the GPT table on /dev/sda.
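One way to see that the filesystem UUID lives in the data itself is to create a filesystem inside a plain file, with no RAID or even block device involved, and probe that file. A sketch using mkfs.ext4 and blkid (assumed to be installed, as on most Linux systems):

```shell
img=$(mktemp)
truncate -s 64M "$img"
mkfs.ext4 -q -F "$img"            # -F: the target is a regular file, not a block device
blkid -s UUID -o value "$img"     # the UUID is read back out of the file's bytes
rm -f "$img"
```

The UUID printed here was written into the ext4 superblock by mkfs; nothing about the container (file, partition, md array, LV) was involved in storing it.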
| Difference between UUID from blkid and mdadm? |
1,530,741,375,000 |
Given the file:
$ cat file
1
a
C
B
2
c
3
A
b
By default sort will:
$ sort file
1
2
3
a
A
b
B
c
C
With LC_COLLATE=C, sort will order uppercase letters before lowercase:
$ LC_COLLATE=C sort file
1
2
3
A
B
C
a
b
c
Is it possible to get sort to reverse the case ordering, that is digits, lowercase then uppercase?
|
I don't know of any locales that, by default, sort in that order. The solution is to create a custom locale with a customized sort order. If anyone, four years later, wants to sort in a custom fashion, here's the trick.
The vast majority of locales don't specify their own sort order, but rather copy the sort order defined in /usr/share/i18n/locales/iso14651_t1_common so that is what you will want to edit. Rather than change the sort order for nearly every locale by modifying the original iso14651_t1_common, I suggest you make a copy. Details about how the sort order works and how to create a custom locale in your $HOME directory without root access are found in this answer to a similar question.
Take a look at how a and A are ordered based on their entries in iso14651_t1_common:
<U0061> <a>;<BAS>;<MIN>;IGNORE # 198 a
<U0041> <a>;<BAS>;<CAP>;IGNORE # 517 A
b and B are similar:
<U0062> <b>;<BAS>;<MIN>;IGNORE # 233 b
<U0042> <b>;<BAS>;<CAP>;IGNORE # 550 B
We see that on the first pass, both a and A have the collating symbol <a>, while both b and B have the collating symbol <b>. Since <a> appears before <b> in iso14651_t1_common, a and A are tied before b and B. The second pass doesn't break the ties because all four characters have the collating symbol <BAS>, but during the third pass the ties are resolved because the collating symbol for lowercase letters <MIN> appears on line 3467, before the collating symbol for uppercase letters <CAP> (line 3488). So the sort order ends up as a, A, b, B.
Swapping the first and third collating symbols would sort letters first by case (lower then upper), then by accent (<BAS> means non-accented), then by alphabetical order. However, both <MIN> and <CAP> come before the numeric digits, so this would have the unwanted effect of putting digits after letters.
The easiest way to keep digits first while making all lowercase letters come before all uppercase letters is to force all letters to tie during the first comparison by setting them all equal to <a>. To make sure that they sort alphabetically within case, change the last collating symbol from IGNORE to the current first collating symbol. Following this pattern, a would become:
<U0061> <a>;<BAS>;<MIN>;<a> # 198 a
A would become:
<U0041> <a>;<BAS>;<CAP>;<a> # 517 A
b would become:
<U0062> <a>;<BAS>;<MIN>;<b> # 233 b
B would become:
<U0042> <a>;<BAS>;<CAP>;<b> # 550 B
and so on for the rest of the letters.
Once you have created a customized version of iso14651_t1_common, follow the instructions in the answer linked above to compile your custom locale.
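If building a custom locale is heavier machinery than you need, the same ordering can be approximated in the pipeline itself. This is not part of the locale approach above, just a sketch: tag each line with a case rank (digits, lowercase, uppercase), sort on the rank plus a case-folded copy of the line, then strip the tags:

```shell
printf '%s\n' 1 a C B 2 c 3 A b |
awk '{ if ($0 ~ /^[0-9]/)      r = 0   # digits first
       else if ($0 ~ /^[a-z]/) r = 1   # then lowercase
       else                    r = 2   # then uppercase
       print r "\t" tolower($0) "\t" $0 }' |
LC_ALL=C sort -k1,1 -k2,2 |
cut -f3
# prints: 1 2 3 a b c A B C (one per line)
```

Unlike the custom locale, this only ranks on the first character of each line, so it is a quick hack rather than a general collation order.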
| Specify the sort order with LC_COLLATE so lowercase is before uppercase |
1,530,741,375,000 |
I am searching for an explanation what exactly the output of the commands ip link and ip addr means on a linux box.
# ip link
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 00:a1:ba:51:4c:11 brd ff:ff:ff:ff:ff:ff
4: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT qlen 1000
link/ether 00:a1:ba:51:4c:12 brd ff:ff:ff:ff:ff:ff
What exactly are the LOWER_UP, NO-CARRIER and other flags? I have found a reference at http://download.vikis.lt/doc/iproute-doc-2.6.32/ip-cref.ps but it does not contain complete information and man pages are not detailed enough.
|
Those are interface's flags. They are documented in the netdevice(7) man-page. Below is the relevant part (reordered alphabetically):
IFF_ALLMULTI Receive all multicast packets.
IFF_AUTOMEDIA Auto media selection active.
IFF_BROADCAST Valid broadcast address set.
IFF_DEBUG Internal debugging flag.
IFF_DORMANT Driver signals dormant (since Linux 2.6.17)
IFF_DYNAMIC The addresses are lost when the interface goes down.
IFF_ECHO Echo sent packets (since Linux 2.6.25)
IFF_LOOPBACK Interface is a loopback interface.
IFF_LOWER_UP Driver signals L1 up (since Linux 2.6.17)
IFF_MASTER Master of a load balancing bundle.
IFF_MULTICAST Supports multicast
IFF_NOARP No arp protocol, L2 destination address not set.
IFF_NOTRAILERS Avoid use of trailers.
IFF_POINTOPOINT Interface is a point-to-point link.
IFF_PORTSEL Is able to select media type via ifmap.
IFF_PROMISC Interface is in promiscuous mode.
IFF_RUNNING Resources allocated.
IFF_SLAVE Slave of a load balancing bundle.
IFF_UP Interface is running.
So, LOWER_UP means there is a signal at the physical level (i.e. something active is plugged in the network interface). NO-CARRIER, is the exact opposite: no signal is detected at the physical level.
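Beyond ip link, the same information is exposed per interface under /sys/class/net, which is handy in scripts. A sketch (lo is used below only because it always exists; substitute your own interface name such as eth0):

```shell
iface=lo
# hex bitmask of the IFF_* constants from <linux/if.h> (e.g. IFF_UP = 0x1)
cat "/sys/class/net/$iface/flags"
# 1 = carrier detected (LOWER_UP), 0 = NO-CARRIER; the read fails
# while the interface is administratively down
cat "/sys/class/net/$iface/carrier" 2>/dev/null || echo "interface is down"
```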
| ip link and ip addr output meaning |
1,530,741,375,000 |
I'm not sure if this is more of a SuperUser or UnixLinux question, but I'll try here...
Recently, I found this:
#710689 - aptitude: use unicode character in the trees - Debian Bug report logs
It would be nice when aptitude would use unicode characters for the trees in the
dependency lists, e.g. instead of:
--\ Depends (3)
--- libc-dev-bin (= 2.17-3)
--- libc6 (= 2.17-3)
--- linux-libc-dev
--\ Suggests (2)
--- glibc-doc (UNSATISFIED)
--\ manpages-dev
...
... and I thought - wow, I really like that ASCII-art tree output, wasn't aware that aptitude could do that! So, I start messing for an hour with aptitude command line switches - and I simply cannot get that output? So my initial question was - where does that output come from in the first place?!
After a while, I realized that on my system, aptitude ultimately symlinks to /usr/bin/aptitude-curses; and I finally realized that aptitude has a curses interface! :/
So, I finally run aptitude without any arguments - and so the curses interface starts, and I can see something like this:
... so quite obviously, those ASCII tree characters come from the curses interface.
So I was wondering - is there a Debian/apt tool, which will output such a "visual" ASCII tree - but with actual dependencies of packages?
I know about debtree - Package dependency graphs (also software recommendation - How to visually display dependencies of a package? - Ask Ubuntu); but I'd rather have something in terminal, resembling a directory tree (rather than the "unordered" [in terms of node position] graphs from debtree, generated by graphviz's dot).
I've also seen Is there anything that will show dependencies visually, like a tree?, which recommends:
$ apt-rdepends aptitude
Reading package lists... Done
Building dependency tree
Reading state information... Done
aptitude
Depends: libapt-pkg4.10
Depends: libboost-iostreams1.42.0 (>= 1.42.0-1)
Depends: libc6 (>= 2.4)
Depends: libcwidget3
Depends: libept1
Depends: libgcc1 (>= 1:4.1.1)
Depends: libncursesw5 (>= 5.7+20100313)
Depends: libsigc++-2.0-0c2a (>= 2.0.2)
Depends: libsqlite3-0 (>= 3.7.3)
Depends: libstdc++6 (>= 4.5)
Depends: libxapian22
libapt-pkg4.10
libboost-iostreams1.42.0
Depends: libbz2-1.0
Depends: libc6 (>= 2.3.6-6~)
Depends: libgcc1 (>= 1:4.1.1)
Depends: libstdc++6 (>= 4.2.1)
Depends: zlib1g (>= 1:1.1.4)
...
... which is good, because it lists first the immediate dependencies of the required package; and then the dependencies of the first-level dependency packages, and so on - but it's not visualized as a tree (and actually, aptitude's curses interface simply shows installed info when you expand dependency node; it does not expand to further dependencies).
So, the question is - is there a tool, that would produce a dependency tree graph with terminal characters - like, say, in the following pseudocode:
$ pseudo-deb-graph --show-package="aptitude"
aptitude
--- Depends: libapt-pkg4.10
--\ Depends: libboost-iostreams1.42.0 (>= 1.42.0-1)
--- Depends: libbz2-1.0
--- Depends: libc6 (>= 2.4)
--\ Depends: libc6 (>= 2.3.6-6~)
--\ Depends: libc-bin (= 2.13-0ubuntu13)
--- ...
--\ Depends: libgcc1
--- ...
--\ Depends: tzdata
--- ...
...
|
You can do it with a bash script.
Source code: "apt-rdepends-tree"
https://gist.github.com/damphat/6214499
Run
# sudo apt-get install apt-rdepends
# save gist, above, as "apt-rdepends-tree"
# chmod +x apt-rdepends-tree
# ./apt-rdepends-tree gcc
Output looks like this:
# ./apt-rdepends-tree gcc
├─ gcc
│ ├─ cpp (>= 4:4.7.2-1)
│ └─ gcc-4.7 (>= 4.7.2-1)
└─ package-a
├─ package-b
└─ package-c
| Output visual (ASCII) Debian dependency tree to terminal? |
1,530,741,375,000 |
Between the XFS and Ext4 file systems, which one is really stable and reliable in the long run, with heavy disk writes and reads?
The system will be in service 24/7, with disk reads and writes every second.
The system needs to achieve 99.95% uptime over a 1-year run,
which means at most about 20 hours of downtime per year.
Which file system is the best choice for such a challenge? (I wanted to use Solaris or FreeBSD, but for my project I have to use Ubuntu, ArchLinux, Fedora or CentOS.)
But I am confused about which file system to choose.
|
XFS used to be more fragile, but that issue seems to have been fixed.
XFS was surely a slow FS on metadata operations, but that has been fixed recently as well.
EXT4 is still getting quite critical fixes as it follows from commits at kernel.org's git.
"EXT4 does not support concurrent writes, XFS does"
(But) EXT4 is more "mainline"
So, the final answer depends on your precise requirements (as usual).
| XFS vs Ext4 vs Others - which file system is stable, reliable, for long run such as 24/7 case [closed] |
1,530,741,375,000 |
I am a long-time Linux user and have recently become interested in playing about with BSD-based operating systems. What are the differences between Linux and BSD-based systems? I am interested in learning about the functional, practical and also historical differences.
|
It is very tempting to want to define the differences between BSD and Linux. Just like Gilles said in the comments, it is not an easy task since they're so numerous and disparate. Very often, the differences won't even be noticeable at the user's level; everything has been worked out so that the OS behaves as you would expect a Unix to.
Moreover multiple distributions are available for each. No matter what you say about Linux/BSD generally, you'll often find a distribution that contradicts it.
The following is a list of comparisons I found scattered over the web.
Here on U&L, a user has defined the following differences:
Big differences are (in my opinion of course):
Userland (Linux uses GNU while BSD uses BSD)
Integration (Linux is a collection of different efforts, BSD is much more unified at the core)
Packaging (Linux typically manages installed software in binary packages - BSD typically manages a "ports" tree that you use to
build software from sources)
Notice the word typically in his last point. Some Linux distributions will manage source code and conversely some BSDs will manage binary packages.
Matthew D. Fuller has a lengthy comparison between BSDs and Linux you may want to look into. The article will compare both on Design level, Technical differences, Philosophies and finally address common Myths. Here are some excerpts:
BSD is what you get when a bunch of Unix hackers sit down to try to
port a Unix system to the PC. Linux is what you get when a bunch of PC
hackers sit down and try to write a Unix system for the PC.
--
BSD is designed. Linux is grown. Perhaps that's the only succinct way
to describe it, and possibly the most correct.
User vivek on FreeBSD forums writes:
Key differences:
FreeBSD full os. Linux is kernel. Linux distribution is os (100+ major distros).
FreeBSD everything comes from a single source. Linux is like mix of lot of stuff.
BSD License vs GPL
FreeBSD Installer
BSD commands (ls file -l will not work) vs GPL command (ls file -l will work)
FreeBSD better and updated man pages.
BSD rc.d style booting vs Linux SysV style init.d booting
Here are some articles describing the history of each:
Written by Dave Tyson, this article describes the history of many Unix variants (including of course BSD and Linux).
Scott Barman describes how both operating systems came to be and how it forged his opinion:
I will give one "solid" opinion: If I had to choose one system that
would act as my router, DNS, ftp server, e-mail gateway, firewall, web
server, proxy server, etc., that system would run a BSD-based
operating system. If I had to choose one system that would act as my
desktop workstation, run X, all the application I like, etc., that
system would run Linux. HOWEVER, I would have no problem running Linux
as my work horse server or running the BSD-based system on my desktop.
Further reading
This question here on U&L, compares existing BSDs, highlighting what they have in common.
| What are the main differences between BSD- and linux-based operating systems? |
1,530,741,375,000 |
I run free -m on a Debian VM running on Hyper-V:
total used free shared buffers cached
Mem: 10017 9475 541 147 34 909
-/+ buffers/cache: 8531 1485
Swap: 1905 0 1905
So out of my 10GB of memory, 8.5GB is in use and only 1500MB is free (excluding cache).
But I struggle to find what is using the memory. The output of ps aux | awk '{sum+=$6} END {print sum / 1024}', which is supposed to add up the RSS utilisation is:
1005.2
In other words, my processes only use 1GB of memory but the system as a whole (excluding cache) uses 8.5GB.
What could be using the other 7.5GB?
ps: I have another server with a similar configuration that shows used mem of 1200 (free mem = 8.8GB) and the sum of RSS usage in ps is 900 which is closer to what I would expect...
EDIT
cat /proc/meminfo on machine 1 (low memory):
MemTotal: 10257656 kB
MemFree: 395840 kB
MemAvailable: 1428508 kB
Buffers: 162640 kB
Cached: 1173040 kB
SwapCached: 176 kB
Active: 1810200 kB
Inactive: 476668 kB
Active(anon): 942816 kB
Inactive(anon): 176184 kB
Active(file): 867384 kB
Inactive(file): 300484 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 1951740 kB
SwapFree: 1951528 kB
Dirty: 16 kB
Writeback: 0 kB
AnonPages: 951016 kB
Mapped: 224388 kB
Shmem: 167820 kB
Slab: 86464 kB
SReclaimable: 67488 kB
SUnreclaim: 18976 kB
KernelStack: 6736 kB
PageTables: 13728 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 7080568 kB
Committed_AS: 1893156 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 62284 kB
VmallocChunk: 34359672552 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 67520 kB
DirectMap2M: 10418176 kB
cat /proc/meminfo on machine 2 (normal memory usage):
MemTotal: 12326128 kB
MemFree: 8895188 kB
MemAvailable: 10947592 kB
Buffers: 191548 kB
Cached: 2188088 kB
SwapCached: 0 kB
Active: 2890128 kB
Inactive: 350360 kB
Active(anon): 1018116 kB
Inactive(anon): 33320 kB
Active(file): 1872012 kB
Inactive(file): 317040 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 3442684 kB
SwapFree: 3442684 kB
Dirty: 44 kB
Writeback: 0 kB
AnonPages: 860880 kB
Mapped: 204680 kB
Shmem: 190588 kB
Slab: 86812 kB
SReclaimable: 64556 kB
SUnreclaim: 22256 kB
KernelStack: 10576 kB
PageTables: 11924 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 9605748 kB
Committed_AS: 1753476 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 62708 kB
VmallocChunk: 34359671804 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 63424 kB
DirectMap2M: 12519424 kB
|
I understand you're using Hyper-V, but the concepts are similar. Maybe this will set you on the right track.
Your issue is likely due to virtual memory ballooning, a technique the hypervisor uses to optimize memory. See this link for a description
I observed your exact same symptoms with my VMs in vSphere. A 4G machine with nothing running on it would report 30M used by cache, but over 3G "used" in the "-/+ buffers" line.
Here's sample output from VMWare's statistics command. This shows how close to 3G is being tacked on to my "used" amount:
vmware-toolbox-cmd stat balloon
3264 MB
In my case, somewhat obviously, my balloon driver was using ~3G
I'm not sure what the similar command in Hyper-V is to get your balloon stats, but I'm sure you'll get similar results
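On Linux under Hyper-V the balloon driver is hv_balloon; I'm not aware of a single stat command for it, but you can at least quantify the gap yourself. Below is a sketch of the arithmetic free(1) performs, compared against the summed RSS (the exact numbers will of course differ on your machine):

```shell
# "used" as free(1) computes it: MemTotal - MemFree - Buffers - Cached (kB)
used_kb=$(awk '/^MemTotal:/{t=$2}
               /^MemFree:/{f=$2}
               /^Buffers:/{b=$2}
               /^Cached:/{c=$2}
               END{print t - f - b - c}' /proc/meminfo)
# rough sum of what userspace processes account for (shared pages are
# double-counted here, so this is an upper bound on process memory)
rss_kb=$(ps -e -o rss= | awk '{s += $1} END{print s + 0}')
echo "used: ${used_kb} kB  process RSS: ${rss_kb} kB  gap: $((used_kb - rss_kb)) kB"
```

A large gap that isn't explained by Slab or AnonPages in /proc/meminfo points at memory held outside normal processes, such as a balloon driver.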
| High memory usage but no process is using it |
1,530,741,375,000 |
Please suggest any particular unnecessary files that I can clean to bring everything back to normal (temporarily), i.e. any log or archive or anything. My /var/log has only 40MB and my home directory has 3GB of space (so I believe that's not the problem). Other than that, what can I clean up to make space?
[user@host]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_inamivm-lv_root
18G 17G 0 100% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/sda1 485M 71M 389M 16% /boot
I am on a Debian machine.
UPDATE1:
output of cd /; du -sxh *
6.1M bin
61M boot
156K dev
22M etc
3.3G home
306M lib
18M lib64
16K lost+found
4.0K media
4.0K mnt
408K opt
du: cannot access `proc/18605/task/18605/fd/4': No such file or directory
du: cannot access `proc/18605/task/18605/fdinfo/4': No such file or directory
du: cannot access `proc/18605/fd/4': No such file or directory
du: cannot access `proc/18605/fdinfo/4': No such file or directory
0 proc
208K root
9.7M sbin
0 selinux
4.0K srv
0 sys
8.0K tmp
536M usr
187M var
Update2
Output of ls -la /
dr-xr-xr-x. 22 root root 4096 Aug 7 08:42 .
dr-xr-xr-x. 22 root root 4096 Aug 7 08:42 ..
-rw-r--r--. 1 root root 0 Aug 7 08:42 .autofsck
dr-xr-xr-x. 2 root root 4096 Mar 28 16:53 bin
dr-xr-xr-x. 5 root root 1024 Mar 28 16:54 boot
drwxr-xr-x. 16 root root 3580 Sep 9 03:13 dev
drwxr-xr-x. 69 root root 4096 Aug 23 09:19 etc
drwxr-xr-x. 9 root root 4096 Jun 29 16:10 home
dr-xr-xr-x. 8 root root 4096 Mar 7 2012 lib
dr-xr-xr-x. 9 root root 12288 Mar 28 16:53 lib64
drwx------. 2 root root 16384 Mar 7 2012 lost+found
drwxr-xr-x. 2 root root 4096 Sep 23 2011 media
drwxr-xr-x. 2 root root 4096 Sep 23 2011 mnt
drwxr-xr-x. 3 root root 4096 Mar 7 2012 opt
dr-xr-xr-x. 355 root root 0 Aug 7 08:42 proc
dr-xr-x---. 5 root root 4096 Aug 17 18:27 root
dr-xr-xr-x. 2 root root 4096 May 2 09:13 sbin
drwxr-xr-x. 7 root root 0 Aug 7 08:42 selinux
drwxr-xr-x. 2 root root 4096 Sep 23 2011 srv
drwxr-xr-x. 13 root root 0 Aug 7 08:42 sys
drwxrwxrwt. 3 root root 4096 Sep 13 03:37 tmp
drwxr-xr-x. 13 root root 4096 Mar 28 17:53 usr
drwxr-xr-x. 18 root root 4096 Mar 7 2012 var
|
The best way of finding out what is consuming your disk space is to use graphical software like baobab:
Launch it with sudo baobab /
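If you would rather stay in the terminal (for instance over SSH), du piped through sort gives a similar picture to baobab. A sketch; the path is an example, and -x keeps du on one filesystem so /proc and other mounts don't pollute the result:

```shell
# show the 15 biggest directories under a path
# (set path=/ and run with sudo for a full-system view)
path=.
du -xh "$path" 2>/dev/null | sort -rh | head -n 15
```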
| How to clean up unnecessary files |
1,530,741,375,000 |
I'm creating a small backup script using sshfs:
sshfs backup_user@target_ip:/home /mnt/backup
Is there a way to include the password in this command?
Or is there another file transfer solution where the login password can be included other than FTP/SFTP?
|
-o password_stdin does not seem to work on all systems, for instance FreeBSD.
You can also use the expect interpreter; it should work with sshfs
and should do the trick.
Another solution would be sshpass, for instance, let say your are backing up directory /var/www
Backing up:
name=$(date '+%y-%m-%d')
mkdir /backup/$name && tar -czvf /backup/$name/"$name.tar.gz" /var/www
uploading backup file to backup server
sshpass -p "your_password" scp -r /backup/$name backup_user@target_ip:/home/
So it will upload directory with today's backup
But still, as was said above, the best (safe and simple) way would be to use an SSH key pair.
The only inconvenience is that you have to go through the key generation process once for every server you need to pair, but it is better than keeping a password in plain text on all the servers you want to back up :)
Generating a Key Pair the Proper way
On Local server
ssh-keygen -t rsa
On remote Server
ssh root@remote_servers_ip "mkdir -p .ssh"
Uploading Generated Public Keys to the Remote Server
cat ~/.ssh/id_rsa.pub | ssh root@remote_servers_ip "cat >> ~/.ssh/authorized_keys"
Set Permissions on Remote server
ssh root@remote_servers_ip "chmod 700 ~/.ssh; chmod 640 ~/.ssh/authorized_keys"
Login
ssh root@remote_servers_ip
Enabling SSH Protocol v2
uncomment "Protocol 2" in /etc/ssh/sshd_config
enabling public key authorization in sshd
uncomment "PubkeyAuthentication yes" in /etc/ssh/sshd_config
If StrictModes is set to yes in /etc/ssh/sshd_config then
restorecon -Rv ~/.ssh
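The key-generation step above can also be done non-interactively inside a provisioning script. A sketch; the key path is an example, and the sshfs line is shown commented because it needs your real host:

```shell
key="$HOME/.ssh/backup_key"      # example path for a dedicated backup key
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
# -N '' = empty passphrase, -q = quiet; skip if the key already exists
[ -f "$key" ] || ssh-keygen -t rsa -b 4096 -N '' -q -f "$key"
chmod 600 "$key"
ls "$key" "$key.pub"
# the backup script can then mount without a password, e.g.:
#   sshfs -o IdentityFile="$key" backup_user@target_ip:/home /mnt/backup
```

A passphrase-less key is only as safe as the file permissions protecting it, which is still far better than a plaintext password in the script.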
| Username and password in command line with sshfs |
1,530,741,375,000 |
In his autobiography, Just for Fun, Linus mentions the "page-to-disk" feature that was pivotal in making Linux a worthy competitor to Minix and other UNIX clones of the day:
I remember that, in December, there was this guy in Germany who only had 2 megabytes of RAM, and he was trying to compile the kernel and he couldn't run GCC because GCC at the time needed more than a megabyte. He asked me if Linux could be compiled with a smaller compiler that wouldn't need as much memory. So I decided that even though I didn't need the particular feature, I would make it happen for him. It's called page-to-disk, and it means that even though someone only has 2 mgs of RAM, he can make it appear to be more using the disk for memory. This was around Christmas 1991.
Page-to-disk was a fairly big thing because it was something Minix had never done. It was included in version 0.12, which was released in the first week of January 1992. Immediately, people started to compare Linux not only to Minix but to Coherent, which was a small Unix clone developed by Mark Williams Company. From the beginning, the act of adding page-to-disk caused Linux to rise above the competition.
That's when Linux took off. Suddenly there were people switching from Minix to Linux.
Is he essentially talking about swapping here? People with some historical perspective on Linux would probably know.
|
Yes, this is effectively swapping. Quoting the release notes for 0.12:
Virtual memory.
In addition to the "mkfs" program, there is now a "mkswap" program on
the root disk. The syntax is identical: "mkswap -c /dev/hdX nnn", and
again: this writes over the partition, so be careful. Swapping can then
be enabled by changing the word at offset 506 in the bootimage to the
desired device. Use the same program as for setting the root file
system (but change the 508 offset to 506 of course).
NOTE! This has been tested by Robert Blum, who has a 2M machine, and it
allows you to run gcc without much memory. HOWEVER, I had to stop using
it, as my diskspace was eaten up by the beta-gcc-2.0, so I'd like to
hear that it still works: I've been totally unable to make a
swap-partition for even rudimentary testing since about christmastime.
Thus the new changes could possibly just have backfired on the VM, but I
doubt it.
In 0.12, paging is used for a number of features, not just swapping to a device: demand-loading (only loading pages from binaries as they’re used), sharing (sharing common pages between processes).
| Is the "page-to-disk" feature Linus talks about in his autobiography essentially the concept of swapping we use today? |
1,530,741,375,000 |
I'm having a little issue. I have a live system running RHEL 6.7 (a VM) on VMware 6.5 (which is not managed by our group). The issue is that the other group tried to extend the capacity of an existing disk on the VM. After that, I ran the usual scan command to detect new disks, echo "- - -" > /sys/class/scsi_host/host0/scan, but nothing happened. They added 40G to the sdb disk, which should now be 100G, and I can see the change on the VMware side but not in Linux. So where is the problem? As I said, this is a live system, so I don't want to reboot it.
Here is the system :
# df -h /dev/mapper/itsmvg-bmclv
/dev/mapper/itsmvg-bmclv   59G   47G  9.1G  84% /opt/bmc
# lsblk
sdb                     8:16   0  60G  0 disk
└─itsmvg-bmclv (dm-2) 253:2    0  60G  0 lvm  /opt/bmc
# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  itsmvg   1   1   0 wz--n- 59.94g     0
# pwd
/sys/class/scsi_host
# ll
lrwxrwxrwx 1 root root 0 Nov 13 16:18 host0 -> ../../devices/pci0000:00/0000:00:07.1/host0/scsi_host/host0
lrwxrwxrwx 1 root root 0 Nov 13 16:19 host1 -> ../../devices/pci0000:00/0000:00:07.1/host1/scsi_host/host1
lrwxrwxrwx 1 root root 0 Nov 13 16:19 host2 -> ../../devices/pci0000:00/0000:00:15.0/0000:03:00.0/host2/scsi_host/host2
|
Below is the command you need to run to scan a SCSI host so that a newly attached disk shows up:
echo "- - -" > /sys/class/scsi_host/host$i/scan
where $i is the host number (run it for each host). Note that this only detects newly attached disks; for an existing disk that was resized, as in your case, rescan the device itself instead: echo 1 > /sys/block/sdb/device/rescan
| How to detect new hard disk attached without rebooting? |
1,530,741,375,000 |
How to install VirtualBox Extension Pack to VirtualBox latest version on Linux?
I would also like to be able to verify that the extension pack has been successfully installed, and uninstall it if I wish.
|
First, you need to adhere to the VirtualBox Extension Pack Personal Use and Evaluation License.
Second, I advise to only install this package if actually needed, here is the description of the VirtualBox Extension Pack functionality:
Oracle Cloud Infrastructure integration, USB 2.0 and USB 3.0 Host Controller, Host Webcam, VirtualBox RDP, PXE ROM, Disk Encryption, NVMe.
Now, let's download the damn thing:
we need to store the latest VirtualBox version into a variable, let's call it LatestVirtualBoxVersion
download the latest version of the VirtualBox Extension Pack, one-liner follows
LatestVirtualBoxVersion=$(wget -qO - https://download.virtualbox.org/virtualbox/LATEST-STABLE.TXT) && wget "https://download.virtualbox.org/virtualbox/${LatestVirtualBoxVersion}/Oracle_VM_VirtualBox_Extension_Pack-${LatestVirtualBoxVersion}.vbox-extpack"
Simplification attribution goes to guntbert. Thank you.
You might want to verify its integrity by comparing its SHA-256 checksum available in file:
https://www.virtualbox.org/download/hashes/${LatestVirtualBoxVersion}/SHA256SUMS
using
sha256sum -c --ignore-missing SHA256SUMS
Then, we install it as follows:
sudo VBoxManage extpack install --replace Oracle_VM_VirtualBox_Extension_Pack-${LatestVirtualBoxVersion}.vbox-extpack
To verify if it has been successfully installed, we may list the installed extension packs:
VBoxManage list extpacks
To uninstall the extension pack:
sudo VBoxManage extpack uninstall "Oracle VM VirtualBox Extension Pack"
| How to install VirtualBox Extension Pack to VirtualBox latest version on Linux? |
1,530,741,375,000 |
I want to create random unique numbers (UUIDs) like the following
node.id=ffffffff-ffff-ffff-ffff-ffffffffffff
First I tried this
$ rndnum=` echo $RANDOM"-"echo $RANDOM"-"echo $RANDOM"-"echo $RANDOM"-"echo $RANDOM`
$ echo $rndnum
30380-echo 21875-echo 14791-echo 32193-echo 11503
What is the right way to create the following (where f is any number)?
ffffffff-ffff-ffff-ffff-ffffffffffff
|
On Linux, the util-linux/util-linux-ng package offers a command to generate UUIDs: uuidgen.
$ uuidgen
5528f550-6559-4d61-9054-efb5a16a4de0
To quote the manual:
The uuidgen program creates (and prints) a new universally unique identifier (UUID) using the libuuid(3) library. The new UUID can reasonably be considered unique among all UUIDs created on the local system, and among UUIDs created on other systems in the past and in the future.
There are two types of UUIDs which uuidgen can generate: time-based UUIDs and random-based UUIDs. By default uuidgen will generate a random-based UUID if a high-quality random number generator is present. Otherwise, it will choose a time-based UUID. It is possible to force the generation of one of these two UUID types by using the -r or -t options.
Addendum: The OP had provided a link in the comments to the documentation for Presto DB. After a bit of searching, I found this related discussion where it is explicitly mentioned that the node.id property is indeed a UUID.
Adding the information provided by frostschutz in a comment:
As an alternative to the uuidgen/libuuid approach, you can make use of an interface exposed by the Linux kernel itself to generate UUIDs:
$ cat /proc/sys/kernel/random/uuid
00db2531-365c-415c-86f7-503a35fafa58
The UUID is re-generated on each request.
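If neither uuidgen nor /proc/sys/kernel/random/uuid is available, you can at least produce something of the right shape straight from /dev/urandom. Note this is only a sketch that mimics the format; it does not set the version and variant bits of a real RFC 4122 UUID:

```shell
# 16 random bytes -> 32 hex digits -> insert dashes at 8-4-4-4-12
rndnum=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n' |
         sed -E 's/^(.{8})(.{4})(.{4})(.{4})(.{12})$/\1-\2-\3-\4-\5/')
echo "$rndnum"
```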
| Create unique random numbers (UUIDs) in bash |
1,530,741,375,000 |
I teach an Intro to UNIX/Linux course at a local college and one of my students asked the following question:
Why are some of the files in my directory colored white and others are gray? Are the white ones the ones I created today and the gray are existing files?
As I looked into this I first thought the answer would be in the LS_COLORS variable, but further investigation revealed that the color listings were different when using the -l switch versus the -al switch with the ls command. See the following screen shots:
Using ls -l the file named '3' shows as white, but using the -al switch the same file shows as gray.
Is this a bug in ls or does anyone know why this is happening?
|
It looks as if your prompt-string ($PS1) is setting the bold attribute on characters to make the colors nicer, and not unsetting it. The output from ls doesn't know about this, and does unset bold. So after the first color output of ls, everything looks dimmer.
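The fix is to end the prompt string with an attribute reset. Your actual $PS1 will differ; the colors below are made up for illustration:

```shell
# a bold colored prompt that never resets: everything printed after it,
# including ls output, inherits the bold attribute
PS1='\[\e[1;34m\]\u@\h:\w\$ '
# the same prompt, but with attributes reset at the end
PS1='\[\e[1;34m\]\u@\h:\w\$\[\e[0m\] '
```

The \[ and \] markers tell bash the enclosed bytes are non-printing, so line editing still computes the prompt width correctly.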
| Inconsistent color output from `ls` command |
1,378,457,636,000 |
Possible Duplicate:
How to know if /dev/sdX is a connected USB or HDD?
The output of ls /dev/sd* on my system is -
sda sda1 sda2 sda3 sda4 sda5 sda6 sda7 sdb sdc sdc1 sdc2
How should I determine which drive is which?
|
Assuming you're on Linux.
Try:
sudo /lib/udev/scsi_id --page=0x80 --whitelisted --device=/dev/sdc
or:
cat /sys/block/sdc/device/{vendor,model}
You can also get information (including labels) from the filesystems on the different partitions with
sudo blkid /dev/sdc1
The pathid will help to determine the type of device:
readlink -f /sys/class/block/sdc/device
See also:
find /dev/disk -ls | grep /sdc
Which with a properly working udev would give you all the information from the other commands above.
The content of /proc/partitions will give you information on size (though not in as friendly a format as lsblk, already mentioned by @Max).
sudo blockdev --getsize64 /dev/sdc
Will give you the size in bytes of the corresponding block device.
sudo smartctl -i /dev/sdc
(cross-platform), will also give you a lot of information including make, model, size, serial numbers, firmware revisions...
| How to determine which sd* is usb? [duplicate] |
1,378,457,636,000 |
This is a rather low-level question, and I understand that it might not be the best place to ask. But, it seemed more appropriate than any other SE site, so here goes.
I know that on the Linux filesystem, some files actually exist, for example: /usr/bin/bash is one that exists. However, (as far as I understand it), some also don't actually exist as such and are more virtual files, eg: /dev/sda, /proc/cpuinfo, etc. My questions are (they are two, but too closely related to be separate questions):
How does the Linux kernel work out whether these files are real (and therefore read them from the disk) or not when a read command (or such) is issued?
If the file isn't real: as an example, a read from /dev/random will return random data, and a read from /dev/null will return EOF. How does it work out what data to read from this virtual file (and therefore what to do when/if data written to the virtual file too) - is there some kind of map with pointers to separate read/write commands appropriate for each file, or even for the virtual directory itself? So, an entry for /dev/null could simply return an EOF.
|
So there are basically two different types of thing here:
Normal filesystems, which hold files in directories with data and metadata, in the familiar manner (including soft links, hard links, and so on). These are often, but not always, backed by a block device for persistent storage (a tmpfs lives in RAM only, but is otherwise identical to a normal filesystem). The semantics of these are familiar; read, write, rename, and so forth, all work the way you expect them to.
Virtual filesystems, of various kinds. /proc and /sys are examples here, as are FUSE custom filesystems like sshfs or ifuse. There's much more diversity in these, because really they just refer to a filesystem with semantics that are in some sense 'custom'. Thus, when you read from a file under /proc, you aren't actually accessing a specific piece of data that's been stored by something else writing it earlier, as under a normal filesystem. You're essentially doing a kernel call, requesting some information that's generated on-the-fly. And this code can do anything it likes, since it's just some function somewhere implementing read semantics. Thus, you have the weird behavior of files under /proc, like for instance pretending to be symlinks when they aren't really.
The key is that /dev is actually, usually, one of the first kind. It's normal in modern distributions to have /dev be something like a tmpfs, but in older systems, it was normal to have it be a plain directory on disk, without any special attributes. The key is that the files under /dev are device nodes, a type of special file similar to FIFOs or Unix sockets; a device node has a major and minor number, and reading or writing them is doing a call to a kernel driver, much like reading or writing a FIFO is calling the kernel to buffer your output in a pipe. This driver can do whatever it wants, but it usually touches hardware somehow, e.g. to access a hard disk or play sound in the speakers.
To answer the original questions:
There are two questions relevant to whether the 'file exists' or not; these are whether the device node file literally exists, and whether the kernel code backing it is meaningful. The former is resolved just like anything on a normal filesystem. Modern systems use udev or something like it to watch for hardware events and automatically create and destroy the device nodes under /dev accordingly. But older systems, or light custom builds, can just have all their device nodes literally on the disk, created ahead of time. Meanwhile, when you read these files, you're doing a call to kernel code which is determined by the major and minor device numbers; if these aren't reasonable (for instance, you're trying to read a block device that doesn't exist), you'll just get some kind of I/O error.
The way it works out what kernel code to call for which device file varies. For virtual filesystems like /proc, they implement their own read and write functions; the kernel just calls that code depending on which mount point it's in, and the filesystem implementation takes care of the rest. For device files, it's dispatched based on the major and minor device numbers.
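That dispatch is visible from userspace: ls shows the device type, and GNU stat can print the major/minor pair directly. A quick check using /dev/null, which on Linux is conventionally char device major 1, minor 3:

```shell
# "c" at the start marks a character device; "1, 3" appears where
# a regular file's size would be.
ls -l /dev/null

# %t and %T print the major and minor numbers in hex.
stat -c '%t %T' /dev/null     # prints: 1 3
```

Reads and writes on that node are routed to the null driver through exactly those numbers, regardless of where in the filesystem the node lives.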
| How does Linux differentiate between real and unexisting (eg: device) files? |
1,378,457,636,000 |
Unlike the similar question, I cannot even connect with smbclient.
The samba share works fine in windows and automagically works, but in Linux I can't mount it at all and the error message is cryptic at best. Here is my samba.conf:
[global]
dos charset = CP437
netbios name = REDACTED
server string = Lab
server role = standalone server
map to guest = Bad User
obey pam restrictions = Yes
smb passwd file = /var/etc/private/smbpasswd
private dir = /var/etc/private
max log size = 51200
server min protocol = SMB2
time server = Yes
deadtime = 15
max open files = 11070
hostname lookups = Yes
load printers = No
printcap name = /dev/null
disable spoolss = Yes
dns proxy = No
pid directory = /var/run/samba
panic action = /usr/local/libexec/samba/samba-backtrace
idmap config * : backend = tdb
acl allow execute always = Yes
create mask = 0666
directory mask = 0777
directory name cache size = 0
kernel change notify = No
map archive = No
map readonly = no
store dos attributes = Yes
strict locking = No
[common]
comment = Lab Common share
path = /mnt/pool/common
read only = No
inherit acls = Yes
hosts allow = XXX.XXX.XX.X/24, XXX.XX.XX.X/24 <-- redacted
hide dot files = No
veto files = /.snap/.windows/.zfs/
vfs objects = zfsacl, streams_xattr, aio_pthread
zfsacl:acesort = dontcare
nfs4:chown = yes
nfs4:acedup = merge
nfs4:mode = special
recycle:subdir_mode = 0700
recycle:directory_mode = 0777
recycle:touch = yes
recycle:versions = yes
recycle:keeptree = yes
recycle:repository = .recycle/%U
The error message is:
[as@localhost ~]$ sudo mount -t cifs -o username=removed,password=removed //server.ip.address/common /media/windowsshare/
mount error(95): Operation not supported
A perfectly useless message.
The debug-enabled dmesg:
[237179.795551] fs/cifs/cifsfs.c: Devname: //132.239.27.172/common flags: 0
[237179.795563] fs/cifs/connect.c: Username: lauria
[237179.795565] fs/cifs/connect.c: file mode: 0x1ed dir mode: 0x1ed
[237179.795600] fs/cifs/connect.c: CIFS VFS: in cifs_mount as Xid: 44 with uid: 0
[237179.795600] fs/cifs/connect.c: UNC: \\132.239.27.172\common
[237179.795605] fs/cifs/connect.c: Socket created
[237179.795606] fs/cifs/connect.c: sndbuf 16384 rcvbuf 87380 rcvtimeo 0x1b58
[237179.795897] fs/cifs/fscache.c: cifs_fscache_get_client_cookie: (0xffff8803e0aa4800/0xffff880035d25580)
[237179.795898] fs/cifs/connect.c: Demultiplex PID: 25817
[237179.795902] fs/cifs/connect.c: CIFS VFS: in cifs_get_smb_ses as Xid: 45 with uid: 0
[237179.795903] fs/cifs/connect.c: Existing smb sess not found
[237179.795907] fs/cifs/cifssmb.c: Requesting extended security.
[237179.795910] fs/cifs/transport.c: For smb_command 114
[237179.795912] fs/cifs/transport.c: Sending smb: smb_len=78
[237179.801062] fs/cifs/connect.c: RFC1002 header 0x25
[237179.801067] fs/cifs/misc.c: checkSMB Length: 0x29, smb_buf_length: 0x25
[237179.801090] fs/cifs/transport.c: cifs_sync_mid_result: cmd=114 mid=1 state=4
[237179.801093] fs/cifs/cifssmb.c: Dialect: 65535
[237179.801094] fs/cifs/cifssmb.c: negprot rc -95
[237179.801097] fs/cifs/connect.c: CIFS VFS: leaving cifs_get_smb_ses (xid = 45) rc = -95
[237179.801100] fs/cifs/fscache.c: cifs_fscache_release_client_cookie: (0xffff8803e0aa4800/0xffff880035d25580)
[237179.801262] fs/cifs/connect.c: CIFS VFS: leaving cifs_mount (xid = 44) rc = -95
[237179.801263] CIFS VFS: cifs_mount failed w/return code = -95
I have tried many different sec= options; they all fail with the same error message. smbclient is not helpful either:
smbclient //132.239.27.172/common -U username%password
protocol negotiation failed: NT_STATUS_INVALID_NETWORK_RESPONSE
How does this work on windows but not at all on linux?
|
OK, I figured it out: for some reason, adding "vers=3.0" to the mount options makes it work. I don't know why it fails without it, or why this fixes it.
Leaving this here for future reference in case others hit the same issue with their FreeNAS setups.
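For a persistent mount, the same option goes in /etc/fstab. A sketch using the paths from the question (credentials inline only for illustration; a credentials= file readable only by root is safer):

```
//132.239.27.172/common  /media/windowsshare  cifs  username=removed,password=removed,vers=3.0  0  0
```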
| Mounting cifs: "Operation not supported" |
1,378,457,636,000 |
Most recent Linux distributions include bash as default shell, although there are other, (arguably) better shells available.
I'm trying to understand if this is some historical leftover that nobody wants to change, or are there some good reasons that make bash the first choice?
|
The short answer is because Linux is really GNU/Linux. Only the kernel is Linux, but the base collection of utilities providing the Unix-like environment is provided by GNU, and the GNU shell is bash.
As I said, that's the short answer ;)
edited to add some additional commentary...
Let me prefix by saying that I'm not a Unix historian, so I can only answer IMHO
A few points, first of all bash is the kitchen sink of shells, as emacs is to editors.
At the time bash was released there were no free ksh implementations, tcsh was a free csh replacement, but Stallman had a rant against csh for shell programming.
As an interactive shell, bash had excellent history/command recall, along with the saving of history from session to session. It was a drop-in replacement for sh, bsh, and ksh for shell programming, and made for a decent interactive shell.
Like a snowball rolling downhill, bash has gained momentum and size.
Yes, there are dozens of other shells; shells that are better suited for individual purpose or taste, but for a single all around shell bash does a decent job and has had a lot of eyes on it for over 20 years.
| Why is bash standard on Linux? |
1,378,457,636,000 |
In my laptop:
$ cat /etc/issue
Ubuntu 18.04 LTS \n \l
There are two different folders for libraries x86 and x86_64:
~$ ls -1 /
bin
lib
lib64
sbin
...
Why does only one directory exist for binaries?
P.S. I'm also interested in Android but I hope that answer should be the same.
|
First, why there are separate /lib and /lib64:
The Filesystem Hierarchy Standard
mentions that separate /lib and /lib64 exist because:
10.1. There may be one or more variants of the /lib directory on systems which support more than one binary format requiring
separate libraries. (...) This is commonly used for 64-bit or 32-bit
support on systems which support multiple binary formats, but require
libraries of the same name. In this case, /lib32 and /lib64 might be
the library directories, and /lib a symlink to one of them.
On my Slackware 14.2 for example there are /lib and /lib64
directories for 32-bit and 64-bit libraries respectively even though
/lib is not a symlink as the FHS snippet would suggest:
$ ls -l /lib/libc.so.6
lrwxrwxrwx 1 root root 12 Aug 11 2016 /lib/libc.so.6 -> libc-2.23.so
$ ls -l /lib64/libc.so.6
lrwxrwxrwx 1 root root 12 Aug 11 2016 /lib64/libc.so.6 -> libc-2.23.so
There are two libc.so.6 libraries in /lib and /lib64.
Each dynamically built
ELF binary
contains a hardcoded path to the interpreter, in this case either
/lib/ld-linux.so.2 or /lib64/ld-linux-x86-64.so.2:
$ file main
main: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, not stripped
$ readelf -a main | grep 'Requesting program interpreter'
[Requesting program interpreter: /lib/ld-linux.so.2]
$ file ./main64
./main64: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, not stripped
$ readelf -a main64 | grep 'Requesting program interpreter'
[Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
The job of the interpreter is to load necessary shared libraries. You
can ask a GNU interpreter what libraries it would load without even
running a binary using LD_TRACE_LOADED_OBJECTS=1 or a ldd wrapper:
$ LD_TRACE_LOADED_OBJECTS=1 ./main
linux-gate.so.1 (0xf77a9000)
libc.so.6 => /lib/libc.so.6 (0xf760e000)
/lib/ld-linux.so.2 (0xf77aa000)
$ LD_TRACE_LOADED_OBJECTS=1 ./main64
linux-vdso.so.1 (0x00007ffd535b3000)
libc.so.6 => /lib64/libc.so.6 (0x00007f56830b3000)
/lib64/ld-linux-x86-64.so.2 (0x00007f568347c000)
As you can see a given interpreter knows exactly where to look for
libraries - 32-bit version looks for libraries in /lib and 64-bit
version looks for libraries in /lib64.
FHS standard says the following about /bin:
/bin contains commands that may be used by both the system
administrator and by users, but which are required when no other
filesystems are mounted (e.g. in single user mode). It may also
contain commands which are used indirectly by scripts.
IMO the reason there are no separate /bin and /bin64 is that if we had a file with the same name in both directories, we couldn't call one of them indirectly (e.g. from a script), because whichever of /bin or /bin64 came first in $PATH would always win.
However, notice that the above is just the convention - the Linux
kernel does not really care if you have separate /bin and /bin64.
If you want them, you can create them and setup your system accordingly.
You also mentioned Android - note that except for running a modified
Linux kernel it has nothing to do with GNU systems such as
Ubuntu - no glibc, no bash (by default, you can of course compile and deploy it manually), and also directory structure is
completely different.
| Why there are `/lib` and `/lib64` but only `/bin`? |
1,378,457,636,000 |
Operating System Concepts says
Consider a sequential read of a file on disk using the standard
system calls open(), read(), and write(). Each file access requires
a system call and disk access.
Alternatively, we can use the virtual memory techniques discussed so
far to treat file I/O as routine memory accesses. This approach, known
as
memory mapping a file, allows a part of the virtual address space to be logically associated with the file. As we shall see, this can
lead to significant performance increases. Memory mapping a file is
accomplished by mapping a disk block to a page (or pages) in memory.
Initial access to the file proceeds through ordinary demand paging,
resulting in a page fault. However, a page-sized portion of the file is
read from the file system into a physical page (some systems may opt to
read in more than a page-sized chunk of memory at a time). Subsequent
reads and writes to the file are handled as routine memory accesses.
Manipulating files through memory rather than incurring the overhead of
using the read() and write() system calls simplifies and speeds up file
access and usage.
Could you analyze the performance of memory mapped file?
If I am correct, memory mapping file works as following. It takes a system call to create a memory mapping.
Then when it accesses the mapped memory, page faults happen. Page faults also have overhead.
How does memory mapping a file have significant performance increases over the standard I/O system calls?
|
Memory mapping a file directly avoids copying buffers which happen with read() and write() calls. Calls to read() and write() include a pointer to buffer in process' address space where the data is stored. Kernel has to copy the data to/from those locations. Using mmap() maps the file to process' address space, so the process can address the file directly and no copies are required.
There is also no system call overhead when accessing memory mapped file after the initial call if the file is loaded to memory at initial mmap(). If a page of the mapped file is not in memory, access will generate a fault and require kernel to load the page to memory. Reading a large block with read() can be faster than mmap() in such cases, if mmap() would generate significant number of faults to read the file. (It is possible to advise kernel in advance with madvise() so that the kernel may load the pages in advance before access).
For more details, there is related question on Stack Overflow: mmap() vs. reading blocks
| How does memory mapping a file have significant performance increases over the standard I/O system calls? |
1,378,457,636,000 |
We have seen OS doing Copy on Write optimisation when forking a process. Reason being that most of the time fork is preceded by exec, so we don't want to incur the cost of page allocations and copying the data from the caller address space unnecessarily.
So does this also happen when doing cp on Linux with ext4 or xfs (journaling) filesystems? If it does not happen, why not?
|
The keyword to search is reflink. It was recently implemented in XFS.
EDIT: the XFS implementation was initially marked EXPERIMENTAL. This warning was removed in the kernel release 4.16, a number of months after I wrote the above :-).
| Does any file system implement Copy on Write mechanism for CP? |
1,378,457,636,000 |
Apart from upgrading the kernel, are there any changes to a Linux system that require a reboot? I know there are situations where a reboot makes things easier, but are there any that cannot be accomplished except with a reboot?
To clarify: I'm thinking of a typical desktop or server system that isn't suffering from a hardware malfunction.
|
A couple of things come to mind:
Recover from a kernel panic
A kernel panic, by definition, cannot be recovered from without restarting the kernel.
Recover from hangs which leave you without terminal access
If the system is unresponsive and you're stranded without a way to issue commands to recover, the only thing you might be able to do is to reboot. Usually, you'd want to avoid manual power cycling. For these kinds of situations, the Linux kernel has Magic SysRq support which can be used to reboot the machine in an emergency.
As long as CONFIG_MAGIC_SYSRQ option has been enabled in the kernel configuration, and the kernel.sysrq sysctl option is enabled, you can issue commands directly to the kernel with magic SysRq key combinations:
Note that Alt+SysRq below means press and hold down Alt, then press and hold SysRq (typically the PrintScrn key).
Alt+SysRq+r: regain control of keyboard
Alt+SysRq+e: send SIGTERM to all processes, except init, giving them a chance to terminate gracefully
Alt+SysRq+i: send SIGKILL to all processes, except init, forcing them to terminate
Alt+SysRq+s: attempt to sync all mounted filesystems
Alt+SysRq+u: remount all filesystem read-only
Alt+SysRq+b: reboot, or
Alt+SysRq+o: shutdown
A mnemonic for the magic SysRq key combinations to attempt a graceful reboot is:
"Reboot Even If System Utterly Broke"
For headless servers, there's even an iptables target enabling remote SysRq sequences over a network.
Recover from unbootable state
If the system has already been brought to a state where a regular boot is not possible (e.g. as a result of a failed system upgrade, corrupt filesystem etc.), then the only way to access a recovery console on the system might be to reboot using appropriate boot-time options.
Change boot-time kernel parameters
Some kernel parameters (e.g. audit to enable / disable kernel auditing) can only be set when the kernel is loaded at boot-time.
| When is a reboot required? |
1,378,457,636,000 |
Why not 2^62, or 2^31 or anything else?
What is the maximum value of the Process ID?
|
It seems to be a purely arbitrary choice. It could be anything, but somebody1 felt 4 million is enough. Use the source:
/*
* A maximum of 4 million PIDs should be enough for a while.
* [NOTE: PID/TIDs are limited to 2^29 ~= 500+ million, see futex.h.]
*/
#define PID_MAX_LIMIT (CONFIG_BASE_SMALL ? PAGE_SIZE * 8 : \
(sizeof(long) > 4 ? 4 * 1024 * 1024 : PID_MAX_DEFAULT))
The history on git only seems to go back as far as 2005, and the value has been that at least as long.
1The manpage says that /proc/sys/kernel/pid_max was added in 2.5.34, and looking at the changelog, it looks like the somebody was Ingo Molnár:
<[email protected]>
[PATCH] pid-max-2.5.33-A0
This is the pid-max patch, the one i sent for 2.5.31 was botched. I
have removed the 'once' debugging stupidity - now PIDs start at 0 again.
Also, for an unknown reason the previous patch missed the hunk that had
the declaration of 'DEFAULT_PID_MAX' which made it not compile ...
However, Ingo only added DEFAULT_PID_MAX. PID_MAX_LIMIT was added by Linus Torvalds in 2.5.37:
<[email protected]>
Make pid_max grow dynamically as needed.
Turns out, I misread the changelog.
The changes are in the 2.5.37 patchset:
diff -Nru a/include/linux/threads.h b/include/linux/threads.h
--- a/include/linux/threads.h Fri Sep 20 08:20:41 2002
+++ b/include/linux/threads.h Fri Sep 20 08:20:41 2002
@@ -17,8 +17,13 @@
#define MIN_THREADS_LEFT_FOR_ROOT 4
/*
- * This controls the maximum pid allocated to a process
+ * This controls the default maximum pid allocated to a process
*/
-#define DEFAULT_PID_MAX 0x8000
+#define PID_MAX_DEFAULT 0x8000
+
+/*
+ * A maximum of 4 million PIDs should be enough for a while:
+ */
+#define PID_MAX_LIMIT (4*1024*1024)
#endif
That's as far as my search skills get me.
Thanks to @hobbs, it seems Ingo is the somebody after all. The patch I quoted above was first sent by him. From the LKML post accompanying it:
memory footprint of the new PID allocator scales dynamically with
/proc/sys/kernel/pid_max: the default 32K PIDs cause a 4K allocation,
a pid_max of 1 million causes a 128K footprint. The current absolute
limit for pid_max is 4 million PIDs - this does not cause any
allocation in the kernel, the bitmaps are demand-allocated runtime.
The pidmap table takes up 512 bytes.
There was a heated discussion about having higher limits, but it seems nothing came out of it in the end.
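Both the arithmetic behind the 4-million figure and the limit currently in effect are easy to check from a shell (the /proc path assumes Linux):

```shell
# PID_MAX_LIMIT is 4*1024*1024, which is exactly 2^22:
echo $(( 4 * 1024 * 1024 ))    # 4194304
echo $(( 1 << 22 ))            # 4194304

# The runtime limit actually in effect (tunable, capped by PID_MAX_LIMIT):
cat /proc/sys/kernel/pid_max
```

On most distributions pid_max defaults to 32768 (PID_MAX_DEFAULT, 0x8000), well below the hard ceiling.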
| Why is the maximum PID in a 64-bit Linux system 2^22? |
1,378,457,636,000 |
Is there an easy command that I can use to zero out the last 1MB of a hard drive?
For the start of the drive I would dd if=/dev/zero of=/dev/sdx bs=1M count=1. The seek option for dd looks promising, but does someone have an easy way to determine exactly how far I should seek?
I have a hardware RAID appliance that stores some of the RAID configuration at the end of each drive. I need the appliance to see the drives as unconfigured, so I want to remove the RAID configuration without spending the time on a full wipe. I have a dozen 2TB drives, and a full erase of all of them would take a long time.
|
The simplest way on Linux to get the size of the disk is with blockdev --getsz:
sudo -s
dd bs=512 if=/dev/zero of=/dev/sdx count=2048 seek=$((`blockdev --getsz /dev/sdx` - 2048))
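If you want to rehearse the arithmetic without touching a real disk, the same command works against a sparse image file (the filename and sizes here are just an example); the file size divided by 512 stands in for what blockdev --getsz would report:

```shell
# 10 MiB sparse "disk" for a dry run
truncate -s 10M disk.img

# sector count, as blockdev --getsz would report for a real block device
sectors=$(( $(stat -c %s disk.img) / 512 ))

# zero the last 1 MiB (2048 sectors of 512 bytes);
# conv=notrunc keeps dd from truncating the file at the write position
dd bs=512 if=/dev/zero of=disk.img conv=notrunc \
   count=2048 seek=$(( sectors - 2048 )) 2>/dev/null
```

The seek lands exactly 2048 sectors before the end, so the write finishes precisely at the last byte of the device.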
| Wipe last 1MB of a Hard drive |
1,378,457,636,000 |
No matter how far I raise the HISTSIZE environment variable above 5000, the history builtin prints only the last 5000 commands.
I need this because my .bash_history often exceeds 5000 lines, and sometimes I need to recall an early command by pressing Ctrl-R; but if that command is more than 5000 commands back, I can't reach it with that mechanism. I know I can grep .bash_history, but I think the Ctrl-R mechanism is much faster (and more convenient). I use GNU bash version 4.1.
That is the full content of my .bashrc file:
#!/bin/bash
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
# don't put duplicate lines in the history. See bash(1) for more options
# ... or force ignoredups and ignorespace
#HISTCONTROL=ignoredups:ignorespace:erasedups
# append to the history file, don't overwrite it
shopt -s histappend
# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
HISTSIZE=50000
HISTFILESIZE=500000
# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize
# make less more friendly for non-text input files, see lesspipe(1)
[ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"
# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then
debian_chroot=$(cat /etc/debian_chroot)
fi
# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
xterm-color) color_prompt=yes;;
esac
# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
#force_color_prompt=yes
if [ -n "$force_color_prompt" ]; then
if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
# We have color support; assume it's compliant with Ecma-48
# (ISO/IEC-6429). (Lack of such support is extremely rare, and such
# a case would tend to support setf rather than setaf.)
color_prompt=yes
else
color_prompt=
fi
fi
if [ "$color_prompt" = yes ]; then
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
else
PS1='${debian_chroot:+($debian_chroot)}\@-\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt
# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
;;
*)
;;
esac
# enable color support of ls and also add handy aliases
if [ -x /usr/bin/dircolors ]; then
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
alias ls='ls --color=auto'
#alias dir='dir --color=auto'
#alias vdir='vdir --color=auto'
alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'
fi
# some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
. /etc/bash_completion
fi
|
This is the actual code that loads the history (from bashhist.c around line 260):
/* Load the history list from the history file. */
void
load_history ()
{
char *hf;
/* Truncate history file for interactive shells which desire it.
Note that the history file is automatically truncated to the
size of HISTSIZE if the user does not explicitly set the size
differently. */
set_if_not ("HISTSIZE", "500");
sv_histsize ("HISTSIZE");
set_if_not ("HISTFILESIZE", get_string_value ("HISTSIZE"));
sv_histsize ("HISTFILESIZE");
/* Read the history in HISTFILE into the history list. */
hf = get_string_value ("HISTFILE");
if (hf && *hf && file_exists (hf))
{
read_history (hf);
using_history ();
history_lines_in_file = where_history ();
}
}
If the values of HISTSIZE and HISTFILESIZE are set, they will be used.
Readline, the library that actually handles input / line editing and history does offer facilities to put a cap on just how big the history buffer can grow. However, Bash does not place a hard ceiling on this where values any larger would be ignored, at least that I could find.
Edit
From comments, readline was indeed the culprit. I was looking (rather foolishly) at functional parameters:
There is a variable called history-size that is read from the inputrc file; it sets the maximum number of entries kept in the history list. I checked its value in my local inputrc file and found it equal to 5000. Setting it to a larger value solved the problem.
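For reference, the relevant readline setting lives in ~/.inputrc; a minimal fragment (the value 100000 is just an example):

```
set history-size 100000
```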
| Is there a way to set the size of the history list in bash to more than 5000 lines? |
1,378,457,636,000 |
How can I verify if a hard drive is encrypted in Fedora 20?
If not, does it mean I have to reinstall Fedora to encrypt it?
|
Assuming that the drive is /dev/sdb, and the partition you want to check is /dev/sdb1, run this command:
$ blkid /dev/sdb1
the output will change if the partition is encrypted or not:
/dev/sdb1: UUID="xxxxxxxxxxxx" TYPE="crypto_LUKS" #encrypted
/dev/sdb1: UUID="xxxxxxxxxxxx" TYPE="ext4" #not encrypted, fs is ext4
If the partition is not encrypted, and assuming that you are NOT trying to encrypt the / partition, you have to:
Make a backup of the data on that partition
Initialize the partition as encrypted
$ cryptsetup luksFormat /dev/sdb1
BEWARE: this command will wipe all the contents of the partition!!!
It will ask you for a passphrase to open the volume; now if you try to run blkid, the output should be TYPE="crypto_LUKS"
Open the encrypted partition to use it
$ cryptsetup luksOpen /dev/sdb1 secret
where "secret" is the name of the volume we are opening
Format the new "secret" volume
$ mkfs.ext4 /dev/mapper/secret
Mount it providing the passphrase created before
$ mount /dev/mapper/secret /whereyouwant
Now you should be able to use the encrypted partition!
Optionally, if you want to mount it at reboot, you should edit /etc/crypttab and insert a line similar to this (it will request the password at boot):
secret /dev/sdb1 none
Where secret is the name of the volume we created before.
Or something like this, if you want to put your password in some plain text file:
secret /dev/sdb1 /whereyouwant-sdb1-luks-pwdfile
Just keep in mind for this, you also have to add the key:
$ cryptsetup luksAddKey /dev/sdb1 /whereyouwant-sdb1-luks-pwdfile
And edit the /etc/fstab and insert a line similar to this:
/dev/mapper/secret /whereyouwant ext4 defaults 1 2
| Verify if a hard drive is encrypted on Linux |
1,378,457,636,000 |
How can I block a command, say mkdir, for a specific user?
What I did was create read-only functions and store them in the user's profile ~/.bashrc:
/bin/mkdir() {
echo "mkdir command not allow for you"
}
mkdir() {
echo "mkdir command not allow for you"
}
./mkdir() {
echo "mkdir command not allow for you"
}
readonly -f /bin/mkdir
readonly -f mkdir
readonly -f ./mkdir
Test:
rahul@ubuntu:~$ cd /bin/
rahul@ubuntu:/bin$ ./mkdir /home/rahul/ggg
mkdir command not allow for you
rahul@ubuntu:/bin$ cd
rahul@ubuntu:~$ mkdir testing
mkdir command not allow for you
rahul@ubuntu:~$ /bin/mkdir testing
mkdir command not allow for you
So my question is: what is the right way of achieving this? Is there a tool for it?
Update 1: But if the user is smart, he could copy the mkdir binary, rename it, and use that. How can this be prevented?
|
I don't know how to do it with bash, but I know of another shell that restricts the user environment: lshell (limited shell).
A quick overview of configuration
Lshell is configured via an INI file. By default, it holds a whitelist of allowed commands, but it can easily be configured to prohibit a user from using a specific command.
This configuration (default conf /etc/lshell.conf) prohibits user foo from using mkdir:
[foo]
allowed = 'all' - ['mkdir', 'bash', 'sh', 'csh', 'dash', 'env']
In order to configure a user account to use lshell by default, you must:
chsh -s /usr/bin/lshell foo
Lshell can do more, like:
3 levels of granularity: user, group, all.
Can restrict access to certain paths in the system.
Can restrict the use of certain characters (like |).
Can restrict the use of certain commands only over SSH.
And more.
Update 1: test results added:
rahul:~$ which bash
/bin/bash
rahul:~$ dd if=$(which bash) of=my_bash
*** forbidden syntax: dd if=$(which bash) of=my_bash
rahul:~$ bash
*** forbidden command: bash
rahul:~$ cp /bin/bash my_bash
*** forbidden path: /bin/bash
rahul:~$ /bin/bash
*** forbidden command: /bin/bash
rahul:~$ sh
*** forbidden command: sh
rahul:~$ dash
*** forbidden command: dash
rahul:~$ env bash
*** forbidden command: env
rahul:~$ cp /bin/mkdir mycreatedir
*** forbidden path: /bin/mkdir
| Block Particular Command in Linux for Specific user |
1,378,457,636,000 |
What is the difference between chmod 775 and chmod 2755?
|
from man chmod:
2000 (the setgid bit). Executable files with this bit set will
run with effective gid set to the gid of the file owner.
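A quick way to see the difference on a scratch file (GNU coreutils assumed):

```shell
touch demo
chmod 775 demo
stat -c %a demo     # prints 775
chmod 2755 demo
stat -c %a demo     # prints 2755 (the leading 2 is the setgid bit)
ls -l demo          # group execute slot shows "s" instead of "x"
```

With 2755 the symbolic permissions read -rwxr-sr-x; on an executable, that "s" means it runs with the effective group ID of the file's group rather than the caller's.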
| Difference between "chmod 775" and "chmod 2755" |
1,378,457,636,000 |
Is there any way to automate Linux server configuration? I'm working on setting up a couple of new build servers, as well as an FTP server, and would like to automate as much of the process as possible.
The reason for this is that the setup and configuration of these servers needs to be done in an easily repeatable way. We figured that automating as much of this process as possible would make it easiest to repeat as needed in the future.
Essentially, all the servers need is to install the OS, as well as a handful of packages. There's nothing overly complicated about the setups.
So, is there a way to automate this process (or at least some amount of it)?
EDIT: Also, say I use Kickstart, is there a way to remove the default Ubuntu repositories, and just install the packages from a collection of .deb files we have locally (preferably through apt, rather than dpkg)?
|
Yes! This is a big deal, and incredibly common. And there are two basic approaches. One way is simply with scripted installs, as for example used in Fedora, RHEL, or CentOS's kickstart. Check this out in the Fedora install guide: Kickstart Installations. For your simple case, this may be sufficient. (Take this as an example; there are similar systems for other distros, but since I work on Fedora that's what I'm familiar with.)
The other approach is to use configuration management. This is a big topic, but look into Puppet, Chef, Ansible, cfengine, Salt, and others. In this case, you might use a very basic generic kickstart to provision a minimal machine, and the config management tool to bring it into its proper role.
As your needs and infrastructure grow, this becomes incredibly important. Using config management for all your changes means that you can recreate not just the initial install, but the evolved state of the system as you introduce the inevitable tweaks and fixes caused by interacting with the real world.
We figured that automating as much of this process as possible would make it easiest to repeat as needed in the future.
You are absolutely on the right track: this is the bedrock principle of professional systems administration.
It's often moderately harder to set up initially, and there can be a big learning curve for some of the more advanced systems, but it pays for itself forever. Even if you have only a handful of systems, think about how much you want to work at recreating them in the event of catastrophe in the middle of the night, or when you're on vacation.
| How to automate Linux server configuration? |
1,378,457,636,000 |
Could somebody explain to me the difference between kill and killall? Why doesn't killall see what ps shows?
# ps aux |grep db2
root 1123 0.0 0.8 841300 33956 pts/1 Sl 11:48 0:00 db2wdog
db2inst1 1125 0.0 3.5 2879496 143616 pts/1 Sl 11:48 0:02 db2sysc
root 1126 0.0 0.6 579156 27840 pts/1 S 11:48 0:00 db2ckpwd
root 1127 0.0 0.6 579156 27828 pts/1 S 11:48 0:00 db2ckpwd
root 1128 0.0 0.6 579156 27828 pts/1 S 11:48 0:00 db2ckpwd
# killall db2ckpwd
db2ckpwd: no process found
# kill -9 1126
# kill -9 1127
# kill -9 1128
System is SuSe 11.3 (64 bit); kernel 2.6.34-12; procps version 3.2.8; killall from PSmisc 22.7; kill from GNU coreutils 7.1
|
Is this on Linux?
There are actually a few subtly different versions of the command name that are used by ps, killall, etc.
The two main variants are: 1) the long command name, which is what you get when you run ps u; and 2) the short command name, which is what you get when you run ps without any flags.
Probably the biggest difference happens if your program is a shell script or anything that requires an interpreter, e.g. Python, Java, etc.
Here's a really trivial script that demonstrates the difference. I called it mycat:
#!/bin/sh
cat
After running it, here's the two different types of ps.
Firstly, without u:
$ ps -p 5290
PID TTY ... CMD
5290 pts/6 ... mycat
Secondly, with u:
$ ps u 5290
USER PID ... COMMAND
mikel 5290 ... /bin/sh /home/mikel/bin/mycat
Note how the second version starts with /bin/sh?
Now, as far as I can tell, killall actually reads /proc/<pid>/stat, and grabs the second word in between the parens as the command name, so that's really what you need to be specifying when you run killall. Logically, that should be the same as what ps without the u flag says, but it would be a good idea to check.
Things to check:
what does cat /proc/<pid>/stat say the command name is?
what does ps -e | grep db2 say the command name is?
do ps -e | grep db2 and ps au | grep db2 show the same command name?
Notes
If you're using other ps flags too, then you might find it simpler to use ps -o comm to see the short name and ps -o cmd to see the long name.
You also might find pkill a better alternative. In particular, pkill -f tries to match using the full command name, i.e. the command name as printed by ps u or ps -o cmd.
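To see the name killall actually matches against, you can read it straight out of /proc. This sketch inspects the current shell; the awk split is naive and would break if the command name itself contained spaces:

```shell
# The short command name lives in /proc/<pid>/comm, and also appears
# (in parentheses) as the second field of /proc/<pid>/stat:
cat "/proc/$$/comm"
awk '{print $2}' "/proc/$$/stat"   # e.g. (sh) or (bash)
```

Whatever name comes back here is what you should pass to killall.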
| killall gives me `no process found ` but ps |
1,378,457,636,000 |
I'm looking for the latest source code of the man command. The version in my Linux is pretty old (v1.6f), but I failed to find it after googling for a while.
I mean the latest source code of man, not man-pages but the binary file in /usr/bin/man itself which can be compiled and installed.
|
You can usually query your distribution to see where sources come from. For example, I'm on Fedora, and I can see that the man command comes from the man-db package:
$ rpm -qf /usr/bin/man
man-db-2.6.7.1-16.fc21.x86_64
I can then query the man-db package for the upstream url:
$ rpm -qi man-db | grep -i url
URL : http://www.nongnu.org/man-db/
And there you are, http://www.nongnu.org/man-db/.
You can perform a similar sequence of steps with the packaging systems used on other distributions.
| Where is the latest source code of man command for linux? |
1,378,457,636,000 |
Can someone tell me what is the relationship between a specified nice level and child processes?
For example, if I have a default nice of 0, and I start a script with nice 5, which in turn starts some child processes (in this case about 20 in parallel), what is the nice of the child processes?
|
A child process inherits whatever nice value is held by the parent at the time that it is forked (in your example, 5).
However, if the nice value of the parent process changes after forking the child processes, the child processes do not inherit the new nice value.
You can easily observe this with the monitoring tool top. If the nice field (NI) is not shown by default, you can add it by pressing f and choosing I. This will add the NI column to the top display.
* I: NI = Nice value
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1937 root 20 0 206m 66m 45m S 6.2 1.7 11:03.67 X
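You can also verify the inheritance directly from a shell. This sketch starts a parent at niceness 5 and has both the parent and a forked child report their own values (both print the same number, typically 5 when the base niceness is 0):

```shell
# The parent runs at nice 5; the child it forks inherits the same value.
nice -n 5 sh -c '
  echo "parent: $(ps -o ni= -p $$)"
  sh -c "echo \"child:  \$(ps -o ni= -p \$\$)\""
'
```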
Good information from man 2 fork
fork() creates a new process by duplicating the calling process. The new process, referred to as the child, is an exact duplicate of the calling process, referred to as the parent, except for the following points:
The child has its own unique process ID, and this PID does not match
the ID of any existing process group (setpgid(2)).
The child's parent process ID is the same as the parent's process ID.
The child does not inherit its parent's memory locks (mlock(2), mlockall(2)).
Process resource utilizations (getrusage(2)) and CPU time counters (times(2)) are reset to zero in the child.
The child's set of pending signals is initially empty (sigpending(2)).
The child does not inherit semaphore adjustments from its parent (semop(2)).
The child does not inherit record locks from its parent (fcntl(2)).
The child does not inherit timers from its parent (setitimer(2), alarm(2), timer_create(2)).
The child does not inherit outstanding asynchronous I/O operations from its parent (aio_read(3), aio_write(3)), nor does it inherit any asynchronous I/O contexts from its parent (see io_setup(2)).
| Nice and child processes |
1,378,457,636,000 |
I have this message in dmesg log with linux 3.11.6-1 (2013-10-27) (debian version).
I wonder how to fix/remove it?
[ 5.098132] ACPI Warning: 0x0000000000000428-0x000000000000042f SystemIO conflicts with Region \PMIO 1 (20130517/utaddress-251)
[ 5.098147] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[ 5.098156] ACPI Warning: 0x0000000000000530-0x000000000000053f SystemIO conflicts with Region \GPIO 1 (20130517/utaddress-251)
[ 5.098167] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[ 5.098171] ACPI Warning: 0x0000000000000500-0x000000000000052f SystemIO conflicts with Region \GPIO 1 (20130517/utaddress-251)
[ 5.098180] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[ 5.098186] lpc_ich: Resource conflict(s) found affecting gpio_ich
[ 5.099072] ACPI Warning: 0x000000000000f040-0x000000000000f05f SystemIO conflicts with Region \_SB_.PCI0.SBUS.SMBI 1 (20130517/utaddress-251)
[ 5.099085] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
lspci :
$ lspci
00:00.0 Host bridge: Intel Corporation 3rd Gen Core processor DRAM Controller (rev 09)
00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller (rev 04)
00:16.0 Communication controller: Intel Corporation 7 Series/C210 Series Chipset Family MEI Controller #1 (rev 04)
00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04)
00:1a.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 (rev 04)
00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 1 (rev c4)
00:1c.1 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 2 (rev c4)
00:1c.5 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 6 (rev c4)
00:1d.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 (rev 04)
00:1f.0 ISA bridge: Intel Corporation QM77 Express Chipset LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation 7 Series Chipset Family 6-port SATA Controller [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation 7 Series/C210 Series Chipset Family SMBus Controller (rev 04)
02:00.0 Network controller: Intel Corporation Centrino Ultimate-N 6300 (rev 35)
03:00.0 SD Host controller: O2 Micro, Inc. Device 8221 (rev 05)
|
This message is about some driver being denied access to devices controlled by the ACPI.
By and large, my experience is that it can be safely ignored. If however you really insist on removing the warnings, one option is booting with acpi=off; I suggest you try that at most once to see what happens, because I am afraid you might find you have trouble with wifi, bluetooth, and so on. However here they say that this is mostly harmless, so there is no harm in trying.
One possible way to fix it is to boot with the option
processor.nocst=1
which introduces compatibility with some old ACPI software, see here. An alternative is to use the option
acpi_enforce_resources=lax
which obviously allows loading the drivers. This might, or not, interfere with ACPI operations.
Just for the sake of completeness (apologies if you already know this), to introduce these modifications, edit /etc/default/grub and replace
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
with
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=off"
or whichever option you decide to try. Update grub, reboot.
| How do I remove acpi Warning on boot? |
1,378,457,636,000 |
The following image shows how a 32-bit process virtual address space is divided:
But how a 64-bit process virtual address space is divided?
|
x86
The 64-bit x86 virtual memory map splits the address space into two: the lower section (with the top bit set to 0) is user-space, the upper section (with the top bit set to 1) is kernel-space. (Note that x86-64 defines “canonical” “lower half” and “higher half” addresses, with a number of bits effectively limited to 48 or 57; see Wikipedia or the Intel SDM, volume 3 section 4.5, for details.)
The complete map is documented in detail in the kernel; currently it looks like
| Start addr          | Offset   | End addr            | Size     | VM area description         |
|---------------------|----------|---------------------|----------|-----------------------------|
| 0000_0000_0000_0000 | 0        | 0000_7fff_ffff_ffff | 128 TiB  | user-space virtual memory   |
| 0000_8000_0000_0000 | +128 TiB | ffff_7fff_ffff_ffff | ~16M TiB | non-canonical               |
| ffff_8000_0000_0000 | -128 TiB | ffff_ffff_ffff_ffff | 128 TiB  | kernel-space virtual memory |
with 48-bit virtual addresses. The 57-bit variant has the same structure, with 64 PiB of usable address space on either side of a 16K PiB hole:
| Start addr          | Offset  | End addr            | Size     | VM area description         |
|---------------------|---------|---------------------|----------|-----------------------------|
| 0000_0000_0000_0000 | 0       | 00ff_ffff_ffff_ffff | 64 PiB   | user-space virtual memory   |
| 0100_0000_0000_0000 | +64 PiB | feff_ffff_ffff_ffff | ~16K PiB | non-canonical               |
| ff00_0000_0000_0000 | -64 PiB | ffff_ffff_ffff_ffff | 64 PiB   | kernel-space virtual memory |
(Note that 16K PiB = 16M TiB = 2^64 bytes. The vast majority of the available address space is non-canonical.)
Both of these layouts provide access to the same physical address space, using 52 address lines (4 PiB). 4-level paging only provides access to a 256 TiB subset at any given time; 5-level paging provides access to the full physical address space. Current x86 CPUs can handle far less than this; as far as I’m aware, the most a single socket CPU can handle is 6TiB.
Unlike the 32-bit case, the “64-bit” memory map is a direct reflection of hardware constraints.
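You can observe the split from user space. Assuming a Linux /proc, the main stack of the reading process is mapped just under the top of the lower (user) half:

```shell
# User-space mappings have the top bits clear; on x86-64 the main stack
# typically sits just below 0x0000_7fff_ffff_ffff:
grep '\[stack\]' /proc/self/maps
# Kernel symbols live in the upper half (addresses starting with ffff...).
# Without the needed privilege, /proc/kallsyms shows the addresses as zeros:
head -n 3 /proc/kallsyms
```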
ARM
64-bit ARM has a similar address distinction in hardware: the top twelve or sixteen bits are 0 for user-space, 1 for kernel-space. Linux uses 39, 42 or 48 bits for virtual addresses, depending on the number of page table levels and the page size. With ARMv8.2-LVA, another four bits are added, resulting in 52-bit virtual addresses.
This is also documented in detail in the kernel.
| How a 64-bit process virtual address space is divided in Linux? |
1,378,457,636,000 |
I was trying to open Eclipse in my Ubuntu VM with the below command, and as soon as I do that, I always get the below exception -
ubuntu@username-dyn-vm1-48493:~$ eclipse
Eclipse:
An error has occurred. See the log file
/home/ubuntu/.eclipse/org.eclipse.platform_3.8_155965261/configuration/1381367113197.log.
so when I went to that particular log file, this is what I can see in the log -
ubuntu@username-dyn-vm1-48493:~$ more /home/ubuntu/.eclipse/org.eclipse.platform_3.8_155965261/configuration/1381367113197.log
!SESSION 2013-10-10 01:05:13.088 -----------------------------------------------
eclipse.buildId=debbuild
java.version=1.7.0_25
java.vendor=Oracle Corporation
BootLoader constants: OS=linux, ARCH=x86_64, WS=gtk, NL=en_US
Command-line arguments: -os linux -ws gtk -arch x86_64
!ENTRY org.eclipse.osgi 4 0 2013-10-10 01:05:17.555
!MESSAGE Application error
!STACK 1
org.eclipse.swt.SWTError: No more handles [gtk_init_check() failed]
at org.eclipse.swt.SWT.error(SWT.java:4387)
at org.eclipse.swt.widgets.Display.createDisplay(Display.java:914)
at org.eclipse.swt.widgets.Display.create(Display.java:900)
at org.eclipse.swt.graphics.Device.<init>(Device.java:156)
at org.eclipse.swt.widgets.Display.<init>(Display.java:498)
at org.eclipse.swt.widgets.Display.<init>(Display.java:489)
at org.eclipse.ui.internal.Workbench.createDisplay(Workbench.java:716)
at org.eclipse.ui.PlatformUI.createDisplay(PlatformUI.java:161)
at org.eclipse.ui.internal.ide.application.IDEApplication.createDisplay(IDEApplication.java:154)
at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:96)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:353)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:180)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:629)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584)
at org.eclipse.equinox.launcher.Main.run(Main.java:1438)
at org.eclipse.equinox.launcher.Main.main(Main.java:1414)
Does anyone know what has gone wrong? Or what I am supposed to do to fix this issue? Thanks..
Update:
Version details below -
ubuntu@username-dyn-vm1-48493:~$ dpkg -l libgtk[0-9]* | grep ^i
ii libgtk2.0-0:amd64 2.24.17-0ubuntu2 amd64 GTK+ graphical user interface library
ii libgtk2.0-bin 2.24.17-0ubuntu2 amd64 programs for the GTK+ graphical user interface library
ii libgtk2.0-common 2.24.17-0ubuntu2 all common files for the GTK+ graphical user interface library
|
I think this is a problem with gtk. Check what version is installed.
dpkg -l libgtk[0-9]* | grep ^i
If it's not installed or is the incorrect version then do a sudo apt-get install gtk or do an sudo apt-get update.
EDIT
The problem was that the OP was using SSH to remote into a Linux VM from Windows, and didn't have an X server set up on Windows or X11 forwarding enabled. After getting that straightened out, the OP shouldn't have any issues running Eclipse.
| org.eclipse.swt.SWTError: No more handles [gtk_init_check() failed] while running eclipse on ubuntu |
1,378,457,636,000 |
What is the command to modify the metric of an existing route entry in Linux?
I am able to change the gateway of an existing entry using the "ip route change" command as below, but I am not able to change the metric. Is there any other command for that?
route –n
40.2.2.0 30.1.3.2 255.255.255.0 eth2
ip route change 40.2.2.0/24 via 30.1.2.2
route -n
40.2.2.0 30.1.2.2 255.255.255.0 eth1
|
As noted in a comment to the question, quoting a message on the linux-net mailing list: "The metric/priority cannot be changed [...] This is a limitation of the current protocol [...]."
The only way is to delete the route and add a new one.
This is done using the route command, example:
sudo route add -net default gw 10.10.0.1 netmask 0.0.0.0 dev wlan0 metric 1
Debian manpage for the route command
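With iproute2 the same delete-then-re-add dance looks like the sketch below. The destination, gateway, device and metric are placeholder values taken from the question, and the commands are only echoed here as a dry run; drop the echo and run as root to actually apply them:

```shell
# Sketch: "change" a route's metric by removing the route and adding
# it back with the new metric (there is no in-place change).
change_metric() {
  dest=$1 gw=$2 dev=$3 metric=$4
  echo ip route del "$dest"
  echo ip route add "$dest" via "$gw" dev "$dev" metric "$metric"
}

change_metric 40.2.2.0/24 30.1.2.2 eth1 50
```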
| Modifying existing route entry in linux |
1,378,457,636,000 |
I have a USB Zigbee dongle, but I'm unable to connect to it. It briefly shows up in /dev/ttyUSB0, but then quickly disappears. I see the following output in the console:
$ dmesg --follow
...
[ 738.365561] usb 1-10: new full-speed USB device number 8 using xhci_hcd
[ 738.607730] usb 1-10: New USB device found, idVendor=1a86, idProduct=7523, bcdDevice= 2.64
[ 738.607737] usb 1-10: New USB device strings: Mfr=0, Product=2, SerialNumber=0
[ 738.607739] usb 1-10: Product: USB Serial
[ 738.619446] ch341 1-10:1.0: ch341-uart converter detected
[ 738.633501] usb 1-10: ch341-uart converter now attached to ttyUSB0
[ 738.732348] audit: type=1130 audit(1632606446.974:2212): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=brltty-device@sys-devices-pci0000:00-0000:00:01.3-0000:03:00.0-usb1-1\x2d10 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[ 738.768081] audit: type=1130 audit(1632606447.007:2213): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=brltty@-sys-devices-pci0000:00-0000:00:01.3-0000:03:00.0-usb1-1\x2d10 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[ 738.776433] usb 1-10: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
[ 738.783508] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
[ 738.783521] ch341 1-10:1.0: device disconnected
[ 739.955783] input: BRLTTY 6.4 Linux Screen Driver Keyboard as /devices/virtual/input/input35
...
|
The problem here is BRLTTY, a program that "provides access to the Linux/Unix console (when in text mode) for a blind person using a refreshable braille display".
If you are not blind, you can disable BRLTTY in two different ways:
Remove udev rules
BRLTTY uses udev rules to get permissions to mess with the TTYs without being root. You can disable these rules by overriding the rules shipped by your distro with /dev/null:
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
Disable service
The BRLTTY service is launched by the brltty.path unit. This service can be completely prevented from ever starting by doing the following:
$ sudo systemctl mask brltty.path
Created symlink /etc/systemd/system/brltty.path → /dev/null.
| Unable to use USB dongle based on USB-serial converter chip |
1,378,457,636,000 |
When I'm using tail -f and I want to return to the shell, I always use CTRL+C. Or when I am typing a command and feel like aborting it and starting over, I simply CTRL+C to get back to an empty command line prompt. Is this considered bad practice? I sometimes feel there might be a better way to break away from something, but really have no idea.
|
Ctrl+C sends a SIGINT to the program. This tells the program that you want to interrupt (and end) its process. Most programs correctly catch this and exit cleanly. So, yes, this is a "correct" way to end most programs.
There are other keyboard shortcuts for sending other signals to programs, but this is the most common.
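If your own script needs to tidy up when the user presses Ctrl+C, it can catch the signal with trap. This sketch delivers SIGINT to itself to simulate the keypress:

```shell
# Catch SIGINT and run cleanup code instead of dying immediately.
cleaned=no
trap 'echo "caught SIGINT, cleaning up"; cleaned=yes' INT
kill -INT $$          # simulate pressing Ctrl+C
echo "cleaned=$cleaned"
```

Exiting with status 130 (128 + SIGINT's signal number 2) from the handler is the conventional way to signal "terminated by Ctrl+C" to the caller.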
| Is CTRL+C incorrect to use to return to command line? |
1,378,457,636,000 |
I'm trying to inject keystrokes into the input daemon so as to simulate typing from a Bash script. Is this possible, and if so, how can I make it happen?
|
If you are operating at the X level (as in Gilles' question), then use xdotool like so:
xdotool key KEYSTROKE_SPECIFIER
Where KEYSTROKE_SPECIFIER can be something like "a" or "F2" or "control+j"
EDIT: I missed your response to Gilles' question, sorry. I'll leave this response here as a solution for the X-case.
| How to inject keystrokes via a shell script? |
1,378,457,636,000 |
I tried to restrict the number of restarts of a service (in a container). The OS version is CentOS 7.5, and the service file is pretty much as below (I removed some parameters for reading convenience). It should be pretty straightforward, as some other posts point out (Post of Server Fault restart limit 1 , Post of Stack Overflow restart limit 2 ). Yet StartLimitBurst and StartLimitIntervalSec never work for me.
I tested in several ways:
I checked the service PID and killed the service with kill -9 **** several times. The service always gets restarted after 20s!
I also tried to mess up the service file so that the container never runs. Still, it doesn't work; the service just keeps restarting.
Any idea?
[Unit]
Description=Hello Fluentd
After=docker.service
Requires=docker.service
StartLimitBurst=2
StartLimitIntervalSec=150s
[Service]
EnvironmentFile=/etc/environment
ExecStartPre=-/usr/bin/docker stop "fluentd"
ExecStartPre=-/usr/bin/docker rm -f "fluentd"
ExecStart=/usr/bin/docker run fluentd
ExecStop=/usr/bin/docker stop "fluentd"
Restart=always
RestartSec=20s
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
|
StartLimitIntervalSec= was added as part of systemd v230. In systemd v229 and below, you can only use StartLimitInterval=. You will also need to put StartLimitInterval= and StartLimitBurst= in the [Service] section - not the [Unit] section.
To check your systemd version on CentOS, run rpm -q systemd.
If you ever upgrade to systemd v230 or above, the old names in the [Service] section will continue to work.
References:
[systemd-devel] Unknown lvalue 'StartLimitIntervalSec' ?
core: make the StartLimitXYZ= settings generic and apply to any kind of unit, not just services
You can have this problem without seeing any error at all, because systemd ignores unknown directives. systemd assumes that many newer directives can be ignored and still allow the service to run.
It is possible to manually check a unit file for unknown directives. At least it seems to work on recent systemd:
$ systemd-analyze verify foo.service
/etc/systemd/system/foo.service:9: Unknown lvalue 'FancyNewOption' in section 'Service'
| Systemd's StartLimitIntervalSec and StartLimitBurst never work |
1,378,457,636,000 |
Warning: Running this command in most shells will result in a broken system that will need a forced shutdown to fix
I understand the recursive function :(){ :|: & };: and what it does. But I don't know where the fork system call is. I'm not sure, but I suspect it is in the pipe |.
|
As a result of the pipe in x | y, a subshell is created to contain the pipeline as part of the foreground process group. This continues to create subshells (via fork()) indefinitely, thus creating a fork bomb.
$ for (( i=0; i<3; i++ )); do
> echo "$BASHPID"
> done
16907
16907
16907
$ for (( i=0; i<3; i++ )); do
> echo "$BASHPID" | cat
> done
17195
17197
17199
The fork does not actually occur until the code is run, however, which is the final invocation of : in your code.
To disassemble how the fork bomb works:
:() - define a new function called :
{ :|: & } - a function definition that recursively pipes the calling function into another instance of the calling function in the background
: - call the fork bomb function
This tends to not be too memory intensive, but it will suck up PIDs and consume CPU cycles.
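The same definition is easier to read with : renamed to something pronounceable. Defining the function is completely harmless; only invoking it starts the chain reaction, so the call is deliberately left out:

```shell
# Identical structure with a readable name. DO NOT call `bomb`.
bomb() {
  bomb | bomb &   # each call pipes itself into itself, in the background
}
# Defining it is safe; this only confirms the function now exists:
command -v bomb
```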
| Where is the fork() on the fork bomb :(){ :|: & };:? |
1,378,457,636,000 |
I have a device that needs a block of memory that is reserved solely for it, without the OS intervening. Is there any way to tell BIOS or the OS that a block of memory is reserved, and it must not use it?
I am using this device on an openSUSE machine.
|
What you're asking for is called DMA. You need to write a driver to reserve this memory.
Yes, I realize you said you didn't want the OS to intervene, and a driver becomes part of the OS, but in the absence of a driver's reservation, the kernel believes all memory belongs to it. (Unless you tell the kernel to ignore the memory block, per Aaron's answer, that is.)
Chapter 15 (PDF) of "Linux Device Drivers, 3/e" by Rubini, Corbet and Kroah-Hartmann covers DMA and related topics.
If you want an HTML version of this, I found the second-edition version of the chapter elsewhere online. Beware that the 2nd edition is over a decade old now, having come out when kernel 2.4 was new. There's been a lot of work on the memory management subsystem of the kernel since those days, so it may not apply very well any more.
| How can I reserve a block of memory from the Linux kernel? |
1,378,457,636,000 |
I have tried
echo yes | ssh [email protected]
yes | ssh [email protected]
ssh -y [email protected]
none of which appear to work.
EDIT #1
Part of my problem was I thought every command after the ssh command was a remote command when the commands were in fact local. I guess remote commands have to be declared in a string which is passed to the ssh command as an argument e.g.
$ ssh [email protected] 'remote command'
|
If you don't care to authenticate the hosts via SSH and either blindly accept the keys from servers or ignore them, better to just ignore them.
$ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no <user>@<host>
Keep in mind that you're hamstringing SSH's ability to protect you, but this is the more appropriate way to use the tools as they're intended, rather than forcing them through external means.
| How to accept yes from script "Are you sure you want to continue connecting (yes/no)?" [duplicate] |
1,378,457,636,000 |
What determines which Linux commands require root access? I understand the reasons why it is desirable that, say, apt-get should require root privilege; but what distinguishes these commands from the rest? Is it simply a matter of the ownership and execute permissions of the executable?
|
In Linux, the privileges of root were at one point divided into "capabilities", so you can get a full listing of root's special privileges by looking at that documentation: man 7 capabilities.
To answer your question, a command will require running as root when it needs one of these privileges, and its non-script executable does not have the relevant capability set in its file metadata (e.g. if a python script requires the capability, then the capability would need to be in the python interpreter specified in the shebang line).
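You can inspect the capability sets of any process via /proc. An unprivileged shell shows an (almost) empty effective set, while root's is a full mask; the exact mask value depends on the kernel version:

```shell
# Capability sets of the current process, as hex bitmasks.
# CapEff is the effective set: all zeros for a plain user process,
# a full mask (e.g. 000001ffffffffff) when running as root.
grep '^Cap' /proc/self/status
```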
Do note that some commands that need root access do not need something like sudo because they have the SUID bit set in their executable. This bit causes the executable to run as the owner (typically root) when executed by anyone that has execute access. An example is sudo itself as changing users is a privileged action it needs to do.
EDIT: I note from your question that you might have the idea that you can determine if a command will need root access before running it. That's not the case. A program may sometimes require root privileges and other times not, and this could be a decision made by the program because of data it's provided during runtime. Take for example, calling vim, just like that without arguments, and then through a series of keypresses and pasting, telling it to write something to a file it has no permission to write, or maybe executing another command that itself will require root privileges. Nothing about the command before executing could indicate that it would eventually require root access. That's something that can only be determined at the point it tries to do something that requires it.
Anyway, here are very few examples from the referenced manpage of the privileges of root:
Make arbitrary manipulations of process UIDs (setuid(2), setreuid(2), setresuid(2), setfsuid(2));
Bypass file read, write, and execute permission checks. (DAC is an abbreviation of "discretionary access control".)
Bypass permission checks for sending signals (see kill(2)). This includes use of the ioctl(2) KDSIGACCEPT operation.
Perform various network-related operations:
interface configuration;
administration of IP firewall, masquerading, and accounting;
modify routing tables;
Bind a socket to Internet domain privileged ports (port numbers less than 1024).
Load and unload kernel modules (see init_module(2) and delete_module(2));
Set system clock (settimeofday(2), stime(2), adjtimex(2)); set real-time (hardware) clock.
Perform a range of system administration operations including: quotactl(2), mount(2), umount(2), swapon(2), swapoff(2), sethostname(2), and setdomainname(2);
Use reboot(2) and kexec_load(2).
Use chroot(2).
Raise process nice value (nice(2), setpriority(2)) and change the nice value for arbitrary processes;
| What determines which Linux commands require root access? |
1,378,457,636,000 |
How can I check what hardware I have? (With BIOS version etc.)
|
If your system supports a procfs, you can get a lot of information about your running system. It's an interface to the kernel's data structures, so it will also contain information about your hardware. For example, to get details about the CPU in use you could cat /proc/cpuinfo
For more information, see man proc.
More hardware information can be obtained through the kernel ring buffer log messages with dmesg. For example, this will give you a short summary of recently attached hardware and how it is integrated into the system.
These are some basic "interfaces" you will have on every distribution to obtain some hardware information.
Other 'small' tools to gather hardware information are:
lspci - PCI Hardware
lsusb - USB Hardware
Depending on your distribution you will also have access to one of these two tools to gather a detailed overview of your hardware configuration:
lshw
hwinfo (SuSE specific but available on other distributions as well)
The "gate" to your hardware is thorugh the "Desktop Management Interface" (-> DMI). This framework will expose your system information to your software and is used by lshw for example. A tool to interact directly with the DMI is dmidecode and availible on the most distributions as package. It will come with biosdecode which shows you also the complete availbile BIOS informations.
| Getting information on a machine's hardware in Linux |
1,378,457,636,000 |
What is the difference between /lib and /usr/lib and /var/lib? Some of the files are symbolic links that "duplicate" content of other directories.
|
Someone else can probably explain this with much more detail and historical reference but the short answer:
/lib
is a place for the essential standard libraries. Think of libraries required for your system to run. If something in /bin or /sbin needs a library that library is likely in /lib.
/usr/lib
the /usr directory in general is as it sounds, a user based directory. Here you will find things used by the users on the system. So if you install an application that needs libraries they might go to /usr/lib. If a binary in /usr/bin or /usr/sbin needs a library it will likely be in /usr/lib.
/var/lib
the /var directory is the writable counterpart to the /usr directory which is often required to be read-only. So /var/lib would have a similar purpose as /usr/lib but with the ability to write to them.
| What is the difference between /lib and /usr/lib and /var/lib? |
1,378,457,636,000 |
I try to sshfs mount a remote dir, but the mounted files are not writable. I have run out of ideas or ways to debug this. Is there anything I should check on the remote server?
I am on an Xubuntu 14.04. I mount remote dir of a 14.04 Ubuntu.
local $ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty
I changed the /etc/fuse.conf
local $ sudo cat /etc/fuse.conf
# /etc/fuse.conf - Configuration file for Filesystem in Userspace (FUSE)
# Set the maximum number of FUSE mounts allowed to non-root users.
# The default is 1000.
#mount_max = 1000
# Allow non-root users to specify the allow_other or allow_root mount options.
user_allow_other
And my user is in the fuse group
local $ sudo grep fuse /etc/group
fuse:x:105:MY_LOACL_USERNAME
And I mount the remote dir with (tried with/without combinations of sudo, default_permissions, allow_other):
local $sudo sshfs -o allow_other,default_permissions -o IdentityFile=/path/to/ssh_key REMOTE_USERNAME@REMOTE_HOST:/remote/dir/path/ /mnt/LOCAL_DIR_NAME/
The REMOTE_USERNAME has write permissions to the dir/files (on the remote server).
I tried the above command without sudo, default_permissions, and in all cases I get:
local $ ls -al /mnt/LOCAL_DIR_NAME/a_file
-rw-rw-r-- 1 699 699 1513 Aug 12 16:08 /mnt/LOCAL_DIR_NAME/a_file
local $ test -w /mnt/LOCAL_DIR_NAME/a_file && echo "Writable" || echo "Not Writable"
Not Writable
Clarification 0
In response to user3188445's comment:
$ whoami
LOCAL_USER
$ cd
$ mkdir test_mnt
$ sshfs -o allow_other,default_permissions -o IdentityFile=/path/to/ssh_key REMOTE_USERNAME@REMOTE_HOST:/remote/dir/path/ test_mnt/
$ ls test_mnt/
I see the contents of the dir correctly
$ ls -al test_mnt/
total 216
drwxr-xr-x 1 699 699 4096 Aug 12 16:42 .
drwxr----- 58 LOCAL_USER LOCAL_USER 4096 Aug 17 15:46 ..
-rw-r--r-- 1 699 699 2557 Jul 30 16:48 sample_file
drwxr-xr-x 1 699 699 4096 Aug 11 17:25 sample_dir
$ touch test_mnt/new_file
touch: cannot touch ‘test_mnt/new_file’: Permission denied
# extra info: SSH to the remote host and check file permissions
$ ssh REMOTE_USERNAME@REMOTE_HOST
# on remote host
$ ls -al /remote/dir/path/
lrwxrwxrwx 1 root root 18 Jul 30 13:48 /remote/dir/path/ -> /srv/path/path/path/
$ cd /remote/dir/path/
$ ls -al
total 216
drwxr-xr-x 26 REMOTE_USERNAME REMOTE_USERNAME 4096 Aug 12 13:42 .
drwxr-xr-x 4 root root 4096 Jul 30 14:37 ..
-rw-r--r-- 1 REMOTE_USERNAME REMOTE_USERNAME 2557 Jul 30 13:48 sample_file
drwxr-xr-x 2 REMOTE_USERNAME REMOTE_USERNAME 4096 Aug 11 14:25 sample_dir
|
The question was answered in a linux mailing list; I post a translated answer here for completeness.
Solution
The solution is to not use both of the options default_permissions and allow_other when mounting (which I didn't try in my original experiments).
Explanation
The problem seems to be quite simple. When you use the option default_permissions in fusermount then fuse's permission control of the fuse mount is handled by the kernel and not by fuse.
This means that the REMOTE_USER's uid/gid aren't mapped to the LOCAL_USER (sshfs.c IDMAP_NONE). It works the same way as a simple nfs fs without mapping.
So, it makes sense to prohibit the access, if the uid/gid numbers don't match.
If you have the option allow_other then this dir is writable only by the local user with uid 699, if it exists.
From fuse's man:
'default_permissions'
By default FUSE doesn't check file access permissions, the
filesystem is free to implement its access policy or leave it to
the underlying file access mechanism (e.g. in case of network
filesystems). This option enables permission checking, restricting
access based on file mode. It is usually useful together with the
'allow_other' mount option.
'allow_other'
This option overrides the security measure restricting file access
to the user mounting the filesystem. This option is by default only
allowed to root, but this restriction can be removed with a
(userspace) configuration option.
| Mount with sshfs and write file permissions |
1,378,457,636,000 |
I have few .mdf images, that can be mounted with Alcohol 120%, but on Linux, is that possible?
I've tried things similar to mount -o loop -t iso9660 XX.mdf /mnt/iso, but that doesn't work here, I got ISOFS: Unable to identify CD-ROM format.
|
Try first to convert it into an iso file, with mdf2iso (you have to install it) like this :
mdf2iso your_file.mdf
Linux cannot mount an mdf file (which is a closed format) natively.
Or, you can try to rename it into "your_file.iso" and mount it with the command you gave, but it's not working with every mdf image.
Or if you're using an X Server, you can try the software acetoneiso which is basically some sort of Daemon Tools / Alcohol 120% for Linux.
| How to mount mdf image, iso9660 doesn't work for it?
1,378,457,636,000 |
Linux doesn't actually distinguish between processes and threads, and implements both as a data structure task_struct.
So what does Linux provide to some programs for them to tell threads of a process from its child processes? For example, Is there a way to see details of all the threads that a process has in Linux?
Thanks.
|
From a task_struct perspective, a process’s threads have the same thread group leader (group_leader in task_struct), whereas child processes have a different thread group leader (each individual child process).
This information is exposed to user space via the /proc file system. You can trace parents and children by looking at the ppid field in /proc/${pid}/stat or .../status (this gives the parent pid); you can trace threads by looking at the tgid field in .../status (this gives the thread group id, which is also the group leader’s pid). A process’s threads are made visible in the /proc/${pid}/task directory: each thread gets its own subdirectory. (Every process has at least one thread.)
In practice, programs wishing to keep track of their own threads would rely on APIs provided by the threading library they’re using, instead of using OS-specific information. Typically on Unix-like systems that means using pthreads.
| How does Linux tell threads apart from child processes? |
1,378,457,636,000 |
The zswap documentation says:
Zswap seeks to be simple in its policies. Sysfs attributes allow for one user
controlled policy:
* max_pool_percent - The maximum percentage of memory that the compressed
pool can occupy.
This specifies the maximum percentage of memory the compressed pool can occupy.
How do I find out:
The current percentage of memory occupied by the compressed pool
How much of this pool is in use
Compression ratios, hit rates, and other useful info
|
Current statistics:
# grep -R . /sys/kernel/debug/zswap/
Compression ratio:
# cd /sys/kernel/debug/zswap
# perl -E "say $(cat stored_pages) * 4096 / $(cat pool_total_size)"
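The same arithmetic as the perl one-liner, sketched in Python with hypothetical counter values standing in for the two debugfs files:

```python
# stored_pages * page_size / pool_total_size gives the compression
# ratio. The two counter values below are made up for illustration;
# on a real system read them from /sys/kernel/debug/zswap/.
PAGE_SIZE = 4096
stored_pages = 1000        # from .../zswap/stored_pages
pool_total_size = 2048000  # from .../zswap/pool_total_size

ratio = stored_pages * PAGE_SIZE / pool_total_size
print(round(ratio, 2))  # → 2.0
```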
Current settings:
$ grep -R . /sys/module/zswap
| Get current zswap memory usage and statistics |
1,378,457,636,000 |
My question is basically the same as Only allow certain outbound traffic on certain interfaces.
I have two interfaces eth1 (10.0.0.2) and wlan0 (192.168.0.2).
My default route is for eth1.
Let's say I want all https-traffic to go through wlan0.
Now if I use the solution suggested in the other question, https traffic will go through wlan0, but will still have the source-address of eth1 (10.0.0.2). Since this address is not routeable for the wlan0 gateway, answers won't ever come back. The easy way would be to just set the bind-addr properly in the application, but in this case it is not applicable.
I figure I need to rewrite the src-addr:
# first mark it so that iproute can route it through wlan0
iptables -A OUTPUT -t mangle -o eth1 -p tcp --dport 443 -j MARK --set-mark 1
# now rewrite the src-addr
iptables -A POSTROUTING -t nat -o wlan0 -p tcp --dport 443 -j SNAT --to 192.168.0.2
Now tcpdump sees the outgoing packets just fine and ingoing packets arrive for 192.168.0.2, however they probably never end up in the application, because all I ever get to see, is that the application is resending the SYN-packet, although the SYN-ACK was already received.
So I thought, maybe I need to rewrite the incoming address too:
iptables -A PREROUTING -t nat -i wlan0 -p tcp --sport 443 -j DNAT --to 10.0.0.2
but that didn't work either. So I’m kind of stuck here. Any suggestions?
|
You're close.
The actual reason the application isn't seeing the return traffic is the kernel's built-in IP spoofing protection (reverse path filtering). I.e., the return traffic doesn't match the routing table and is therefore dropped. You can fix this by turning off spoofing protection like this:
sudo sysctl net.ipv4.conf.wlan0.rp_filter=0
But I wouldn't recommend it. The more proper way is to create an alternate routing instance.
The mark is necessary. Keep it.
Source NAT is also necessary.
The final DNAT is unnecessary, so you can remove it.
Make sure you have the iproute package installed. If you have the ip command then you're set (which it looks like you do, but if not get that first).
Edit /etc/iproute2/rt_tables and add a new table by appending the following line:
200 wlan-route
You then need to configure your new routing table named wlan-route with a default gateway and create rules to conditionally send traffic to that table. I'll assume your default gateway is 192.168.0.1. Naturally this needs to match your actual network, and not just my assumptions.
ip route add default via 192.168.0.1 dev wlan0 table wlan-route
ip rule add fwmark 0x1 table wlan-route
Your final annotated script would look like this:
# Populate secondary routing table
ip route add default via 192.168.0.1 dev wlan0 table wlan-route
# Anything with this fwmark will use the secondary routing table
ip rule add fwmark 0x1 table wlan-route
# Mark these packets so that iproute can route it through wlan-route
iptables -A OUTPUT -t mangle -o eth1 -p tcp --dport 443 -j MARK --set-mark 1
# now rewrite the src-addr
iptables -A POSTROUTING -t nat -o wlan0 -p tcp --dport 443 -j SNAT --to 192.168.0.2
| Output traffic on different interfaces based on destination port |
1,378,457,636,000 |
As a sysadmin I sometimes face situations, where a program behaves abnormally, while not creating errors at all or creating nonsense error-messages.
In the past - before java came in - there were two counter-measures:
If nothing else helps - RTFM ;-)
If even 1. does not help - trace the system-calls and see what is happening
I usually use strace -f for this task with Linux (other OS have similar trace-tools). Now while this usually works well for any old-fashioned program, the trace gets very fuzzy when doing the same on a java-process. There are so many system-calls seemingly unrelated to any real action, that it is terrible to search through such a dump.
Are there better ways to do that (if the source-code is not available)?
|
As ckhan mentioned, jstack is great because it gives the full stack trace of all active threads in the JVM. The same can be obtained on stderr of the JVM using SIGQUIT.
Another useful tool is jmap which can grab a heap dump from the JVM process using the PID of the process:
jmap -dump:file=/tmp/heap.hprof $PID
This heap dump can be loaded in tools like visualvm (which is now part of the standard Oracle java sdk install, named jvisualvm). In addition, VisualVM can connect to the running JVM and display information about the JVM, including showing graphs of internal CPU usage, thread counts, and heap usage - great for tracking down leaks.
Another tool, jstat, can collect garbage collection statistics for the JVM over a period of time much like vmstat when run with a numeric argument (e.g. vmstat 3).
Finally, it is possible to use a Java Agent to push instrumentation on all methods of all objects at load-time. The library javassist can help to make this very easy to do. So, it is feasible to add your own tracing. The hard part with that would be finding a way to get trace output only when you wanted it and not all the time, which would likely slow the JVM to a crawl. There's a program called dtrace that works in a manner like this. I've tried it, but was not very successful. Note that agents cannot instrument all classes because the ones needed to bootstrap the JVM are loaded before the agent can instrument, and then it's too late to add instrumentation to those classes.
My Suggestion - start with VisualVM and see if that tells you what you need to know since it can show the current threads and important stats for the JVM.
| How to trace a java-program? |
1,378,457,636,000 |
My laptop is Lenovo T400, and OS is Ubuntu 12.04.
I have not been able to adjust the thresholds for battery starting charging and stopping charging. I observed that its current starting charging threshold is about 40%, and stopping charging threshold is about 60%. I forgot if it was me and which program I used to control the battery to stop charging at 60% and start charging at 40%.
I followed my previous post https://askubuntu.com/questions/58789/how-to-check-charged-percentage-of-battery-and-to-adjust-its-thresholds, but I don't find /sys/devices/platform/smapi. Also I have /proc/acpi/battery/BAT0/, but I have only three files alarm, info and state.
I want to adjust the thresholds. So I wonder how to do that?
|
You need to install tp_smapi-dkms, just do
apt-get install tp_smapi-dkms
When finished, use lsmod | grep tp_smapi to check if module is loaded, to adjust the charge thresholds, do something like this
echo 40 > /sys/devices/platform/smapi/BAT0/start_charge_thresh
echo 60 > /sys/devices/platform/smapi/BAT0/stop_charge_thresh
Add these lines to /etc/rc.local to run them at boot.
This module works at least on X220.
| How to adjust charging thresholds of laptop battery? |
1,404,752,287,000 |
Typically reading from /dev/random produces 100-500 bytes and blocks, waiting for an entropy to be collected.
Why doesn't writing information to /dev/random by other processes speed up reading? Shouldn't it provide the required entropy?
It can be useful for unblocking gpg or similar software without restarting it and re-entering everything, for generating non-super-top-secret keys, etc.
|
You can write to /dev/random because it is part of the way to provide extra random bytes to /dev/random, but it is not sufficient, you also have to notify the system that there is additional entropy via an ioctl() call.
I needed the same functionality for testing my smartcard setup program, as I did not want to wait for my mouse/keyboard to generate enough entropy for the several calls to gpg that were made for each test run. What I did is to run the Python program, which follows, in parallel to my tests. It of course should not be used at all for real gpg key generation, as the random string is not random at all (system-generated random info will still be interleaved). If you have an external source to set the string for random, then you should be able to have high entropy. You can check the entropy with:
cat /proc/sys/kernel/random/entropy_avail
The program:
#!/usr/bin/env python
# For testing purposes only
# DO NOT USE THIS, THIS DOES NOT PROVIDE ENTROPY TO /dev/random, JUST BYTES
import fcntl
import time
import struct
RNDADDENTROPY=0x40085203
while True:
    # bytes literal so struct.pack works under Python 3 as well
    random = b"3420348024823049823-984230942049832423l4j2l42j"
    # struct rand_pool_info: entropy_count, buf_size, buf (truncated to 32 bytes)
    t = struct.pack("ii32s", 8, 32, random)
    with open("/dev/random", mode='wb') as fp:
        # as fp has a method fileno(), you can pass it to ioctl
        res = fcntl.ioctl(fp, RNDADDENTROPY, t)
    time.sleep(0.001)
(Don't forget to kill the program after you are done.)
| Why writing to /dev/random does not make parallel reading from /dev/random faster? |
1,404,752,287,000 |
The bulk of the question is in title, but to elaborate a little:
On most Linuxes I can find /usr/share/terminfo -type f. But on Solaris machine I have nearby - this directory doesn't even exist.
I could iterate over a list of terminals, and do something like:
for TERM in xterm xtermc xterm-color xterm-256color screen rxvt
do
tput cols >/dev/null 2>/dev/null && echo "$TERM available"
done
But it's slow. Any options to discover path used by tput to terminal definitions, and run "find" myself?
|
On Solaris 10 you can do:
find /usr/share/lib/terminfo -type f -print
You should be able to do something like:
find /usr -type d -name terminfo -print
to find where the directory is located.
You can also read the man page to find the exact path:
man terminfo
| How can I check which terminal definitions are available? |
1,404,752,287,000 |
During an audit of /var/log/auth.log on one of my public webservers, I found this:
Jan 10 03:38:11 Bucksnort sshd[3571]: pam_unix(sshd:auth): authentication failure;
logname= uid=0 euid=0 tty=ssh ruser= rhost=61.19.255.53 user=bin
Jan 10 03:38:13 Bucksnort sshd[3571]: Failed password for bin from 61.19.255.53
port 50647 ssh2
At first blush, this looks like typical ssh login spam from random hackers; however, as I looked closer I noticed something else. Most failed /var/log/auth.log entries say invalid user in them, like this one:
Jan 9 10:45:23 Bucksnort sshd[3006]: Failed password for invalid user sales
from 123.212.43.5 port 10552 ssh2
The disquieting thing about that failed login message for bin is that it is a valid user in /etc/passwd that even has a login shell:
[mpenning@Bucksnort ~]$ grep ^bin /etc/passwd
bin:x:2:2:bin:/bin:/bin/sh
I thought I had covered the all the default usernames that could login remotely when I disabled PermitRootLogin in /etc/ssh/sshd_config; discovering this entry opened new possibilities in my paranoid mind. If somehow services ran under bin, then it is remotely possible that someone could somehow insert an ssh key into the bin user's directory from a running service on the box, so I would like to completely disable login for the bin user, if possible.
Questions
This server is remote, and expensive to fix (i.e. I will pay for remote hands to hook up a KVM, plus KVM rental). I am trying to figure out what I might break if I change the /etc/passwd entry for bin to look like this:
bin:x:2:2:bin:/bin:/bin/false
I ran the following commands trying to figure out what bin is needed for... However, these commands came up with no files and I could find no processes owned by bin. What does the bin user do anyway?
$ sudo find / -group bin
$ sudo find / -user bin
Are there any other users that should get their login shells set to /bin/false? FYI, I have already have /bin/false on www-data.
Am I being too paranoid?
I am running Debian, if that matters.
|
A user who has a valid shell and no password can still log in by non-password-based methods, the most common being an ssh key. A valid shell is necessary to run cron jobs. A valid shell is also necessary for su bin -c 'wibble' to work (on Linux at least, su bin -s /bin/sh -c 'wibble' will also work).
In the case of bin, most systems never run a command as bin in normal operation, so setting the shell to /bin/false would be ok.
There is no risk of any direct attack allowing bin to log in over SSH, because that would require creating /bin/.ssh/authorized_keys as the user bin or as root. In other words, the only way to get in is to be in. However, having a valid shell does increase the risk of misconfiguration. It can also permit some remote attacks with services other than SSH; for example a user reports that an attacker could set a password for daemon remotely via Samba, then use that password to log in over SSH.
You can plug the SSH hole by listing the names of the system users in a DenyUsers directive in /etc/ssh/sshd_config (unfortunately, you can't use a numerical range). Or, conversely, you can put an AllowGroups directive and only allow the groups that contain physical users (e.g. users, if you grant all your physical users that group membership).
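Since DenyUsers can't take a numerical range, one option is to generate the name list from /etc/passwd. A rough sketch, using a sample passwd excerpt and an assumed cut-off of uid < 1000 for system accounts (the actual uid ranges vary by distribution):

```python
# Build a DenyUsers line from system accounts (here: uid < 1000) out of
# a passwd-format text; on a real system read /etc/passwd instead.
sample_passwd = (
    "root:x:0:0:root:/root:/bin/bash\n"
    "bin:x:2:2:bin:/bin:/bin/sh\n"
    "alice:x:1000:1000::/home/alice:/bin/bash\n"
)

def deny_users(passwd_text, uid_max=1000):
    names = [line.split(":")[0]
             for line in passwd_text.splitlines()
             if line and int(line.split(":")[2]) < uid_max]
    return "DenyUsers " + " ".join(names)

print(deny_users(sample_passwd))  # → DenyUsers root bin
```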
There are bugs filed over this issue in Debian (#274229, #330882, #581899), currently open and classified as “wishlist”. I tend to agree that these are bugs and system users should have /bin/false as their shell unless it appears necessary to do otherwise.
| Why does the 'bin' user need a login shell? |
1,404,752,287,000 |
According to the Intel security-center post dated May 1, 2017, there is a critical vulnerability on Intel processors which could allow an attacker to gain privilege (escalation of privilege) using AMT, ISM and SBT.
Because the AMT has direct access to the computer’s network hardware, this hardware vulnerability will allow an attacker to access any system.
There is an escalation of privilege vulnerability in Intel® Active Management Technology (AMT), Intel® Standard Manageability (ISM), and Intel® Small Business Technology versions firmware versions 6.x, 7.x, 8.x 9.x, 10.x, 11.0, 11.5, and 11.6 that can allow an unprivileged attacker to gain control of the manageability features provided by these products. This vulnerability does not exist on Intel-based consumer PCs.
Intel have released a detection tool available for Windows 7 and 10. I am using information from dmidecode -t 4 and by searching on the Intel website I found that my processor uses Intel® Active Management Technology (Intel® AMT) 8.0.
Affected products:
The issue has been observed in Intel manageability firmware versions 6.x, 7.x, 8.x 9.x, 10.x, 11.0, 11.5, and 11.6 for Intel® Active Management Technology, Intel® Small Business Technology, and Intel® Standard Manageability. Versions before 6 or after 11.6 are not impacted.
The description:
An unprivileged local attacker could provision manageability features gaining unprivileged network or local system privileges on Intel manageability SKUs: Intel® Active Management Technology (AMT), Intel® Standard Manageability (ISM), and Intel® Small Business Technology (SBT)
How can I easily detect and mitigate the Intel escalation of privilege vulnerability on a Linux system?
|
The clearest post I’ve seen on this issue is Matthew Garrett’s (including the comments).
Matthew has now released a tool to check your system locally: build it, run it with
sudo ./mei-amt-check
and it will report whether AMT is enabled and provisioned, and if it is, the firmware versions (see below). The README has more details.
To scan your network for potentially vulnerable systems, scan ports 623, 664, and 16992 to 16995 (as described in Intel’s own mitigation document); for example
nmap -p16992,16993,16994,16995,623,664 192.168.1.0/24
will scan the 192.168.1/24 network, and report the status of all hosts which respond. Being able to connect to port 623 might be a false positive (other IPMI systems use that port), but any open port from 16992 to 16995 is a very good indicator of enabled AMT (at least if they respond appropriately: with AMT, that means an HTTP response on 16992 and 16993, the latter with TLS).
If you see responses on ports 16992 or 16993, connecting to those and requesting / using HTTP will return a response with a Server line containing “Intel(R) Active Management Technology” on systems with AMT enabled; that same line will also contain the version of the AMT firmware in use, which can then be compared with the list given in Intel’s advisory to determine whether it’s vulnerable.
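A rough sketch of extracting the firmware version from such a Server line; the exact header text used below is an assumption based on the description above, not a captured response:

```python
import re

def amt_version(server_header):
    """Pull the AMT firmware version out of an HTTP Server header.
    The header wording is assumed, matching the description in the text."""
    m = re.search(r"Intel\(R\) Active Management Technology\s+([\d.]+)",
                  server_header)
    return m.group(1) if m else None

# Hypothetical response line for illustration
print(amt_version("Server: Intel(R) Active Management Technology 8.1.40"))
# → 8.1.40
```

The returned version string can then be compared against the affected ranges in Intel’s advisory.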
See CerberusSec’s answer for a link to a script automating the above.
There are two ways to fix the issue “properly”:
upgrade the firmware, once your system’s manufacturer provides an update (if ever);
avoid using the network port providing AMT, either by using a non-AMT-capable network interface on your system, or by using a USB adapter (many AMT workstations, such as C226 Xeon E3 systems with i210 network ports, have only one AMT-capable network interface — the rest are safe; note that AMT can work over wi-fi, at least on Windows, so using built-in wi-fi can also lead to compromise).
If neither of these options is available, you’re in mitigation territory. If your AMT-capable system has never been provisioned for AMT, then you’re reasonably safe; enabling AMT in that case can apparently only be done locally, and as far as I can tell requires using your system’s firmware or Windows software. If AMT is enabled, you can reboot and use the firmware to disable it (press CtrlP when the AMT message is displayed during boot).
Basically, while the privilege vulnerability is quite nasty, it seems most Intel systems aren’t actually affected. For your own systems running Linux or another Unix-like operating system, escalation probably requires physical access to the system to enable AMT in the first place. (Windows is another story.) On systems with multiple network interfaces, as pointed out by Rui F Ribeiro, you should treat AMT-capable interfaces in the same way as you’d treat any administrative interface (IPMI-capable, or the host interface for a VM hypervisor) and isolate it on an administrative network (physical or VLAN). You cannot rely on a host to protect itself: iptables etc. are ineffective here, because AMT sees packets before the operating system does (and keeps AMT packets to itself).
VMs can complicate matters, but only in the sense that they can confuse AMT and thus produce confusing scanning results if AMT is enabled. amt-howto(7) gives the example of Xen systems where AMT uses the address given to a DomU over DHCP, if any, which means a scan would show AMT active on the DomU, not the Dom0...
| How to detect and mitigate the Intel escalation of privilege vulnerability on a Linux system (CVE-2017-5689)? |
1,404,752,287,000 |
What's the fastest method to backup and restore a luks encrypted device (e.g. a full encrypted usb-device to a image-file).
The usb-device can be decrypted/accessed. I'm looking for a solution to mount the backup image as a file (encryped). Can it be possible?
Keep it simple, stupid.
|
cryptsetup handles image files just as well as block devices, if that was your question. So if you make a dd image (which will be freaking huge) it will work. And if it didn't, you could just create the loop device yourself.
Best practice (if you want to keep the backup encrypted) is to encrypt the backup disk also, then open both containers, then run any backup solution of your choice as you would with unencrypted filesystems. It won't be the fastest method as it'd decrypt data from the source disk and then re-encrypt it for the backup disk. On the other hand it allows for incremental backup solutions, so it should still beat the dd-image-creation on average.
If you want to stick to dd, the only way to make something faster than dd would be a partimage of sorts which takes LUKS header and offset into account, so it would only store the encrypted data that is actually in use by the filesystem.
If the source disk is a SSD and you allow TRIM inside LUKS, and the SSD shows trimmed regions as zeroes, you get this behaviour for free with dd conv=sparse. It's still not something I'd recommend, though.
| Best practice to backup a LUKS encrypted device |
1,404,752,287,000 |
Thanks to some good Q&A around here and this page, I now understand links. I see hard links refer to the same inode by a different name, and copies are different "nodes", with different names. Plus soft links have the original file name and path as their inode, so if the file is moved, the link breaks.
So, I tested what I've learnt with some file ("saluton_mondo.cpp" below), made a hard and a soft link and a copy.
jmcf125@VMUbuntu:~$ ls -lh soft hard copy s*.cpp
-rw-rw-r-- 1 jmcf125 jmcf125 205 Aŭg 27 16:10 copy
-rw-rw-r-- 2 jmcf125 jmcf125 205 Aŭg 25 13:34 hard
-rw-rw-r-- 2 jmcf125 jmcf125 205 Aŭg 25 13:34 saluton_mondo.cpp
lrwxrwxrwx 1 jmcf125 jmcf125 17 Aŭg 27 16:09 soft -> saluton_mondo.cpp
I found awkward that the hard link, however, has the same size as the original and, logically, the copy. If the hard link and the original share the same inode, that has the data, and only differ by the filename, shouldn't the hard link take only the space of its name, instead of 205 bytes? Or is that the size of the original file that ls -lh returns? But then how can I know what space does the filename take? Here it says hard links have no size. Is their file name kept alongside the original file name? Where is the file name of hard links stored?
|
A file is an inode with metadata, among which is a list of pointers to where to find the data.
In order to be able to access a file, you have to link it to a directory (think of directories as phone directories, not folders), that is, add one or more entries to one or more directories to associate a name with that file.
All those links, those file names, point to the same file. There's not one that is the original with the others being mere links. They are all access points to the same file (same inode) in the directory tree. When you get the size of the file (the lstat system call), you're retrieving information (that metadata referred to above) stored in the inode; it doesn't matter which file name, which link, you're using to refer to that file.
By contrast, symlinks are another file (another inode) whose content is a path to the target file. Like any other file, those symlinks have to be linked to a directory (must have a name) so you can access them. You can also have several links to a symlink, or in other words, symlinks can be given several names (in one or more directories).
$ touch a
$ ln a b
$ ln -s a c
$ ln c d
$ ls -li [a-d]
10486707 -rw-r--r-- 2 stephane stephane 0 Aug 27 17:05 a
10486707 -rw-r--r-- 2 stephane stephane 0 Aug 27 17:05 b
10502404 lrwxrwxrwx 2 stephane stephane 1 Aug 27 17:05 c -> a
10502404 lrwxrwxrwx 2 stephane stephane 1 Aug 27 17:05 d -> a
Above the file number 10486707 is a regular file. Two entries in the current directory (one with name a, one with name b) link to it. Because the link count is 2, we know there's no other name of that file in the current directory or any other directory. File number 10502404 is another file, this time of type symlink linked twice to the current directory. Its content (target) is the relative path "a".
Note that if 10502404 was linked to another directory than the current one, it would typically point to a different file depending on how it was accessed.
$ mkdir 1 2
$ echo foo > 1/a
$ echo bar > 2/a
$ ln -s a 1/b
$ ln 1/b 2/b
$ ls -lia 1 2
1:
total 92
10608644 drwxr-xr-x 2 stephane stephane 4096 Aug 27 17:26 ./
10485761 drwxrwxr-x 443 stephane stephane 81920 Aug 27 17:26 ../
10504186 -rw-r--r-- 1 stephane stephane 4 Aug 27 17:24 a
10539259 lrwxrwxrwx 2 stephane stephane 1 Aug 27 17:26 b -> a
2:
total 92
10608674 drwxr-xr-x 2 stephane stephane 4096 Aug 27 17:26 ./
10485761 drwxrwxr-x 443 stephane stephane 81920 Aug 27 17:26 ../
10539044 -rw-r--r-- 1 stephane stephane 4 Aug 27 17:24 a
10539259 lrwxrwxrwx 2 stephane stephane 1 Aug 27 17:26 b -> a
$ cat 1/b
foo
$ cat 2/b
bar
Files have no names associated with them other than in the directories that link them. The space taken by their names is the entries in those directories, it's accounted for in the file size/disk usage of the directories.
You'll notice that the system call to remove a file is unlink. That is, you don't remove files, you unlink them from the directories they're referenced in. Once unlinked from the last directory that had an entry to a given file, that file is then destroyed (as long as no process has it opened).
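A quick way to see this in action (a minimal sketch; any scratch directory will do):

```shell
cd "$(mktemp -d)"   # scratch directory for the demo
echo hello > a
ln a b              # two names, one inode: link count is now 2
rm a                # rm calls unlink(2): it removes one name only
cat b               # the data is still reachable via the remaining name
ls -l b             # link count is back to 1
```

Only when the last name is removed (and no process holds the file open) does the file itself go away.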
| Why do hard links seem to take the same space as the originals? |
1,404,752,287,000 |
Are there any Linux boot loaders supporting full disk encryption (a la TrueCrypt). I know there was work towards adding encryption support to GRUB2, but this does not seem to be ready yet. Any other options?
(Note that I am really referring to full disk encryption here—including /boot)
Most of the answers describe a setup where /boot is not encrypted, and some of them try to explain why an unencrypted /boot should be OK.
Without getting into a discussion on why I actually need /boot to be encrypted, here is an article that describes exactly what I need, based on a modified version of GRUB2:
http://xercestech.com/full-system-encryption-for-linux.geek
The problem with this is that these modifications apparently are not supported in the current GRUB2 codebase (or maybe I am overlooking something).
|
I think the current version of GRUB2 does not have support for loading and decrypting LUKS partitions by itself (it contains some ciphers but I think they are used only for its password support). I cannot check the experimental development branch, but there are some hints in the GRUB page that some work is planned to implement what you want to do.
Update (2015): the latest version of GRUB2 (2.00) already includes code to access LUKS and GELI encrypted partitions. (The xercestech.com link the OP provided mentions the first patches for that, but they are now integrated in the latest release).
However, if you are trying to encrypt the whole disk for security reasons, please note that an unencrypted boot loader (like TrueCrypt, BitLocker or a modified GRUB) offers no more protection than an unencrypted /boot partition (as noted by JV in a comment above). Anybody with physical access to the computer can just as easily replace it with a custom version. That is even mentioned in the article at xercestech.com you linked:
To be clear, this does not in any way make your system less vulnerable to offline attack, if an attacker were to replace your bootloader with their own, or redirect the boot process to boot their own code, your system can still be compromised.
Note that all software-based products for full disk encryption have this weakness, no matter if they use an unencrypted boot loader or an unencrypted boot/preboot partition. Even products with support for TPM (Trusted Platform Module) chips, like BitLocker, can be rooted without modifying the hardware.
A better approach would be to:
decrypt at the BIOS level (in motherboard or disk adapter or external hardware [smartcard], with or without a TPM chip), or
carry the PBA (preboot authorization) code (the /boot partition in this case) in a removable device (like a smartcard or an USB stick).
To do it the second way, you can check the Linux Full Disk Encryption (LFDE) project at: http://lfde.org/ which provides a post-install script to move the /boot partition to an external USB drive, encrypting the key with GPG and storing it on the USB too. That way, the weaker part of the boot pathway (the non-encrypted /boot partition) is always with you (you will be the only one with physical access to the decrypting code AND the key). (Note: this site has been lost and the author's blog has also disappeared; however, you can find the old files at https://github.com/mv-code/lfde; just note that the last development was done 6 years ago). As a lighter alternative, you can install the unencrypted boot partition on a USB stick while installing your OS.
Regards, MV
| Linux boot loaders supporting full disk encryption? |
1,404,752,287,000 |
How can I install Chrome on Linux without needing to log in as root?
Note that I want to use Chrome, not Chromium.
If I go to the official download page, I get the choice between:
Please select your download package:
32 bit .deb (For Debian/Ubuntu)
64 bit .deb (For Debian/Ubuntu)
32 bit .rpm (For Fedora/openSUSE)
64 bit .rpm (For Fedora/openSUSE)
Can I somehow extract and install Chrome from the .deb or the .rpm without needing to be root? Or is there another link that I missed?
|
I've successfully extracted the Fedora/OpenSUSE RPM into my home directory and run chrome from there. You simply need to make sure that the symlinks for the libraries are all there. This assumes that the libraries are already installed, and that $HOME/bin is in my $PATH.
I just ran:
mkdir ~/chrome; cd ~/chrome
rpm2cpio ~/Download/google-chrome-stable_current_x86_64.rpm | cpio -id
cd opt/google/chrome
ln -s /usr/lib64/libnss3.so libnss3.so.1d
ln -s /usr/lib64/libnssutil3.so libnssutil3.so.1d
ln -s /usr/lib64/libsmime3.so libsmime3.so.1d
ln -s /lib64/libplc4.so libplc4.so.0d
ln -s /lib64/libnspr4.so libnspr4.so.0d
ln -s /lib64/libbz2.so.1.0.6 libbz2.so.1.0
ln -s ~/chrome/opt/google/chrome/google-chrome ~/bin/google-chrome
Now, if you don't have all those libraries installed already, or there are other dependencies for the chrome binary that are unmet, you might need to build and install them in your homedir. Google Chrome helpfully adds ~/chrome/opt/google/chrome/lib to the $LD_LIBRARY_PATH, so you could install those additional dependencies there.
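To find out which dependencies are still unmet, ldd is handy (a sketch; the relative path and binary name are assumed from the extraction layout above):

```shell
# any line saying "not found" names a library you still need to provide;
# run this from the directory the RPM was extracted into
ldd opt/google/chrome/chrome | grep 'not found' \
  || echo 'all shared library dependencies resolved'
```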
| Installing Chrome on Linux without needing to be root |
1,404,752,287,000 |
What do the terms "in-tree" and "out-of-tree" exactly mean? Also, does "source tree" specifically refer to the official kernel released from / maintained at kernel.org or is it a more general term which can refer to any (modified) Linux kernel source?
|
"source tree" is not a term specific to kernel source development, so it has to be a more general term and its meaning with regards to kernel source is context dependent.
I have not come across "in-tree" and "out-of-tree" outside of Linux kernel source development, and then only when working with modules. All modules start out as "out-of-tree" developments that can be compiled against the context of a source tree. Once a module gets accepted for inclusion, it becomes an in-tree module. I have not come across an official definition of either term, though; maybe that was never necessary, as it was clear to those working with modules what was meant.
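As a concrete illustration, an out-of-tree module build is usually driven by a small kbuild Makefile along these lines (a sketch: hello.c is a hypothetical module source, and the build headers for the running kernel are assumed to be installed under /lib/modules/$(uname -r)/build):

```makefile
# kbuild picks up obj-m when invoked with M=<this directory>
obj-m := hello.o

all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) modules

clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) clean
```

Once such a module is accepted into the kernel source, the same object is instead built in-tree through the kernel's own Kconfig and Makefile entries.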
E.g. while the Reiserfs module was still out-of-tree I did the RPM package generation for SuSE; once it became in-tree there was no longer any need for that.
| Linux kernel: meaning of source-tree, in-tree and out-of-tree |
1,404,752,287,000 |
Using LXDE and Ubuntu, I can log into a virtual console via Ctrl+Alt+F1.
The text is far too small. How do I change the screen resolution to get a larger font?
|
You should edit the file /etc/default/console-setup and change the FONTSIZE variable. Once you've made your changes you must reconfigure your terminal by running:
$ sudo service console-setup restart
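For reference, the relevant variables in /etc/default/console-setup look something like this (the values are illustrative; which sizes are accepted depends on the chosen font face, and the # lines are just annotations):

```
# /etc/default/console-setup (excerpt)
FONTFACE="Terminus"
FONTSIZE="16x32"
```

To try a console font for the current session only, before making it permanent, the setfont(8) utility can be used directly.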
| How do I change the screen font size when using a virtual console? |
1,404,752,287,000 |
I accidentally overwrote the /bin/bash file with a dumb script that I intended to put inside the /bin folder.
How do I get the contents of that file back? Is there a way I can find the contents on the web and just copy them back in? What are my options here, considering that the terminal gives an error about "Too many Symbolic Links"?
I'm still a newcomer to this kind of thing, and I appreciate all the help I can get.
Edit: I forgot to mention I'm on Kali 2.2 Rolling, which is pretty much debian with some added features.
Edit 2: I also restarted the machine, as I didn't realize my mistake until a few days ago. That makes this quite a bit harder.
|
bash is a shell, probably your system shell, so now weird things happen while parts of the shell are still in memory. Once you log out or reboot, you'll be in deeper trouble.
So the first thing should be to change your shell to something safe. See what shells you have installed
cat /etc/shells
Then change your shell to one of the other shells listed there, for example
chsh -s /bin/dash
Update, because you already rebooted:
You are lucky that nowadays the boot process doesn't rely on bash, so your system boots, you just can't get a command line. But you can start an editor to edit /etc/passwd and change the shell in the root line from /bin/bash to /bin/dash. Log out and log in again. Just don't make any other change in that file, or you may mess up your system completely.
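Concretely, the change in /etc/passwd is just the last field of root's line (illustrative before/after; the # lines are annotations here, not valid passwd syntax):

```
# before:
root:x:0:0:root:/root:/bin/bash
# after:
root:x:0:0:root:/root:/bin/dash
```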
Then try to reinstall bash with
apt-get --reinstall install bash
If everything succeeded you can chsh back to bash.
Finally: I think Kali is a highly specialized distribution, probably not suited for people who accidentally overwrite their shell. As this sentence was called rude and harsh, I should add that I wrote it out of my own experience. When I was younger, I ruined my system because nobody told me to avoid messing around as root.
| What are the contents of /bin/bash, and what do I do if I accidentally overwrote them |
1,404,752,287,000 |
I know with mkdir I can do mkdir A B C D E F to create each directory. How do I create directories A-Z or 1-100 without typing in each letter or number?
|
It's probably easiest to just use a for loop:
for char in {A..Z}; do
mkdir $char
done
for num in {1..100}; do
mkdir $num
done
You need at least bash 3.0 though; otherwise you have to use something like seq
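As a side note, the loops aren't strictly required: mkdir accepts multiple operands, so the expansions can be passed to it directly (a sketch; brace expansion needs bash 3.0+ or zsh, and seq is the portable fallback mentioned above):

```shell
cd "$(mktemp -d)"   # scratch directory for the demo
mkdir {A..Z}        # brace expansion: bash 3.0+ / zsh only
mkdir $(seq 1 100)  # portable fallback for the numeric range
```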
| How do I make multiple directories at once in a directory? |
1,404,752,287,000 |
On recent Linux based operating systems there are no ifconfig and traceroute commands. Some functionality has been incorporated into the ip utility (see here for examples), but I have not found a replacement for the traceroute command.
I know that I can do yum install net-tools or yum install traceroute when I am on CentOS or RHEL, but our servers come preinstalled without that command, and while we are allowed to sudo certain commands, installing additional software is always a problem.
|
It strongly depends on your configuration.
I've got two options for you:
Compile traceroute by yourself. Use compiled binary without any problem.
If there is a busybox package installed on your system, then you can simply use its binary as traceroute. Just link the busybox binary as traceroute (in case of my system it is ln -s /bin/busybox traceroute), and then use this link to act as traceroute: ./traceroute google.com.
| traceroute command: replacement or alternative |
1,404,752,287,000 |
I'm trying to configure the network interface on embedded linux using ifconfig:
ifconfig eth0 192.168.0.101 netmask 255.255.255.0
but I don't know how to add the default gateway as an ifconfig parameter,
Any Ideas?
|
ifconfig is not the correct command to do that.
You can use route like in route add default gw 192.168.0.254 for example.
And if route is not present, but ip is, you can use it like this: ip route add default via 192.168.0.254 dev eth0, assuming that 192.168.0.254 is the ip of your gateway
| How to set the Default gateway |
1,404,752,287,000 |
To capture a particular pattern, awk and grep can be used. Why should we use one over the other? Which is faster and why?
If I had a log file and I wanted to grab a certain pattern, I could do one of the following
awk '/pattern/' /var/log/messages
or
grep 'pattern' /var/log/messages
I haven't done any benchmarking, so I wouldn't know. Can someone elaborate this? It is great to know the inner workings of these two tools.
|
grep will most likely be faster:
# time awk '/USAGE/' imapd.log.1 | wc -l
73832
real 0m2.756s
user 0m2.740s
sys 0m0.020s
# time grep 'USAGE' imapd.log.1 | wc -l
73832
real 0m0.110s
user 0m0.100s
sys 0m0.030s
awk is an interpreted programming language, whereas grep is a compiled C program (which is additionally optimized towards finding patterns in files).
(Note - I ran both commands twice so that caching would not potentially skew the results)
More details about interpreted languages on wikipedia.
As Stephane has rightly pointed out in comments, your mileage may vary due to the implementation of the grep and awk you use, the operating system it is on and the character set you are processing.
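A reproducible micro-benchmark along the same lines (a sketch; the "log" file and its match count are synthetic):

```shell
# build a synthetic log: every line ending in 0 gets the word USAGE
seq 200000 | sed 's/0$/ USAGE/' > /tmp/bench.log
# both commands should report the same count (20000 here); compare wall time
time grep -c 'USAGE' /tmp/bench.log
time awk '/USAGE/ { n++ } END { print n }' /tmp/bench.log
```

On typical systems grep wins, but as noted above, the margin depends on the implementations, the locale, and the character set in use.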
| Using grep vs awk |
1,404,752,287,000 |
As I understand it, "sparse file" means that the file may have 'gaps', so the amount of data actually stored may be smaller than the logical file size.
How do Linux file systems save files on disk?
I'm mainly interested in ext4. But:
Can a file be saved not sequentially on disk? By that, I mean that part of the file is located at physical address X and the next part at physical address Y which isn't close to X + offset).
Can I somehow control the file sequentiality?
I want to allocate a file of 10GB. I want it to be sequential on disk and not divided between different offsets.
Does it act differently between the different types?
|
Can a file be saved not sequentially on disk? I mean, part of the file is located under physical address X and the other part under physical address Y which isn't close to X + offset).
Yes; this is known as file fragmentation and is not uncommon, especially with larger files. Most file systems allocate space as it's needed, more or less sequentially, but they can't guess future behaviour — so if you write 200MiB to a file, then add a further 100MiB, there's a non-zero chance that both sets of data will be stored in different areas of the disk (basically, any other write needing more space on disk, occurring after the first write and before the second, could come in between the two). If a filesystem is close to full, the situation will usually be worse: there may not be a contiguous area of free space large enough to hold a new file, so it will have to be fragmented.
Can I somehow control the file sequentiallity?
I want to allocate big file of 10GB. I want it to be sequential in disk and not divided between different offsets.
You can tell the filesystem about your file's target size when it's created; this will help the filesystem store it optimally. Many modern filesystems use a technique known as delayed allocation, where the on-disk layout of a new file is calculated as late as possible, to maximise the information available when the calculation is performed. You can help this process by using the posix_fallocate(3) function to tell the filesystem how much disk space should be allocated in total. Modern filesystems will try to perform this allocation sequentially.
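From the shell, the same preallocation can be requested with the fallocate utility; truncate, by contrast, only sets the logical size and leaves the file sparse. A sketch, assuming a filesystem that supports preallocation:

```shell
truncate  -s 100M sparse.img   # logical size only: blocks are allocated lazily on write
fallocate -l 100M alloc.img    # asks the filesystem for the space up front
ls -ls sparse.img alloc.img    # first column: blocks actually allocated
# filefrag -v alloc.img        # optionally inspect the resulting extents
```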
Does it act differently between the different types?
Different filesystems behave differently, yes. Log-based filesystems such as NILFS2 don't allocate storage in the same way as extent-based filesystems such as Ext4, and that's just one example of variation.
| Are files saved on disk sequentially? |
1,404,752,287,000 |
Let's say I have a directory dir with three subdirectories dir1 .. dir3. And inside I have many files and other subdirectories.
I'd like to search for a file inside, say with a *.c ending, but I'd only like to search in subdirectory "dir/dir2" and all its subdirectories. How can I formulate that?
Assuming I'm in dir/ I have:
find . -name "*.c"
to search in all directories.
How do I restrict to only dir2?
|
Find will accept any valid path so
find ./dir2 -name '*.c'
should do the trick
If the dir directory is /home/user/dir you could give find the full path
find /home/user/dir/dir2 -name '*.c'
| find-command for certain subdirectories |
1,404,752,287,000 |
On linux, there is a /dev/root device node. This will be the same block device as another device node, like /dev/sdaX. How can I resolve /dev/root to the 'real' device node in this situation, so that I can show a user a sensible device name?
For example, I might encounter this situation when parsing /proc/mounts.
I'm looking for solutions that would work from a shell/python script but not C.
|
Parse the root= parameter from /proc/cmdline.
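A small sketch of that, handling the common cases where root= is either a device path or a UUID= spec:

```shell
# extract the root= value from the kernel command line
rootdev=$(tr ' ' '\n' < /proc/cmdline | sed -n 's/^root=//p')
# if it is given as a UUID, resolve it through the by-uuid symlinks
case $rootdev in
  UUID=*) rootdev=$(readlink -f "/dev/disk/by-uuid/${rootdev#UUID=}") ;;
esac
echo "$rootdev"
```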
| Find out what device /dev/root represents in Linux? |
1,404,752,287,000 |
What is the purpose of having both? Aren't they both used for mounting drives?
|
I recommend visiting the Filesystem Hierarchy Standard.
/media is mount point for removable media. In other words, where system mounts removable media. This directory contains sub-directories used for mounting removable media such as CD-ROMs, floppy disks, etc.
/mnt is for temporary mounting. In other words, where the user can mount things. This directory is generally used for mounting filesystems temporarily when needed.
Ref:
http://www.pathname.com/fhs/pub/fhs-2.3.html#MEDIAMOUNTPOINT
http://www.pathname.com/fhs/pub/fhs-2.3.html#MNTMOUNTPOINTFORATEMPORARILYMOUNT
| What's the difference between mnt vs media? [duplicate] |
1,404,752,287,000 |
I'm looking for a backup utility with incremental backups, but in a more complicated way.
I tried rsync, but it doesn't seem to be able to do what I want, or more likely, I don't know how to make it do that.
So this is an example of what I want to achieve with it.
I have the following files:
testdir
├── picture1
├── randomfile1
├── randomfile2
└── textfile1
I want to run the backup utility and basically create an archive (or a tarball) of all of these files in a different directory:
$ mystery-command testdir/ testbak
testbak
└── 2020-02-16--05-10-45--testdir.tar
Now, let's say the following day, I add a file, such that my structure looks like:
testdir
├── picture1
├── randomfile1
├── randomfile2
├── randomfile3
└── textfile1
Now when I run the mystery command, I will get another tarball for that day:
$ mystery-command testdir/ testbak
testbak
├── 2020-02-16--05-10-45--testdir.tar
└── 2020-02-17--03-24-16--testdir.tar
Here's the kicker: I want the backup utility to detect the fact that picture1, randomfile1, randomfile2 and textfile1 have not been changed since last backup, and only backup the new/changed files, which in this case is randomfile3, such that:
tester@raspberrypi:~ $ tar -tf testbak/2020-02-16--05-10-45--testdir.tar
testdir/
testdir/randomfile1
testdir/textfile1
testdir/randomfile2
testdir/picture1
tester@raspberrypi:~ $ tar -tf testbak/2020-02-17--03-24-16--testdir.tar
testdir/randomfile3
So as a last example, let's say the next day I changed textfile1, and added picture2 and picture3:
$ mystery-command testdir/ testbak
testbak/
├── 2020-02-16--05-10-45--testdir.tar
├── 2020-02-17--03-24-16--testdir.tar
└── 2020-02-18--01-54-41--testdir.tar
tester@raspberrypi:~ $ tar -tf testbak/2020-02-16--05-10-45--testdir.tar
testdir/
testdir/randomfile1
testdir/textfile1
testdir/randomfile2
testdir/picture1
tester@raspberrypi:~ $ tar -tf testbak/2020-02-17--03-24-16--testdir.tar
testdir/randomfile3
tester@raspberrypi:~ $ tar -tf testbak/2020-02-18--01-54-41--testdir.tar
testdir/textfile1
testdir/picture2
testdir/picture3
With this system, I would save space by only backing up the incremental changes between each backup (with obviously the master backup that has all the initial files), and I would have backups of the incremental changes, so for example if I made a change on day 2, and changed the same thing again on day 3, I can still get the file with the change from day 2, but before the change from day 3.
I think it's kinda like how GitHub works :)
I know I could probably create a script that runs a diff and then selects the files to backup based on the result (or more efficiently, just get a checksum and compare), but I want to know if there's any utility that can do this a tad easier :)
|
Update:
Please see some caveats here: Is it possible to use tar for full system backups?
According to that answer, restoration of incremental backups with tar is prone to errors and should be avoided. Do not use the below method unless you're absolutely sure you can recover your data when you need it.
According to the documentation you can use the -g/--listed-incremental option to create incremental tar files, eg.
tar -cg data.inc -f DATE-data.tar /path/to/data
Then next time do something like
tar -cg data.inc -f NEWDATE-data.tar /path/to/data
Where data.inc is your incremental metadata, and DATE-data.tar are your incremental archives.
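Put together, the mystery-command from the question could look roughly like this (a sketch; GNU tar is assumed, and keep the restore caveats above in mind):

```shell
# usage: backup <srcdir> <destdir>
# the first run (no data.inc yet) is a full archive; later runs with the
# same data.inc archive only files changed since the previous run
backup() {
    src=$1 dest=$2
    tar -cg "$dest/data.inc" \
        -f "$dest/$(date +%F--%H-%M-%S)--$(basename "$src").tar" "$src"
}
```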
| Linux backup utility for incremental backups |
1,404,752,287,000 |
We can see that the synopsis of rm command is:
rm [OPTION]... [FILE]...
Doesn't it mean that we can use only rm command without any option or argument?
When I run the command rm on its own, the terminal then shows the following error:
rm: missing operand
Try 'rm --help' for more information.
Can anyone tell me why this is the case?
|
The standard synopsis for the rm utility is specified in the POSIX standard[1][2] as
rm [-iRr] file...
rm -f [-iRr] [file...]
In its first form, it does require at least one file operand, but in its second form it does not.
Doing rm -f with no file operands is not an error:
$ rm -f
$ echo "$?"
0
... but it just doesn't do very much.
The standard says that for the -f option, the rm utility should...
Do not prompt for confirmation. Do not write diagnostic messages or modify the exit status in the case of no file operands, or in the case of operands that do not exist. Any previous occurrences of the -i option shall be ignored.
This confirms that it must be possible to run rm -f without any pathname operands and that this is not something that makes rm exit with a diagnostic message nor a non-zero exit status.
This fact is very useful in a script that tries to delete a number of files as
rm -f -- "$@"
where "$@" is a list of pathnames that may or may not be empty, or that may contain pathnames that do not exist.
(rm -f will still generate a diagnostic message and exit with a non-zero exit status if there are permission issues preventing a named file from being removed.)
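A quick demonstration of why that pattern is safe (sketch):

```shell
set --                        # make "$@" an empty list
rm -f -- "$@"                 # nothing to do: exits 0, prints nothing
rm -f -- /nonexistent/file    # operand that does not exist: still exits 0
echo "still here: $?"         # prints: still here: 0
```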
Running the utility with neither option nor pathname operands is an error though:
$ rm
usage: rm [-dfiPRrv] file ...
$ echo "$?"
1
The same holds true for GNU rm (the above shows OpenBSD rm) and other implementations of the same utility, but the exact diagnostic message and the non-zero exit-status may be different (on Solaris the value is 2, and on macOS it's 64, for example).
In conclusion, the GNU rm manual may just be a bit imprecise, as it's true that with a certain option (-f), the pathname operand is optional.
[1] Since the 2016 edition, after resolution of this bug; see the previous edition for reference.
[2] POSIX is the standard that defines what a Unix system is and how it behaves. This standard is published by The Open Group. See also the question "What exactly is POSIX?".
| Why does rm manual say that we can run it without any argument, when this is not true? |
1,404,752,287,000 |
I am emptying out a hard drive on some Linux 4.x OS using this command:
sudo sh -c 'pv -pterb /dev/zero > /dev/sda'
And I opened another tty and started sudo htop and noticed this:
PID USER PRI NI CPU% RES SHR IO_RBYTES IO_WBYTES S TIME+ Command
4598 root 20 0 15.5 1820 1596 4096 17223823 D 1:14.11 pv -pterb /dev/zero
The value for IO_WBYTES seems quite normal, but IO_RBYTES remains at 4 KiB and never changes.
I ran a few other programs, for example
dd if=/dev/zero of=/dev/zero
cat /dev/zero > /dev/zero
and was surprised to see none of them generates a lot of IO_RBYTES or IO_WBYTES.
I think this is not specific to any program, but why don't reads from /dev/zero and writes to /dev/{zero,null} count as I/O bytes?
|
They do count as I/O, but not of the type measured by the fields you’re looking at.
In htop, IO_RBYTES and IO_WBYTES show the read_bytes and write_bytes fields from /proc/<pid>/io, and those fields measure bytes which go through the block layer. /dev/zero doesn’t involve the block layer, so reads from it don’t show up there.
To see I/O from /dev/zero, you need to look at the rchar and wchar fields in /proc/<pid>/io, which show up in htop as RCHAR and WCHAR:
rchar: characters read
The number of bytes which this task has caused to be
read from storage. This is simply the sum of bytes
which this process passed to read(2) and similar system
calls. It includes things such as terminal I/O and is
unaffected by whether or not actual physical disk I/O
was required (the read might have been satisfied from
pagecache).
wchar: characters written
The number of bytes which this task has caused, or
shall cause to be written to disk. Similar caveats
apply here as with rchar.
See man 5 proc and man 1 htop for details.
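You can watch the distinction live by pointing a pure /dev/zero to /dev/null copy at /proc/<pid>/io (a sketch; Linux-specific, field names as in man 5 proc):

```shell
dd if=/dev/zero of=/dev/null bs=64k &   # no block-layer I/O involved at all
pid=$!
sleep 1
grep -E '^(rchar|wchar|read_bytes|write_bytes):' "/proc/$pid/io"
kill "$pid"
```

You should see rchar and wchar climbing rapidly while read_bytes and write_bytes stay at or near zero.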
| Why don't reads from /dev/zero count as IO_RBYTES? |
1,404,752,287,000 |
I am learning command line from a book called "Linux Command Line and Shell Scripting Bible, Second Edition." The book states this:
Some Linux implementations contain a table of processes to start
automatically on bootup. On Linux systems, this table is usually
located in the special file /etc/inittabs.
Other systems (such as the popular Ubuntu Linux distribution) utilize
the /etc/init.d folder, which contains scripts for starting and
stopping individual applications at boot time. The scripts are started
via entries under the /etc/rcX.d folders, where X is a run level.
Probably because I am new to linux, I did not understand what the second paragraph quoted meant. Can someone explain the same in a much plainer language?
|
Let's forget init.d or rcX.d and keep things very simple. Imagine you were writing a program whose sole responsibility is to run or kill other scripts one by one.
Your next problem is to make sure they run in order. How would you do that?
And let's imagine this program looked inside a scripts folder for the scripts to run. To set the scripts' priority you would name them in, let's say, numerical order. This ordering is exactly the relation between init.d and rcX.d.
In other words init.d contains the scripts to run and the rcX.d contains their order to run.
The X value in rcX.d is the run level. This could be loosely translated to the OS's current state.
If you dig inside the rcX.d scripts you will find this formatting:
Xxxabcd
X is replaced with K or S, which stands for whether the script should be killed or started in the current run level
xx is the order number
abcd is the script name (the name itself is irrelevant; the script it points to, back in init.d, is what actually runs)
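The dispatch logic described above can be sketched as a toy script (illustrative only; this is not the real init implementation):

```shell
# run one runlevel directory the way init would:
# K* scripts first with "stop", then S* scripts with "start",
# each set in lexical (and therefore numeric) order
run_rc() {
    dir=$1
    for s in "$dir"/K*; do [ -x "$s" ] && "$s" stop;  done
    for s in "$dir"/S*; do [ -x "$s" ] && "$s" start; done
    return 0
}
run_rc /etc/rc2.d   # e.g. runlevel 2
```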
| What's the connection between "/etc/init.d" and "/etc/rcX.d" directories in Linux? |
1,404,752,287,000 |
I need some clarification/confirmation/elaboration on the different roles DAC, ACL and MAC play in Linux file security.
After some research from the documentation, this is my understanding of the stack:
SELinux must allow you access to the file object.
If the file's ACLs (e.g., setfacl, getfacl for an ACL mount) explicitly allows/denies access to the object, then no further processing is required.
Otherwise, it is up to the file's permissions (rwxrwxrwx DAC model).
Am I missing something? Are there situations where this is not the case?
|
When a process performs an operation to a file, the Linux kernel performs the check in the following order:
Discretionary Access Control (DAC) or user dictated access control. This includes both classic UNIX style permission checks and POSIX Access Control Lists (ACL). Classical UNIX checks compare the current process UID and GID versus the UID and GID of the file being accessed with regards to which modes have been set (Read/Write/eXecute). Access Control List extends classic UNIX checks to allow more options regarding permission control.
Mandatory Access Control (MAC) or policy based access control. This is implemented using Linux Security Modules (LSM) which are not real modules anymore (they used to be but it was dropped). They enable additional checks based on other models than the classical UNIX style security checks. All of those models are based on a policy describing what kind of operations are allowed for which process in which context.
Here is an example for inodes access (which includes file access) to back my answer with links to an online Linux Cross Reference. The "function_name (filename:line)" given are for the 3.14 version of the Linux kernel.
The function inode_permission (fs/namei.c:449) first checks for read permission on the filesystem itself (sb_permission in fs/namei.c:425), then calls __inode_permission (fs/namei.c:394) to check for read/write/execute permissions and POSIX ACL on an inode in do_inode_permission (fs/namei.c:368) (DAC) and then LSM-related permissions (MAC) in security_inode_permission (security/security.c:550).
There was only one exception to this order (DAC then MAC): it was for the mmap checks. But this has been fixed in the 3.15 version of the Linux kernel (relevant commit).
| What roles do DAC (file permissions), ACL and MAC (SELinux) play in Linux file security? |
1,404,752,287,000 |
Coming from the Windows world, I have found the majority of the folder directory names to be quite intuitive:
\Program Files contains files used by programs (surprise!)
\Program Files (x86) contains files used by 32-bit programs on 64-bit OSes
\Users (formerly Documents and Settings) contains users' files, i.e. documents and settings
\Users\USER\Application Data contains application-specific data
\Users\USER\Documents contains documents belonging to the user
\Windows contains files that belong to the operation of Windows itself
\Windows\Fonts stores font files (surprise!)
\Windows\Temp is a global temporary directory
et cetera. Even if I had no idea what these folders did, I could guess with good accuracy from their names.
Now I'm taking a good look at Linux, and getting quite confused about how to find my way around the file system.
For example:
/bin contains binaries. But so do /sbin, /usr/bin, /usr/sbin, and probably more that I don't know about. Which is which?? What is the difference between them? If I want to make a binary and put it somewhere system-wide, where do I put it?
/media contains external media file systems. But so does /mnt. And neither of them contain anything on my system at the moment; everything seems to be in /dev. What's the difference? Where are the other partitions on my hard disk, like the C: and D: that were in Windows?
/home contains the user files and settings. That much is intuitive, but then, what is supposed to go into /usr? And how come /root is still separate, even though it's a user with files and settings?
/lib contains shared libraries, like DLLs. But so does /usr/lib. What's the difference?
What is /etc? Does it really stand for "et cetera", or something else? What kinds of files should go in there -- global or local? Is it a catch-all for things no one knew where to put, or is there a particular use case for it?
What are /opt, /proc, and /var? What do they stand for and what are they used for? I haven't seen anything like them in Windows*, and I just can't figure out what they might be for.
If anyone can think of other standard places that might be good to know about, feel free to add it to the question; hopefully this can be a good reference for people like me, who are starting to get familiar with *nix systems.
*OK, that's a lie. I've seen similar things in WinObj, but obviously not on a regular basis. I still don't know what these do on Linux, though.
|
Linux distributions use the FHS: http://www.pathname.com/fhs/pub/fhs-2.3.html You can also try man hier.
I'll try to sum up answers your questions off the top of my head, but I strongly suggest that you read through the FHS:
/bin is for non-superuser system binaries
/sbin is for superuser (root) system binaries
/usr/bin & /usr/sbin are for non-critical shared non-superuser or superuser binaries, respectively
/mnt is for temporarily mounting a partition
/media is for mounting many removable media at once
/dev contains your system device files; it's a long story :)
The /usr folder, and its subfolders, can be shared with other systems, so that they will have access to the same programs/files installed in one place. Since /usr is typically on a separate filesystem, it doesn't contain binaries that are necessary to bring the system online.
/root is separate because it may be necessary to bring the system online without mounting other directories which may be on separate partitions/hard drives/servers
Yes, /etc stands for "et cetera". Configuration files for the local system are stored there.
/opt is a place where you can install programs that you download/compile. That way you can keep them separate from the rest of the system, with all of the files in one place.
/proc contains information about the kernel and running processes
/var contains variable size files like logs, mail, webpages, etc.
To access a system, you generally don't need /var, /opt, /usr, /home; some of potentially largest directories on a system.
One of my favorites, which some people don't use, is /srv. It's for data that is being hosted via services like http/ftp/samba. I've see /var used for this a lot, which isn't really its purpose.
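To see how these trees map onto actual filesystems on a given machine, df is a quick check (on many installs several of these paths share one filesystem; on servers they are often separate):

```shell
# Which filesystem backs each major tree? Mount layout varies per system.
df -h / /usr /var 2>/dev/null
```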
| Standard and/or common directories on Unix/Linux OSes |
1,404,752,287,000 |
I am running Lubuntu 11.10 and I want to display my audio / sound card driver from the command line.
|
To find out what sound drivers are loaded, look for drivers containing snd and their dependencies (assuming your sound driver is part of the ALSA framework; most are):
/sbin/lsmod | grep snd
For example, my PC has an Intel sound chip, and amongst the dependencies of the snd module is the snd_hda_intel module, which is my chip's driver.
You can also ask the ALSA tools. And to see the chip identification (independently of any driver), use lspci (or lsusb, if it's an external sound device over USB).
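A minimal sketch combining the above (assumes the ALSA framework; it degrades gracefully on systems with no snd modules loaded):

```shell
# List loaded ALSA driver modules; fall back to a message if none are found.
drivers=$(lsmod 2>/dev/null | awk '$1 ~ /^snd/ {print $1}')
echo "Loaded snd modules: ${drivers:-none found}"
```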
| How do I display the name of my audio card driver from the command line in Lubuntu 11.10? |
1,404,752,287,000 |
I'm trying to make a PCMCIA tuner card work in my headless home server, running Debian Squeeze. Now, as I am having a lot of trouble finding the correct command line to capture, transcode and stream the video to the network using VLC, I decided to go step by step, and work first on local output.
That's where the problem comes in: there seems to be no framebuffer device (/dev/fb0) to access for displaying graphics on the attached screen! And indeed I noticed I don't have the Linux penguin image at boot (didn't pay attention before as screen is attached, but always off, and anyway computer is always on).
As I'm not very familiar with Linux graphics, I would like to understand:
Is this related to my particular hardware (see below)? Or is it specific to Debian Squeeze/ a kernel version/... ?
Is there some driver I need to manually install/load?
Now some general information:
The computer has no dedicated graphic card, but an embedded graphic chipset (Intel G31 Express), embedded on the motherboard (Gigabyte G31M-ES2L)
I don't want to install a full featured X server, just have a framebuffer device for this particular test
Any ideas/comments on the issue?
|
I can address your question, having previously worked with the Linux FB.
How Linux Does Its FB.
First you need to have FrameBuffer support in your kernel, corresponding to your hardware. Most modern distributions have support via kernel modules. It does not matter if your distro comes preconfigured with a boot logo, I don't use one and have FB support.
It does not matter if you have a dedicated graphics card, integrated will work as long as the Hardware Framebuffer is supported.
You don't need X, which is the most enticing aspect of having the FrameBuffer. Some people don't know better, so they advocate some form of X to work around this misunderstanding.
You don't need to work with the FB directly, which many people incorrectly assume. A very awesome library for developing with FrameBuffer is DirectFB it even has some basic acceleration support. I always suggest at least checking it out, if you are starting a full-featured FB based project (Web Browser, Game, GUI ...)
Specific To Your Hardware
Use the VESA generic framebuffer; its module is called vesafb. You can load it, if available, with the command modprobe vesafb. Many distributions ship it disabled; you can check in /etc/modprobe.d/: a blacklist vesafb line might need to be commented out with a #, in a blacklist-framebuffer.conf or other blacklist file.
The Best option, is a Hardware specific KMS driver. The main one for Intel is Intel GMA, not sure what its modules are named. You will need to read up about it from your distro documents. This is the best performing FB option, I personally would always go KMS first if possible.
Use the Legacy Hardware specific FB Drivers, Not recommended as they are sometimes buggy. I would avoid this option, unless last-resort necessary.
I believe this covers all your questions, and should provide the information to get that /dev/fb0 device available. Anything more specific would need distribution details, and if you are somewhat experienced, RTFM should be all you need. (after reading this).
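As a quick sanity check before digging into drivers, this sketch simply tests whether a framebuffer device node exists (loading a module with modprobe requires root):

```shell
# Check for the first framebuffer device node.
if [ -e /dev/fb0 ]; then
    echo "framebuffer device present: /dev/fb0"
else
    echo "no framebuffer device; try e.g.: modprobe vesafb (as root)"
fi
```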
I hope I have helped; you're lucky you're asking about one of my topics! This is a neglected subject on UNIX-SE, as not everybody (knowingly) uses the Linux FrameBuffer.
NOTE: UvesaFB Or VesaFB?
You may have read that people use uvesafb over vesafb, as it had better performance. This WAS generally true, but not in a modern distro with modern hardware. If your graphics hardware supports protected-mode VESA (VESA >= 2.0) and you have a somewhat recent kernel, vesafb is now the better choice.
| No framebuffer device: how to enable it? |
1,404,752,287,000 |
I am trying to send messages from kafka-console-producer.sh, which is
#!/bin/bash
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx512M"
fi
exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleProducer "$@"
I am pasting messages then via Putty terminal. On receive side I see messages truncated approximately to 4096 bytes. I don't see anywhere in Kafka, that this limit is set.
Can this limit be from bash/terminal or Putty?
|
4095 is the limit of the tty line discipline internal editor length on Linux. From the termios(3) man page:
The maximum line length is 4096 chars (including the terminating newline character); lines longer than 4096 chars are truncated. After 4095 characters, input processing (e.g., ISIG and ECHO* processing) continues, but any input data after 4095 characters up to (but not including) any terminating newline is discarded. This ensures that the terminal can always receive more input until at least one line can be read.
See also the corresponding code in the Linux kernel.
For instance, if you enter:
$ wc -cEnter
Enter in the shell's own line editor (readline in the case of bash) submits the line to the shell. As the command line is complete, the shell is ready to execute it, so it leaves its own line editor, puts the terminal device back in canonical (aka cooked) mode, which enables that crude line editor (actually implemented in tty driver in the kernel).
Then, if you paste a 5000 byte line, press Ctrl+D to submit that line, and once again to tell wc you're done, you'll see 4095 as output.
(Note that that limit does not apply to bash's own line editor, you'll see you can paste a lot more data at the prompt of the bash shell).
So if your receiving application reads lines of input from its stdin and its stdin is a terminal device and that application doesn't implement its own line editor (like bash does) and doesn't change the input mode, you won't be able to enter lines longer than 4096 bytes (including the terminating newline character).
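By contrast, the limit does not apply when stdin is a pipe rather than a terminal, which is easy to verify:

```shell
# 5000 spaces plus a newline survive intact through a pipe: prints 5001.
printf '%5000s\n' | wc -c
```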
You could however disable the line editor of the terminal device (with stty -icanon) before you start that receiving application so it reads input directly as you enter it. But then you won't be able to use Backspace / Ctrl + W for instance to edit input nor Ctrl + D to end the input.
If you enter:
$ saved=$(stty -g); stty -icanon icrnl; head -n1 | wc -c; stty "$saved"Enter
paste your 5000 byte long line and press Enter, you'll see 5001.
| Is there any limit on line length when pasting to a terminal in Linux? |
1,404,752,287,000 |
I want to see all bash commands that have been run on a Linux server across multiple user accounts. The specific distribution I'm using is CentOS 5.7. Is there a way to globally search .bash_history files on a server or would it be a more home-grown process of locate | cat | grep? (I shudder just typing that out).
|
Use getent to enumerate the home directories.
getent passwd |
cut -d : -f 6 |
sed 's:$:/.bash_history:' |
xargs -d '\n' grep -s -H -e "$pattern"
If your home directories are in a well-known location, it could be as simple as
grep -e "$pattern" /home/*/.bash_history
Of course, if a user uses a different shell or a different value of HISTFILE, this won't tell you much. Nor will this tell you about commands that weren't executed through a shell, or about aliases and functions and now-removed external commands that were in some user directory early in the user's $PATH. If what you want to know is what commands users have run, you need process accounting or some fancier auditing system; see Monitoring activity on my computer., How to check how long a process ran after it finished?.
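To illustrate the glob approach without touching real accounts, here is a sketch against a simulated home layout (the user names and history contents are made up):

```shell
# Build a throwaway /home-like tree, grep it, clean up.
base=$(mktemp -d)
mkdir -p "$base/alice" "$base/bob"
printf 'ls -la\nrm -rf /tmp/x\n' > "$base/alice/.bash_history"
printf 'sudo reboot\n' > "$base/bob/.bash_history"
grep -H -e 'rm -rf' "$base"/*/.bash_history   # -H prefixes each hit with its file
rm -rf "$base"
```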
| Can I search bash history across all users on a server? |
1,404,752,287,000 |
I want two jobs to run sometime every day, serially, in exactly the order I specify. Will this crontab reliably do what I want?
@daily job1
@daily job2
I'm assuming they run one after the other, but I was unable to find the answer by searching the Web or from any of these manpages: cron(1), crontab(1), crontab(5).
The crontab above obviously won't do what I want if cron runs things scheduled with @daily in parallel or in an unpredictable order.
I know I can simply make one shell script to fire them off in order, I'm just curious how cron is supposed to work (and I'm too lazy to gather test data or read the source code).
Cron is provided by the cron package. OS is Ubuntu 10.04 LTS (server).
|
After a quick glance at the source (in Debian squeeze, which I think is the same version), it does look like entries within a given file and with the same times are executed in order. For this purpose, @daily and 0 0 * * * are identical (in fact @daily is identical to 0 0 * * * in this cron).
I would not rely on this across the board. It's possible that one day someone will decide that cron should run jobs in parallel, to take advantage of these 32-core CPUs that have 31 cores running idle. This might be done when implementing this 20-year old todo item encountered in the cron source:
All of these should be flagged and load-limited; i.e.,
instead of @hourly meaning "0 * * * *" it should mean
"close to the front of every hour but not 'til the
system load is low". (…) (vix, jan90)
It's very easy to write @daily job1; job2 here. If it's important that the jobs execute in order, make it a direct consequence of what you write.
Additionally, making the order explicit removes the risk that a future administrator will reorder the lines thinking that it won't matter.
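For instance (paths hypothetical), a single entry makes both the order and the failure behaviour explicit:

```
# job2 runs only if job1 succeeds:
@daily /usr/local/bin/job1 && /usr/local/bin/job2

# alternatively, job2 runs regardless of job1's outcome:
# @daily /usr/local/bin/job1 ; /usr/local/bin/job2
```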
| Are multiple @daily crontab entries processed in order, serially? |
1,404,752,287,000 |
I have three logical volumes in a single volume group using a single physical volume (the whole existing disk /dev/sda).
I now want to move one of those logical volumes to a new, faster disk, i.e., going from:
/dev/sda
|-vg0-root → mounted to /
|-vg0-foo → mounted to /foo
|-vg0-bar → mounted to /bar
to:
/dev/sda
|-vg0-root → mounted to /
|-vg0-foo → mounted to /foo
/dev/sdb
|-vg1-bar → mounted to /bar
From what I understand I cannot use pvmove or vgsplit because there's only one physical volume in the existing volume group.
What's a good approach to achieve this (preferably online, creating a new volume group for the new disk is not a requirement)?
|
One volume group solution:
pvcreate /dev/sdb
vgextend vg0 /dev/sdb
pvmove -n /dev/vg0/bar /dev/sda /dev/sdb
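After the pvmove completes, you can verify where each LV's extents now live (a hedged sketch; it needs the LVM tools and root, so it degrades gracefully elsewhere):

```shell
# Show PV usage and which devices back each LV in vg0.
if command -v pvs >/dev/null 2>&1; then
    pvs -o +pv_used
    lvs -o +devices vg0
else
    echo "LVM tools not installed"
fi
```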
Two volume group solution:
pvcreate /dev/sdb
vgcreate vg1 /dev/sdb
lvcreate -l100%FREE vg1
mkfs -t ext4 /dev/vg1/lvol1
mount /dev/vg1/lvol1 /mnt
Now the difficult part; all activity on /bar MUST stop:
cd /mnt ; ( cd /bar ; tar cf - * ) | tar xf -
cd /
umount /mnt
mount /dev/vg1/lvol1 /bar
where
pvcreate erases all data on the disk (and prepares it for LVM)
lvcreate should create a logical volume lvol1; you can specify the LV name with -n bar
I use HP-UX syntax for lv, you might have to use /dev/mapper/myvg-mylv syntax
Once you have verified data are OK, in new place:
you can safely delete old /bar
edit /etc/fstab to use new /bar
| Move logical volume to a new physical disk |
1,404,752,287,000 |
I just installed RHEL 6.3 on a Dell 1950 server.
This server has two GBit ports, Gb0 and Gb1.
For some obscure reason, udev chose to name Gb0 eth1 and Gb1 eth0.
This is definitely not good for me and just causes confusion.
So I modified the configuration in /etc/udev/rules.d/70-persistent-net.rules:
# PCI device 0x14e4:0x164c (bnx2)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", \
ATTR{address}=="00:20:19:52:d3:c0", \
ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
# PCI device 0x14e4:0x164c (bnx2)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", \
ATTR{address}=="00:20:19:52:d3:be", \
ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
I just changed the "NAME" field on the file in order to reflect what I want.
I rebooted the server and it didn't work.
In the dmesg log I can read the following :
udev: renamed network interface eth1 to rename5
udev: renamed network interface eth0 to eth1
udev: renamed network interface rename5 to eth0
Any idea on what is wrong here?
Why is udev switching like this? I have another similar server, where I do not have this issue.
|
In my case, the issue is coming from the fact that the mac address for each interface was set in three files :
/etc/udev/rules.d/70-persistent-net.rules
/etc/sysconfig/network-scripts/ifcfg-eth0
/etc/sysconfig/network-scripts/ifcfg-eth1
We need consistency between ifcfg file and net.rules for the mac address.
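For example, the ifcfg files should carry the same MAC addresses as the udev rules above, with device names matching the desired mapping:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:20:19:52:d3:be

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=00:20:19:52:d3:c0
```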
| Udev : renaming my network interface |
1,404,752,287,000 |
On the GNU Project webpage, there's a subsection called "All GNU packages" which lists the various software in the GNU project.
Are there any GNU distributions which use only these packages -- i.e. a "pure" GNU operating system that runs on only GNU packages?
I'm not particularly interested on whether this would be a practical operating system, just if it's theoretically possible to run GNU Hurd with purely the GNU packages. If not, what kind of software must still be implemented to achieve this goal (i.e. what's missing)?
If GNU Hurd is the limiting factor, than if an exception is made for the kernel, would a pure GNU OS be possible using the Linux kernel?
|
The explicit goal of the GNU project is to provide a complete open source/libre/free operating system.
Are there any GNU distributions which use only these packages -- i.e. a "pure" GNU operating system that runs on only GNU packages?
There is a reference here to an official-sounding GNU binary distro based on Hurd which "consists of GNU Mach, the Hurd, the C library and many applications". It may or may not be currently maintained, however, as I couldn't find any other online references to it. But it does sound like it fits your criteria.
I'm not particularly interested on whether this would be a practical operating system, just if it's theoretically possible to run GNU Hurd with purely the GNU packages.
The answer to the previous question implies an obvious answer WRT Hurd. Of course, it might help to define more precisely what would count as a reasonably complete "operating system". I'll provide two definitions:
A collection of software sufficient to boot up to a shell prompt.
A system which fulfills POSIX criteria. This is essentially a stricter version of #1, since the highest level mandatory entity in a POSIX system would be the shell.
This is a little arbitrary, since an operating system designed to fulfill some special purpose might not need a shell at all. However, in that case it would become a more specific question about the nature of the "special purpose".
In any case, the answer is yes, although GNU's implementation of some things may not be 100% perfectly POSIX compliant (and there are a handful of required utilities, such as crontab, which GNU doesn't provide). Here are the potential components:
Kernel (Hurd)
C library (glibc)
Essential utilities (GNU core-utils, etc.)
Shell (bash, which is a GNU project)
I did not include a bootloader, since that is not part of the OS -- but in any case grub is also a GNU project.
| Is it possible to run pure GNU? |
1,404,752,287,000 |
I've been thinking about discontinuing the use of GNU Coreutils on my Linux systems, but to be honest, unlike many other GNU components, I can't think of any alternatives (on Linux). What alternatives are there to GNU coreutils? Will I need more than one package? Links to the project are a must, bonus points for naming distro packages.
Also please don't suggest things unless you know they work on Linux, and can reference instructions. I doubt I'll be switching kernels soon, and I'm much too lazy for anything much beyond a straightforward ./configure; make; make install. I'm certainly not going to hack C for it.
warning: if your distro uses coreutils removing them could break the way your distro functions. However not having them be first in your $PATH shouldn't break things, as most scripts should use absolute paths.
|
busybox the favorite of Embedded Linux systems.
BusyBox combines tiny versions of many common UNIX utilities into a single small executable. It provides replacements for most of the utilities you usually find in GNU fileutils, shellutils, etc. The utilities in BusyBox generally have fewer options than their full-featured GNU cousins; however, the options that are included provide the expected functionality and behave very much like their GNU counterparts. BusyBox provides a fairly complete environment for any small or embedded system.
BusyBox has been written with size-optimization and limited resources in mind. It is also extremely modular so you can easily include or exclude commands (or features) at compile time. This makes it easy to customize your embedded systems. To create a working system, just add some device nodes in /dev, a few configuration files in /etc, and a Linux kernel.
You can pretty much make any coreutil name a link to the busybox binary and it will work. You can also run busybox <command> and it will work. Example: if you're on Gentoo and haven't installed your vi yet, you can run busybox vi filename and you'll be in vi. It's packaged in most distributions:
Arch Linux - community/busybox
Gentoo Linux - sys-apps/busybox
Alpine Linux - based on BusyBox and uClibc, here's an overview
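A small probe, if busybox happens to be installed (the set of available applets varies by build):

```shell
# Run one applet via the busybox multiplexer; any applet name works as the first arg.
if command -v busybox >/dev/null 2>&1; then
    busybox echo "hello from busybox"
else
    echo "busybox not installed"
fi
```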
| Any options to replace GNU coreutils on Linux? |
1,404,752,287,000 |
I'm running CentOS in Linux text mode. When I run the command ls /usr/, the output is too hard to read (dark blue on black). How can I change the text coloring?
|
If you are wanting to change your colours in the console, that is outside X, then you can specify colours in your .bashrc, like so:
if [ "$TERM" = "linux" ]; then
echo -en "\e]P0222222" #black
echo -en "\e]P8222222" #darkgrey
echo -en "\e]P1803232" #darkred
....
fi
Where you are defining black as #222222 See this post for the details: http://phraktured.net/linux-console-colors.html
If you are working in X, then you can customize your setup by defining your colours in your .Xresources like so:
!black
*color0: #3D3D3D
*color8: #5E5E5E
!red
*color1: #8C4665
*color9: #BF4D80
...
and then sourcing this file when you start X, typically from your .xinitrc:
xrdb -merge ~/.Xresources
The Arch Wiki has a page on .Xresources that explains all of the options:
https://wiki.archlinux.org/index.php/Xresources
Another enhancement you can make either in X or not is to specify all of the different filetypes that you would like to colour—and their respective colours in a .dir_colors file, like so:
.xinitrc 01;31
.Xauthority 01;31
.Xmodmap 00;31
.Xresources 01;33
...
To get started, copy /etc/dir_colors to your home directory and make your changes. Then source this from your .bashrc with eval $(dircolors -b ~/.dir_colors) This will allow you fine-grained control over the colours of files and filetypes when you use ls.
You can find (an incredibly detailed and thorough) .dir_colors example file here:
https://github.com/trapd00r/LS_COLORS/blob/master/LS_COLORS
With a combination of all three approaches, you can create a reasonably uniform setup, whether you are working in the console or in X.
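To see the defaults you would be starting from, dircolors can dump its built-in database (a hedged sketch; the entries shown vary by coreutils version):

```shell
# Generate LS_COLORS from the built-in defaults and show a few entries.
if command -v dircolors >/dev/null 2>&1; then
    eval "$(dircolors -b)"
    printf '%s\n' "$LS_COLORS" | tr ':' '\n' | head -n 5
else
    echo "dircolors not available"
fi
```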
| How to colorize output of ls ? |
1,404,752,287,000 |
According to https://www.computerhope.com/unix/udiff.htm
The diff command analyzes two files and prints the lines that are
different.
Can I use the same diff command to compare two strings?
$ more file*
::::::::::::::
file1.txt
::::::::::::::
hey
::::::::::::::
file2.txt
::::::::::::::
hi
$ diff file1.txt file2.txt
1c1
< hey
---
> hi
Instead of saving the content of hey and hi into two different files, can I read it directly?
By the way, there are no files named hey or hi in the example below, which is why I get No such file or directory error messages.
$ diff hey hi
diff: hey: No such file or directory
diff: hi: No such file or directory
|
Yes, you can use diff on two strings, if you make files from them, because diff will only ever compare files.
A shortcut way to do that is using process substitutions in a shell that supports these:
diff <( printf '%s\n' "$string1" ) <( printf '%s\n' "$string2" )
Example:
$ diff <( printf '%s\n' "hey" ) <( printf '%s\n' "hi" )
1c1
< hey
---
> hi
In other shells,
printf '%s\n' "$string1" >tmpfile
printf '%s\n' "$string2" | diff tmpfile -
rm -f tmpfile
In this second example, one file contains the first string, while the second string is given to diff on standard input. diff is invoked with the file containing the first string as its first argument. As its second argument, - signals that it should read standard input (on which the second string will arrive via printf).
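Building on the tmpfile variant, diff's exit status (0 when the inputs match, 1 when they differ) makes string comparison easy to script:

```shell
# Compare two strings and branch on diff's exit status. Prints: different
tmp=$(mktemp)
printf '%s\n' "hey" > "$tmp"
if printf '%s\n' "hi" | diff -q "$tmp" - >/dev/null; then
    echo "same"
else
    echo "different"
fi
rm -f "$tmp"
```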
| Using the diff command to compare two strings? |
1,404,752,287,000 |
One of the functionalities I miss the most from "small embedded" in embedded Linux is interrupts. A signal appears on a specific pin, or another interrupt source is triggered, and whatever was being done inside the CPU gets interrupted, and my interrupt handler function is launched. In Linux everything is buffered: if something happens, the system just goes about its own course, and when (at last) the given thread is brought to the foreground, its wait state expecting the external source ends, and its handler starts.
The closest thing I know are the signals, which can trigger a handler interrupting normal flow of the thread, but still, the handler will not pick up the signal until the kernel brings the thread into foreground, which may be many milliseconds after the signal happened - and triggering the signals isn't as robust either; I need an app or a kernel module to send a signal, I can't just trivially attach it to a GPIO pin.
How could I achieve a functionality similar to hardware interrupts within Linux userspace software - have a specific function launched or specific thread brought to foreground immediately after an externally sourced condition is triggered, without waiting for the process queue to bring my thread to foreground?
If you feel this question is too broad, let's narrow it to a specific example: a Raspberry Pi board receives a signal on one of its GPIO pins (not necessarily arbitrary; if only some pins can do that, that's okay.) I want my userspace application to react to this event within least time possible, be it bringing it out of wait state, launching a handler function or any equivalent mechanism, but above all not waiting for the task queue to cycle through all pending processes before the handler is brought to foreground, but trigger it ASAP. (and specifically, when there is no signal, not leaving the system locked forever with the handler process occupying 100% CPU time polling the input and never yielding to the OS.) Is there such a mechanism?
|
If I understand your question this articled sounds like what you're looking for. The article is titled: Device drivers in user space.
excerpt
UIO drivers
Linux provides a standard UIO (User I/O) framework for developing user-space-based device drivers. The UIO framework defines a small kernel-space component that performs two key tasks:
a. Indicate device memory regions to user space.
b. Register for device interrupts and provide interrupt indication to user space.
The kernel-space UIO component then exposes the device via a set of sysfs entries like /dev/uioXX. The user-space component searches for these entries, reads the device address ranges and maps them to user space memory.
The user-space component can perform all device-management tasks including I/O from the device. For interrupts however, it needs to perform a blocking read() on the device entry, which results in the kernel component putting the user-space application to sleep and waking it up once an interrupt is received.
I've never done this before so I can not offer you much more guidance than this, but thought it might be helpful towards your quest.
| Can I achieve functionality similar to interrupts in Linux userspace? |
1,404,752,287,000 |
My employer is located in Europe (CET), and therefore we use daylight saving time, which requires shifting an hour hence and forth twice a year. Our servers are running in the cloud in different locations. The employee who set up all the infrastructure is gone. He decided to use UTC as the system time zone on all servers (currently Ubuntu 18.04, 20.04, and 22.04).
This is not ideal, because you have to mentally add 1/2 hours to every date you see in a log file, depending on the time of the year (+2 hours in the summer, +1 hour in the winter). The timing of some cronjobs also needs to be adjusted twice a year, because the tasks should be run at noon CET.
Is there any good reason to (still) use UTC as the system's time zone? Or should I rather switch to CET, so that my cronjobs and logfiles better align to the wall clock?
|
I can really understand your log-reading pain, but I wouldn't want to discuss times in log files with my American and German colleagues during the ca. 2 weeks a year when daylight saving time has taken effect on one continent, but not the other¹. Personally, while that certainly isn't as relevant for services that have mostly local usage (e.g. a print server – not like someone in Arizona will print on my Southern German printer), I've found anything but UTC timestamps in mail server logs vastly confusing.
Regarding your cron jobs: I'm slowly trying to wean myself off good ole crontab, in favor of systemd timer units. They have an OnCalendar= field, and that takes a timezone specification! So, you can still say, hey, awesome, at 7:00 AM in Berlin, kick off that RFC 2324 transfer, or whatever.
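For instance, a timer unit (unit name and schedule hypothetical) can fire at 07:00 Berlin time regardless of the system's own zone; note that the timezone suffix in OnCalendar= needs a reasonably recent systemd:

```
# /etc/systemd/system/transfer.timer  (hypothetical unit)
[Unit]
Description=Daily transfer at 07:00 Berlin time

[Timer]
OnCalendar=*-*-* 07:00:00 Europe/Berlin
Persistent=true

[Install]
WantedBy=timers.target
```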
All in all, yeah, for a server, stay in UTC. But, in all honesty, I think consistency is more important than "Müller's perceived administrative beauty" here. If the rest of your admin scope / adjacent teams and users expect CET, then by all means: Go CET. Things should work.
For levity:
¹ I might be an outlier. My mechanical wristwatch is in UTC as well.
| Should I (still) use UTC for all my servers? |
1,404,752,287,000 |
What are the bare minimum components for a Linux OS to be functional, and that I can use as a base to expand and improve as I learn Linux and my understanding and needs grow?
|
If you mean learn Linux as in getting to know the source code, you may want to try Linux from scratch
| What is the smallest possible Linux implementation? [closed] |
1,404,752,287,000 |
I used mount to show mounted drives, I don't want to see the not so interesting ones (i.e. non-physical). So I used to have a script mnt that did:
mount | grep -Ev 'type (proc|sysfs|tmpfs|devpts) '
under Ubuntu 8.04 and showed me ext3 and reiserfs mount points only. That line is actually commented out and now I use (for Ubuntu 12.04):
mount | grep -Ev 'type (proc|sysfs|tmpfs|devpts|debugfs|rpc_pipefs|nfsd|securityfs|fusectl|devtmpfs) '
to only show my ext4 and zfs partitions (I dropped using reiserfs).
Now I am preparing for Ubuntu 14.04 and the script has to be extended again (cgroup,pstore). Is there a better way to do this without having to extend the script? I am only interested in physical discs that are mounted and mounted network drives (nfs,cifs).
|
The -t option for mount also works when displaying mount points and takes a comma separated list of filesystem types:
mount -t ext3,ext4,cifs,nfs,nfs4,zfs
I am not sure if that is a better solution. If you start using (e.g. btrfs) and forget to add that to the list you will not see it and maybe not miss it. I'd rather actively filter out any new "uninteresting" filesystem when they pop up, even though that list is getting long.
You can actively try to only grep the interesting mount points similar to what @Graeme proposed, but since you are interested in NFS/CIFS mounts as well (which don't start with /), you should do:
mount | grep -E --color=never '^(/|[[:alnum:]\.-]*:/)'
( the --color is necessary to suppress coloring of the initial / on the lines found). As Graeme pointed out name based mounting of NFS shares should be allowed as well. The pattern either selects lines starting with a / or any combination of "a-zA-Z0-9." followed by :/ (for NFS mounts).
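On systems with util-linux, findmnt offers the same type filtering without grep (a hedged sketch; the type list mirrors the mount -t example above):

```shell
# Filter mounts by filesystem type; findmnt exits non-zero when nothing matches.
if command -v findmnt >/dev/null 2>&1; then
    findmnt -t ext4,xfs,btrfs,zfs,nfs,nfs4,cifs -o TARGET,SOURCE,FSTYPE || echo "no matching mounts"
else
    echo "findmnt not available"
fi
```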
| Showing only "interesting" mount points / filtering non interesting types |
1,404,752,287,000 |
I am trying to (as close as possibly) atomically change a symlink. I've tried:
ln -sf other_dir existing_symlink
That just put the new symlink in the directory that existing_symlink pointed to.
ln -sf other_dir new_symlink
mv -f new_symlink existing_symlink
That did the same thing: it moved the symlink into the directory.
cp -s other_dir existing_symlink
It refuses because it's a directory.
I've read that mv -T is was made for this, but busybox doesn't have the -T flag.
|
I don't see how you can get atomic operation. The man page for symlink(2) says it gives EEXIST if the target already exists. If the kernel doesn't support atomic operation, your userland limitations are irrelevant.
I also don't see how mv -T helps, even if you have it. Try it on a regular Linux box, one with GNU mv:
$ mkdir a b
$ ln -s a z
$ mv -T b z
mv: cannot overwrite non-directory `z' with directory `b'
I think you're going to have to do this in two steps: remove the old symlink and recreate it.
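The two-step swap in a scratch directory (note the brief window in which the name does not exist):

```shell
# Recreate the symlink pointing at the new directory.
tmp=$(mktemp -d)
cd "$tmp"
mkdir a b
ln -s a z             # z -> a
rm -f z && ln -s b z  # non-atomic: z is briefly absent
readlink z            # prints: b
cd / && rm -rf "$tmp"
```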
| How does one atomically change a symlink to a directory in busybox? |
1,306,859,455,000 |
I've just installed a Fedora 19 on VMware workstation 9.
The default network device is "ens33" instead of "eth0" on RHEL.
The reason I have to use "eth0" is that the license component of one of our products has be to be linked with "eth0".
There are some posts discussing about similar issues, most of which are for older OS.
I haven't found one that exactly match my situation.
|
The easiest way to restore the old way Kernel/modules/udev rename your ethernet interfaces is supplying these kernel parameters to Fedora 19:
net.ifnames=0
biosdevname=0
To do so follow this steps:
Edit /etc/default/grub
At the end of GRUB_CMDLINE_LINUX line append "net.ifnames=0
biosdevname=0"
Save the file
Type "grub2-mkconfig -o /boot/grub2/grub.cfg"
Type "reboot"
If you didn't supply these parameters during the installation, you will probably need to adjust and/or rename interface files at /etc/sysconfig/network-scripts/ifcfg-*.
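The resulting line in /etc/default/grub would look something like this (the pre-existing options vary per install):

```
GRUB_CMDLINE_LINUX="rhgb quiet net.ifnames=0 biosdevname=0"
```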
Up to Fedora 18, just biosdevname=0 was enough.
As an example, on a certain machine, after exhaustive testing, I got:
-No parameters: NIC identified as "enp5s2".
-Parameter biosdevname=0: NIC identified as "enp5s2".
-Parameter net.ifnames=0: NIC identified as "em1".
-Parameter net.ifnames=0 AND biosdevname=0: NIC identified as "eth0".
| How can I change the default "ens33" network device to old "eth0" on Fedora 19? |