| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,354,072,861,000 |
The following happens on different Linuces:
When I'm in a virtual console and hold Alt while pressing ← or →, the virtual ttys cycle. This is really annoying, because I'm using the fish shell, which also uses this key combination. I could remap fish's shortcuts, but I don't want to. Instead I want to disable or remap the Linux function.
How can I disable or change the tty-cycling-key-combo?
|
You can use the loadkeys command to remap keys on the Linux console. The following lines define the key bindings to switch consoles (on a PC keyboard):
alt keycode 105 = Decr_Console
alt keycode 106 = Incr_Console
Load your own keymap file that overrides these bindings with an escape sequence that fish recognizes. To make a key send an escape sequence, you need to bind it to a key name of the form FNUMBER and define a character sequence for FNUMBER.
alt keycode 105 = F105
alt keycode 106 = F106
string F105 = "\033\033[D"
string F106 = "\033\033[C"
Different distributions (and sometimes different packages for console support) store the system boot-time keymap in different locations under /etc. Look for a file called *.kmap, *.kmap.gz, *.map or *.map.gz under /etc, or consult your distribution's manual. Some distributions store the keymap's name in /etc instead and put the actual keymap elsewhere; look for a keymap-related setting under /etc/sysconfig or another configuration directory.
You can either write your own keymap and use include "/path/to/foo.map" to reference the system keymap, or arrange to load your own keymap containing just the settings you want to change during the boot process.
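Putting the pieces together, the override could look like this (a sketch; the file path is hypothetical, and loadkeys must be run as root on a virtual console):

```shell
# Write an override keymap that makes Alt+Left/Right send ESC ESC [D /
# ESC ESC [C (which fish can bind) instead of switching consoles.
# Keycodes 105/106 are for a PC keyboard.
cat > /tmp/noconsoleswitch.map <<'EOF'
alt keycode 105 = F105
alt keycode 106 = F106
string F105 = "\033\033[D"
string F106 = "\033\033[C"
EOF
# Then, as root on a virtual console:  loadkeys /tmp/noconsoleswitch.map
```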
| How to disable Alt-Arrow switching of Virtual Consoles? |
1,354,072,861,000 |
I have noticed that subsequent runs of grep on the same query (and also a different query, but on the same file) are much faster than the first run (the effect is easily noticeable when searching through a big file).
This suggests that grep uses some sort of caching of the structures used for search, but I could not find a reference on the Internet.
What mechanism enables grep to return results faster in subsequent searches?
|
Not grep as such: the operating system keeps recently read file data in memory (the page cache), so subsequent runs find the file's blocks already cached, and grep is effectively searching in memory instead of on disk.
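This is easy to observe (a sketch, assuming a GNU userland; the file path is arbitrary):

```shell
# Build a test file, then time two identical searches: the second run
# typically finishes much faster because the file's blocks are already
# in the kernel's page cache.
seq 1 500000 > /tmp/bigfile        # a few MB of test data
echo 'needle' >> /tmp/bigfile
time grep -c needle /tmp/bigfile   # first run: reads from disk
time grep -c needle /tmp/bigfile   # second run: served from memory
```

On Linux you can empty the cache (as root) with `echo 3 > /proc/sys/vm/drop_caches` to reproduce the slow first run.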
| Does grep use a cache to speed up the searches? |
1,354,072,861,000 |
I just know that Interrupt is a hardware signal assertion caused in a processor pin. But I would like to know how Linux OS handles it.
What all are the things that happen when an interrupt occurs?
|
Here's a high-level view of the low-level processing. I'm describing a simple typical architecture; real architectures can be more complex or differ in ways that don't matter at this level of detail.
When an interrupt occurs, the processor looks if interrupts are masked. If they are, nothing happens until they are unmasked. When interrupts become unmasked, if there are any pending interrupts, the processor picks one.
Then the processor executes the interrupt by branching to a particular address in memory. The code at that address is called the interrupt handler. When the processor branches there, it masks interrupts (so the interrupt handler has exclusive control) and saves the contents of some registers in some place (typically other registers).
The interrupt handler does what it must do, typically by communicating with the peripheral that triggered the interrupt to send or receive data. If the interrupt was raised by the timer, the handler might trigger the OS scheduler, to switch to a different thread. When the handler finishes executing, it executes a special return-from-interrupt instruction that restores the saved registers and unmasks interrupts.
The interrupt handler must run quickly, because it's preventing any other interrupt from running. In the Linux kernel, interrupt processing is divided in two parts:
The “top half” is the interrupt handler. It does the minimum necessary, typically communicate with the hardware and set a flag somewhere in kernel memory.
The “bottom half” does any other necessary processing, for example copying data into process memory, updating kernel data structures, etc. It can take its time and even block waiting for some other part of the system since it runs with interrupts enabled.
As usual on this topic, for more information, read Linux Device Drivers; chapter 10 is about interrupts.
| How is an Interrupt handled in Linux? |
1,354,072,861,000 |
After the last upgrade on:
Operating System: Debian GNU/Linux buster/sid
Kernel: Linux 4.18.0-2-686-pae
Architecture: x86
/usr/lib/tracker/tracker-store eats a huge load of CPU.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7039 nath 20 0 96136 24460 11480 R 100,0 1,3 0:01.76 tracker-store
When I run tracker daemon I get:
Miners:
17 Nov 2018, 21:17:06:  ✗  File System - Not running or is a disabled plugin
17 Nov 2018, 21:17:06:  ✗  Applications - Not running or is a disabled plugin
17 Nov 2018, 21:17:06:  ✗  Extractor - Not running or is a disabled plugin
I thought I disabled all tracker activities, what is it doing?
The fan is going like crazy and a reboot does not improve the situation.
|
After having tracker-store running at almost 100% CPU, almost all the time, for 7 days now, it seems I have found an easy fix:
tracker reset --hard
CAUTION: This process may irreversibly delete data.
Although most content indexed by Tracker can be safely reindexed, it can't be assured that this is the case for all data. Be aware that you may be incurring in a data loss situation, proceed at your own risk.
Are you sure you want to proceed? [y|N]:
/usr/lib/tracker/tracker-store process is gone, fan is spinning down, and everything is quiet after a week. After a reboot tracker-store still stays quiet.
Update for Tracker3:
tracker3 reset -s -r
| /usr/lib/tracker/tracker-store causes very heavy CPU load on Debian "Buster" |
1,354,072,861,000 |
For this question, let's consider a bash shell script, though this question must be applicable to all types of shell script.
When someone executes a shell script, does Linux load all the script at once (into memory maybe) or does it read script commands one by one (line by line)?
In other words, if I execute a shell script and delete it before the execution completes, will the execution be terminated or will it continue as it is?
|
If you use strace you can see how a shell script is executed when it's run.
Example
Say I have this shell script.
$ cat hello_ul.bash
#!/bin/bash
echo "Hello Unix & Linux!"
Running it using strace:
$ strace -s 2000 -o strace.log ./hello_ul.bash
Hello Unix & Linux!
$
Taking a look inside the strace.log file reveals the following.
...
open("./hello_ul.bash", O_RDONLY) = 3
ioctl(3, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 0x7fff0b6e3330) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
read(3, "#!/bin/bash\n\necho \"Hello Unix & Linux!\"\n", 80) = 40
lseek(3, 0, SEEK_SET) = 0
getrlimit(RLIMIT_NOFILE, {rlim_cur=1024, rlim_max=4*1024}) = 0
fcntl(255, F_GETFD) = -1 EBADF (Bad file descriptor)
dup2(3, 255) = 255
close(3)
...
Once the file's been read in, it's then executed:
...
read(255, "#!/bin/bash\n\necho \"Hello Unix & Linux!\"\n", 40) = 40
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 3), ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc0b38ba000
write(1, "Hello Unix & Linux!\n", 20) = 20
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
read(255, "", 40) = 0
exit_group(0) = ?
In the above we can clearly see that the entire script appears to be read in as a single entity and then executed thereafter. So it would "appear", at least in Bash's case, that it reads the file in and then executes it. You'd think you could therefore edit the script while it's running?
NOTE: Don't, though! Read on to understand why you shouldn't mess with a running script file.
What about other interpreters?
But your question is slightly off. It's not Linux that's necessarily loading the contents of the file; it's the interpreter that's loading them, so it's really up to the interpreter's implementation whether it loads the file entirely, in blocks, or line by line.
So why can't we edit the file?
If you use a much larger script, however, you'll notice that the above test is a bit misleading. In fact most interpreters load their files in blocks. This is pretty standard with many Unix tools: they load a block of a file, process it, then load another block. You can see this behavior in this U&L Q&A that I wrote up a while ago regarding grep, titled: How much text does grep/egrep consume each time?.
Example
Say we make the following shell script.
$ (
echo '#!/bin/bash';
for i in {1..100000}; do printf "%s\n" "echo \"$i\""; done
) > ascript.bash;
$ chmod +x ascript.bash
Resulting in this file:
$ ll ascript.bash
-rwxrwxr-x. 1 saml saml 1288907 Mar 23 18:59 ascript.bash
Which contains the following type of content:
$ head -3 ascript.bash ; echo "..."; tail -3 ascript.bash
#!/bin/bash
echo "1"
echo "2"
...
echo "99998"
echo "99999"
echo "100000"
Now when you run this using the same technique above with strace:
$ strace -s 2000 -o strace_ascript.log ./ascript.bash
...
read(255, "#!/bin/bash\necho \"1\"\necho \"2\"\necho \"3\"\necho \"4\"\necho \"5\"\necho \"6\"\necho \"7\"\necho \"8\"\necho \"9\"\necho \"10\"\necho
...
...
\"181\"\necho \"182\"\necho \"183\"\necho \"184\"\necho \"185\"\necho \"186\"\necho \"187\"\necho \"188\"\necho \"189\"\necho \"190\"\necho \""..., 8192) = 8192
You'll notice that the file is being read in 8 KB increments, so Bash and other shells likely don't load a file in its entirety; rather, they read it in blocks.
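A small experiment shows the consequence (a sketch; paths are arbitrary, and details can vary by shell version). The script below rewrites itself as its first action, yet the already-buffered echo still runs with the original text: the whole script fits within one 8 KB read, and sed -i replaces the file with a new inode anyway, so the shell's open file descriptor never sees the edit.

```shell
cat > /tmp/selfedit.sh <<'EOF'
#!/bin/bash
sed -i 's/original/modified/' "$0"
echo original
EOF
chmod +x /tmp/selfedit.sh
/tmp/selfedit.sh > /tmp/selfedit.out
cat /tmp/selfedit.out          # the buffered copy still said "original"
grep 'echo' /tmp/selfedit.sh   # but the file on disk now says "echo modified"
```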
References
The #! magic, details about the shebang/hash-bang mechanism on various Unix flavours
| How Does Linux deal with shell scripts? |
1,354,072,861,000 |
When adding a new user, how is the string validated?
I suppose there is a regular expression. What is that regular expression?
|
The general rule is that a username must be at most 32 characters long. Beyond that, it depends on your distribution what counts as a valid username.
In Debian, shadow-utils 4.1, there is a is_valid_name function in chkname.c:
static bool is_valid_name (const char *name)
{
/*
* User/group names must match [a-z_][a-z0-9_-]*[$]
*/
if (('\0' == *name) ||
!((('a' <= *name) && ('z' >= *name)) || ('_' == *name))) {
return false;
}
while ('\0' != *++name) {
if (!(( ('a' <= *name) && ('z' >= *name) ) ||
( ('0' <= *name) && ('9' >= *name) ) ||
('_' == *name) ||
('-' == *name) ||
( ('$' == *name) && ('\0' == *(name + 1)) )
)) {
return false;
}
}
return true;
}
And the length of username was checked before:
bool is_valid_user_name (const char *name)
{
/*
* User names are limited by whatever utmp can
* handle.
*/
if (strlen (name) > USER_NAME_MAX_LENGTH) {
return false;
}
return is_valid_name (name);
}
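The C logic above corresponds, roughly, to the following check with a POSIX extended regular expression (a sketch; the 32-character limit matches the usual USER_NAME_MAX_LENGTH but is distribution-specific):

```shell
# Mirror of is_valid_name(): first char in [a-z_], the rest in
# [a-z0-9_-], optionally ending in '$' (commonly used for Samba
# machine accounts).
is_valid_name() {
    [ "${#1}" -le 32 ] && printf '%s\n' "$1" | grep -Eq '^[a-z_][a-z0-9_-]*\$?$'
}
is_valid_name 'bob'   && echo 'bob: valid'
is_valid_name '1bob'  || echo '1bob: invalid'
is_valid_name 'user$' && echo 'user$: valid'
```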
| What is the regex to validate Linux users? |
1,354,072,861,000 |
Are there any (good known, reliable) file systems on Linux that store the creation time of files and directories in the i-node table?
If there are, is the "changed" time replaced by the creation time of an i-node in a stat call?
|
The ext4 file system does store the creation time (crtime) in its inodes, and stat -c %W myfile can show it to you (it prints 0 or - when the birth time is unavailable). Note that the "changed" time is not replaced: ctime remains the inode change time and is reported separately from the birth time.
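For example (a sketch; seeing a real birth time requires a filesystem, kernel and coreutils recent enough to support statx, otherwise %W falls back to 0 or -):

```shell
# A newly created file on ext4 (also xfs/btrfs) reports both a birth
# time and a separate inode change time.
touch /tmp/newfile
stat -c 'birth: %W  change: %Z' /tmp/newfile
```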
| What file systems on Linux store the creation time? |
1,354,072,861,000 |
Many new laptop and desktop computers do not have 9-pin/25-pin serial ports. Why do many Linux distributions still contain /dev/ttyS0, dev/ttyS1 device files?
Since udev can create the device files dynamically, why are /dev/ttyS0, /dev/ttyS1 still created statically? Each time I boot up, /dev/ttyS0 and /dev/ttyS1 are in there.
By the way: I am using Debian 7.0.
|
These /dev nodes appear because the standard PC serial port driver is compiled into the kernel you're using, and it is finding UARTs. That causes /sys/devices/platform/serial8250 (or something compatible) to appear, so udev creates the corresponding /dev nodes.
These UARTs are most likely one of the many features of your motherboard's chipset. Serial UARTs in the chipset are quite common still, even though it is becoming less and less common for a DB-9 connector to be attached to these IC UART pins.
On some motherboards, there is a header connector for each serial port, and you have to buy an adapter cable if you want to route that connector to the back of the PC.
Other motherboards using the same chipset might not even expose the header connector, even though the feature is available in silicon, purely to save a bit of PCB space and a few cents for the header connector.
A few serial UARTs add negligible cost to a mass-produced PC chipset IC, whereas it adds a few dollars to the final retail cost of a motherboard to run a DB-9 connector out to the board edge. There is also a cost in PCB space; space at the board edge is especially precious.
There is no standard way to probe for the existence of a device connected to an RS-232 serial port.
Contrast USB, where the mere presence of a port on the motherboard doesn't cause a /dev node to be created, but plugging a device in does, because there is a fairly complex negotiation between the device and the host OS. In effect, the device announces itself to the OS, so udev can react by creating an appropriate /dev node for the device.
| Why do some Linux distributions still have /dev/ttyS0, ttyS1, etc., even though newer computers don't have such a serial port? |
1,354,072,861,000 |
I am looking for a way to mount a ZIP archive as a filesystem so that I can transparently access files within the archive. I only need read access -- the ZIP will not be modified. RAM consumption is important since this is for a (resource constrained) embedded system. What are the available options?
|
fuse-zip is an option and claims to be faster than the competition.
# fuse-zip -r archivetest.zip /mnt
archivemount is another:
# archivemount -o readonly archivetest.zip /mnt
Both will probably need to open the whole archive, therefore won't be particularly quick. Have you considered extracting the ZIP to a HDD or USB-stick beforehand and simply mounting that read-only?
There are also other libraries like fuse-archive and ratarmount which supposedly are more performant under certain situations and provide additional features.
| Mount zip file as a read-only filesystem |
1,354,072,861,000 |
I want to check, from the linux command line, if a given cleartext password is the same of a crypted password on a /etc/shadow
(I need this to authenticate web users. I'm running an embedded linux.)
I have access to the /etc/shadow file itself.
|
You can easily extract the encrypted password with awk. You then need to extract the prefix $algorithm$salt$ (assuming that this system isn't using the traditional DES, which is strongly deprecated because it can be brute-forced these days).
correct=$(</etc/shadow awk -v user=bob -F : 'user == $1 {print $2}')
prefix=${correct%"${correct#\$*\$*\$}"}
For password checking, the underlying C function is crypt, but there's no standard shell command to access it.
On the command line, you can use a Perl one-liner to invoke crypt on the password.
supplied=$(echo "$password" |
perl -e '$_ = <STDIN>; chomp; print crypt($_, $ARGV[0])' "$prefix")
if [ "$supplied" = "$correct" ]; then …
Since this can't be done in pure shell tools, if you have Perl available, you might as well do it all in Perl. (Or Python, Ruby, … whatever you have available that can call the crypt function.) Warning, untested code.
#!/usr/bin/env perl
use warnings;
use strict;
my @pwent = getpwnam($ARGV[0]);
if (!@pwent) {die "Invalid username: $ARGV[0]\n";}
my $supplied = <STDIN>;
chomp($supplied);
if (crypt($supplied, $pwent[1]) eq $pwent[1]) {
exit(0);
} else {
print STDERR "Invalid password for $ARGV[0]\n";
exit(1);
}
On an embedded system without Perl, I'd use a small, dedicated C program. This is meant to illustrate the necessary steps, not as a robust implementation! Compile with cc -o check_password check_password.c -lcrypt; it must run as root to read the shadow database.
/* Usage: echo password | check_password username */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <crypt.h>
#include <pwd.h>
#include <shadow.h>
#include <sys/types.h>
int main(int argc, char *argv[]) {
    char password[100];
    struct spwd *shadow_entry;
    char *p, *correct, *supplied, *salt;
    if (argc < 2) return 2;
    /* Read the password from stdin and strip the trailing newline */
    p = fgets(password, sizeof(password), stdin);
    if (p == NULL) return 2;
    p = strchr(password, '\n');
    if (p != NULL) *p = 0;
    /* Read the correct hash from the shadow entry */
    shadow_entry = getspnam(argv[1]);
    if (shadow_entry == NULL) return 1;
    correct = shadow_entry->sp_pwdp;
    /* Extract the $algorithm$salt$ prefix into a writable copy */
    salt = strdup(correct);
    if (salt == NULL) return 2;
    p = strchr(salt + 1, '$');
    if (p == NULL) return 2;
    p = strchr(p + 1, '$');
    if (p == NULL) return 2;
    p[1] = 0;
    /* Encrypt the supplied password with the salt and compare the results */
    supplied = crypt(password, salt);
    free(salt);
    if (supplied == NULL) return 2;
    return !!strcmp(supplied, correct);
}
A different approach is to use an existing program such as su or login. In fact, if you can, it would be ideal to arrange for the web application to perform whatever it needs via su -c somecommand username. The difficulty here is to feed the password to su; this requires a terminal. The usual tool to emulate a terminal is expect, but it's a big dependency for an embedded system. Also, while su is in BusyBox, it's often omitted because many of its uses require the BusyBox binary to be setuid root. Still, if you can do it, this is the most robust approach from a security point of view.
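One more alternative, if OpenSSL 1.1.1 or newer happens to be on the system: its passwd subcommand implements the SHA-512 crypt scheme (-6), which covers the common $6$ shadow entries. A sketch, using a generated stand-in for the shadow field:

```shell
password='s3cret'
# Stand-in for the second field of /etc/shadow ($6$<salt>$<hash>);
# in real use you would extract it with awk as shown above.
correct=$(openssl passwd -6 -salt examplesalt "$password")
# Re-derive the hash from the candidate password and the stored salt:
salt=$(printf '%s' "$correct" | cut -d'$' -f3)
supplied=$(openssl passwd -6 -salt "$salt" "$password")
[ "$supplied" = "$correct" ] && echo 'password OK'
```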
| How to check password with Linux? |
1,354,072,861,000 |
For example, I created a named pipe like the following:
mknod myPipe p
And I read from it from some process (for example, some server). For example purposes, I used tail:
tail -f myPipe
If several client processes write some messages into it (for example, echo "msg" >> myPipe, is there some chance that messages will get interleaved, like this:
<beginning of message1><message2><ending of message1>
Or is the process of writing to named pipe is atomic?
|
It depends on how much each process is writing (assuming your OS is POSIX-compliant in this regard). From write():
Write requests to a pipe or FIFO shall be handled in the same way as a regular file with the following exceptions:
[...]
Write requests of {PIPE_BUF} bytes or less shall not be interleaved with data from other processes doing writes on the same pipe. Writes of greater than {PIPE_BUF} bytes may have data interleaved, on arbitrary boundaries, with writes by other processes, whether or not the O_NONBLOCK flag of the file status flags is set.
Also in the Rationale section regarding pipes and FIFOs:
Atomic/non-atomic: A write is atomic if the whole amount written in one operation is not interleaved with data from any other process. This is useful when there are multiple writers sending data to a single reader. Applications need to know how large a write request can be expected to be performed atomically. This maximum is called {PIPE_BUF}. This volume of POSIX.1-2008 does not say whether write requests for more than {PIPE_BUF} bytes are atomic, but requires that writes of {PIPE_BUF} or fewer bytes shall be atomic.
The value of PIPE_BUF is defined by each implementation, but the minimum is 512 bytes (see limits.h). On Linux, it's 4096 bytes (see pipe(7)).
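You can query the limit on a live system with getconf (the path argument selects the filesystem the pipe would live on):

```shell
# POSIX guarantees at least 512 bytes; Linux reports 4096.
getconf PIPE_BUF /
```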
| What are guarantees for concurrent writes into a named pipe? |
1,354,072,861,000 |
I am looking for an explanation of what happens in Linux when this key combination is pressed to change the current terminal. In particular, what software component intercepts this key combination and changes the terminal? Is it the kernel? If it is the kernel, could you provide the location of the source file which handles this?
Edit:
I want to understand how this works in both a graphical (X11) and text-based environment.
|
It is the kernel. Keep in mind that the keyboard is hardware, and everything that happens there passes through the kernel. In the case of VT switching, the kernel handles the event completely itself and does not pass anything on to userspace. (However, I believe there is an ioctl-related mechanism by which userspace programs can be notified of a switch involving them, and perhaps affect it, which X no doubt uses.)
The kernel has a keymap built into it; this can be modified while running with loadkeys, and viewed with dumpkeys:
[...]
keycode 59 = F1 F13 Console_13 F25
alt keycode 59 = Console_1
control alt keycode 59 = Console_1
keycode 60 = F2 F14 Console_14 F26
alt keycode 60 = Console_2
control alt keycode 60 = Console_2
keycode 61 = F3 F15 Console_15 F27
alt keycode 61 = Console_3
control alt keycode 61 = Console_3
[...]
The kernel source contains a default keymap file which looks exactly like this; for 3.12.2 it's src/drivers/tty/vt/defkeymap.map. You'll also notice there is a corresponding defkeymap.c file (this can be generated with loadkeys --mktable). The handling is in keyboard.c (all these files are in the same directory) which calls set_console() from vt.c:
» grep set_console *.c
keyboard.c: set_console(last_console);
keyboard.c: set_console(i);
keyboard.c: set_console(i);
keyboard.c: set_console(value);
vt.c:int set_console(int nr)
vt_ioctl.c: set_console(arg);
I edited some hits out of that list; you can see the function signature on the second last line.
So these are the things involved in the switching. If you look at the sequence of calls, eventually you come back to kbd_event() in keyboard.c. This is registered as an event handler for the module:
(3.12.2 drivers/tty/vt/keyboard.c line 1473)
MODULE_DEVICE_TABLE(input, kbd_ids);
static struct input_handler kbd_handler = {
.event = kbd_event, <--- function pointer HERE
.match = kbd_match,
.connect = kbd_connect,
.disconnect = kbd_disconnect,
.start = kbd_start,
.name = "kbd",
.id_table = kbd_ids,
};
int __init kbd_init(void)
{
[...]
error = input_register_handler(&kbd_handler);
Hence, kbd_event() should be called when something bubbles up from the actual hardware driver (probably something from drivers/hid/ or drivers/input/). However, you won't see it referred to as kbd_event outside of that file, since it is registered via a function pointer.
Some resources for scrutinizing the kernel
The Linux Cross Reference Identifier Search is a great tool.
The Interactive Linux Kernel Map is an interesting graphical front end to the cross reference tool.
There are a few historical archives of the massive Linux Kernel Mailing List (LKML), which goes back to at least 1995; some of them are not maintained and have broken search features, but the gmane one seems to work very well. People have asked a lot of questions on the mail list and it is a primary means of communication amongst the developers as well.
You can inject your own printk lines into the source as a simple means of tracing (not all of the standard C lib can be used in kernel code, including printf from stdio). printk stuff ends up in syslog.
Wolfgang Mauerer wrote a great big book on the 2.6 kernel, Professional Linux Kernel Architecture, which goes through a lot of the source. Greg Kroah-Hartman, one of the principal developers over the last decade, also has a lot of material kicking around.
| What happens when Ctrl + Alt + F<Num> is pressed? |
1,354,072,861,000 |
I have a Macbook Air that runs Linux. I want to swap the alt and super keys in both sides of the keyboard with each other.
How do I do this with cli tools?
Update
Following Drav Sloan's answer I used the following:
keycode 64 = Alt_L
keycode 133 = Super_L
remove Mod1 = Alt_L
remove Mod4 = Super_L
add Mod1 = Super_L
add Mod4 = Alt_L
keycode 108 = Alt_R
keycode 134 = Super_R
remove Mod1 = Alt_R
remove Mod4 = Super_R
add Mod1 = Super_R
add Mod4 = Alt_R
|
One way to achieve that is via xmodmap. You can run xev to get key events. On running xev a box should appear and you can focus it and press the keys you want to swap. It should output details similar to for the Alt key:
KeyPress event, serial 28, synthetic NO, window 0x8800001,
root 0x25, subw 0x0, time 2213877115, (126,91), root:(1639,475),
state 0x0, keycode 14 (keysym 0xffe9, Alt_L), same_screen YES,
XLookupString gives 0 bytes:
XmbLookupString gives 0 bytes:
XFilterEvent returns: False
I'm on a PC, and don't have a "Command Key", but have the equivalent "Windows Key", and
xev gives:
KeyPress event, serial 28, synthetic NO, window 0x8000001,
root 0x25, subw 0x0, time 2213687746, (111,74), root:(1624,98),
state 0x0, keycode 93 (keysym 0xffeb, Super_L), same_screen YES,
XLookupString gives 0 bytes:
XmbLookupString gives 0 bytes:
XFilterEvent returns: False
Because xmodmap has no idea of state, and can easily break key mappings, I suggest you do a:
xmodmap -pke > defaults
Then we create a xmodmap file:
keycode 14 = Alt_L
keycode 93 = Super_L
remove Mod1 = Alt_L
remove Mod4 = Super_L
add Mod1 = Super_L
add Mod4 = Alt_L
Note how I'm using the keycodes that xev returned. Also here I'm only replacing the left super and alt keys (and leaving the right ones to their old behavior). Then we can simply run xmodmap, to set these keys:
$ xmodmap -v modmap.file
! modmap:
! 1: keycode 14 = Alt_L
keycode 0xe = Alt_L
! 2: keycode 93 = Super_L
keycode 0x5d = Super_L
! 3: remove Mod1 = Alt_L
! Keysym Alt_L (0xffe9) corresponds to keycode(s) 0xe
remove mod1 = 0xe
! 4: remove Mod4 = Super_L
! Keysym Super_L (0xffeb) corresponds to keycode(s) 0x5d
remove mod4 = 0x5d
! 5: add Mod1 = Super_L
add mod1 = Super_L
! 6: add Mod4 = Alt_L
add mod4 = Alt_L
!
! executing work queue
!
keycode 0xe = Alt_L
keycode 0x5d = Super_L
remove mod1 = 0xe
remove mod4 = 0x5d
add mod1 = Super_L
add mod4 = Alt_L
You can run without the -v (verbose) switch for silent running, but I find it useful if you made mistakes in your modmap file. If things go messy then just reapply your defaults:
xmodmap defaults
xmodmap is often run at startup of X, so you can have these applied as defaults by putting your modmap commands in ~/.xmodmaprc.
| Swap alt and super |
1,354,072,861,000 |
My question is with regards to booting a Linux system from a separate /boot partition. If most configuration files are located on a separate / partition, how does the kernel correctly mount it at boot time?
Any elaboration on this would be great. I feel as though I am missing something basic. I am mostly concerned with the process and order of operations.
Thanks!
EDIT: I think what I needed to ask was more along the lines of the dev file that is used in the root kernel parameter. For instance, say I give my root param as root=/dev/sda2. How does the kernel have a mapping of the /dev/sda2 file?
|
Linux initially boots with a ramdisk (called an initrd, for "INITial RamDisk") as /. This disk has just enough on it to be able to find the real root partition (including any driver and filesystem modules required). It mounts the root partition onto a temporary mount point on the initrd, then invokes pivot_root(8) to swap the root and temporary mount points, leaving the initrd in a position to be umounted and the actual root filesystem on /.
| How does a kernel mount the root partition? |
1,354,072,861,000 |
I have some confusion regarding fork and clone. I have seen that:
fork is for processes and clone is for threads
fork just calls clone, clone is used for all processes and threads
Are either of these accurate? What is the distinction between these 2 syscalls with a 2.6 Linux kernel?
|
fork() was the original UNIX system call. It can only be used to create new processes, not threads. Also, it is portable.
In Linux, clone() is a new, versatile system call which can be used to create a new thread of execution. Depending on the options passed, the new thread of execution can adhere to the semantics of a UNIX process, a POSIX thread, something in between, or something completely different (like a different container). You can specify all sorts of options dictating whether memory, file descriptors, various namespaces, signal handlers, and so on get shared or copied.
Since clone() is the superset system call, the implementation of the fork() system call wrapper in glibc actually calls clone(), but this is an implementation detail that programmers don't need to know about. The actual real fork() system call still exists in the Linux kernel for backward compatibility reasons even though it has become redundant, because programs that use very old versions of libc, or another libc besides glibc, might use it.
clone() is also used to implement the pthread_create() POSIX function for creating threads.
Portable programs should call fork() and pthread_create(), not clone().
| Fork vs Clone on 2.6 Kernel Linux |
1,354,072,861,000 |
A fork() system call clones a child process from the running process. The two processes are identical except for their PID.
Naturally, if the processes are just reading from their heaps rather than writing to it, copying the heap would be a huge waste of memory.
Is the entire process heap copied? Is it optimized in a way that only writing triggers a heap copy?
|
The entirety of fork() is implemented using mmap / copy on write.
This not only affects the heap, but also shared libraries, stack, BSS areas.
Which, incidentally, means that fork is an extremely lightweight operation: until the resulting two processes (parent and child) actually start writing to memory ranges, nothing needs to be copied. This feature is a major contributor to the lethality of fork bombs: you end up with far too many processes before the kernel gets overloaded with page replication and differentiation.
You'll be hard-pressed to find in a modern OS an example of an operation where the kernel performs a hard copy (device drivers being the exception); it's just far, far easier and more efficient to employ the VM functionality.
Even execve() is essentially "please mmap the binary / ld.so / whatnot, followed by execute" - and the VM handles the actual loading of the process to RAM and execution. Local uninitialized variables end up being mmaped from a 'zero-page' - special read-only copy-on-write page containing zeroes, local initialized variables end up being mmaped (copy-on-write, again) from the binary file itself, etc.
| Does fork() immediately copy the entire process heap in Linux? |
1,354,072,861,000 |
On my Ubuntu machine, in /etc/sysctl.conf file, I've got reverse path filtering options commented out by default like this:
#net.ipv4.conf.default.rp_filter=1
#net.ipv4.conf.all.rp_filter=1
but in /etc/sysctl.d/10-network-security.conf they are (again, by default) not commented out:
net.ipv4.conf.default.rp_filter=1
net.ipv4.conf.all.rp_filter=1
So is reverse path filtering enabled or not? Which of the configuration locations takes priority? How do I check the current values of these and other kernel options?
|
Checking the value of a sysctl variable is as easy as
sysctl <variable name>
and, by the way, setting a sysctl variable is as straightforward as
sudo sysctl -w <variable name>=<value>
but changes made this way will probably hold only till the next reboot.
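Behind the scenes, sysctl reads and writes files under /proc/sys, with the dots in variable names mapping to directory separators, so the current values can also be inspected directly:

```shell
# Equivalent to `sysctl -n net.ipv4.conf.all.rp_filter`; prints a
# fallback if the entry doesn't exist in this network namespace.
cat /proc/sys/net/ipv4/conf/all/rp_filter 2>/dev/null || echo 'not available'
```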
As to which of the config locations, /etc/sysctl.conf or /etc/sysctl.d/, takes precedence, here is what /etc/sysctl.d/README file says:
End-users can use 60-*.conf and above, or use /etc/sysctl.conf
directly, which overrides anything in this directory.
After editing the config in any of the two locations, the changes can be applied with
sudo sysctl -p
| Finding out the values of kernel options related to sysctl.conf and sysctl.d |
1,354,072,861,000 |
I want to trace the networking activity of a command, I tried tcpdump and strace without success.
For example, if I am installing a package or using any command that tries to reach some site, I want to view that networking activity (the site it tries to reach).
I guess we can do this by using tcpdump. I tried, but it is tracking all the networking activity of my system. Let's say I run multiple networking-related commands and I want to track only a particular command's networking activity; in that case it is difficult to find the exact solution.
Is there a way to do that?
UPDATE:
I don't want to track everything that goes on my network interface.
I just want to track a single command's networking activity (for example, yum install -y vim), such as the site it tries to reach.
|
netstat for simplicity
Using netstat and grepping on the PID or process name:
# netstat -np --inet | grep "thunderbird"
tcp 0 0 192.168.134.142:45348 192.168.138.30:143 ESTABLISHED 16875/thunderbird
tcp 0 0 192.168.134.142:58470 192.168.138.30:443 ESTABLISHED 16875/thunderbird
And you could use watch for dynamic updates:
watch 'netstat -np --inet | grep "thunderbird"'
With:
-n: Show numerical addresses instead of trying to determine symbolic host, port or user names
-p: Show the PID and name of the program to which each socket belongs.
--inet: Only show raw, udp and tcp protocol sockets.
strace for verbosity
You said you tried the strace tool, but did you try the option trace=network?
Note that the output can be quite verbose, so you might need some grepping. You could start by grepping on "sin_addr".
strace -f -e trace=network <your command> 2>&1 | grep sin_addr
Or, for an already running process, use the PID:
strace -f -e trace=network -p <PID> 2>&1 | grep sin_addr
| How to trace networking activity of a command? |
1,354,072,861,000 |
I am running the following command on an ubuntu system:
dd if=/dev/random of=rand bs=1K count=2
However, every time I run it, I end up with a file of a different size. Why is this? How can I generate a file of a given size filled with random data?
|
You're observing a combination of the peculiar behavior of dd with the peculiar behavior of Linux's /dev/random. Both, by the way, are rarely the right tool for the job.
Linux's /dev/random returns data sparingly. It is based on the assumption that the entropy in the pseudorandom number generator is depleted very quickly. Since gathering new entropy is slow, /dev/random typically returns only a few bytes at a time.
dd is an old, cranky program initially intended to operate on tape devices. When you tell it to read one block of 1kB, it attempts to read one block. If the read returns less than 1024 bytes, tough, that's all you get. So dd if=/dev/random bs=1K count=2 makes two read(2) calls. Since it's reading from /dev/random, the two read calls typically return only a few bytes, in varying number depending on the available entropy. See also When is dd suitable for copying data? (or, when are read() and write() partial)
Unless you're designing an OS installer or cloner, you should never use /dev/random under Linux, always /dev/urandom. The urandom man page is somewhat misleading; /dev/urandom is in fact suitable for cryptography, even to generate long-lived keys. The only restriction with /dev/urandom is that it must be supplied with sufficient entropy; Linux distributions normally save the entropy between reboots, so the only time you might not have enough entropy is on a fresh installation. Entropy does not wear off in practical terms. For more information, read Is a rand from /dev/urandom secure for a login key? and Feeding /dev/random entropy pool?.
Most uses of dd are better expressed with tools such as head or tail. If you want 2kB of random bytes, run
head -c 2k </dev/urandom >rand
With older Linux kernels, you could get away with
dd if=/dev/urandom of=rand bs=1k count=2
because /dev/urandom happily returned as many bytes as requested. But this is no longer true since kernel 3.16, it's now limited to 32MB.
In general, when you need to use dd to extract a fixed number of bytes and its input is not coming from a regular file or block device, you need to read byte by byte: dd bs=1 count=2048.
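The partial-read behavior that trips up dd is easy to handle correctly in code: loop until the requested count has accumulated. A minimal sketch (the helper name read_exactly is my own):

```c
#include <fcntl.h>
#include <unistd.h>

/* Read exactly n bytes from fd, looping over partial reads.
 * This is what head -c effectively does, and what dd with a
 * large bs does not: a short read() is retried, not truncated.
 * Returns n on success, or the read() result (0 or -1) on EOF/error. */
ssize_t read_exactly(int fd, unsigned char *buf, size_t n) {
    size_t got = 0;
    while (got < n) {
        ssize_t r = read(fd, buf + got, n - got);
        if (r <= 0)
            return r;               /* EOF or error: give up */
        got += (size_t)r;
    }
    return (ssize_t)got;
}
```

Reading 2kB from /dev/urandom this way always yields exactly 2048 bytes, no matter how the kernel chunks the individual read() calls.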
| Why does dd from /dev/random give different file sizes? |
1,354,072,861,000 |
I know that the system call interface is implemented on a low level and hence architecture/platform dependent, not "generic" code.
Yet, I cannot clearly see the reason why system calls in Linux 32-bit x86 kernels have numbers that are not kept the same in the similar architecture Linux 64-bit x86_64? What is the motivation/reason behind this decision?
My first guess was that the reason behind this was to keep 32-bit applications runnable on an x86_64 system, so that via a reasonable offset to the system call number the system would know whether user space is 32-bit or 64-bit. This is however not the case. At least it seems to me that read() being system call number 0 in x86_64 cannot be aligned with this thought.
Another guess has been that changing the system call numbers might have a security/hardening background, something I was not able to confirm myself.
Being ignorant of the challenges of implementing the architecture-dependent code parts, I still wonder how changing the system call numbers, when there seems to be no need (as even a 16-bit register would store far more than the currently ~346 numbers needed to represent all calls), would help to achieve anything other than breaking compatibility (though using the system calls through a library, libc, mitigates it).
|
As for the reasoning behind the specific numbering, which does not match any other architecture [except "x32" which is really just part of the x86_64 architecture]: In the very early days of the x86_64 support in the Linux kernel, before there were any serious backwards compatibility constraints, all of the system calls were renumbered to optimize it at the cacheline usage level.
I don't know enough about kernel development to know the specific basis for these choices, but apparently there is some logic behind the choice to renumber everything with these particular numbers rather than simply copying the list from an existing architecture and remove the unused ones. It looks like the order may be based on how commonly they are called - e.g. read/write/open/close are up front. Exit and fork may seem "fundamental", but they're each called only once per process.
There may also be something going on about keeping system calls that are commonly used together within the same cache line (these values are just integers, but there's a table in the kernel with function pointers for each one, so each group of 8 system calls occupies a 64-byte cache line for that table)
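The per-architecture numbers are exposed to C programs as SYS_* constants, and syscall(2) lets you invoke a call by number directly, bypassing the libc wrapper. A small sketch showing that the number-based path and the wrapper agree (the function name direct_getpid is my own):

```c
#include <sys/syscall.h>
#include <unistd.h>

/* Invoke getpid by its raw system call number. SYS_getpid is an
 * architecture-specific constant from <sys/syscall.h> - e.g. 39 on
 * x86_64 but 20 on 32-bit x86 - which is exactly the numbering
 * difference discussed above. Portable code uses the symbolic name. */
long direct_getpid(void) {
    return syscall(SYS_getpid);
}
```

Because libc resolves SYS_getpid at compile time for the target architecture, the same source works on both numbering schemes.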
| Why are Linux system call numbers in x86 and x86_64 different? |
1,354,072,861,000 |
Could I get ZFS to work properly in Linux?
Are there any caveats / limitations?
|
ZFS is not in the official Linux kernel, and never will be unless Oracle relicenses the code under something compatible with the GPL.
This incompatibility is disputed. The main arguments in favor of ZFS being allowed on Linux systems revolve around the so-called "arm's length" rule. That rule applies in this case only if ZFS is provided as a separate module from the kernel, the two communicate only through published APIs, and both code bases can function independently of each other. The claim then is that neither code base's license taints the other because neither is a derived work of the other; they are independent, but cooperate. Nevertheless, even under this interpretation, it means the ZFS modules must still be shipped separately from the Linux kernel, which is how we see it being provided today by Ubuntu.
Quite separately from the CDDL vs GPL argument, NetApp claims they own patents on some technology used in ZFS. NetApp settled their lawsuit with Sun after the Oracle buyout, but that settlement doesn't protect any other Linux distributor. (Red Hat, Ubuntu, SuSE...)
As I see it, these are your alternatives:
Use btrfs instead, as it has similar features to ZFS but doesn't have the GPL license conflict and has been in the mainline kernel for testing since 2.6.29 (released in January 2009).
The main problem with btrfs is that it's had a long history of problems with its RAID 5/6 functionality. These problems are being worked out, but each time one of these problems surfaces, it resets the "stability clock."
Another concern is that Red Hat have indicated that the next release of Red Hat Enterprise Linux will not include btrfs.
One of the reasons Red Hat is taking that position on btrfs is that they have a plan to offer similar functionality using a different technology stack they are calling Stratis. Therefore, another option you have is to wait for Stratis to appear, with 1.0 scheduled for the first half of 2018, presumably to coincide with Red Hat Enterprise Linux 8.
Use a different OS for your file server (FreeBSD, say) and use NFS to connect it to your Linux boxes
Use ZFS on FUSE, a userspace implementation, which works neatly around the kernel licensing issue at the expense of a significant amount of performance
Integrate ZFS on Linux after installing the OS.
The license conflict makes distributing the combined system outside your organization legally questionable. I am not a lawyer, but my sense is that, patent issues aside, distributing ZFS on Linux is about as worrisome as distributing non-GPL binary drivers (such as those for certain video cards) with the system. If one of these bothers you, the other should, too.
Switch to Ubuntu, which has been shipping ZFS kernel modules with the OS since 16.04. Canonical believes that it is legally safe to distribute the ZFS kernel module with the OS itself. You would have to decide whether you trust Canonical's opinion; consider also that they may not be willing to indemnify you if a legal issue comes up.
Beware that it is not currently possible to boot from ZFS with Ubuntu without a whole lot of manual hackery.
Incidentally, btrfs is also backed by Oracle, but was started years before the Sun acquisition. I don't believe the two will ever merge, or one be deprecated in favor of the other due to the license conflict and patent issue. ZFS is too popular to go away, but there will continue to be demand for a ZFS alternative.
| ZFS under Linux, does it work? |
1,354,072,861,000 |
I tried running objdump on the lib to figure it out without success. Is there a way to find out what a library does?
|
It's GCC's runtime library, which contains some low-level functions that GCC emits calls to (like long long division on 32-bit CPUs).
Part of this library is required by the LSB.
| What does libgcc_s.so contain? |
1,354,072,861,000 |
Coming from Windows administration, I want to dig deeper in Linux (Debian).
One of my burning questions I could not answer searching the web (didn't find it) is: how can I achieve the so called "one-to-many" remoting like in PowerShell for Windows?
To break it down to the basics I would say:
My view on Linux:
I can ssh into a server and type my command
I get the result. For an environment of 10 servers I would have to write a (perl/python?) script sending the command for each of them?
My experience from Windows:
I type my command and with "invoke-command" I can "send" this to a bunch of servers (maybe from a textfile) to execute simultaneously and get the result back (as an object for further work).
I can even establish multiple sessions, the connection is held in the background, and selectively send commands to these sessions, and remote in and out like I need.
(I heard of chef, puppet, etc. Is this something like that?)
Update 2019:
After trying a lot, I suggest Rex (see this comment below): easy to set up (effectively it just needs SSH, nothing else) and easy to use (knowing just a little bit of Perl helps, but it's optional)
With Rex(ify) you can run ad-hoc commands and advance to real configuration management (...meaning: it is a CM in the first place, but nice for ad-hoc tasks, too)
The website seems outdated, but currently (as of 01/2019) it's in active development and the IRC channel is also active.
With Windows' new openssh there are even more possibilities
you can try:
rex -u user -p password -H 192.168.1.3 -e 'say run "hostname"'
|
Summary
Ansible is a DevOps tool that is a powerful replacement for PowerShell
RunDeck as a graphical interface is handy
Some people run RunDeck+Ansible together
clusterssh
For sending remote commands to several servers, for a beginner, I would recommend clusterssh
To install clusterssh in Debian:
apt-get install clusterssh
Another clusterssh tutorial:
ClusterSSH is a Tk/Perl wrapper around standard Linux tools like XTerm
and SSH. As such, it'll run on just about any POSIX-compliant OS where
the libraries exist — I've run it on Linux, Solaris, and Mac OS X. It
requires the Perl libraries Tk (perl-tk on Debian or Ubuntu) and
X11::Protocol (libx11-protocol-perl on Debian or Ubuntu), in addition
to xterm and OpenSSH.
Ansible
As for a remote framework for multiple systems administration, Ansible is a very interesting alternative to Puppet. It is more lean, and it does not need dedicated remote agents as it works over SSH (it also has been bought by RedHat)
The Playbooks are more elaborate than the command line options.
However, to start using Ansible you need a simple installation and to setup the clients list text file.
Afterwards, to run a command in all servers, it is as simple as doing:
ansible all -m command -a "uptime"
The output also is very nicely formatted and separated per rule/server, and while running it in the background can be redirected to a file and consulted later.
You can start with simple rules, and Ansible usage will get more interesting as you grow in Linux, and your infra-structure becomes larger. As such it will do so much more than PowerShell.
As an example, a very simple Playbook to upgrade Linux servers that I wrote:
---
- hosts: all
  become: yes
  gather_facts: False
  tasks:
    - name: updates a server
      apt: update_cache=yes
    - name: upgrade a server
      apt: upgrade=full
It also has many modules defined that let you easily write comprehensive policies.
Module Index - Ansible Documentation
It also has got an interesting official hub/"social" network of repositories to search for already made ansible policies by the community. Ansible Galaxy
Ansible is also widely used, and you will find lots of projects in github, like this one from myself for FreeRadius setup.
While Ansible is a free open source framework, it also has a paid web panel interface, Ansible Tower although the licensing is rather expensive.
Nowadays, after RedHat bought it, tower has also the open source version known as AWX.
As a bonus, Ansible also is capable of administering Windows servers, though I have never used it for that.
It is also capable of administering networking equipment (routers, switches, and firewall), which make it a very interesting solution as an automation turn key solution.
How to install Ansible
Rundeck
Yet again, for a remote framework easier to use, but not so potent as Ansible, I do recommend Rundeck.
It is a very powerful multi-user/login graphical interface where you can automate much of your common day-to-day tasks, and even give watered down views to sysops or helpdesk people.
When running the commands, it also gives you windows with the output broken down by server/task.
It can run multiple jobs in the background seamlessly, and allows you to see the report and output later on.
How to install RunDeck
Please note there are people running Ansible+RunDeck as a web interface; not all cases are appropriate for that.
It also goes without saying that using Ansible and/or RunDeck can be construed as a form or part of the infra-structure documentation, and over time allows to replicate and improve the actions/recipes/Playbooks.
Lastly, talking about a central command server, I would create one just up for the task. Actually the technical term is a jump box. 'Jump boxes' improve security, if you set them up right.
| Linux equivalent to PowerShell's "one-to-many" remoting |
1,433,043,054,000 |
I have a directory exam with 2 files in it. I need to delete the files but permission is denied. Even the rm -rf command can't delete these files. I logged in as the root user.
|
From root user check attributes of files
# lsattr
if you notice i (immutable) or a (append-only), remove those attributes:
# man chattr
# chattr -i [filename]
# chattr -a [filename]
| Why can't I delete this file as root? |
1,433,043,054,000 |
I would like to know about Linux spinlocks in detail; could someone explain them to me?
|
A spin lock is a way to protect a shared resource from being modified by two or more processes simultaneously. The first process that tries to modify the resource "acquires" the lock and continues on its way, doing what it needed to with the resource. Any other processes that subsequently try to acquire the lock get stopped; they are said to "spin in place" waiting on the lock to be released by the first process, thus the name spin lock.
The Linux kernel uses spin locks for many things, such as when sending data to a particular peripheral. Most hardware peripherals aren't designed to handle multiple simultaneous state updates. If two different modifications have to happen, one has to strictly follow the other, they can't overlap. A spin lock provides the necessary protection, ensuring that the modifications happen one at a time.
Spin locks are a problem because spinning blocks that thread's CPU core from doing any other work. While the Linux kernel does provide multitasking services to user space programs running under it, that general-purpose multitasking facility doesn't extend to kernel code.
This situation is changing, and has been for most of Linux's existence. Up through Linux 2.0, the kernel was almost purely a single-tasking program: whenever the CPU was running kernel code, only one CPU core was used, because there was a single spin lock protecting all shared resources, called the Big Kernel Lock (BKL). Beginning with Linux 2.2, the BKL is slowly being broken up into many independent locks that each protect a more focused class of resource. Today, with kernel 2.6, the BKL still exists, but it's only used by really old code that can't be readily moved to some more granular lock. It is now quite possible for a multicore box to have every CPU running useful kernel code.
There's a limit to the utility of breaking up the BKL because the Linux kernel lacks general multitasking. If a CPU core gets blocked spinning on a kernel spin lock, it can't be retasked, to go do something else until the lock is released. It just sits and spins until the lock is released.
Spin locks can effectively turn a monster 16-core box into a single-core box, if the workload is such that every core is always waiting for a single spin lock. This is the main limit to the scalability of the Linux kernel: doubling CPU cores from 2 to 4 probably will nearly double the speed of a Linux box, but doubling it from 16 to 32 probably won't, with most workloads.
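The kernel's spin locks are written in architecture-specific assembly, but the idea is easy to sketch in user-space C with an atomic test-and-set flag. This is only an illustration of the "spin in place" behavior described above, not the kernel's actual implementation (names like run_demo are mine):

```c
#include <pthread.h>
#include <stdatomic.h>

/* A minimal spin lock: acquire by atomically setting a flag, and
 * spin (busy-wait, burning the CPU) while someone else holds it.
 * Release by clearing the flag. */
static atomic_flag lock = ATOMIC_FLAG_INIT;
static long counter = 0;

static void spin_lock(void) {
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;                            /* spin in place until released */
}

static void spin_unlock(void) {
    atomic_flag_clear_explicit(&lock, memory_order_release);
}

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        spin_lock();
        counter++;                   /* critical section: one writer at a time */
        spin_unlock();
    }
    return NULL;
}

/* Run 4 contending threads; with the lock in place the shared
 * counter ends up at exactly 4 * 100000. */
long run_demo(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return counter;
}
```

Remove the lock/unlock pair and the lost updates show up immediately; that is exactly the kind of corruption spin locks prevent in the kernel, at the cost of the spinning cores doing no useful work.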
| What is a spinlock in Linux? |
1,433,043,054,000 |
At some point, in some teaching material (from Linux Foundation) on Linux that I came across, the following is mentioned:
ip command is more versatile and more efficient than ifconfig because it uses netlink sockets rather than ioctl system calls.
Can anyone elaborate a bit on this because I cannot understand what's going on under the hood?
P.S. I am aware of this topic on those tools but it does not address this specific difference on how they operate
|
The ifconfig command on operating systems such as FreeBSD and OpenBSD was updated in line with the rest of the operating system. It nowadays can configure all sorts of network interface settings on those operating systems, and handle a range of network protocols. The BSDs provide ioctl() support for these things.
This did not happen in the Linux world. There are, today, three ifconfig commands:
ifconfig from GNU inetutils
jdebp % inetutils-ifconfig -l
enp14s0 enp15s0 lo
jdebp % inetutils-ifconfig lo
lo Link encap:Local Loopback
inet addr:127.0.0.1 Bcast:0.0.0.0 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:9087 errors:0 dropped:0 overruns:0 frame:0
TX packets:9087 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:51214341 TX bytes:51214341
jdebp %
ifconfig from NET-3 net-tools
jdebp % ifconfig -l
ifconfig: option `-l' not recognised.
ifconfig: `--help' gives usage information.
jdebp % ifconfig lo
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
inet6 ::2 prefixlen 128 scopeid 0x80<compat,global>
inet6 fe80:: prefixlen 10 scopeid 0x20<link>
loop txqueuelen 1000 (Local Loopback)
RX packets 9087 bytes 51214341 (48.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9087 bytes 51214341 (48.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
jdebp %
ifconfig from (version 1.40 of) the nosh toolset
jdebp % ifconfig -l
enp14s0 enp15s0 lo
jdebp % ifconfig lo
lo
link up loopback running
link address 00:00:00:00:00:00 bdaddr 00:00:00:00:00:00
inet4 address 127.0.0.1 prefixlen 8 bdaddr 127.0.0.1
inet4 address 127.53.0.1 prefixlen 8 bdaddr 127.255.255.255
inet6 address ::2 scope 0 prefixlen 128
inet6 address fe80:: scope 1 prefixlen 10
inet6 address ::1 scope 0 prefixlen 128
jdebp % sudo ifconfig lo inet4 127.1.0.2 alias
jdebp % sudo ifconfig lo inet6 ::3/128 alias
jdebp % ifconfig lo
lo
link up loopback running
link address 00:00:00:00:00:00 bdaddr 00:00:00:00:00:00
inet4 address 127.0.0.1 prefixlen 8 bdaddr 127.0.0.1
inet4 address 127.1.0.2 prefixlen 32 bdaddr 127.1.0.2
inet4 address 127.53.0.1 prefixlen 8 bdaddr 127.255.255.255
inet6 address ::3 scope 0 prefixlen 128
inet6 address ::2 scope 0 prefixlen 128
inet6 address fe80:: scope 1 prefixlen 10
inet6 address ::1 scope 0 prefixlen 128
jdebp %
As you can see, the GNU inetutils and NET-3 net-tools ifconfigs have some marked deficiencies, with respect to IPv6, with respect to interfaces that have multiple addresses, and with respect to functionality like -l.
The IPv6 problem is in part some missing code in the tools themselves. But in the main it is caused by the fact that Linux does not (as other operating systems do) provide IPv6 functionality through the ioctl() interface. It only lets programs see and manipulate IPv4 addresses through the networking ioctl()s.
Linux instead provides this functionality through a different interface, send() and recv() on a special, and somewhat odd, address family of sockets, AF_NETLINK.
The GNU and NET-3 ifconfigs could have been adjusted to use this new API. The argument against doing so was that it was not portable to other operating systems, but these programs were in practice already not portable anyway so that was not much of an argument.
But they weren't adjusted, and remain as aforeshown to this day. (Some people worked on them at various points over the years, but the improvements, sad to say, never made it into the programs. For example: Bernd Eckenfels never accepted a patch that added some netlink API capability to NET-3 net-tools ifconfig, 4 years after the patch had been written.)
Instead, some people completely reinvented the toolset as an ip command, which used the new Linux API, had a different syntax, and combined several other functions behind a fashionable command subcommand-style interface.
I needed an ifconfig that had the command-line syntax and output style of the FreeBSD ifconfig (which neither the GNU nor the NET-3 ifconfig has, and which ip most certainly does not have). So I wrote one. As proof that one could write an ifconfig that uses the netlink API on Linux, it does.
So the received wisdom about ifconfig, such as what you quote, is not really true any more. It is now untrue to say that "ifconfig does not use netlink.". The blanket that covered two does not cover three.
It has always been untrue to say that "netlink is more efficient". For the tasks that one does with ifconfig, there isn't really much in it when it comes to efficiency between the netlink API and the ioctl() API. One makes pretty much the same number of API calls for any given task.
Indeed, each API call is two system calls in the netlink case, as opposed to one in the ioctl() system. And arguably the netlink API has the disadvantage that on a heavily-used system it explicitly incorporates the possibility of the tool never receiving an acknowledgement message informing it of the result of the API call.
It is, furthermore, untrue to say that ip is "more versatile" than the GNU and NET-3 ifconfigs because it uses netlink. It is more versatile because it does more tasks, doing things in one big program that one would do with separate programs other than ifconfig. It is not more versatile simply by dint of the API that it uses internally for performing those extra tasks. There's nothing inherent to the API about this. One could write an all-in-one tool that used the FreeBSD ioctl() API, for example, and equally well state that it is "more versatile" than the individual ifconfig, route, arp, and ndp commands.
One could write route, arp, and ndp commands for Linux that used the netlink API, too.
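To make the netlink API concrete, here is a minimal sketch of the send/recv pattern the answer describes: one request message (RTM_GETLINK, asking for a dump of all interfaces) followed by reading the multipart reply. This is an illustration of the mechanism, not any particular tool's code (the function name count_links is mine):

```c
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

/* Count network interfaces via an rtnetlink dump. Note the two-step
 * nature of the API: a send() of the request, then recv() calls for
 * the kernel's reply, versus a single ioctl() on the BSDs. */
int count_links(void) {
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
    if (fd < 0)
        return -1;

    struct {
        struct nlmsghdr  nlh;
        struct ifinfomsg ifm;
    } req;
    memset(&req, 0, sizeof req);
    req.nlh.nlmsg_len   = NLMSG_LENGTH(sizeof req.ifm);
    req.nlh.nlmsg_type  = RTM_GETLINK;
    req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
    req.ifm.ifi_family  = AF_UNSPEC;

    if (send(fd, &req, req.nlh.nlmsg_len, 0) < 0) {
        close(fd);
        return -1;
    }

    int count = 0, done = 0;
    char buf[16384];
    while (!done) {
        ssize_t r = recv(fd, buf, sizeof buf, 0);
        if (r <= 0)
            break;
        int len = (int)r;
        for (struct nlmsghdr *nh = (struct nlmsghdr *)buf;
             NLMSG_OK(nh, len); nh = NLMSG_NEXT(nh, len)) {
            if (nh->nlmsg_type == NLMSG_DONE) { done = 1; break; }
            if (nh->nlmsg_type == RTM_NEWLINK) count++;
        }
    }
    close(fd);
    return count;
}
```

Any Linux box reports at least the loopback interface this way; extending the same pattern with RTM_GETADDR is how netlink-based tools enumerate IPv4 and IPv6 addresses alike, which the IPv4-only ioctl() interface cannot do.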
Further reading
Jonathan de Boyne Pollard (2019). ifconfig. nosh Guide. Softwares.
Eduardo Ferro (2009-04-16). ifconfig: reports wrong ip address / initial patch. Debian bug #359676.
| ip vs ifconfig commands pros and cons |
1,433,043,054,000 |
I keep receiving this error:
Warning!! Unsupported GPT (GUID Partition Table) detected. Use GNU Parted
I want to go back to the normal MBR. I found some advice here and did:
parted /dev/sda
mklabel msdos
quit
But when I get to the mklabel option it spits out a warning that I will lose all data on /dev/sda. Is there a way to get the normal MBR back without formatting the disk?
|
That link you posted looks like a very ugly hack type solution.
However, according to the man page, gdisk, which is used to convert MBR -> GPT, also has an option in the "recovery & transformation" menu (press r to get that) to convert GPT -> MBR; the g key will:
Convert GPT into MBR and exit. This option converts as many partitions
as possible into MBR form, destroys the GPT data structures,
saves the new MBR, and exits. Use this option if you've tried GPT and
find that MBR works better for you. Note that this function generates
up to four primary MBR partitions or three primary partitions and as
many logical partitions as can be generated. Each logical
partition requires at least one unallocated block immediately
before its first block.
I'd try that first.
| Remove GPT - Default back to MBR |
1,433,043,054,000 |
It is a common way to set the resolution of a text consoles (that are usually available by Ctrl-Alt-F1 thru Ctrl-Alt-F6) by using a vga=... kernel parameter.
I'm using Ubuntu 10.04 Lucid, output of uname -a is:
Linux 2.6.32-33-generic #70-Ubuntu SMP Thu Jul 7 21:13:52 UTC 2011 x86_64 GNU/Linux
To identify the available modes I use sudo hwinfo --framebuffer, which reports:
02: None 00.0: 11001 VESA Framebuffer
[Created at bios.464]
Unique ID: rdCR.R1b4duaxSqA
Hardware Class: framebuffer
Model: "NVIDIA G73 Board - p456h1 "
Vendor: "NVIDIA Corporation"
Device: "G73 Board - p456h1 "
SubVendor: "NVIDIA"
SubDevice:
Revision: "Chip Rev"
Memory Size: 256 MB
Memory Range: 0xc0000000-0xcfffffff (rw)
Mode 0x0300: 640x400 (+640), 8 bits
Mode 0x0301: 640x480 (+640), 8 bits
Mode 0x0303: 800x600 (+800), 8 bits
Mode 0x0305: 1024x768 (+1024), 8 bits
Mode 0x0307: 1280x1024 (+1280), 8 bits
Mode 0x030e: 320x200 (+640), 16 bits
Mode 0x030f: 320x200 (+1280), 24 bits
Mode 0x0311: 640x480 (+1280), 16 bits
Mode 0x0312: 640x480 (+2560), 24 bits
Mode 0x0314: 800x600 (+1600), 16 bits
Mode 0x0315: 800x600 (+3200), 24 bits
Mode 0x0317: 1024x768 (+2048), 16 bits
Mode 0x0318: 1024x768 (+4096), 24 bits
Mode 0x031a: 1280x1024 (+2560), 16 bits
Mode 0x031b: 1280x1024 (+5120), 24 bits
Mode 0x0330: 320x200 (+320), 8 bits
Mode 0x0331: 320x400 (+320), 8 bits
Mode 0x0332: 320x400 (+640), 16 bits
Mode 0x0333: 320x400 (+1280), 24 bits
Mode 0x0334: 320x240 (+320), 8 bits
Mode 0x0335: 320x240 (+640), 16 bits
Mode 0x0336: 320x240 (+1280), 24 bits
Mode 0x033d: 640x400 (+1280), 16 bits
Mode 0x033e: 640x400 (+2560), 24 bits
Config Status: cfg=new, avail=yes, need=no, active=unknown
It looks like many hi-res modes are available, like 0x305, 0x307, 0x317, 0x318, 0x31a, 0x31b (by the way, what does the plus-number mean in the list of modes?). However, setting any of these modes in the kernel option string, like vga=0x305, results in either a pitch-black text console, or a screen filled with blinking color/bw dots.
What is the 'modern', 'robust' way to set up high resolution in text consoles?
|
Newer kernels use KMS by default, so you should move away from appending vga= to your grub line as it will conflict with the native resolution of KMS. However, it depends upon the video driver you are using: the proprietary Nvidia driver doesn't support KMS, but you can work around it.
You should be able to get full resolution in the framebuffer by editing your /etc/default/grub and making sure that the GFXMODE is set correctly, and then adding a GFXPAYLOAD entry like so:
GRUB_GFXMODE=1680x1050x24
# Hack to force higher framebuffer resolution
GRUB_GFXPAYLOAD_LINUX=1680x1050
Remember to run sudo update-grub afterwards.
| How to set the resolution in text consoles (troubleshoot when any `vga=...` fails) |
1,433,043,054,000 |
For background I have just built a new machine with modern hardware including:
AMD FX-8350
Gigabyte GA-990FXA-UD3 motherboard
16GB RAM
NVidia GTX 650 Ti
Kingston SSD
Given that, I tried to install various versions of Linux on the SSD and was met with failure almost every time. I tried installing Arch, Debian stable, Debian sid, and Ubuntu 12.10 from a USB thumb drive but while the BIOS saw the USB drive and started to boot from it, as soon as the OS attempted to enumerate the USB devices I lost all USB functionality (including the boot device).
Eventually I burned a DVD and installed Ubuntu 12.10 onto the SSD. It should be noted that my USB keyboard (and mouse) work fine while in the American Megatrends UEFI/BIOS. Even when I'm in the pre-installation menus on the Live Ubuntu DVD the keyboard works fine.
As soon as Linux is booted (either Live DVD or from the SSD) I lose all USB functionality and can only navigate the OS using a PS/2 keyboard.
What I see in the dmesg/syslog is a few lines about "failed to load microcode amd_ucode/microcode_amd_fam15h.bin" and I can see USB devices failing to initialize.
If I do an lsusb I can see all the USB host controllers but none of the devices. Doing an lspci shows me all the hardware I'd expect. And doing an lsmod I do not see any usb modules loaded (usb_ehci for example).
I tried passing noapic to the kernel boot string and it had no effect on this problem.
The motherboard supports USB 3.0 but all the devices I have plugged into normal USB 2.0 ports.
I'm rather baffled at what could be killing/preventing USB (and my on-board network card) from working in Linux. There doesn't seem to be any problem with any of these devices working in BIOS and I do not have a Windows installation available to test and see if it works.
I've already RMA'd the motherboard once but the second one has exactly the same behavior so I think I can safely rule out hardware failure (since the behavior is identical, I don't think the odd of me getting two identically defective boards are greater than the odds of this being a Linux problem).
What else can I try to get USB (and ideally my network, but we'll stick to USB for now) working?
Edit #1:
Since I have no networking I can only relate interesting bits from dmesg here.
Of interest in dmesg I can see I have 11 USB host controllers (OHCI, EHCI, and xHCI). It detects my USB devices and then fails immediately as follows:
usb 3-1: new high-speed USB device number 2 using ehci_hcd
usb 3-1: device descriptor read/64, error -32
That repeats several times incrementing the number and trying other USB Host controllers until it falls back to OHCI controllers which also fail but have an additional message:
usb 8-1: device not accepting address 4, error -32
I think my networking problems have to do with the fact that I don't have IPv6 enabled on my router, which seems to be behind this message:
eth1: no IPv6 routers present
Edit #2:
lspci -vvv shows that my network adapters (both onboard and expansion) are Realtek Semiconductor (no surprise); RTL8111/8168B and RTL8169/8110 respectively. My USB controllers are Etron Technology EJ168 (xHCI) and AMD nee ATI SB7x0/SB8x0/SB9x0 (EHCI & OHCI)
Now running Debian wheezy, lsmod shows usb_common, usbcore, xhci_hcd, ehci_hcd, and ohci_hcd all loaded and functioning.
|
I found the answer from this thread (http://ubuntuforums.org/showthread.php?t=2114055) over at ubuntuforums.org.
It seems with newer Gigabyte mainboards (at least) there is a BIOS option called IOMMU Controller that is disabled by default and gives no clue or indication as to what it is for.
Enabling this setting and rebooting "magically" fixes all my USB and networking problems in a 64-bit Linux OS (doesn't matter which one).
I am rather shocked and elated that it was such a long search for such a simple fix.
Thanks everyone for your help and suggestions. Hopefully others will find this helpful.
Update: I'd just like to add that my current BIOS settings also include enabling XHCI Handoff and EHCI Handoff in addition to IOMMU Controller. Others have mentioned this as well and enabling those two handoffs also allows my USB 3.0 ports to function as expected.
| Why is USB not working in Linux when it works in UEFI/BIOS? |
1,433,043,054,000 |
I use Ubuntu. Sometimes, the system does not have any response with mouse and keyboard. Is there any way to solve this problem except hitting the reset button on the machine?
|
If you want a way to reboot, without saving open documents, but without hitting the reset button, then there are ways that are less likely to cause data loss. First, try Ctrl+Alt+F1. That should bring you to a virtual console, as ixtmixilix said. Once you're in a virtual console, Ctrl+Alt+Delete will shut down and reboot the machine.
If that technique doesn't work, there's always Alt+SysRq+REISUB.
As for fixing the problem without rebooting, without more information about what is going on, it would be difficult to give a good answer. If you could describe the circumstances under which this occurs (the best way to do that is to edit your question to add the information), then that may help people to give good answers. The other thing to consider is that, if your computer is becoming unresponsive--especially if it takes more than a few seconds for Ctrl+Alt+F1 to bring up a virtual console--then you almost certainly have a bug, and by reporting it you can both help the community and maybe get an answer.
GUI Glitches Causing Unresponsive WM or X11/Wayland
This might be happening due to an interaction between an application and a window manager--or the X11 server or Wayland. A sign that this is the nature of the problem is if an application stops responding and prevents you from entering input with the keyboard or mouse to other application windows. (No application should be able to do this; some GUI component must have a bug in it for this to occur.) If that's what's happening, then you can kill the offending process in a virtual console (as ixtmixilix alluded to):
Press Ctrl+Alt+F1.
Log in. You won't see anything as you enter your password. That's normal.
Use a utility like ps to figure out the offending program's process name. Sometimes this is easy in Ubuntu, and other times it isn't. For example, the name of an Archive Manager process is file-roller. If you have trouble figuring it out, you can usually find the information online without too much trouble (or if you can't, you can post a question about it).
You can pipe ps's output to grep to narrow things down. Suppose it was Archive Manager that was causing the problem. Then you could run:
ps x | grep file-roller
You'll see an entry for your own grep command, plus an entry for file-roller.
Attempt to kill the offending process with SIGTERM. This gives it the chance to do last-minute cleanup like flushing file buffers, signaling to remote servers that it is about to disconnect (for protocols that do that), and releasing other sorts of resources. To do this, use the kill command:
kill PID
where PID is the process ID number of the process you want to kill, obtained from running ps in step 3.
SIGTERM is a way to firmly ask a process to quit. The process can ignore that signal, and will do so when malfunctioning under certain circumstances. So you should check to see that it worked. If it didn't, kill it with SIGKILL, which it cannot ignore, and which always works except in the rare case where the process is in uninterruptible sleep (or if it is not really running, but is rather a zombie process).
You can both check to see if the process is still running, and kill it with SIGKILL if it is, with just one command:
kill -KILL PID
If you get a message like kill: (PID) - No such process, you know killing it with SIGTERM worked. If you get no output, you know SIGTERM didn't work. In that case, SIGKILL probably did, but it's worth checking by running it again. (Press the up arrow key to bring up previous commands, for ease of typing.)
In rare instances for your own processes, or always with processes belonging to root or another user besides yourself, you must kill the process as root. To do that, prepend sudo (including the trailing space) before the above kill commands. If the above commands don't work or you're told you don't have the necessary access to kill the process, try it as root with sudo.
(By the way, kill -KILL is the same as the widely popular kill -9. I recommend kill -KILL because SIGKILL is not guaranteed to have 9 as its signal number on all platforms. It works on x86, but that doesn't mean it will necessarily work everywhere. In this way, kill -KILL is more likely to successfully end the process than kill -9. But they're equivalent on x86, so feel free to use it there if you like.)
If you know there are no other processes with the same name as the one you want to kill, you can use killall instead of kill and the name of the process instead of the process ID number.
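A minimal sketch of that escalation, using a throwaway sleep as a stand-in for the misbehaving process (not part of the original answer):

```shell
#!/bin/sh
# Ask politely with SIGTERM first; fall back to SIGKILL only if needed.
sleep 100 &                  # stand-in for the offending process
pid=$!
kill "$pid"                  # SIGTERM: request cleanup and exit
wait "$pid" 2>/dev/null      # reap our own child (status 143 = 128+15)
if kill -0 "$pid" 2>/dev/null; then
    kill -KILL "$pid"        # it ignored SIGTERM; SIGKILL can't be caught
else
    echo "gone after SIGTERM"
fi
```

Note that wait only works on the shell's own children; when killing an unrelated PID, sleep a second or two between the kill and the kill -0 check instead.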
A Process Monopolizing CPU Resources
If a process runs at or very near the highest possible priority (or to state it more properly, at or near the lowest possible niceness), it could potentially render your graphical user interface completely, or near-completely, unresponsive. However, in this situation, you would likely not be able to switch to a virtual console and run commands (or maybe even reboot).
If a process or a combination of processes running at normal or moderately elevated priority are slowing your machine down, you should be able to kill them using the technique in the section above. But if they are graphical programs, you can likely also kill them by clicking the close button on their windows--the desktop environment will give you the option to kill them if they are not responding. If this doesn't work, of course you can (almost) always kill them with kill -KILL.
I/O Problems
Buggy I/O can cause prolonged (even perpetual) unresponsiveness. This can be due to a kernel bug and/or buggy drivers. A partial workaround is to avoid heavy and simultaneous read and/or write operations (for example, don't copy two big files at once, in two simultaneous copy processes; don't copy a big file while watching an HD video or installing an OS in a virtual machine).
This is obviously unsatisfactory and the real solution is to find the problem and report it. Unless you're running a mainline kernel from kernel.org, kernel bugs should be reported against the package linux in Ubuntu (since Ubuntu gives special kernel builds that integrate distro-specific patches, and bug reports not confirmed against a mainline kernel will be rejected at kernel.org). You should do this by running ubuntu-bug linux (or apport-cli linux) on the affected machine. See the Ubuntu bug reporting documentation first; it explains how to do this properly.
Graphics Card Problems
Some GUI lockups can be caused by graphics card problems. There are a few things you can try, to alleviate this:
Search the web to see if other people have experienced similar problems with the same video card (and/or make and model of machine) on Ubuntu or other GNU/Linux distributions. There may be solutions more specific than what I can offer in this answer, without more specific information than is currently in your question.
See if different video drivers are available for you to try. You can do this by checking in Additional Drivers; you can also search the web to see what Linux drivers are available for your video card. Most video cards are Intel, AMD/ATi, or Nvidia (click those links to see the community documentation on installing and using proprietary drivers for these cards in Ubuntu). For Intel, you're best off sticking with the FOSS drivers that are present in Ubuntu, but there's still helpful information you can use. Regardless of what card you have, this general information may help.
If you're currently using proprietary drivers, you can try using different proprietary drivers (for example, directly from NVidia or AMD/ATi), or you can try using the free open source drivers instead.
Try selecting a graphical login session type that doesn't require/use graphics acceleration. To do this, log out, and on the graphical login screen click the Ubuntu logo or gear icon near your login name. A drop-down menu is shown. Change the selection from Ubuntu to Ubuntu 2D. This makes you use Unity 2D instead of Unity. (If you're using GNOME Shell, you can select GNOME Fallback / GNOME Classic instead.) If in doubt and there's a selection that says "no effects," pick that, as that's probably the safest.
This question has some more information about different graphical interfaces you can choose between in Ubuntu.
In newer versions of Ubuntu, you can choose between X.org and Wayland on the login screen. Whichever you've been using, try the other. Sometimes a problem with Wayland can be fixed by using X.org, or vice versa.
Report a bug.
Hopefully the information above has conveyed some general information about what could be causing this kind of problem. It should also serve to illuminate what kind of information might be useful for you to add to your question (depending on the specific details of the problem), to make it possible to get an even better answer. (Or to improve this answer with additional information specific to your situation.)
| How to fix non-responsive Ubuntu system? |
1,433,043,054,000 |
I work on two computers with one USB headset. I want to listen to both by piping the non-Linux computers' output into the Linux computer's line in (blue audio jack) and mixing the signal into the Linux computer's headset output using PulseAudio.
pavucontrol shows a "Built-in Audio Analog Stereo" Input Device which allows me to pick ports like "Line In" (selected), "Front Microphone", "Rear Microphone". I can see the device's volume meter reacting to audio playback on the non-Linux machine.
How do I make PulseAudio play that audio signal into my choice of Output Device?
|
1. Load the loopback module
pacmd load-module module-loopback latency_msec=5
creates a playback and a recording device.
2. Configure the devices in pavucontrol
In pavucontrol, in the Recording tab, set the "Loopback" stream's input device (the "from" dropdown) to the device which receives the line in signal.
In the Playback tab, set the "Loopback" stream's output device (the "on" dropdown) to the device through which you want to hear the line in signal.
3. Troubleshooting
If the audio signal has issues, remove the module with pacmd unload-module module-loopback and retry with a higher latency_msec= value.
Additional Notes
Your modern Mid-Range computer might easily be able to manage lower latency with the latency_msec=1 option:
pacmd load-module module-loopback latency_msec=1
This answer was made possible by this forum post. Thanks!
| Pipe/Mix Line In to Output in PulseAudio |
1,433,043,054,000 |
I know I can change some fundamental settings of the Linux console, things like fonts, for instance, with dpkg-reconfigure console-setup.
But I'd like to change things like blinkrate, color, and shape (I want my cursor to be a block, at all times). I've seen people accomplishing this. I just never had a chance to ask those people how to do that.
I don't mean terminal emulator windows, I mean the Linux text console, you reach with Ctrl+Alt+F-key
I'm using Linux Mint at the moment, which is a Debian derivate. I'd like to know how to do that in Fedora as well, though.
Edit: I might be on to something
I learned from this website, how to do the changes I need. But I'm not finished yet.
I've settled on using echo -e "\e[?16;0;200c" for now, but I've got a problem: when running applications like vim or irssi, or attaching a screen session, the cursor reverts to being a blinking gray underscore.
And of course, it only works on this one tty; all other text consoles are unaffected.
So how can I make those changes permanent? How can I populate them to other consoles?
|
GitHub Gist: How to change cursor shape, color, and blinkrate of Linux Console
I define the following cursor formatting settings in my .bashrc file (or /etc/bashrc):
##############
# pretty prompt and font colors
##############
# alter the default colors to make them a bit prettier
echo -en "\e]P0000000" #black
echo -en "\e]P1D75F5F" #darkred
echo -en "\e]P287AF5F" #darkgreen
echo -en "\e]P3D7AF87" #brown
echo -en "\e]P48787AF" #darkblue
echo -en "\e]P5BD53A5" #darkmagenta
echo -en "\e]P65FAFAF" #darkcyan
echo -en "\e]P7E5E5E5" #lightgrey
echo -en "\e]P82B2B2B" #darkgrey
echo -en "\e]P9E33636" #red
echo -en "\e]PA98E34D" #green
echo -en "\e]PBFFD75F" #yellow
echo -en "\e]PC7373C9" #blue
echo -en "\e]PDD633B2" #magenta
echo -en "\e]PE44C9C9" #cyan
echo -en "\e]PFFFFFFF" #white
clear #for background artifacting
# set the default text color. this only works in tty (eg $TERM == "linux"), not pts (eg $TERM == "xterm")
setterm -background black -foreground green -store
# http://linuxgazette.net/137/anonymous.html
cursor_style_default=0 # hardware cursor (blinking)
cursor_style_invisible=1 # hardware cursor (blinking)
cursor_style_underscore=2 # hardware cursor (blinking)
cursor_style_lower_third=3 # hardware cursor (blinking)
cursor_style_lower_half=4 # hardware cursor (blinking)
cursor_style_two_thirds=5 # hardware cursor (blinking)
cursor_style_full_block_blinking=6 # hardware cursor (blinking)
cursor_style_full_block=16 # software cursor (non-blinking)
cursor_background_black=0 # same color 0-15 and 128-infinity
cursor_background_blue=16 # same color 16-31
cursor_background_green=32 # same color 32-47
cursor_background_cyan=48 # same color 48-63
cursor_background_red=64 # same color 64-79
cursor_background_magenta=80 # same color 80-95
cursor_background_yellow=96 # same color 96-111
cursor_background_white=112 # same color 112-127
cursor_foreground_default=0 # same color as the other terminal text
cursor_foreground_cyan=1
cursor_foreground_black=2
cursor_foreground_grey=3
cursor_foreground_lightyellow=4
cursor_foreground_white=5
cursor_foreground_lightred=6
cursor_foreground_magenta=7
cursor_foreground_green=8
cursor_foreground_darkgreen=9
cursor_foreground_darkblue=10
cursor_foreground_purple=11
cursor_foreground_yellow=12
cursor_foreground_white=13
cursor_foreground_red=14
cursor_foreground_pink=15
cursor_styles="\e[?${cursor_style_full_block};${cursor_foreground_black};${cursor_background_green}c" # only seems to work in tty
# http://www.bashguru.com/2010/01/shell-colors-colorizing-shell-scripts.html
prompt_foreground_black=30
prompt_foreground_red=31
prompt_foreground_green=32
prompt_foreground_yellow=33
prompt_foreground_blue=34
prompt_foreground_magenta=35
prompt_foreground_cyan=36
prompt_foreground_white=37
prompt_background_black=40
prompt_background_red=41
prompt_background_green=42
prompt_background_yellow=43
prompt_background_blue=44
prompt_background_magenta=45
prompt_background_cyan=46
prompt_background_white=47
prompt_chars_normal=0
prompt_chars_bold=1
prompt_chars_underlined=4 # doesn't seem to work in tty
prompt_chars_blinking=5 # doesn't seem to work in tty
prompt_chars_reverse=7
prompt_reset=0
#start_prompt_coloring="\e[${prompt_chars_bold};${prompt_foreground_black};${prompt_background_green}m"
start_prompt_styles="\e[${prompt_chars_bold}m" # just use default background and foreground colors
end_prompt_styles="\e[${prompt_reset}m"
PS1="${start_prompt_styles}[\u@\h \W] \$${end_prompt_styles}${cursor_styles} "
##############
# end pretty prompt and font colors
##############
| How to change cursor shape, color, and blinkrate of Linux Console? |
1,433,043,054,000 |
I have read somewhere that one can put a file on a linux system into memory, and loading it will be superfast.
How do I do this? How do I verify the file is loaded from memory?
|
On Linux, you probably already have a tmpfs filesystem that you can write to at /dev/shm.
$ >/dev/shm/foo
$ df /dev/shm/foo
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 224088 0 224088 0% /dev/shm
This may use swap, however. For a true ramdisk (that won't swap), you need to use the ramfs filesystem.
mount ramfs -t ramfs /mountpoint
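To double-check that a file really lives on the in-memory filesystem, df reports the filesystem type of any path you give it — here is a quick sketch (the file name is arbitrary; --output is GNU df, use df -T elsewhere):

```shell
#!/bin/sh
# Write a file to the tmpfs and confirm its filesystem type.
echo "hello" > /dev/shm/demo.txt
df --output=fstype /dev/shm/demo.txt | tail -n 1   # prints: tmpfs
rm /dev/shm/demo.txt
```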
| How to place / store a file in memory on linux? |
1,433,043,054,000 |
If a Unix (Posix) process receives a signal, a signal handler will run.
What will happen to it in a multithreaded process? Which thread receives the signal?
In my opinion, the signal API should be extended to handle that (i.e. the thread of the signal handler should be able to be determined), but hunting for infos on the net I only found year long flames on the linux kernel mailing list and on different forums. As I understood, Linus' concept differed from the Posix standard, and first some compat layer was built, but now the Linux follows the posix model.
What is the current state?
|
The entry in POSIX on "Signal Generation and Delivery" in "Rationale: System Interfaces General Information" says
Signals generated for a process are delivered to only one thread. Thus, if more than one thread is eligible to receive a signal, one has to be chosen. The choice of threads is left entirely up to the implementation both to allow the widest possible range of conforming implementations and to give implementations the freedom to deliver the signal to the "easiest possible" thread should there be differences in ease of delivery between different threads.
From the signal(7) manual on a Linux system:
A signal may be generated (and thus pending) for a process as a whole
(e.g., when sent using kill(2)) or for a specific thread (e.g., certain
signals, such as SIGSEGV and SIGFPE, generated as a consequence of executing a specific machine-language instruction are thread directed, as are
signals targeted at a specific thread using pthread_kill(3)). A process-directed signal may be delivered to any one of the threads that does not
currently have the signal blocked. If more than one of the threads has the
signal unblocked, then the kernel chooses an arbitrary thread to which to
deliver the signal.
And in pthreads(7):
Threads have distinct alternate signal stack settings. However, a new
thread's alternate signal stack settings are copied from the thread that
created it, so that the threads initially share an alternate signal
stack (fixed in kernel 2.6.16).
From the pthreads(3) manual on an OpenBSD system (as an example of an alternate approach):
Signals handlers are normally run on the stack of the currently executing
thread.
(I'm currently not aware of how this is handled when multiple threads are executing concurrently on a multi-processor machine)
The older LinuxThread implementation of POSIX threads only allowed distinct single threads to be targeted by signals. From pthreads(7) on a Linux system:
LinuxThreads does not support the notion of
process-directed signals: signals may be sent only to specific threads.
| What happens to a multithreaded Linux process if it gets a signal? |
1,433,043,054,000 |
On a standard filesystem, we have:
/usr/games
/usr/lib/games
/usr/local/games
/usr/share/games
/var/games
/var/lib/games
Is this a joke, or is there some history behind this? What is it for? Why do we have separate and specialized directories for something like games?
|
It's just a bit of historical cruft. A long time ago, games were an optional part of the system, and might be installed by different people, so they lived in /usr/games rather than /usr/bin. Data such as high scores came to live in /var/games. As time went by, people variously put variable game data in /var/lib/games/NAME or /var/games/NAME and static game data in /usr/lib/NAME or /usr/games/lib/NAME or /usr/games/NAME or /usr/lib/games/NAME (and the same with share instead of lib for architecture-independent data). Nowadays, there isn't any compelling reason to keep games separate, it's just a matter of tradition.
| games directory? |
1,433,043,054,000 |
On the man page, it just says:
-m Job control is enabled.
But what does this actually mean?
I came across this command in a SO question, I have the same problem as OP, which is "fabric cannot start tomcat". And set -m solved this. The OP explained a little, but I don't quite understand:
The issue was in background tasks as they will be killed when the
command ends.
The solution is simple: just add "set -m;" prefix before command.
|
Quoting the bash documentation (from man bash):
JOB CONTROL
Job control refers to the ability to selectively stop
(suspend) the execution of processes and continue (resume)
their execution at a later point. A user typically employs
this facility via an interactive interface supplied jointly
by the operating system kernel's terminal driver and bash.
So, quite simply said, having set -m (the default for
interactive shells) allows one to use built-ins such as fg and bg,
which would be disabled under set +m (the default for non-interactive shells).
It's not obvious to me what the connection is between job control and
killing background processes on exit, however, but I can confirm that
there is one: running set -m; (sleep 10 ; touch control-on) & will
create the file if one quits the shell right after typing that
command, but set +m; (sleep 10 ; touch control-off) & will not.
I think the answer lies in the rest of the documentation for set -m:
-m Monitor mode. [...] Background pro‐
cesses run in a separate process group and a line con‐
taining their exit status is printed upon their comple‐
tion.
This means that background jobs started under set +m are not actual
"background processes" ("Background processes are those whose process
group ID differs from the terminal's"): they share the same process
group ID as the shell that started them, rather than having their own
process group like proper background processes. This explains the
behavior observed when the shell quits before some of its background
jobs: if I understand correctly, when quitting, a signal is sent to
the processes in the same process group as the shell (thus killing
background jobs started under set +m), but not to those of other
process groups (thus leaving alone true background processes started
under set -m).
So, in your case, the startup.sh script presumably starts a
background job. When this script is run non-interactively, such as
over SSH as in the question you linked to, job control is disabled,
the "background" job shares the process group of the remote shell, and
is thus killed as soon that shell exits. Conversely, by enabling job
control in that shell, the background job acquires its own process
group, and isn't killed when its parent shell exits.
| Can someone explain in detail what "set -m" does? |
1,433,043,054,000 |
Is there a method of slowing down the copy process on Linux?
I have a big file, say 10GB, and I'd like to copy it to another directory, but I don't want to copy it with full speed. Let's say I'd like to copy it with the speed of 1mb/s, not faster. I'd like to use a standard Linux cp command.
Is this possible? (If yes, how?)
Edit: so, I'll add more context to what I'm trying to achieve.
I have a problem on the ArchLinux system when copying large files over USB (to a pendrive, usb disk, etc). After filling up the usb buffer cache, my system stops responding (even the mouse stops; it moves only sporadically). The copy operation is still ongoing, but it takes 100% of the box's resources. When the copy operation finishes, everything goes back to normal -- everything is perfectly responsive again.
Maybe it's a hardware error, I don't know, but I do know I have two machines with this problem (both are on ArchLinux, one is a desktop box, second is a laptop).
Easiest and fastest "solution" to this (I agree it's not the 'real' solution, just an ugly 'hack') would be to prevent this buffer from filling up by copying the file with an average write speed of the USB drive, for me that would be enough.
|
You can throttle a pipe with pv -qL (or cstream -t provides similar functionality)
tar -cf - . | pv -q -L 8192 | tar -C /your/usb -xvf -
-q removes stderr progress reporting.
The -L limit is in bytes.
More about the --rate-limit/-L flag from the man pv:
-L RATE, --rate-limit RATE
Limit the transfer to a maximum of RATE bytes per second.
A suffix of "k", "m", "g", or "t" can be added to denote
kilobytes (*1024), megabytes, and so on.
This answer originally pointed to throttle but that project is no longer available so has slipped out of some package systems.
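If neither pv nor cstream is installed, the same idea — capping the average transfer rate — can be roughly approximated in plain shell with dd and sleep. This is my own sketch (GNU dd assumed for status=none; slowcp is a made-up name), moving one 1 MiB chunk per second:

```shell
#!/bin/sh
# Copy SRC to DST at roughly 1 MiB/s by pausing between chunks.
# Usage: slowcp SRC DST
slowcp() {
    src=$1 dst=$2
    : > "$dst"
    i=0
    while :; do
        dd if="$src" of="$dst" bs=1M count=1 seek="$i" skip="$i" \
           conv=notrunc status=none
        # stop once the copy has caught up with the source size
        [ "$(wc -c < "$dst")" -ge "$(wc -c < "$src")" ] && break
        i=$((i + 1))
        sleep 1
    done
}
```

Unlike pv this only approximates the rate (each chunk is written at full speed), but it also keeps the amount of dirty cache generated per second bounded, which is what matters for the USB stall described in the question.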
| Make disk/disk copy slower |
1,433,043,054,000 |
I have been using an rsync script to synchronize data at one host with the data at another host. The data consists of numerous small files that add up to almost 1.2TB.
In order to sync those files, I have been using rsync command as follows:
rsync -avzm --stats --human-readable --include-from proj.lst /data/projects REMOTEHOST:/data/
The contents of proj.lst are as follows:
+ proj1
+ proj1/*
+ proj1/*/*
+ proj1/*/*/*.tar
+ proj1/*/*/*.pdf
+ proj2
+ proj2/*
+ proj2/*/*
+ proj2/*/*/*.tar
+ proj2/*/*/*.pdf
...
...
...
- *
As a test, I picked up two of those projects (8.5GB of data) and executed the command above. Being a sequential process, it took 14 minutes and 58 seconds to complete. So, for 1.2TB of data, it would take several hours.
If I could run multiple rsync processes in parallel (using &, xargs or parallel), it would save me time.
I tried with below command with parallel (after cding to the source directory) and it took 12 minutes 37 seconds to execute:
parallel --will-cite -j 5 rsync -avzm --stats --human-readable {} REMOTEHOST:/data/ ::: .
This should have taken about a fifth of the time, but it didn't. I think I'm going wrong somewhere.
How can I run multiple rsync processes in order to reduce the execution time?
|
Following steps did the job for me:
Run rsync --dry-run first in order to get the list of files that would be affected.
$ rsync -avzm --stats --safe-links --ignore-existing --dry-run \
--human-readable /data/projects REMOTE-HOST:/data/ > /tmp/transfer.log
I fed the output of cat transfer.log to parallel in order to run 5 rsyncs in parallel, as follows:
$ cat /tmp/transfer.log | \
parallel --will-cite -j 5 rsync -avzm --relative \
--stats --safe-links --ignore-existing \
--human-readable {} REMOTE-HOST:/data/ > result.log
Here, the --relative option (link) ensures that the directory structure for the affected files, at the source and destination, remains the same (inside the /data/ directory), so the command must be run in the source folder (in this example, /data/projects).
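As an aside, the same fan-out can be done with xargs -P when GNU parallel isn't available. The sketch below uses echo as a stand-in for the rsync invocation so the pattern is visible (the file names are made up; REMOTE-HOST is the placeholder from above):

```shell
#!/bin/sh
# Run up to 5 transfers at a time; replace 'echo' with the real rsync.
printf '%s\n' proj1/a/b/x.tar proj1/a/b/y.pdf proj2/c/d/z.tar |
    xargs -P 5 -I{} echo rsync -avzm --relative {} REMOTE-HOST:/data/
```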
| Parallelise rsync using GNU Parallel |
1,433,043,054,000 |
Wanting to play around with Trusted Platform Module stuff, I installed TrouSerS and tried to start tcsd, but I got this error:
TCSD TDDL ERROR: Could not find a device to open!
However, my kernel has multiple TPM modules loaded:
# lsmod | grep tpm
tpm_crb 16384 0
tpm_tis 16384 0
tpm_tis_core 20480 1 tpm_tis
tpm 40960 3 tpm_tis,tpm_crb,tpm_tis_core
So, how do I determine if my computer is lacking TPM vs TrouSerS having a bug?
Neither dmidecode nor cpuid output anything about "tpm" or "trust". Looking in /var/log/messages, on the one hand I see rngd: /dev/tpm0: No such file or directory, but on the other hand I see kernel: Initialise system trusted keyrings and according to this kernel doc trusted keys use TPM.
EDIT: My computer's BIOS setup menus mention nothing about TPM.
Also, looking at /proc/keys:
# cat /proc/keys
******** I--Q--- 1 perm 1f3f0000 0 65534 keyring _uid_ses.0: 1
******** I--Q--- 7 perm 3f030000 0 0 keyring _ses: 1
******** I--Q--- 3 perm 1f3f0000 0 65534 keyring _uid.0: empty
******** I------ 2 perm 1f0b0000 0 0 keyring .builtin_trusted_keys: 1
******** I------ 1 perm 1f0b0000 0 0 keyring .system_blacklist_keyring: empty
******** I------ 1 perm 1f0f0000 0 0 keyring .secondary_trusted_keys: 1
******** I------ 1 perm 1f030000 0 0 asymmetri Fedora kernel signing key: 34ae686b57a59c0bf2b8c27b98287634b0f81bf8: X509.rsa b0f81bf8 []
|
TPMs don't necessarily appear in the ACPI tables, but the modules do print a message when they find a supported module; for example
[ 134.026892] tpm_tis 00:08: 1.2 TPM (device-id 0xB, rev-id 16)
So dmesg | grep -i tpm is a good indicator.
The definitive indicator is your firmware's setup tool: TPMs involve ownership procedures which are managed from the firmware setup. If your setup doesn't mention anything TPM-related then you don't have a TPM.
TPMs were initially found in servers and business laptops (and ChromeBooks, as explained by icarus), and were rare in desktops or "non-business" laptops; that’s changed over the last few years, and Windows 11 requires a TPM now. Anything supporting Intel TXT has a TPM.
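For convenience, those checks can be wrapped in a small script. This is my own sketch; /dev/tpm0, /dev/tpmrm0 and /sys/class/tpm are the usual paths the kernel exposes, but the firmware setup menu remains the definitive test:

```shell
#!/bin/sh
# Heuristic: a TPM the kernel found shows up as a device node and/or
# a sysfs entry; dmesg usually carries a line about it as well.
has_tpm() {
    [ -e /dev/tpm0 ] || [ -e /dev/tpmrm0 ] || [ -d /sys/class/tpm/tpm0 ]
}
if has_tpm; then
    echo "TPM device present"
    dmesg 2>/dev/null | grep -i tpm || true
else
    echo "no TPM device exposed by the kernel"
fi
```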
| How to determine if computer has TPM (Trusted Platform Module) available |
1,433,043,054,000 |
Linux does not (yet) follow the POSIX.1 standard which says that a renice on a process affects "all system scope threads in the process", because according to the pthreads(7) doc "threads do not share a common nice value".
However, sometimes, it can be convenient to renice "everything" related to a given process (one example would be Apache child processes and all their threads). So,
how can I renice all threads belonging to a given process ?
how can I renice all child processes belonging to a given process ?
I am looking for a fairly easy solution.
I know that process groups can sometimes be helpful, however, they do not always match what I want to do: they can include a broader or different set of processes.
Using a cgroup managed by systemd might also be helpful, but even if I am interested to hear about it, I mostly looking for a "standard" solution.
EDIT: also, man (7) pthreads says "all of the threads in a process are placed in the same thread group; all members of a thread group share the same PID". So, is it even possible to renice something which doesn't have its own PID?
|
Finding all PIDs to renice recursively
We need to get the PIDs of all processes ("normal" or "thread") which are descendant (children or in the thread group) of the to-be-niced process. This ought to be recursive (considering children's children).
Anton Leontiev answer's gives the hint to do so: all folder names in /proc/$PID/task/ are threads' PID containing a children file listing potential children processes.
However, it lacks recursivity, so here is a quick & dirty shell script to find them:
#!/bin/sh
[ "$#" -eq 1 -a -d "/proc/$1/task" ] || exit 1
PID_LIST=
findpids() {
for pid in /proc/$1/task/* ; do
pid="$(basename "$pid")"
PID_LIST="$PID_LIST$pid "
for cpid in $(cat /proc/$1/task/$pid/children) ; do
findpids $cpid
done
done
}
findpids $1
echo $PID_LIST
If process PID 1234 is the one you want to recursively nice, now you can do:
renice -n 15 -p $(/path/to/findchildren.sh 1234)
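If you only care about the threads of a single process (not its children), ps -L can list the thread IDs directly — each thread has its own PID, as the side notes explain — and feed them to renice. A quick sketch (renice_threads is a name made up here):

```shell
#!/bin/sh
# Renice every thread of one process. Usage: renice_threads NICE PID
renice_threads() {
    ps -L -o tid= -p "$2" | xargs -r renice -n "$1" -p
}
# e.g.: renice_threads 15 1234
```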
Side notes
Nice value or CPU shares ?
Please note that nowadays, nice values may not be so relevant "system-wide", because of automatic task grouping, especially when using systemd. Please see this answer for more details.
Difference between threads and processes
Note: this answer explains Linux threads precisely.
In short: the kernel only handles "runnable entities", that is, something which can be run and scheduled. Kernel-wise, these entities are called processes. A thread is just a kind of process that shares (at least) memory space and signal handlers with another one. Every such process has a system-wide unique identifier: the PID (Process ID).
As a result, you can renice each "thread" individually because they do have their own PID1.
1 See this answer for more information about PID (ProcessID) and TID difference (ThreadID).
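As a quick sanity check after renicing, the per-thread nice values can be read straight from /proc — field 19 of /proc/PID/task/TID/stat is the nice value. A minimal sketch (assuming the process name in field 2 contains no spaces, which would shift awk's field numbering):

```sh
# Print TID and nice value for every thread of the current shell.
for t in /proc/$$/task/*; do
    printf '%s nice=%s\n' "$(basename "$t")" "$(awk '{print $19}' "$t/stat")"
done
```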
| How to renice all threads (and children) of one process on Linux? |
1,433,043,054,000 |
The "command" column gets truncated by the width of the screen and I am unable to see the last part of it.
I have tried to reduce the font size so I can see a longer part of the command line but it still won't do.
|
top -bcn1 -w512
The elegant solution is to use the option -w [number]. According to the man page, the maximum width is 512 characters, so you will need a different solution for anything exceeding that. Presumably you also want to see the full length of the commands, so use the -c option. We need to run top in "batch mode", -b, or it will continue to cut off the commands with a "+". Batch mode kind of makes a mess because it prints out all the jobs every second, so we can use the -n1 option to print out just one instance.
See the man top page for more information.
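If even 512 columns are not enough, the untruncated command line that top displays ultimately comes from /proc/PID/cmdline, which you can always read directly (the arguments are NUL-separated, hence the tr):

```sh
# Full command line of a process with no width limit (here: the current shell).
tr '\0' ' ' < /proc/$$/cmdline; echo
```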
| How do I get "top" command to wrap its output? |
1,433,043,054,000 |
To know when was a process started, my first guess was to check the time when /proc/<pid>/cmdline was written/modified the last time.
ps also shows a START field. I thought both of these sources would be the same. Sometimes they are not the same. How could that be?
|
On Linux at least, you can also do:
ps -o lstart= -p the-pid
to have a more useful start time.
Note however that it's the time the process was started, not necessarily the time the command that it is currently executing was invoked. Processes can (and generally do) run more than one command in their lifetime. And commands sometimes spawn other processes.
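Under the hood, the start time ps reports comes from field 22 of /proc/PID/stat — clock ticks since boot — which can be combined with /proc/uptime by hand. A hedged sketch (assumes getconf CLK_TCK is available and the process name in field 2 contains no spaces):

```sh
pid=1                                               # e.g. PID 1, started near boot
start_ticks=$(awk '{print $22}' "/proc/$pid/stat")  # ticks since boot at start
hz=$(getconf CLK_TCK)                               # ticks per second
up=$(awk '{print $1}' /proc/uptime)                 # seconds since boot
awk -v s="$start_ticks" -v hz="$hz" -v up="$up" \
    'BEGIN { printf "started %.1f s after boot, %.1f s ago\n", s/hz, up - s/hz }'
```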
The mtimes of the files in /proc on Linux (at least) are generally the date when those files were instantiated, which would be the first time something tried to access them or list the directory content.
For instance:
$ sh -c 'date +%T.%N; sleep 3; echo /proc/"$$"/xx*; sleep 3; stat -c %y "/proc/$$/cmdline"'
13:39:14.791809617
/proc/31407/xx*
2013-01-22 13:39:17.790278538 +0000
Expanding /proc/$$/xx* caused the shell to read the content of /proc/$$ which caused the cmdline file to be instantiated.
See also: Timestamp of socket in /proc//fd
| When was a process started |
1,433,043,054,000 |
A drive is beginning to fail and I only know the device by its /dev/sdb device file designation. What are the ways that I can use to correlate that device file to an actual hardware device to know which drive to physically replace?
Bonus: What if I don't have /dev/disk/ and its sub directories on this installation? (Which, sadly, I don't)
|
You can look in /sys/block:
-bash-3.2$ ls -ld /sys/block/sd*/device
lrwxrwxrwx 1 root root 0 Jun 8 21:09 /sys/block/sda/device -> ../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0
lrwxrwxrwx 1 root root 0 Jun 8 21:10 /sys/block/sdb/device -> ../../devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0
lrwxrwxrwx 1 root root 0 Jun 8 21:10 /sys/block/sdc/device -> ../../devices/pci0000:00/0000:00:1f.2/host2/target2:0:0/2:0:0:0
lrwxrwxrwx 1 root root 0 Jun 8 21:10 /sys/block/sdd/device -> ../../devices/pci0000:00/0000:00:1f.2/host3/target3:0:0/3:0:0:0
Or if you don't have /sys, you can look at /proc/scsi/scsi:
-bash-3.2$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
Vendor: PepperC Model: Virtual Disc 1 Rev: 0.01
Type: CD-ROM ANSI SCSI revision: 03
| How do I correlate /dev/sd devices to the hardware they represent? |
1,433,043,054,000 |
I had trouble with the screen brightness control on my laptop and I fixed it by adding the acpi_osi=linux and acpi_backlight=vendor parameters to the file grub.cfg.
I'd like to know what these parameters mean and why they work.
|
The kernel parameters are documented at kernel.org.
To understand what acpi_osi does, you roughly need to know how ACPI works.
ACPI consists of so-called tables that the BIOS loads into RAM before the operating system starts. Some of them simply contain information about essential devices on the mainboard in a fixed format, but some like the DSDT table contain AML code. This code is executed by the operating system and provides the OS with a tree structure describing many devices on the mainboard and callable functions that are executed by the OS when e.g. power saving is enabled. The AML code can ask the OS which OS it is by calling the _OSI function. This is often used by vendors to make workarounds e.g. around bugs in some Windows versions.
As many hardware vendors only test their products with the (at that time) latest version of Windows, the "regular" code paths without the workarounds are often buggy. Because of this Linux usually answers yes when asked if it's Windows. Linux also used to answer yes when asked if it's "Linux", but that caused BIOS vendors to work around bugs or missing functionality in the (at that time) latest Linux kernel version instead of opening bug reports or providing patches. When these bugs were fixed the workarounds caused unnecessary performance penalities and other problems for all later Linux versions.
acpi_osi=Linux makes Linux answer yes again when asked if it's "Linux" by the ACPI code, thus allowing the ACPI code to enable workarounds for Linux and/or disable workarounds for Windows.
acpi_backlight=vendor changes the order in which the ACPI drivers for backlights are checked. Usually Linux will use the generic video driver, when the ACPI DSDT provides a backlight device claiming standard compatibility and will only check other vendor specific drivers if such a device is not found. acpi_backlight=vendor reverses this order, so that the vendor specific drivers are tried first.
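To make such parameters survive kernel updates, the usual approach on GRUB 2 systems (paths and update commands vary by distribution, so treat this as a sketch) is to put them in /etc/default/grub rather than editing grub.cfg by hand:

```sh
# /etc/default/grub -- then regenerate grub.cfg, e.g. update-grub on
# Debian/Ubuntu or grub2-mkconfig -o /boot/grub2/grub.cfg on Fedora/CentOS
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=Linux acpi_backlight=vendor"
```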
| What do the kernel parameters acpi_osi=linux and acpi_backlight=vendor do? |
1,433,043,054,000 |
The example I have is Minecraft. When running Bukkit on Linux I can remove or update the .jar files in the /plugins folder and simply run the 'reload' command.
In Windows, I have to take the whole server process down because it will complain that the .jar file is currently in use when I try to remove or replace it.
This is awesome to me, but why does it happen?
What is Linux doing differently here?
|
Linux deletes a file completely differently than the way Windows does. First, a brief explanation on how files are managed in the *unix native file systems.
The file is kept on the disk in a multilevel structure called an i-node. Each i-node has a unique number within a single filesystem. The i-node structure keeps various information about a file, like its size, the data blocks allocated for the file, etc., but for the sake of this answer the most important data element is the link counter. Directories are files that keep records about other files. Each record has the i-node number it refers to, the file name length and the file name itself. This scheme allows one to have 'pointers', i.e. 'links', to the same file in different places with different names. The link counter of the i-node keeps the number of links that refer to this i-node.
What happens when some process opens the file? First the open() function searches for the file record. Then it checks if the in-memory i-node structure for this i-node already exists. This may happen if some application already had this file opened. Otherwise, the system initializes a new in-memory i-node structure. Then the system increases the in-memory i-node structure open counter and returns to the application its file descriptor.
The Linux library call to delete a file is called unlink. This function removes the file record from a directory and decrements the i-node's link counter. If the system finds that an in-memory i-node structure exists and its open counter is not zero, then this call returns control to the application. Otherwise it checks whether the link counter became zero, and if it did, the system frees all blocks allocated for the i-node and the i-node itself, and returns to the application.
What happens when an application closes a file? The function close() decrements the open counter and checks its value. If the value is non-zero, the function returns to the application. Otherwise it checks whether the i-node link counter is zero. If it is zero, it frees all blocks of the file and the i-node before returning to the application.
This mechanism allows you to "delete" a file while it is open. At the same time the application that opened the file still has access to the data in it. So the JRE, in your example, still keeps its version of the file open while there is another, updated version on the disk.
Moreover, this feature allows you to update glibc (libc) - the core library of all applications - on your system without interrupting its normal operation.
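The mechanism is easy to demonstrate from a shell: unlink a file while a descriptor to it is still open, and the data stays readable until the last descriptor is closed.

```sh
tmp=$(mktemp)
echo 'still here' > "$tmp"
exec 3< "$tmp"     # open the file and keep fd 3 around
rm "$tmp"          # removes the directory entry; the link counter drops to 0
cat <&3            # the open descriptor still reads the data: prints "still here"
exec 3<&-          # closing the last descriptor finally frees the blocks
```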
Windows
20 years ago we did not know any other file system than FAT under DOS. This file system has a different structure and management principles. These principles do not allow you to delete a file while it is open, so DOS and, later, Windows have to deny any delete request on a file that is open. NTFS could probably allow the same behavior as *nix file systems, but Microsoft decided to keep the habitual file-deletion behavior.
This is the answer. Not short, but now you have the idea.
Edit:
A good read on sources of Win32 mess: https://web.archive.org/web/20190218083407/https://blogs.msdn.microsoft.com/oldnewthing/20040607-00/?p=38993
Credits to @Jon
| What is Linux doing differently that allows me to remove/replace files where Windows would complain the file is currently in use? |
1,433,043,054,000 |
After installing CentOS, I see several lines like
/dev/mapper/centos_jackpc--11-swap and
/dev/mapper/centos_jackpc--11-root when I issue fdisk -l.
What is the purpose of these? And why do they not show up for Ubuntu?
The full fdisk -l is shown here:
Disk /dev/sda: 250.0 GB, 250000000000 bytes
255 heads, 63 sectors/track, 30394 cylinders, total 488281250 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e3a37
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 287754239 143364096 8e Linux LVM
/dev/sda3 287756286 434180095 73211905 5 Extended
/dev/sda5 287756288 434180095 73211904 83 Linux
Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x6c03e282
Device Boot Start End Blocks Id System
/dev/sdb1 63 2147504935 1073752436+ 83 Linux
Partition 1 does not start on physical sector boundary.
Disk /dev/mapper/rhel_jackpc-root: 104.9 GB, 104857600000 bytes
255 heads, 63 sectors/track, 12748 cylinders, total 204800000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/rhel_jackpc-root doesn't contain a valid partition table
Disk /dev/mapper/rhel_jackpc-swap: 41.9 GB, 41943040000 bytes
255 heads, 63 sectors/track, 5099 cylinders, total 81920000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/rhel_jackpc-swap doesn't contain a valid partition table
|
The entries in /dev/mapper are LVM logical volumes. You can think of these as Linux's native partition type. Linux can also use other partition types, such as PC (MBR or GPT) partitions.
Your disk is divided into MBR partitions, one of which (/dev/sda2) is an LVM physical volume. The LVM physical volume is the single constituent of the volume group rhel_jackpc, which contains two logical volumes: root (which is your CentOS system partition) and swap (which is your CentOS swap partition).
Ubuntu is installed directly on an MBR partition, presumably /dev/sda5.
fdisk -l lists information about all the block devices that could contain MBR partitions (or GPT partitions in recent versions of fdisk). It's technically possible, albeit highly unusual and rather pointless, to have PC partitions inside an LVM logical volume, so fdisk -l looks there and reports that it doesn't find a partition table. This is normal.
On Ubuntu, you wouldn't see anything about the LVM logical volume if the volume group is not activated. Since Ubuntu isn't using any of the volumes, it wouldn't activate the volume group.
Whether to use PC partitions or LVM volumes for a Linux installation is often merely a matter of convenience. There are things you can't do with PC partitions, such as spread them on multiple disks, or simply resize them and move them around easily, or create snapshots with them, so an installation over LVM is a lot more flexible. But if you don't need this flexibility, you can install directly onto PC partitions.
| /dev/mapper in fdisk |
1,433,043,054,000 |
In the magic sysrq key combinations, there is the combination alt+sysrq+r which, according to wikipedia, does the following:
Switch the keyboard from raw mode, the mode used by programs such as
X11 and svgalib, to XLATE mode
What is raw mode? and what is XLATE mode?
Can I switch back to raw mode once I have switched to XLATE mode?
How can I tell which mode my keyboard is in?
|
When you press a key on your keyboard, it sends a numeric code to the computer, called a scan code. The scan code tells the computer which key was pressed; for example, on a typical US keyboard, the A key sends the scan code 30 when you press it (and 158 when you release it). The keyboard driver reports these codes directly to applications when the keyboard is in raw mode (“raw” means unprocessed, straight off-the-keyboard). A few programs use raw mode and do their own keyboard processing; the X server is the most prominent one.
Most programs expect that when you press the A key, the program reads the character a (ASCII 97), and that when you press Shift+A the program reads A (ASCII 65), and when you press Ctrl+A the program reads the Ctrl+A character (ASCII 1). Keys that don't have associated characters send escape sequences, e.g. \e[A for Left, where \e is the ESC character (ASCII 27). The keyboard driver performs this translation when the keyboard is in ASCII mode, also called XLATE mode (short for “translate”). XLATE mode lets applications do character input, at the cost of not having access to such nuances as “Left Shift key pressed” or Ctrl+Shift+A as distinct from Ctrl+A.
The kbd_mode command lets you switch between modes, and shows the current mode if you invoke it without any argument.
The magic SysRq key combinations are meant to recover from various nasty situations. One of the key combinations is Alt+SysRq+K to kill all programs on the current virtual console; if that program put the keyboard in raw mode, then you won't be able to type at the login prompt (which will often appear, depending on your system configuration). Pressing Alt+SysRq+R restores the usual (outside X) ASCII mode where you can type characters.
| What is raw keyboard mode? |
1,433,043,054,000 |
Is it possible to call a function which is declared below in bash?
Example
if [ "$input" = "yes" ]; then
YES_FUNCTION
elif [ "$input" = "no" ]; then
NO_FUNCTION
else
exit 0;
fi
YES_FUNCTION()
{
.....
.....
}
NO_FUNCTION()
{
.....
.....
}
|
Like others have said, you can't do that.
But if you want to arrange the code into one file so that the main program is at the top of the file, and other functions are defined below, you can do it by having a separate main function.
E.g.
#!/bin/sh
main() {
if [ "$1" = yes ]; then
do_task_this
else
do_task_that
fi
}
do_task_this() {
...
}
do_task_that() {
...
}
main "$@"; exit
When we call main at the end of file, all functions are already defined. Explicitly passing "$@" to main is required to make the command line arguments of the script visible in the function.
The explicit exit on the same line as the call to main is not mandatory, but can be used to prevent a running script from getting messed up if the script file is modified. Without it, the shell would try to continue reading commands from the script file after main returns. (see How to read the whole shell script before executing it?)
| call function declared below |
1,433,043,054,000 |
Rather than using mount | grep, I'd like to use mount -l -t bind, but that doesn't work, and -t none shows all mounts.
|
Bind mounts are not a filesystem type, nor a parameter of a mounted filesystem; they're parameters of a mount operation. As far as I know, the following sequences of commands lead to essentially identical system states as far as the kernel is concerned:
mount /dev/foo /mnt/one; mount --bind /mnt/one /mnt/two
mount /dev/foo /mnt/two; mount --bind /mnt/two /mnt/one
So the only way to remember what mounts were bind mounts is the log of mount commands left in /etc/mtab. A bind mount operation is indicated by the bind mount option (which causes the filesystem type to be ignored). But mount has no option to list only filesystems mounted with a particular set of options. Therefore you need to do your own filtering.
mount | grep -E '[,(]bind[,)]'
</etc/mtab awk '$4 ~ /(^|,)bind(,|$)/'
Note that /etc/mtab is only useful here if it's a text file maintained by mount. Some distributions set up /etc/mtab as a symbolic link to /proc/mounts instead; /proc/mounts is mostly equivalent to /etc/mtab but does have a few differences, one of which is not tracking bind mounts.
One piece of information that is retained by the kernel, but not shown in /proc/mounts, is when a mount point only shows a part of the directory tree on the mounted filesystem. In practice this mostly happens with bind mounts:
mount --bind /mnt/one/sub /mnt/partial
In /proc/mounts, the entries for /mnt/one and /mnt/partial have the same device, the same filesystem type and the same options. The information that /mnt/partial only shows the part of the filesystem that's rooted at /sub is visible in the per-process mount point information in /proc/$pid/mountinfo (column 4). Entries there look like this:
12 34 56:78 / /mnt/one rw,relatime - ext3 /dev/foo rw,errors=remount-ro,data=ordered
12 34 56:78 /sub /mnt/partial rw,relatime - ext3 /dev/foo rw,errors=remount-ro,data=ordered
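Column 4 can be inspected directly with awk; entries whose root is not / are the "partial" views described above (output is empty when no such mounts exist):

```sh
# Mount points exposing only a subtree of their filesystem (root != "/").
awk '$4 != "/" { print "subtree", $4, "mounted on", $5 }' /proc/self/mountinfo
```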
| List only bind mounts |
1,433,043,054,000 |
I have a disk with two partitions: sda1 and sda2. I would like change the number of sda1 to sda2 and sda2 to sda1.
It's possible, but I don't remember the procedure. I.e., my first partition will be sda2 and the second sda1, so I need to specify a manual order, not an automatic ordering like in fdisk -> x -> f.
How can I change the order? Links to manuals or tutorials are also welcome.
Thanks.
The reason: I have an application that needs to read data from sda1 but the data is in sda2. Changing the partition table is the fastest fix for this issue. The system isn't critical but I don't want to keep the system halted for too much time.
Update: the OpenBSD version of fdisk includes this functionality.
|
FYI, it is a bad idea and you can lose everything. If you still want to do it, here are the steps:
Don't do it. If this doesn't help, then:
Use the sfdisk tool:
First, make a backup of the partition table using
sfdisk -d /dev/sda > sda.out
Then go for it:
sfdisk /dev/sda -O sda-partition-sectors.save
You will see something like this
Checking that no-one is using this disk right now ...
OK
Disk /dev/sda: 1018 cylinders, 124 heads, 62 sectors/track
Old situation:
Units = cylinders of 3936256 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start End #cyls #blocks Id System
/dev/sda1 0+ 5 6- 23063+ 83 Linux
/dev/sda2 6 1017 1012 3890128 83 Linux
/dev/sda3 0 - 0 0 0 Empty
/dev/sda4 0 - 0 0 0 Empty
Input in the following format; absent fields get a default value.
<start> <size> <type [E,S,L,X,hex]> <bootable [-,*]> <c,h,s> <c,h,s>
Usually you only need to specify <start> and <size> (and perhaps <type>).
/dev/sda1 :
Now it is asking you to give the new details for the 'sda1' partition. So you have to give the numbers of sda2 here. So, I put '6 1012' here and press Enter:
/dev/sda1 :6 1012
/dev/sda1 6 1017 1012 3890128 83 Linux
/dev/sda2 :
Now check if the numbers printed after you pressed Enter are exactly the same as those printed earlier for sda2. If it is okay, continue with giving the new numbers for sda2:
/dev/sda2 :0
/dev/sda2 0+ 5 6- 23063+ 83 Linux
/dev/sda3 :
This time it was enough to enter "0" in my case - but you have to make sure the numbers aren't messed up in yours.
Next, continue with the other partitions in the same manner. If you already reached the end of the disk, pressing Enter is enough. Finally, check again that all the numbers are okay and save the partition table (or not). If you messed something up, have a look at man sfdisk and the descriptions of '-d', '-O' and '-I' options.
Notice also, that once you've made the crazy changes, you might need to run 'sync' so that the partitions are re-read before you try to mount them.
| Change the number of the partition from sda1 to sda2 |
1,433,043,054,000 |
I have been using tcsh for a long time now. But whenever I am searching for something, I often find that the methods specified are bash specific. Even the syntax for the shell scripts is different for the two.
From what I have experienced searching and learning on the internet, bash seems to be the more common shell used. Even the number of questions on this site tagged bash are way more (five times more currently) than the number of questions tagged tcsh.
So, I am wondering whether I should switch to bash. What do you think?
Why should I stick to tcsh OR why should I move over to bash?
|
After learning bash I find that tcsh is a bit of a step backwards. For instance what I could easily do in bash I'm finding it difficult to do in tcsh. My question on tcsh. The Internet support and documentation is also much better for bash and very limited for tcsh. The number of O'Reilly books on bash are great but I have found nothing similar for tcsh.
| Which shell should I use - tcsh vs bash? [closed] |
1,433,043,054,000 |
How frequently is the proc file system updated on Linux? Is it 20 milliseconds (time quantum)?
|
The information that you read from the proc filesystem is not stored on any media (not even in RAM), so there is nothing to update.
The purpose of the proc file system is to allow userspace programs to obtain or set kernel data using the simple and familiar file system semantics (open, close, read, write, lseek), even though the data that is read or written doesn't reside on any media. This design was deemed better (e.g. human-readable and easily scriptable) for getting and setting data whose format could not be specified in advance than implementing something such as ASN.1-encoded OIDs, which would also have worked fine.
The data that you see when you read from the proc filesystem is generated on-the-fly when you do a read from the beginning of a file. That is, doing the read causes the data to be generated by a kernel callback function that is specific to the file you are reading. Doing an lseek to the beginning of the file and reading again causes another call to the callback that generates the data again. Similarly, when you write to a writable file in the proc filesystem, a callback function is called that parses the input and sets kernel variables. The input data in its raw form isn't stored.
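A simple way to observe this on-the-fly generation: read the same /proc file twice and the contents differ, because each read re-runs the kernel callback. A sketch using /proc/uptime:

```sh
read first _  < /proc/uptime   # seconds since boot, generated at this read
sleep 1
read second _ < /proc/uptime   # generated again -- a larger value
echo "first read: $first, second read: $second"
```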
The above is just a slightly more verbose way of saying what Hauke Laging states so succinctly. I suggest that you accept his answer.
| How frequently is the proc file system updated on Linux? |
1,433,043,054,000 |
For example, I want to give my colleagues write access to certain directory. Let's assume that subdirectories in it had access rights 775, files 664, and also there were some executable files in the dir - 775.
Now I want to add write permissions. With chmod, I could try something like
chmod o+w -R mydir/
But that's not cool, since I don't want to make the dir world-writable - I want give access only to certain users, so I want to use ACL. But is there an easy way to set those permissions? As I see it, I need to tackle at least three cases (dirs, files, executable files) separately:
find -type d -exec setfacl -m u:colleague:rwx {} \;
find -type f -executable -exec setfacl -m u:colleague:rwx {} \;
find -type f \! -executable -exec setfacl -m u:colleague:rw {} \;
It seems quite a lot of code lines for such a simple task. Is there a better way?
|
setfacl has a recursive option (-R) just like chmod:
-R, --recursive
Apply operations to all files and directories recursively. This
option cannot be mixed with `--restore'.
it also allows for the use of the capital-x X permission, which means:
execute only if the file is a directory or already has
execute permission for some user (X)
so doing the following should work:
setfacl -R -m u:colleague:rwX .
(all quotes are from man setfacl for acl-2.2.52 as shipped with Debian)
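Plain chmod understands the same capital X, which makes it easy to see the semantics on a throwaway directory before applying ACLs (a hypothetical demo tree):

```sh
mkdir -p demo/sub
touch demo/file
chmod 700 demo/sub
chmod 600 demo/file
chmod -R g+rwX demo         # X grants execute only to the dir, not the plain file
ls -ld demo/sub demo/file   # sub is now rwxrwx---, file only rw-rw----
```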
| How do I set permissions recursively on a dir (with ACL enabled)? |
1,433,043,054,000 |
My problem is that with
lsof -p pid
I can find out the list of open files of a process whose process id is pid. But is there a way to find out the file offset of each accessed file?
Please give me some suggestions.
|
On linux, you can find the position of the file descriptor number N of process PID in /proc/$PID/fdinfo/$N. Example:
$ cat /proc/687705/fdinfo/36
pos: 26088
flags: 0100001
The same file can be opened several times with different positions using several file descriptors, so you'll have to choose the relevant one in the case there are more than one. Use:
$ readlink /proc/$PID/fd/$N
to know what is the file to which the corresponding file descriptor is attached (it might not be a file, in this case the symlink is dangling).
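To watch pos move, open a file on a spare descriptor in the current shell, consume a few bytes through it, and re-read fdinfo (dd is used because it reads exactly the requested number of bytes):

```sh
tmp=$(mktemp)
printf 'abcdefgh' > "$tmp"
exec 3< "$tmp"                          # fd 3 of the current shell ($$)
dd bs=1 count=3 <&3 > /dev/null 2>&1    # consume exactly 3 bytes through fd 3
grep '^pos:' "/proc/$$/fdinfo/3"        # now reports pos: 3
exec 3<&-
rm -f "$tmp"
```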
| How to find out the file offset of an opened file? |
1,433,043,054,000 |
Let’s say I have 50 USB flash drives.
I assume they get to be /dev/sda to /dev/sdz. What comes after /dev/sdz?
|
It will go to /dev/sdaa, /dev/sdab, /dev/sdac, etc.
Here is a comment from the source code:
/**
* sd_format_disk_name - format disk name
* @prefix: name prefix - ie. "sd" for SCSI disks
* @index: index of the disk to format name for
* @buf: output buffer
* @buflen: length of the output buffer
*
* SCSI disk names starts at sda. The 26th device is sdz and the
* 27th is sdaa. The last one for two lettered suffix is sdzz
* which is followed by sdaaa.
*
* This is basically 26 base counting with one extra 'nil' entry
* at the beginning from the second digit on and can be
* determined using similar method as 26 base conversion with the
* index shifted -1 after each digit is computed.
*
* CONTEXT:
* Don't care.
*
* RETURNS:
* 0 on success, -errno on failure.
*/
https://github.com/torvalds/linux/blob/master/drivers/scsi/sd.c#L3303-L3324
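The counting scheme described in that comment fits in a few lines of shell — a hypothetical sd_name helper (0-based index in, device name out), mirroring the algorithm rather than the kernel code itself:

```sh
sd_name() {
    i=$1 name=
    while [ "$i" -ge 0 ]; do
        # prepend the letter 'a' + (i mod 26), then apply the extra "nil" shift
        name=$(printf "\\$(printf '%03o' $((97 + i % 26)))")$name
        i=$((i / 26 - 1))
    done
    echo "sd$name"
}
sd_name 0     # sda
sd_name 25    # sdz
sd_name 26    # sdaa
sd_name 701   # sdzz
sd_name 702   # sdaaa
```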
| What happens when Linux goes out of letters for drives? |
1,433,043,054,000 |
On occasion process substitution will not work as expected. Here is an example:
Input:
gcc <(echo 'int main(){return 0;}')
Output:
/dev/fd/63: file not recognized: Illegal seek
collect2: error: ld returned 1 exit status
Input:
But it works as expected when used with a different command:
grep main <(echo 'int main(){return 0;}')
Output:
int main(){return 0;}
I have noticed similar failures with other commands (i.e. the command expecting the file from the process substitution can't use /dev/fd/63 or similar). This failure with gcc is just the most recent. Is there some general rule that I should be aware of to determine when process substitution will fail in this way and should not be used?
I am using this BASH version on Ubuntu 12.04 (I've also seen this in arch and debian):
GNU bash, version 4.3.11(1)-release (i686-pc-linux-gnu)
|
Process substitution results in a special file (like /dev/fd/63 in your example) that behaves like the read end of a named pipe. This file can be opened and read, but not written, not seeked.
Commands that treat their arguments as pure streams work while commands that expect to seek in files they are given (or write to them) won't work. The kind of command that will work is what is usually considered a filter: cat, grep, sed, gzip, awk, etc... An example of a command that won't work is an editor like vi or a file operation like mv.
gcc wants to be able to perform random access on its input files to detect what language they are written in. If you instead give gcc a hint about the input file's language, it's happy to stream the file:
gcc -x c <(echo 'int main(){return 0;}')
The simpler more straightforward form without process substitution also works:
echo 'int main(){return 0;}' | gcc -x c -
Note that this is not specific to bash. All shells that support process substitution behave the same way.
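The rule can be verified directly: the substituted path is backed by a pipe, so stream-oriented filters succeed while anything that needs to seek or reopen the file may fail. A small bash-only check:

```sh
bash -c '
    # The substituted argument is a FIFO-like object, not a regular file:
    [ -p <(true) ] && echo "it is a pipe"
    # A pure filter streams it without trouble (prints the match count, 1):
    grep -c main <(echo "int main(){return 0;}")
'
```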
| Why does BASH process substitution not work with some commands? |
1,433,043,054,000 |
I'm running a Ubuntu based web server (Apache, MySQL) on a 512MB VPS. This is more than sufficient for the website it is running (small forum).
As I wanted to add some protection against viruses I installed ClamAV and use it to scan uploaded files as part of the upload handling script (PHP).
I'm running the clamav-daemon service so the definitions don't have to be loaded every time a file is scanned. One downside to this practice seems to be the "huge" amount of memory used by the clamav-daemon service: >200 MB. This already resulted in the service being forced to stop and the uploads being rejected.
I can simply upgrade the memory of the VPS to 1024MB, but I want to know if there is a way to reduce the memory usage of ClamAV by e.g. not loading unwanted definitions.
|
ClamAV holds the search strings using the classic string (Boyer-Moore) and regular expression (Aho-Corasick) algorithms. Being algorithms from the 1970s, they are extremely memory efficient.
The problem is the huge number of virus signatures. This leads to the algorithms' datastructures growing quite large.
You can't send those datastructures to swap, as there are no parts of the algorithms' datastructures accessed less often than other parts. If you do force pages of them to swap disk, then they'll be referenced moments later and just swap straight back in. (Technically we say "the random access of the datastructure forces the entire datastructure to be in the process's working set of memory".)
The datastructures are needed if you are scanning from the command line or scanning from a daemon.
You can't use just a portion of the virus signatures, as you don't get to choose which viruses you will be sent, and thus can't tell which signatures you will need.
Here's the memory used on a 32-bit machine running Debian Wheezy and it's clamd.
# ps_mem.py
Private + Shared = RAM used Program
281.7 MiB + 422.5 KiB = 282.1 MiB clamd
Edit: I see someone suggests setting the resident set size. If this succeeds then having a resident set size less than the working set size will lead to the process thrashing to and from swap. This will lower the entire system performance substantially. In any case the Linux manual page for setrlimit(RLIMIT_RSS, ...) says that setting the resident set size is no longer supported and never had any effect on processes which chose not to call madvise(MADV_WILLNEED, ...).
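To get an intuition for why the signature count, rather than the algorithm, drives memory use, here is a small illustrative sketch (my own, not ClamAV code): a bare trie over a pattern set grows with the number of distinct prefixes, so millions of signatures inevitably need a large resident automaton.

```python
# Illustrative only: count the nodes of a trie holding all patterns.
# A real Aho-Corasick automaton adds failure links on top of this trie,
# so its memory footprint grows at least this fast.

def trie_node_count(patterns):
    """Return the number of trie nodes needed to store all patterns."""
    root = {}
    nodes = 1  # count the root node
    for p in patterns:
        cur = root
        for ch in p:
            if ch not in cur:
                cur[ch] = {}
                nodes += 1
            cur = cur[ch]
    return nodes

few = ["virus", "viral", "worm"]
many = ["sig%06d" % i for i in range(10000)]

print(trie_node_count(few))   # a handful of nodes
print(trie_node_count(many))  # grows with the signature count
```

The same effect, scaled to millions of ClamAV signatures, explains the >200 MB resident set.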
| How to reduce ClamAV memory usage? |
1,320,067,648,000 |
Why isn't there a unified package manager that acts as an interface between the end-user and the underlying low-level package manager (apt, yast, pacman, etc.)?
Is it hard to do and therefore not practical, or is there a genuine obstacle making it impossible to do?
|
First of all, there is. The problem is not that there is no unified package manager, the problem is there are ten of them – seriously.
Let's take my favorite: poldek. It's a user front end for package management that can run on several different distros and manage either rpm or deb packages. Poldek doesn't do the stuff rpm does (it leaves that to rpm) and just sends the right commands without the user having to figure out all that mess.
But the problems don't stop there. Everybody has a different idea of what a user front end is supposed to look like and how it should function and what options it should expose. So other people have written their own. Actually many of the package front end managers people use in common distros today are able to handle more than one backend.
In the end, however, the problem (or advantage) is people like things to function exactly the way they want, not in some meta-fashion that tries to satisfy everybody only to fail to really make anybody happy. This is the reason we have umpteen gazillion distros in the first place. It's the reason we have so many different Desktop Environments and Window Managers (and the fact those are actually different kinds of things at all).
There are still outstanding proposals for ways of writing universal packages or having a manager that understands them all or having an api for converting one to the other ... but in the end Unix is best when used according to its philosophy ... each tool does one thing and does it well.
Any time you have a tool that tries to do more than one thing, it ends up being not as good at one of them. For example, poldek sucks at handling deb package dependencies.
| Why isn't there a truly unified package manager for Linux? |
1,320,067,648,000 |
I know about lsmod, but how do I figure out which driver does what?
|
$ readlink /sys/class/net/wlan0/device/driver
../../../../bus/pci/drivers/ath5k
In other words, the /sys hierarchy for the device (/sys/class/net/$interface/device) contains a symbolic link to the /sys hierarchy for the driver. There you'll also find a symbolic link to the /sys hierarchy for the module, if applicable. This applies to most devices, not just wireless interfaces.
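The same symlink walk can be done programmatically; here is a small sketch (my own, assuming the usual sysfs layout) that maps every interface to its driver:

```python
import os

def interface_drivers(sys_net="/sys/class/net"):
    """Map interface name -> kernel driver name by resolving the
    device/driver symlink, skipping virtual interfaces (like lo)
    that have no backing device."""
    drivers = {}
    if not os.path.isdir(sys_net):
        return drivers  # not Linux, or /sys not mounted
    for iface in os.listdir(sys_net):
        link = os.path.join(sys_net, iface, "device", "driver")
        if os.path.islink(link):
            drivers[iface] = os.path.basename(os.path.realpath(link))
    return drivers

print(interface_drivers())  # e.g. {'wlan0': 'ath5k', 'eth0': 'e1000e'}
```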
| How to find out which Wi-Fi driver is installed? |
1,320,067,648,000 |
A few years ago I recall using the terminal and reading a tutorial in the Linux manual (using man) on how a computer worked after it was turned on. It walked you through the whole process explaining the role of the BIOS, ROM, RAM and OS on this process.
Which page was this, if any? How can I read it again?
|
You're thinking of the boot(7) manual (man 7 boot) and/or the bootup(7) manual (man 7 bootup). Those are the manuals I can think of on (Ubuntu) Linux that best fits your description.
These manuals are available on the web (see links above), but the definite text is what's available on the system that you are using. If a web-based manual says one thing but the manual on your system says another thing, then the manual on your system is the more correct one for you. This goes for all manuals.
See also the "See also" section in those manuals.
This other question may also be of interest: How does the Linux or Unix " / " get mounted during bootup?
For a non-Linux take on the boot process, the OpenBSD first-stage system bootstrap (biosboot(8)) and second-stage bootstrap (boot(8)) manuals, followed by rc(8), may be interesting.
| Which man page describes the process of a computer turning on? |
1,320,067,648,000 |
I'm running BOINC on my old netbook, which only has 2 GB of RAM onboard, which isn't enough for some tasks to run. As in, they refuse to, seeing how low on RAM the device is.
I have zRAM with backing_dev and the zstd algorithm enabled, so in reality, lack of memory is never an issue, and in especially tough cases I can always just use systemd-run --scope -p (I have successfully run programs that demanded 16+ GB of RAM using this)
How can I make BOINC think that my laptop has more than 2 GB of RAM installed, so that I could run those demanding tasks?
|
After some thinking, I did this:
Started with nano /proc/meminfo
Changed MemTotal, MemFree, MemAvailable, SwapTotal and SwapFree to the desired values and saved the copy to ~/meminfo
Gave the user boinc a password (sudo passwd boinc) and a shell: with sudo nano /etc/passwd, found the line boinc:x:129:141:BOINC core client,,,:/var/lib/boinc-client:/usr/sbin/nologin and changed the /usr/sbin/nologin part to /bin/bash
Then I faked RAM info using examples from here Recover from faking /proc/meminfo
unshare -m bash #unshares the mount namespace, for this bash process only (and whatever you launch from it)
mount --bind ~/meminfo /proc/meminfo #substitutes the real meminfo data with the fake one
and confirmed with free that it worked
total used free shared buff/cache available
Mem: 2321456 21456 2300000 0 0 2300000
Swap: 5000000 1000000 4000000
Then switched to the boinc user (su - boinc) and just launched the program with
boinc --check_all_logins --redirectio --dir /var/lib/boinc-client
BOINC Manager can be launched then as usual
Total success, tasks which previously refused to run, started to download and then ran with no complications
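Hand-editing the copied file is error-prone, so here is an optional sketch (my own helper, not part of the steps above) that rewrites chosen fields of a saved meminfo copy; values are in kB, matching the real file's units:

```python
def fake_meminfo(src_text, overrides):
    """Return meminfo text with the given fields replaced.

    overrides: dict like {"MemTotal": 16000000} (values in kB).
    """
    out = []
    for line in src_text.splitlines():
        key = line.split(":", 1)[0]
        if key in overrides:
            out.append("%s:%15d kB" % (key, overrides[key]))
        else:
            out.append(line)  # keep unrelated fields untouched
    return "\n".join(out) + "\n"

sample = "MemTotal:        2321456 kB\nMemFree:           21456 kB\n"
print(fake_meminfo(sample, {"MemTotal": 16000000, "MemFree": 12000000}))
```

Write its output to the file you then bind-mount over /proc/meminfo.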
| How can I fake the amount of installed RAM for a specific program in Linux? |
1,320,067,648,000 |
The error is showing itself like this:
Jan 11 16:39:52 pop-os org.gnome.Nautilus[1514]: [00007fa4fc465ce0] vdpau_chroma filter error: video mixer rendering failure: An invalid handle value was provided.
Jan 11 16:39:52 pop-os org.gnome.Nautilus[1514]: [00007fa4fc465ce0] vdpau_chroma filter error: video mixer features failure: An invalid handle value was provided.
Jan 11 16:39:52 pop-os org.gnome.Nautilus[1514]: [00007fa4fc465ce0] vdpau_chroma filter error: video mixer attributes failure: An invalid handle value was provided.
Jan 11 16:39:52 pop-os org.gnome.Nautilus[1514]: [00007fa4fc465ce0] vdpau_chroma filter error: video mixer rendering failure: An invalid handle value was provided.
Jan 11 16:39:52 pop-os org.gnome.Nautilus[1514]: [00007fa4fc465ce0] vdpau_chroma filter error: video mixer features failure: An invalid handle value was provided.
Jan 11 16:39:52 pop-os org.gnome.Nautilus[1514]: [00007fa4fc465ce0] vdpau_chroma filter error: video mixer attributes failure: An invalid handle value was provided.
Jan 11 16:39:52 pop-os org.gnome.Nautilus[1514]: [00007fa4fc465ce0] vdpau_chroma filter error:
It has consumed my whole SSD.
|
I think the error is caused by VLC (the vdpau_chroma messages come from its VDPAU output module). Try using another media player.
| I don't know what is producing the gigabytes of error in syslog |
1,320,067,648,000 |
I have a hard disk in my computer that I use to make backups of my data. I do not use this disk otherwise.
How can I stop this disk from spinning once my backup is finished? Also how would I make it spin back up again before the backup takes place later on?
The drive is a regular SATA drive.
|
Unmount the filesystem and then run hdparm -S 1 /dev/sdb to set the disk to spin down after five seconds of inactivity (replace /dev/sdb with the actual device for the hard disk). This will minimize the power used and heat generated by the hard disk.
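Note that the -S argument is not a number of seconds: per hdparm(8), values 1..240 encode multiples of 5 seconds (so -S 1 means 5 seconds) and 241..251 encode multiples of 30 minutes. A small sketch (my own, hypothetical helper name) for picking a value for a desired timeout:

```python
def hdparm_standby_value(seconds):
    """Return an hdparm -S value approximating the given timeout.

    Per hdparm(8): 0 disables the timeout, 1..240 encode multiples of
    5 seconds (up to 20 minutes), 241..251 encode multiples of
    30 minutes (up to 5.5 hours).
    """
    if seconds <= 0:
        return 0  # standby timeout disabled
    if seconds <= 240 * 5:
        return max(1, round(seconds / 5))
    half_hours = min(11, max(1, round(seconds / 1800)))
    return 240 + half_hours

print(hdparm_standby_value(5))     # the answer's "spin down after 5 s"
print(hdparm_standby_value(1200))  # 20 minutes
print(hdparm_standby_value(3600))  # 1 hour, lands in the 30-minute band
```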
| Shutdown my (backup) hard disk on Linux when I don't use it |
1,320,067,648,000 |
I have a software RAID5 array (Linux md) on 4 disks.
I would like to replace one of the disks with a new one, without putting the array in a degraded state, and if possible, online. How would that be possible?
It's important because I don't want to:
take the risk of stressing the other disks so one may crash during rebuild,
take the risk of being in a "no-parity state" so I don't have a safety net for some time.
I suppose doing so online is too much to ask and I should just raw-copy (dd) the data of the old disk to the new one offline and then replace it, but I think it is theoretically possible...
Some context: Those disks have all been spinning almost continuously for more than 5.5 years. They still work perfectly for the moment and they all pass the (long) SMART self-test. However, I have reasons to think that one of those 4 disks will not last much longer (supposed predictive failure).
|
Using mdadm 3.3+
Since mdadm 3.3 (released 2013, Sep 3), if you have a 3.2+ kernel, you can proceed as follows:
# mdadm /dev/md0 --add /dev/sdc1
# mdadm /dev/md0 --replace /dev/sdd1 --with /dev/sdc1
sdd1 is the device you want to replace, sdc1 is the preferred device to do so and must be declared as a spare on your array.
The --with option is optional, if not specified, any available spare will be used.
Older mdadm version
Note: You still need a 3.2+ kernel.
First, add a new drive as a spare (replace md0 and sdc1 with your RAID and disk device, respectively):
# mdadm /dev/md0 --add /dev/sdc1
Then, initiate a copy-replace operation like this (sdd1 being the failing device):
# echo want_replacement > /sys/block/md0/md/dev-sdd1/state
Result
The system will copy all readable blocks from sdd1 to sdc1. If it comes to an unreadable block, it will reconstruct it from parity. Once the operation is complete, the former spare (here: sdc1) will become active, and the failing drive will be marked as failed (F) so you can remove it.
Note: credit goes to frostschutz and Ansgar Esztermann who found the original solution (see the duplicate question).
Older kernels
Other answers suggest:
Johnny's approach: convert array to RAID6, "replace" the disk, then back to RAID5,
Hauke Laging's approach: briefly remove the disk from the RAID5 array, make it part of a RAID1 (mirror) with the new disk and add that mirror drive back to the RAID5 array (theoretical)...
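While the copy-replace runs, its progress appears in /proc/mdstat; here is a small sketch (my own, hypothetical helper name) that pulls the operation and percentage out of that text:

```python
import re

def mdstat_progress(mdstat_text):
    """Return (operation, percent) for a running rebuild, or None."""
    m = re.search(r"(recovery|resync|reshape|check)\s*=\s*([0-9.]+)%",
                  mdstat_text)
    if m:
        return m.group(1), float(m.group(2))
    return None

sample = (
    "md0 : active raid5 sdc1[4] sdd1[3] sdb1[1] sda1[0]\n"
    "      [==>..................]  recovery = 12.6% (37043392/293039360) "
    "finish=127.5min speed=33440K/sec\n"
)
print(mdstat_progress(sample))
```

In practice you would feed it `open("/proc/mdstat").read()` in a loop, or just run `watch cat /proc/mdstat`.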
| How to safely replace a not-yet-failed disk in a Linux RAID5 array? |
1,320,067,648,000 |
Pasted below this question is a sample of a /etc/hosts file from a Linux (CentOS) and a Windows machine. The Linux file has two tabbed entries after the IP address (that is localhost.localdomain localhost) and Windows has only one. If I want to edit the hosts file in Windows to have the machine name (etest) instead of localhost, I simply replace the word localhost with the machine name I want. The machine need not be part of a domain.
In a Linux machine, the two entries localhost.localdomain and localhost seems to indicate that I will need the machine to be part of a domain. Is this true?
Can I simply edit both entries to etest so that it will read:
127.0.0.1 etest etest
or is it required that I substitute one entry with a domain name?
Additionally, please let me know what the second line of the /etc/hosts file on the Linux machine is for.
::1 localhost6.localdomain6 localhost6
hosts file on a Linux machine:
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
hosts file on a windows machine:
# Copyright (c) 1993-1999 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host
127.0.0.1 localhost
|
You always want the 127.0.0.1 address to resolve first to localhost. If there is a domain you can use that too, but then make sure localhost is listed second. If you want to add aliases for your machine that will lookup to the loopback address you can keep adding them as space separated values on that line. Specifying a domain here is optional, but don't remove "localhost" from the options.
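Following that advice, a minimal /etc/hosts for a machine named etest (keeping localhost and simply appending aliases) might look like:

```
127.0.0.1   localhost localhost.localdomain etest
::1         localhost6.localdomain6 localhost6
```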
| Format of /etc/hosts on Linux (different from Windows?) |
1,320,067,648,000 |
Can anybody explain(hopefully with a picture), how is the linux graphics stack organised? I hear all the time about X/GTK/GNOME/KDE etc., but I really don't have any idea what they actually do and how they interact with each other and other portions of the stack. How do Unity and Wayland fit in?
|
The X Window System uses a client-server architecture. The X server runs on the machine that has the display (monitors + input devices), while X clients can run on any other machine, and connect to the X server using the X protocol (not directly, but rather by using a library, like Xlib, or the more modern non-blocking event-driven XCB). The X protocol is designed to be extensible, and has many extensions (see xdpyinfo(1)).
The X server only does low-level operations, like creating and destroying windows, doing drawing operations (nowadays most drawing is done on the client and sent as an image to the server), sending events to windows, ... You can see how little an X server does by running X :1 & (use any number not already used by another X server) or Xephyr :1 & (Xephyr runs an X server embedded on your current X server) and then running xterm -display :1 & and switching to the new X server (you may need to set up X authorization using xauth(1)).
As you can see, the X server does very little: it doesn't draw title bars, doesn't do window minimization/iconification, doesn't manage window placement... Of course, you can control window placement manually by running a command like xterm -geometry -0-0, but you will usually have a special X client doing the above things. This client is called a window manager. There can only be one window manager active at a time. If you still have the bare X server of the previous commands open, you can try to run a window manager on it, like twm, metacity, kwin, compiz, larswm, pawm, ...
As we said, X only does low level operations, and doesn't provide higher level concepts as pushbuttons, menus, toolbars, ... These are provided by libraries called toolkits, e.g: Xaw, GTK, Qt, FLTK, ...
Desktop environments are collections of programs designed to provide a unified user experience. So desktop environments typically provides panels, application launchers, system trays, control panels, configuration infrastructure (where to save settings). Some well known desktop environments are KDE (built using the Qt toolkit), Gnome (using GTK), Enlightenment (using its own toolkit libraries), ...
Some modern desktop effects are best done using 3d hardware. So a new component appears, the composite manager. An X extension, the XComposite extension, sends window contents to the composite manager. The composite manager converts those contents to textures and uses 3d hardware via OpenGL to compose them in many ways (alpha blending, 3d projections, ...).
Not so long ago, the X server talked directly to hardware devices. A significant portion of this device handling has been moving to the OS kernel: DRI (permitting access to 3d hardware by X and direct rendering clients), evdev (unified interface for input device handling), KMS (moving graphics mode setting to the kernel), GEM/TTM (texture memory management).
So, with the complexity of device handling now mostly outside of X, it became easier to experiment with simplified window systems. Wayland is a window system based on the composite manager concept, i.e. the window system is the composite manager. Wayland makes use of the device handling that has moved out of X and renders using OpenGL.
As for Unity, it's a desktop environment designed to have a user interface suitable for netbooks.
| How is the linux graphics stack organised? |
1,320,067,648,000 |
We have two systems with similar hardware (main point being the processor, let us say a standard intel core 2 duo).
One is running (insert your linux distro here: Ubuntu will be used henceforth), and the other is running let's say Mac OS X.
One compiles an equivalent program, Let us say something like:
int main()
{
int cat = 33;
int dog = 5*cat;
return dog;
}
The code is extremely simple, because I don't want to consider the implications of shared libraries yet.
When compiled on the respective systems, isn't the main difference between the outputs a matter of ELF vs Mach-O? If one were to strip each binary of the formatting, leaving a flat binary, wouldn't the disassembled machine instructions be the same? (with perhaps a few differences depending on the compiler's habits/tendencies).
If one were to develop a program to repackage the flat binary produced from our Ubuntu system, in the Mach-O formatting, would it run in the Mac OS X system? Then, if one only had the compiled binary of the supposed program above, and one had this mystical tool for repackaging flat binaries, would simple programs be able to run on the Mac OS X system?
Now let us take it a bit further.
We now have a program with source such as:
#include <stdio.h>
int main()
{
printf("I like tortoises, but not porpoises");
return 0;
}
Assuming this program is compiled and statically linked, would our magical program still be able to repackage the raw binary in the Mach-O format and have it work on Mac OS X, seeing as it would not need to rely on any other binaries (which the Mac system would not have in this case)?
And now for the final level;
What if we used this supposed program to convert all of the necessary shared libraries to the Mach-O format, and then instead compiled the program above with dynamic linking. Would the program still succeed to run?
That should be it for now, obviously each step of absurdity relies on the previous base, to even make sense. so If the very first pillar gets destroyed, I doubt there would be much merit to the remaining tiers.
I definitely would not even go as far as to think of this with programs with GUI's in mind. Windowing systems would likely be a whole other headache. I am only considering command line programs at this stage.
Now, I invite the world to correct me,and tell me everything that is wrong with my absurd line of thinking.
|
You forget one crucial thing, namely that your program will have to interact with the operating system to do anything interesting.
The conventions are different between Linux and OS X so the same binary cannot run as-is without essentially having a chunk of operating system dependent code to be able to interact with it. Many of these things are provided through libraries, which you then need to link in, and that means your program needs to be linkable, and linking is also different between the two systems.
And so it goes on and on. What on the surface sounds like doing the same thing is very different in the actual details.
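To make "the conventions are different" concrete: even the lowest-level service request, write(2), enters the kernel through different syscall numbers on the two systems. The numbers below are the real x86-64 values (Linux's syscall table vs the 0x2000000 BSD-class offset macOS uses); the dict itself is just my illustration:

```python
# The same logical request -- write(fd, buf, len) -- is invoked with a
# different syscall number on each OS (x86-64 values shown).  A real
# table holds hundreds of entries; this is only a taste.
SYSCALLS = {
    "linux": {"write": 1, "exit": 60},
    # macOS adds a 0x2000000 "BSD class" offset to the classic numbers.
    "macos": {"write": 0x2000004, "exit": 0x2000001},
}

for name in ("write", "exit"):
    linux_no = SYSCALLS["linux"][name]
    macos_no = SYSCALLS["macos"][name]
    print(name, hex(linux_no), "vs", hex(macos_no))
    # identical machine code run on the other kernel would invoke
    # the wrong call entirely
    assert linux_no != macos_no
```

So even a repackaged flat binary that never touches a library still embeds kernel-specific instructions.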
| Binary compatibility between Mac OS X and Linux |
1,320,067,648,000 |
I want to launch the wine executable (Version 2.12), but I get the following error ($=shell prompt):
$ wine
bash: /usr/bin/wine: No such file or directory
$ /usr/bin/wine
bash: /usr/bin/wine: No such file or directory
$ cd /usr/bin
$ ./wine
bash: ./wine: No such file or directory
However, the file is there:
$ which wine
/usr/bin/wine
The executable definitely is there and no dead symlink:
$ stat /usr/bin/wine
File: /usr/bin/wine
Size: 9712 Blocks: 24 IO Block: 4096 regular file
Device: 802h/2050d Inode: 415789 Links: 1
Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2017-07-13 13:53:00.000000000 +0200
Modify: 2017-07-08 03:42:45.000000000 +0200
Change: 2017-07-13 13:53:00.817346043 +0200
Birth: -
It is a 32-bit ELF:
$ file /usr/bin/wine
/usr/bin/wine: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.32,
BuildID[sha1]=eaf6de433d8196e746c95d352e0258fe2b65ae24, stripped
I can get the dynamic section of the executable:
$ readelf -d /usr/bin/wine
Dynamic section at offset 0x1efc contains 27 entries:
Tag Type Name/Value
0x00000001 (NEEDED) Shared library: [libwine.so.1]
0x00000001 (NEEDED) Shared library: [libpthread.so.0]
0x00000001 (NEEDED) Shared library: [libc.so.6]
0x0000001d (RUNPATH) Library runpath: [$ORIGIN/../lib32]
0x0000000c (INIT) 0x7c000854
0x0000000d (FINI) 0x7c000e54
[more addresses without file names]
However, I cannot list the shared object dependencies using ldd:
$ ldd /usr/bin/wine
/usr/bin/ldd: line 117: /usr/bin/wine: No such file or directory
strace shows:
execve("/usr/bin/wine", ["wine"], 0x7fff20dc8730 /* 66 vars */) = -1 ENOENT (No such file or directory)
fstat(2, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 4), ...}) = 0
write(2, "strace: exec: No such file or di"..., 40strace: exec: No such file or directory
) = 40
getpid() = 23783
exit_group(1) = ?
+++ exited with 1 +++
Edited to add suggestion by @jww: The problem appears to happen before dynamically linked libraries are requested, because no ld debug messages are generated:
$ LD_DEBUG=all wine
bash: /usr/bin/wine: No such file or directory
Even when only printing the possible values of LD_DEBUG, the error occurs instead
$ LD_DEBUG=help wine
bash: /usr/bin/wine: No such file or directory
Edited to add suggestion of @Raman Sailopal: The problem seems to lie within the executable, as copying the contents of /usr/bin/wine to another already created file produces the same error
root:bin # cp cat testcmd
root:bin # testcmd --help
Usage: testcmd [OPTION]... [FILE]...
Concatenate FILE(s) to standard output.
[rest of cat help page]
root:bin # dd if=wine of=testcmd
18+1 records in
18+1 records out
9712 bytes (9.7 kB, 9.5 KiB) copied, 0.000404061 s, 24.0 MB/s
root:bin # testcmd
bash: /usr/bin/testcmd: No such file or directory
What is the problem or what can I do to find out which file or directory is missing?
uname -a:
Linux laptop 4.11.3-1-ARCH #1 SMP PREEMPT Sun May 28 10:40:17 CEST 2017 x86_64 GNU/Linux
|
This:
$ file /usr/bin/wine
/usr/bin/wine: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.32,
BuildID[sha1]=eaf6de433d8196e746c95d352e0258fe2b65ae24, stripped
Combined with this:
$ ldd /usr/bin/wine
/usr/bin/ldd: line 117: /usr/bin/wine: No such file or directory
Strongly suggests that the system does not have the /lib/ld-linux.so.2 ELF interpreter. That is, this 64-bit system does not have any 32-bit compatibility libraries installed. Thus, @user1334609's answer is essentially correct.
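The misleading "No such file or directory" comes from the kernel failing to find the PT_INTERP path embedded in the binary, not the binary itself. Here is a sketch (my own helper, assuming a little-endian ELF as on x86) that extracts that path the way readelf -l does, so you can check whether the interpreter actually exists:

```python
import struct

PT_INTERP = 3

def elf_interpreter(path):
    """Return the PT_INTERP path of an ELF binary, or None.

    Assumes a little-endian ELF (true on x86/x86-64); a full parser
    would also honor the byte-order flag in e_ident[5].
    """
    with open(path, "rb") as f:
        ident = f.read(16)
        if ident[:4] != b"\x7fELF":
            return None
        is64 = ident[4] == 2
        if is64:
            f.seek(32)                      # e_phoff (ELF64)
            (phoff,) = struct.unpack("<Q", f.read(8))
            f.seek(54)                      # e_phentsize, e_phnum
            phentsize, phnum = struct.unpack("<HH", f.read(4))
        else:
            f.seek(28)                      # e_phoff (ELF32)
            (phoff,) = struct.unpack("<I", f.read(4))
            f.seek(42)
            phentsize, phnum = struct.unpack("<HH", f.read(4))
        for i in range(phnum):
            base = phoff + i * phentsize
            f.seek(base)
            (p_type,) = struct.unpack("<I", f.read(4))
            if p_type != PT_INTERP:
                continue
            if is64:
                f.seek(base + 8)            # p_offset
                (off,) = struct.unpack("<Q", f.read(8))
                f.seek(base + 32)           # p_filesz
                (size,) = struct.unpack("<Q", f.read(8))
            else:
                f.seek(base + 4)
                (off,) = struct.unpack("<I", f.read(4))
                f.seek(base + 16)
                (size,) = struct.unpack("<I", f.read(4))
            f.seek(off)
            return f.read(size).rstrip(b"\0").decode()
    return None

# the dynamic loader path, or None for statically linked binaries
print(elf_interpreter("/bin/sh"))
```

If the printed path does not exist on disk (here, a missing 32-bit /lib/ld-linux.so.2), you have reproduced the wine failure.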
| Linux executable fails with "File not found" even though the file is there and in PATH |
1,320,067,648,000 |
Situation: I need a filesystem on thumbdrives that can be used across Windows and Linux.
Problem: By default, the common FS between Windows and Linux are just exFAT and NTFS (at least in the more updated kernels)
Question: In terms of performance on Linux (since my base OS is Linux), which is a better FS?
Additional information: If there are other filesystems that you think is better and satisfies the situation, I am open to hearing it.
EDIT 14/4/2020: ExFAT is being integrated into the Linux kernel and may provide better performance in comparison to NTFS (which I have learnt since that the packages that read-write to NTFS partitions are not the fastest [granted, it is a great interface]). Bottom line is still -- if you need the journal to prevent simple corruptions, go NTFS.
EDIT 18/9/2021: NTFS is now being integrated into the Linux kernel (soon), and perhaps this will mean that NTFS performance will be much faster, thanks to lower overhead than when it was a userland (FUSE) module.
EDIT 15/6/2022: The NTFS3 kernel driver is officially part of the Linux Kernel as of version 5.15 (Released November 2021). Will do some testing and update this question with results.
|
NTFS is a Microsoft proprietary filesystem. All exFAT patents were released to the Open Invention Network and it has had a fully functional in-kernel Linux driver since version 5.4 (2019).[1] exFAT, also called FAT64, is a very simple filesystem, practically an extension of FAT32. Due to its simplicity, it's well implemented in Linux and very fast.
But because of its simple structure, it's easily affected by fragmentation, so performance can degrade with use.
exFAT doesn't support journaling, which means it needs a full filesystem check after an unclean shutdown.
NTFS is slower than exFAT, especially on Linux, but it's more resistant to fragmentation. Due to its proprietary nature it's not as well implemented on Linux as on Windows, but from my experience it works quite well. In case of corruption, NTFS can easily be repaired under Windows (even for Linux there's ntfsfix) and there are lots of tools able to recover lost files.
Personally, I prefer NTFS for its reliability. Another option is to use ext4 and mount it under Windows with Ext2Fsd; ext4 is better on Linux, but the driver is not well implemented on Windows. Ext2Fsd doesn't fully support journaling, so there is a risk in writing under Windows, but ext is easier to repair under Linux than exFAT.
| exFAT vs NTFS on Linux |
1,320,067,648,000 |
rootwait and rootdelay are used in situations when the filesystem is not immediately available, for example if it's detected asynchronously or mounted via USB. The thing is, it should be obvious based on the root boot argument whether that's the case or not, so why can't the kernel realize automatically that it needs to wait for the filesystem to appear? Are there some technical constraints preventing this automation from being implemented?
|
Sometimes the OS can't distinguish a peripheral that's slow to respond from a peripheral that's not there or completely hosed. The most obvious example is a root filesystem coming from the network (TFTP, NFS) where a slow network link or an overloaded server are difficult to distinguish from a severed network link or a crashed server. A timeout tells the kernel when to give up.
This can also happen with disks that are slow to spin up, RAID arrays that need to be verified and so on. rootdelay instructs the kernel not to give up immediately if the device isn't available. The kernel can't know whether a SCSI drive is a local disk or some kind of RAID bay.
rootwait is provided to wait indefinitely. It's not always desirable, for example a system may want to fall back to a different root filesystem if the normal one takes too long to respond.
| What's the point of rootwait/rootdelay? |
1,320,067,648,000 |
I performed an ls -la on directory on my CentOS 6.4 server here and the permissions for a given file came out as:
-rwxr-xr-x.
I understand what -rwxr-xr-x means, what I don't understand is the . after the last attribute.
Can someone explain it to me? Is it harmful in any way? Can it be removed?
|
GNU ls uses a . character to indicate a file with an SELinux
security context, but no other alternate access method.
-- From ls man page (info coreutils 'ls invocation').
| What does a dot after the file permission bits mean? |
1,320,067,648,000 |
/proc/sys/vm/swappiness is nice, but I want a per-process knob like /proc/$PID/oom_adj, so that I can make certain processes less likely than others to have any of their pages swapped out. Unlike memlock(), this wouldn't prevent a program from being swapped out. And like nice, the user by default could only make their programs more likely, not less likely, to get swapped. I'd call this /proc/$PID/swappiness_adj.
|
You can configure swappiness per cgroup:
http://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt
http://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt
For an easier introduction to cgroups, with examples, see
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/ch01.html
| How to set per-process swappiness for Linux? |
1,320,067,648,000 |
The Linux kernel swaps out most pages from memory when I run an application that uses most of the 16GB of physical memory. After the application finishes, every action (typing commands, switching workspaces, opening a new web page, etc.) takes very long to complete because the relevant pages first need to be read back in from swap.
Is there a way to tell the Linux kernel to copy pages from swap back into physical memory without manually touching (and waiting for) each application? I run lots of applications so the wait is always painful.
I often use swapoff -a && swapon -a to make the system responsive again, but this clears the pages from swap, so they need to be written again the next time I run the script.
Is there a kernel interface, perhaps using sysfs, to instruct the kernel to read all pages from swap?
Edit: I am indeed looking for a way to make all of swap swapcached. (Thanks derobert!)
[P.S.
serverfault.com/questions/153946/… and serverfault.com/questions/100448/… are related topics but do not address the question of how to get the Linux kernel to copy pages from swap back into memory without clearing swap.]
|
Based on the memdump program originally found here, I've created a script to selectively read specified applications back into memory. remember:
#!/bin/bash
declare -A Q
for i in "$@"; do
E=$(readlink /proc/$i/exe);
if [ -z "$E" ]; then
#echo skipped $i;
continue;
fi
if echo $E | grep -qF memdump; then
#echo skipped $i >&2;
continue;
fi
if [ -n "${Q[${E}]}" ]; then
#echo already $i >&2;
continue;
fi
echo "$i $E" >&2
memdump $i 2> /dev/null
Q[$E]=$i
done | pv -c -i 2 > /dev/null
Usage: something like
# ./remember $(< /mnt/cgroup/tasks )
1 /sbin/init
882 /bin/bash
1301 /usr/bin/hexchat
...
2.21GiB 0:00:02 [ 1.1GiB/s] [ <=> ]
...
6838 /sbin/agetty
11.6GiB 0:00:10 [1.16GiB/s] [ <=> ]
...
23.7GiB 0:00:38 [ 637MiB/s] [ <=> ]
#
It quickly skips over non-swapped memory (gigabytes per second) and slows down when swap is needed.
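To see which processes are worth feeding to remember in the first place, you can read the VmSwap field from /proc/<pid>/status. This helper is my own sketch, not part of the original script:

```python
import os
import re

def vmswap_kb(status_text):
    """Extract VmSwap (in kB) from the text of /proc/<pid>/status."""
    m = re.search(r"^VmSwap:\s*(\d+)\s*kB", status_text, re.MULTILINE)
    return int(m.group(1)) if m else 0

def swapped_processes(proc="/proc"):
    """Map pid -> kB of swap currently used by that process."""
    result = {}
    if not os.path.isdir(proc):
        return result  # not a Linux /proc
    for pid in filter(str.isdigit, os.listdir(proc)):
        try:
            with open(os.path.join(proc, pid, "status")) as f:
                kb = vmswap_kb(f.read())
        except OSError:
            continue  # process exited or is inaccessible
        if kb:
            result[int(pid)] = kb
    return result

print(swapped_processes())
```

Processes with a large VmSwap are the ones whose pages will actually be faulted back in.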
| Making Linux read swap back into memory |
1,320,067,648,000 |
In any linux system I have access to (a couple of Archlinuxes, an Ubuntu, a Debian Sid and a Gentoo) there are the following 4 files in /etc/, all ending with a dash:
/etc/group-
/etc/gshadow-
/etc/passwd-
/etc/shadow-
On the internet they say that these are just backup files, updated to the next to last change.
Now I'm wondering: who's creating those files? Is it my editor? Is it the application editing those files (gpasswd, useradd, groupadd and so on)? Is it something at a lower level (maybe even a kernel module)?
|
The backup files are created by the programs that modify /etc/group or /etc/passwd, such as useradd and groupadd, as a safety precaution in case the files get corrupted during an edit. The kernel never touches those files.
| Who creates /etc/{group,gshadow,passwd,shadow}-? |
1,320,067,648,000 |
I'm wanting to understand the Linux init process better in order to netboot a system over ceph rather than nfs.
In the process I've come across two forms of switching root: one called switch_root, and the other called pivot_root. These are run from an in-memory filesystem (initramfs) obtained via TFTP using the PXE boot process.
When would you use one over the other? I've seen both used in some init scripts placed in root.
|
I found a wonderful explanation here. However, let me try to put in a shorter format of what I understood in the answer.
Shorter Version
While the system boots, it needs an early userspace. It can be
achieved using either initramfs or initrd.
initrd is loaded into a ramdisk which is an actual FILE SYSTEM.
initramfs is not a file system.
For initrd pivot_root is used and for initramfs switch_root is used.
Longer Version
Now, to the detailed explanation of what I had put above.
While both an initramfs and an initrd serve the same purpose, there
are 2 differences. The most obvious difference is that an initrd is
loaded into a ramdisk. It consists of an actual filesystem (typically
ext2) which is mounted in a ramdisk. An initramfs, on the other hand,
is not a filesystem. It is simply a (compressed) cpio archive (of type
newc) which is unpacked into a tmpfs. This has a side-effect of making
the initramfs a bit more optimized and capable of loading a little
earlier in the kernel boot process than an initrd. Also, the size of
the initramfs in memory is smaller, since the kernel can adapt the
size of the tmpfs to what is actually loaded, rather than relying on
predefined ramdisk sizes, and it can also clean up the ram that was
used whereas ramdisks tend to remain in use (due to details of the
pivot_root implementation).
There is also another side-effect difference: how the root device (and
switching to it) is handled. Since an initrd is an actual filesystem
unpacked into ram, the root device must actually be the ramdisk. For
an initramfs, there is a kernel "rootfs" which becomes the tmpfs that
the initramfs is unpacked into (if the kernel loads an initramfs; if
not, then the rootfs is simply the filesystem specified via the root=
kernel boot parameter), but this interim rootfs should not be
specified as the root= boot parameter (and there wouldn't be a way to
do so, since there's no device attached to it). This means that you
can still pass your real root device to the kernel when using an
initramfs. With an initrd, you have to process what the real root
device is yourself. Also, since the "real" root device with an initrd
is the ramdisk, the kernel has to really switch root devices from one
real device (the ramdisk) to the other (your real root). In the case
of an initramfs, the initramfs space (the tmpfs) is not a real device,
so the kernel doesn't switch real devices. Thus, while the command
pivot_root is used with an initrd, a different command has to be used
for an initramfs. Busybox provides switch_root to accomplish this,
while klibc offers run-init.
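To make this concrete, the tail of a busybox-based initramfs /init script typically looks something like the following sketch. This is not runnable outside an actual initramfs, and the device name /dev/sda1 and the init path are placeholder assumptions:
# Sketch of the end of an initramfs /init (busybox).
# /dev/sda1 and /sbin/init are placeholder assumptions.
mount -o ro /dev/sda1 /mnt/root
# switch_root deletes the initramfs contents from the tmpfs (freeing RAM),
# makes /mnt/root the new root, and execs the real init as PID 1.
exec switch_root /mnt/root /sbin/init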
| When would you use pivot_root over switch_root? |
1,320,067,648,000 |
I am completely new to Linux.
I know that dmesg and journalctl record messages produced by my operating system, but why do two recorders exist, what types of messages should I expect to see within each of them, and what are the differences in their life cycles?
|
They are two totally different things.
On most systems that I'm aware of that has dmesg, it is sometimes a command and sometimes a log file in /var/log, and may be both. The log contains messages produced by the kernel. This will usually include the various device probe messages during the boot sequence as well as any further messages outputted by the kernel during the running of the system.
Depending on what "journal" refers to, I suppose it may be different things. The journal that first springs to my mind is the journal of a journaled filesystem. This journal contains the various transactions made to a particular partition (part of a disk) and allows the system to replay disk operations consistently in the case of a system crash. This journal is not generally accessible to users.
If "journal" refers to journalctl, then the two are similar, but not the same. journalctl has a --dmesg option that makes it mimic dmesg.
Compare the manuals for journalctl and dmesg on your system.
| What is the difference between dmesg and journalctl [closed] |
1,320,067,648,000 |
When I run netstat --protocol unix or lsof -U I see that some unix socket paths are prepended with @ symbol, for example, @/tmp/dbus-qj8V39Yrpa. Then when I run ls -l /tmp I don't see file named dbus-qj8V39Yrpa there.
The question is what does that prepended @ symbol denote? And second related question, is -- where can I actually find that unix socket file (@/tmp/dbus-qj8V39Yrpa) on the filesystem?
|
The @ probably indicates a socket held in an abstract namespace which doesn't belong to a file in the filesystem.
Quoting from The Linux Programming Interface by Michael Kerrisk:
57.6 The Linux Abstract Socket Namespace
The so-called abstract namespace is a Linux-specific feature that
allows us to bind a UNIX domain socket to a name without that name
being created in the file system. This provides a few potential
advantages:
We don’t need to worry about possible collisions with existing names in the file system.
It is not necessary to unlink the socket pathname when we have finished using the socket. The abstract name is automatically removed
when the socket is closed.
We don’t need to create a file-system pathname for the socket. This may be useful in a chroot environment, or if we don’t have write
access to a file system.
To create an abstract binding, we specify the first byte of the
sun_path field as a null byte (\0).
[...]
Displaying a leading null byte to denote such type of a socket may be difficult, so that is maybe the reason for the leading @ sign.
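You can create such a socket yourself to see this; a minimal Python sketch (Linux-only, since the abstract namespace is a Linux feature; the socket name below is arbitrary, chosen for this demo):

```python
import socket

# On Linux, a UNIX-domain address starting with a null byte lives in the
# abstract namespace: no filesystem entry is created, and the name
# disappears automatically when the socket is closed.
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.bind("\0/tmp/demo-abstract")

# Python reports abstract addresses as bytes including the leading null
# byte; tools like netstat and lsof render that null byte as "@".
print(s.getsockname())  # b'\x00/tmp/demo-abstract'
s.close()
```

Note that `ls /tmp` will show nothing: the name only resembles a path by convention.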
| What does the @ symbol denote in the beginning of a unix domain socket path in Linux? |
1,320,067,648,000 |
So when I try to use the Xorg command as a normal user, this is the error it gives me:
/usr/lib/xorg/Xorg.wrap: Only console users are allowed to run the X server
but I don't understand: what are the "console users"? And when I switch to root it gives me another error:
_XSERVTransSocketUNIXCreateListener: ...SocketCreateListener() failed
_XSERVTransMakeAllCOTSServerListeners: server already running
(EE)
Fatal server error:
(EE) Cannot establish any listening sockets - Make sure an X server isn't already running(EE)
(EE)
Please consult the The X.Org Foundation support
at http://wiki.x.org
for help.
(EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
(EE)
(EE) Server terminated with error (1). Closing log file.
So what is going on, and what is the reason for each of these errors?
UPDATE: and the output of the command netstat -ln | grep -E '[.]X|:6[0-9][0-9][0-9]' is:
unix 2 [ ACC ] STREAM LISTENING 18044 @/tmp/.X11-unix/X0
unix 2 [ ACC ] STREAM LISTENING 47610 @/tmp/.X11-unix/X1
unix 2 [ ACC ] STREAM LISTENING 18045 /tmp/.X11-unix/X0
unix 2 [ ACC ] STREAM LISTENING 47611 /tmp/.X11-unix/X1
|
/usr/lib/xorg/Xorg.wrap: Only console users are allowed to run the X server
but I don't understand: what are the "console users"?
It means you need to be running from the Linux text console; it actually does not matter what user you are (except that root is always allowed). Confusing :).
There are two different examples of switching to the Linux text console (and back) here, depending on exactly how your system is configured:
Switch to a text console in Fedora
The details can vary, as to which numbered consoles (Ctrl+Alt+F1, Ctrl+Alt+F2, etc) allow a text login, and which ones are used for graphical sessions (or not used at all).
I keep getting the message: "Cannot establish any listening sockets..."
You get an error message like:
_XSERVTransSocketINETCreateListener: ...SocketCreateListener() failed
_XSERVTransMakeAllCOTSServerListeners: server already running
Fatal server error:
Cannot establish any listening sockets - Make sure an X server isn't already running
This problem is very similar to the previous one. You will get this message possibly because the lock file was removed somehow or some other program which doesn't create a lock file is already listening on this port. You can check this by doing a netstat -ln. Xservers usually listen at tcp port 6000+, therefore if you have started your Xserver with the command line option :1 it will be listening on port 6001.
Please check the article above for further information.
As this says, there is more information about what :0, :1, :2 mean, immediately above the quoted section:
https://www.x.org/wiki/FAQErrorMessages/#index5h2
(Note that you are using a more modern X server config, which does not listen on any TCP ports. This is why your error happens in _XSERVTransSocketUNIXCreateListener, instead of _XSERVTransSocketINETCreateListener. But the principle is exactly the same).
When I tried Xorg :2 in my virtual machine with Kali, the screen went black. Why did this happen?
A-ha, yes :-D. Xorg is a graphics server. If you want to show some graphics on it, you need to run some client programs.
Xorg also starts up with an empty cursor nowadays. It's deliberately featureless, to avoid flashes / inconsistencies when starting your graphical stuff. This has changed - when I first used Xorg, the default background and cursor were quite obtrusive. If you want to see what that looked like, you can pass the -retro option :-).
Traditionally - and I think this is the behaviour with Xwrapper - Xorg would grab an unused console and switch to it. In this case you can switch back to your previous console (see above). Of course you can switch back again to the Xorg server, once you find which number console it grabbed :-).
If you are running a virtual machine on Linux, your VM will provide some method to inject the key combination Ctrl+Alt+F1 or whatever, because pressing that key combination probably switches consoles on your real machine.
I would tell you to compare startx -- :2, which (hopefully) launches some clients as well as an X server :-). However, the most popular modern GUIs now explicitly do not support multiple sessions. So you must make sure to logout your existing GUI session, before you run startx. Otherwise, it might look like it works, but then go wrong in weird ways that you don't understand.
| Error when trying to use Xorg: Only console users are allowed to run the X server? |
1,320,067,648,000 |
When I write Bash code, for example to copy files, if a file doesn't exist, in the terminal I see an error similar to file not found. If the user running the script doesn't have the necessary permissions, the error is similar to permission denied.
Basically, independently from the programming language used to write the code, the code to copy files will ask the OS (Linux in my case) to do that. If something goes wrong, the OS will return the appropriate error (number and message).
Is there a command I can run to list all the standard error codes?
|
The errno command can do this. From man errno:
DESCRIPTION
errno looks up errno macro names, errno codes, and the corresponding descriptions. For example, if given ENOENT on a Linux system, it prints
out the code 2 and the description "No such file or directory". If given the code 2, it prints ENOENT and the same description.
OPTIONS
-l, --list
List all errno values.
So, to see all of them, run:
$ errno -l
EPERM 1 Operation not permitted
ENOENT 2 No such file or directory
ESRCH 3 No such process
EINTR 4 Interrupted system call
EIO 5 Input/output error
ENXIO 6 No such device or address
E2BIG 7 Argument list too long
ENOEXEC 8 Exec format error
EBADF 9 Bad file descriptor
ECHILD 10 No child processes
EAGAIN 11 Resource temporarily unavailable
ENOMEM 12 Cannot allocate memory
EACCES 13 Permission denied
EFAULT 14 Bad address
ENOTBLK 15 Block device required
EBUSY 16 Device or resource busy
EEXIST 17 File exists
EXDEV 18 Invalid cross-device link
ENODEV 19 No such device
ENOTDIR 20 Not a directory
EISDIR 21 Is a directory
EINVAL 22 Invalid argument
ENFILE 23 Too many open files in system
EMFILE 24 Too many open files
ENOTTY 25 Inappropriate ioctl for device
ETXTBSY 26 Text file busy
EFBIG 27 File too large
ENOSPC 28 No space left on device
ESPIPE 29 Illegal seek
EROFS 30 Read-only file system
EMLINK 31 Too many links
EPIPE 32 Broken pipe
EDOM 33 Numerical argument out of domain
ERANGE 34 Numerical result out of range
EDEADLK 35 Resource deadlock avoided
ENAMETOOLONG 36 File name too long
ENOLCK 37 No locks available
ENOSYS 38 Function not implemented
ENOTEMPTY 39 Directory not empty
ELOOP 40 Too many levels of symbolic links
EWOULDBLOCK 11 Resource temporarily unavailable
ENOMSG 42 No message of desired type
EIDRM 43 Identifier removed
ECHRNG 44 Channel number out of range
EL2NSYNC 45 Level 2 not synchronized
EL3HLT 46 Level 3 halted
EL3RST 47 Level 3 reset
ELNRNG 48 Link number out of range
EUNATCH 49 Protocol driver not attached
ENOCSI 50 No CSI structure available
EL2HLT 51 Level 2 halted
EBADE 52 Invalid exchange
EBADR 53 Invalid request descriptor
EXFULL 54 Exchange full
ENOANO 55 No anode
EBADRQC 56 Invalid request code
EBADSLT 57 Invalid slot
EDEADLOCK 35 Resource deadlock avoided
EBFONT 59 Bad font file format
ENOSTR 60 Device not a stream
ENODATA 61 No data available
ETIME 62 Timer expired
ENOSR 63 Out of streams resources
ENONET 64 Machine is not on the network
ENOPKG 65 Package not installed
EREMOTE 66 Object is remote
ENOLINK 67 Link has been severed
EADV 68 Advertise error
ESRMNT 69 Srmount error
ECOMM 70 Communication error on send
EPROTO 71 Protocol error
EMULTIHOP 72 Multihop attempted
EDOTDOT 73 RFS specific error
EBADMSG 74 Bad message
EOVERFLOW 75 Value too large for defined data type
ENOTUNIQ 76 Name not unique on network
EBADFD 77 File descriptor in bad state
EREMCHG 78 Remote address changed
ELIBACC 79 Can not access a needed shared library
ELIBBAD 80 Accessing a corrupted shared library
ELIBSCN 81 .lib section in a.out corrupted
ELIBMAX 82 Attempting to link in too many shared libraries
ELIBEXEC 83 Cannot exec a shared library directly
EILSEQ 84 Invalid or incomplete multibyte or wide character
ERESTART 85 Interrupted system call should be restarted
ESTRPIPE 86 Streams pipe error
EUSERS 87 Too many users
ENOTSOCK 88 Socket operation on non-socket
EDESTADDRREQ 89 Destination address required
EMSGSIZE 90 Message too long
EPROTOTYPE 91 Protocol wrong type for socket
ENOPROTOOPT 92 Protocol not available
EPROTONOSUPPORT 93 Protocol not supported
ESOCKTNOSUPPORT 94 Socket type not supported
EOPNOTSUPP 95 Operation not supported
EPFNOSUPPORT 96 Protocol family not supported
EAFNOSUPPORT 97 Address family not supported by protocol
EADDRINUSE 98 Address already in use
EADDRNOTAVAIL 99 Cannot assign requested address
ENETDOWN 100 Network is down
ENETUNREACH 101 Network is unreachable
ENETRESET 102 Network dropped connection on reset
ECONNABORTED 103 Software caused connection abort
ECONNRESET 104 Connection reset by peer
ENOBUFS 105 No buffer space available
EISCONN 106 Transport endpoint is already connected
ENOTCONN 107 Transport endpoint is not connected
ESHUTDOWN 108 Cannot send after transport endpoint shutdown
ETOOMANYREFS 109 Too many references: cannot splice
ETIMEDOUT 110 Connection timed out
ECONNREFUSED 111 Connection refused
EHOSTDOWN 112 Host is down
EHOSTUNREACH 113 No route to host
EALREADY 114 Operation already in progress
EINPROGRESS 115 Operation now in progress
ESTALE 116 Stale file handle
EUCLEAN 117 Structure needs cleaning
ENOTNAM 118 Not a XENIX named type file
ENAVAIL 119 No XENIX semaphores available
EISNAM 120 Is a named type file
EREMOTEIO 121 Remote I/O error
EDQUOT 122 Disk quota exceeded
ENOMEDIUM 123 No medium found
EMEDIUMTYPE 124 Wrong medium type
ECANCELED 125 Operation canceled
ENOKEY 126 Required key not available
EKEYEXPIRED 127 Key has expired
EKEYREVOKED 128 Key has been revoked
EKEYREJECTED 129 Key was rejected by service
EOWNERDEAD 130 Owner died
ENOTRECOVERABLE 131 State not recoverable
ERFKILL 132 Operation not possible due to RF-kill
EHWPOISON 133 Memory page has hardware error
ENOTSUP 95 Operation not supported
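These codes are also reachable from programs without the errno tool; for example, Python's standard errno and os modules expose the same table (descriptions shown are the standard Linux/glibc ones):

```python
import errno
import os

# errno maps symbolic names to numbers, errorcode maps numbers back to
# names, and os.strerror() returns the system's description for a code.
print(errno.ENOENT)               # 2
print(errno.errorcode[13])        # EACCES
print(os.strerror(errno.EACCES))  # Permission denied
```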
| What are the standard error codes in Linux? |
1,320,067,648,000 |
I want to print the value of /dev/stdin, /dev/stdout and /dev/stderr.
Here is my simple script:
#!/bin/bash
echo your stdin is : $(</dev/stdin)
echo your stdout is : $(</dev/stdout)
echo your stderr is : $(</dev/stderr)
I use the following pipes:
[root@localhost home]# ls | ./myscript.sh
[root@localhost home]# testerr | ./myscript.sh
Only $(</dev/stdin) seems to work. I've also found on some other questions people using "${1-/dev/stdin}"; I tried it without success.
|
stdin, stdout, and stderr are streams attached to file descriptors 0, 1, and 2 respectively of a process.
At the prompt of an interactive shell in a terminal or terminal emulator, all those 3 file descriptors would refer to the same open file description which would have been obtained by opening a terminal or pseudo-terminal device file (something like /dev/pts/0) in read+write mode.
If from that interactive shell, you start your script without using any redirection, your script will inherit those file descriptors.
On Linux, /dev/stdin, /dev/stdout, /dev/stderr are symbolic links to /proc/self/fd/0, /proc/self/fd/1, /proc/self/fd/2 respectively, themselves special symlinks to the actual file that is open on those file descriptors.
They are not stdin, stdout, stderr, they are special files that identify what files stdin, stdout, stderr go to (note that it's different in other systems than Linux that have those special files).
reading something from stdin means reading from file descriptor 0 (which will point somewhere within the file referenced by /dev/stdin).
But in $(</dev/stdin), the shell is not reading from stdin, it opens a new file descriptor for reading on the same file as the one open on stdin (so reading from the start of the file, not where stdin currently points to).
Except in the special case of terminal devices open in read+write mode, stdout and stderr are usually not open for reading. They are meant to be streams that you write to. So reading from the file descriptor 1 will generally not work. On Linux, opening /dev/stdout or /dev/stderr for reading (as in $(</dev/stdout)) would work and would let you read from the file where stdout goes to (and if stdout was a pipe, that would read from the other end of the pipe, and if it was a socket, it would fail as you can't open a socket).
In our case of the script run without redirection at the prompt of an interactive shell in a terminal, all of /dev/stdin, /dev/stdout and /dev/stderr will be that /dev/pts/x terminal device file.
Reading from those special files returns what is sent by the terminal (what you type on the keyboard). Writing to them will send the text to the terminal (for display).
echo $(</dev/stdin)
echo $(</dev/stderr)
will be the same. To expand $(</dev/stdin), the shell will open that /dev/pts/0 and read what you type until you press ^D on an empty line. They will then pass the expansion (what you typed stripped of the trailing newlines and subject to split+glob) to echo which will then output it on stdout (for display).
However in:
echo $(</dev/stdout)
in bash (and bash only), it's important to realise that inside $(...), stdout has been redirected. It is now a pipe. In the case of bash, a child shell process is reading the content of the file (here /dev/stdout) and writing it to the pipe, while the parent reads from the other end to make up the expansion.
In this case when that child bash process opens /dev/stdout, it is actually opening the reading end of the pipe. Nothing will ever come from that, it's a deadlock situation.
If you wanted to read from the file pointed-to by the scripts stdout, you'd work around it with:
{ echo content of file on stdout: "$(</dev/fd/3)"; } 3<&1
That would duplicate the fd 1 onto the fd 3, so /dev/fd/3 would point to the same file as /dev/stdout.
With a script like:
#! /bin/bash -
printf 'content of file on stdin: %s\n' "$(</dev/stdin)"
{ printf 'content of file on stdout: %s\n' "$(</dev/fd/3)"; } 3<&1
printf 'content of file on stderr: %s\n' "$(</dev/stderr)"
When run as:
echo bar > err
echo foo | myscript > out 2>> err
You'd see in out afterwards:
content of file on stdin: foo
content of file on stdout: content of file on stdin: foo
content of file on stderr: bar
If as opposed to reading from /dev/stdin, /dev/stdout, /dev/stderr, you wanted to read from stdin, stdout and stderr (which would make even less sense), you'd do:
#! /bin/sh -
printf 'what I read from stdin: %s\n' "$(cat)"
{ printf 'what I read from stdout: %s\n' "$(cat <&3)"; } 3<&1
printf 'what I read from stderr: %s\n' "$(cat <&2)"
If you started that second script again as:
echo bar > err
echo foo | myscript > out 2>> err
You'd see in out:
what I read from stdin: foo
what I read from stdout:
what I read from stderr:
and in err:
bar
cat: -: Bad file descriptor
cat: -: Bad file descriptor
For stdout and stderr, cat fails because the file descriptors were open for writing only, not reading, so the expansion of $(cat <&3) and $(cat <&2) is empty.
If you called it as:
echo out > out
echo err > err
echo foo | myscript 1<> out 2<> err
(where <> opens in read+write mode without truncation), you'd see in out:
what I read from stdin: foo
what I read from stdout:
what I read from stderr: err
and in err:
err
You'll notice that nothing was read from stdout, because the previous printf had overwritten the content of out with what I read from stdin: foo\n and left the stdout position within that file just after. If you had primed out with some larger text, like:
echo 'This is longer than "what I read from stdin": foo' > out
Then you'd get in out:
what I read from stdin: foo
read from stdin": foo
what I read from stdout: read from stdin": foo
what I read from stderr: err
See how the $(cat <&3) has read what was left after the first printf and doing so also moved the stdout position past it so that the next printf outputs what was read after.
| echo or print /dev/stdin /dev/stdout /dev/stderr |
1,320,067,648,000 |
In Linux, it seems that filesystem time is always some milliseconds behind system time, leading to inconsistencies if you want to check if a file has been modified before or after a given time in very narrow time ranges (milliseconds).
In any Linux system with a filesystem that supports nanosecond resolution (I tried with ext4 with 256-byte inodes and ZFS), if you try to do something like:
date +%H:%M:%S.%N; echo "hello" > test1; stat -c %y test1 | cut -d" " -f 2
the second output value (file modification time) is always some milliseconds behind the first one (system time), e.g.:
17:26:42.400823099
17:26:42.395348462
while it should be the other way around, since the file test1 is modified after calling the date command.
You can get the same result in python:
import os, time

def test():
    print(time.time())
    with open("test1", "w") as f:
        f.write("hello")
    print(os.stat("test1").st_mtime)

test()
1698255477.3125281
1698255477.3070245
Why is it so, and is there a way to avoid it, so that system time is consistent with filesystem time? The only workaround I found so far is to get filesystem "time" (whatever that means in practice) by creating a dummy temporary file and getting its modification time, like this:
def get_filesystem_time():
    """
    get the current filesystem time by creating a temporary file and getting
    its modification time.
    """
    with tempfile.NamedTemporaryFile() as f:
        return os.stat(f.name).st_mtime
but I wonder if there is a cleaner solution.
|
The time used for file timestamps is the time at the last timer tick, which is always slightly in the past. The current_time function in inode.c calls ktime_get_coarse_real_ts64:
/**
* current_time - Return FS time
* @inode: inode.
*
* Return the current time truncated to the time granularity supported by
* the fs.
*
* Note that inode and inode->sb cannot be NULL.
* Otherwise, the function warns and returns time without truncation.
*/
struct timespec64 current_time(struct inode *inode)
{
struct timespec64 now;
ktime_get_coarse_real_ts64(&now);
if (unlikely(!inode->i_sb)) {
WARN(1, "current_time() called with uninitialized super_block in the inode");
return now;
}
return timestamp_truncate(now, inode);
}
and the latter is part of a family of functions documented as follows:
These are quicker than the non-coarse versions, but less accurate, corresponding to CLOCK_MONOTONIC_COARSE and CLOCK_REALTIME_COARSE in user space, along with the equivalent boottime/tai/raw timebase not available in user space.
The time returned here corresponds to the last timer tick, which may be as much as 10ms in the past (for CONFIG_HZ=100), same as reading the 'jiffies' variable. These [functions] are only useful when called in a fast path and one still expects better than second accuracy, but can't easily use 'jiffies', e.g. for inode timestamps. Skipping the hardware clock access saves around 100 CPU cycles on most modern machines with a reliable cycle counter, but up to several microseconds on older hardware with an external clocksource.
Note the specific mention of inode timestamps.
I’m not aware of any way of avoiding this entirely, short of modifying the kernel. You can reduce the impact by increasing CONFIG_HZ. There has been a recent proposal to improve this, which is still being worked on.
| Why is filesystem time always some msecs behind system time in Linux? |
1,320,067,648,000 |
The man page for cp(1) says
--no-clobber do not overwrite an existing file
However, wouldn't the following scenario be possible?
cp checks the file existence, let's assume the file doesn't exist (yet)
Some other process writes to the same path, so now there is data written to the previously not existing file
Since cp isn't aware of the now existing file, it overwrites the data
Is cp --no-clobber vulnerable to this race condition? And if not, how does cp avoid the situation above?
|
cp isn’t vulnerable to this race condition. When --no-clobber is set, it checks whether the destination already exists; if it determines it doesn’t, and it should therefore proceed with the copy, it remembers that it’s supposed to copy to a new file. When the time comes to open the destination file, it opens it with flags which enforce its creation, O_CREAT and O_EXCL; the operating system then checks that the file doesn’t exist while opening it, and fails (EEXIST) if it does.
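The O_CREAT|O_EXCL behaviour can be demonstrated directly; here is a Python sketch of the same atomic check-and-create (the path is a throwaway temp file created for the demo):

```python
import errno
import os
import tempfile

# O_CREAT | O_EXCL asks the kernel to check for existence and create the
# file in one atomic step, so no other process can sneak in between the
# check and the creation -- this is what closes cp's would-be race window.
path = os.path.join(tempfile.mkdtemp(), "target")

fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
os.close(fd)  # first attempt succeeds: the file did not exist

try:
    os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
except FileExistsError as e:
    print(errno.errorcode[e.errno])  # EEXIST: the file now exists
```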
| Is `cp --no-clobber` vulnerable to race condition? |
1,320,067,648,000 |
I want to get the current CPUPower governor.
When I type cpupower frequency-info I get a lot of information. I just want the governor, e.g. "ondemand", with no extra information, so I can use its value in a program.
|
The current governor can be obtained as follows:
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
Note that cpu* will give you the scaling governor of all your cores and not just e.g. cpu0.
This solution might be system dependent, though. I'm not 100% sure this is portable.
| How to get current CPUPower governor |
1,320,067,648,000 |
How do I stop a program from running at startup in Linux? I want to remove some apps (e.g. apache2) from startup so they can be managed by supervisord.
|
Depending on your distro use the chkconfig or update-rc.d tool to enable/disable system services.
On a redhat/suse/mandrake style system:
sudo chkconfig apache2 off
On Debian:
sudo update-rc.d -f apache2 remove
Check out their man pages for more info.
| Stop program running at startup in Linux |
1,320,067,648,000 |
I have a laptop with a multi guesture touchpad. My touchpad never works in any Linux distro such as Ubuntu, Fedora, openSUSE, Linux Mint, Knoppix, Puppy, Slitaz and lots more. I have tried lots of things but nothing worked. I have been struggling with the Synaptics drivers for over one year but it doesn't work either.
Then somewhere I read about the i8042.nomux kernel option. So I booted Ubuntu with following options:
i8042.nomux=1 i8042.reset
This made my touchpad work on all variants of Ubuntu and its derivatives like Linux Mint.
I am eager to know about these options. If I knew what it does exactly, I would be able to use my touchpad in all linux distros, as this option only works with Ubuntu.
|
This is an arcane option, only necessary on some rare devices (one of which you have). The only documentation is one line in the kernel parameters list.
The i8042 controller controls PS/2 keyboards and mice in PCs. It seems that on your laptop, both the keyboard and the touchpad are connected through that chip.
From what I understand from the option name and a brief skim of the source code (don't rely on this to write an i8042 driver!), some i8042 chips are capable of multiplexing data coming from multiple pointing devices. The traditional PS/2 interface only provides for one keyboard and one mouse; modern laptops often have two or more of a touchpad, a trackstick and an external PS/2 plug. Some controllers follow the active PS/2 multiplexing specification, which permits up to 4 devices; the data sent by each device carries an indication of which device it comes from.
The Linux driver tries to find out whether the i8042 controller supports multiplexing, but sometimes guesses wrong. With the i8042.nomux=1 parameter, the driver does not try to detect whether the controller supports multiplexing and assumes that it doesn't. With the i8042.reset parameter, the driver resets the controller when starting, which may be useful to disable multiplexing mode if the controller does support it, but in a buggy way.
| What does the 'i8042.nomux=1' kernel option do during booting of Ubuntu? |
1,320,067,648,000 |
Is there some way I can check which of my processes the kernel has killed? Sometimes I log onto my server and find that something that should've run all night just stopped 8 hours in and I'm unsure if it's the applications doing or the kernels.
|
If the kernel killed a process (because the system ran out of memory), there will be a kernel log message. Check in /var/log/kern.log (on Debian/Ubuntu, other distributions might send kernel logs to a different file, but usually under /var/log under Linux).
Note that if the OOM-killer (out-of-memory killer) triggered, it means you don't have enough virtual memory. Add more swap (or perhaps more RAM).
Some process crashes are recorded in kernel logs as well (e.g. segmentation faults).
If the processes were started from cron, you should have a mail with error messages. If the processes were started from a shell in a terminal, check the errors in that terminal. Run the process in screen to see the terminal again in the morning. This might not help if the OOM-killer triggered, because it might have killed the cron or screen process as well; but if you ran into the OOM-killer, that's the problem you need to fix.
| Where can I see a list of kernel killed processes? |
1,320,067,648,000 |
Is there a tool available for Linux systems that can measure the "quality" of entropy on the system?
I know how to check the amount of available entropy:
cat /proc/sys/kernel/random/entropy_avail
And I know that some systems have "good" sources of entropy (hardware entropy keys), and some don't (virtual machines).
But is there a tool that can provide a metric as to the "quality" of the entropy on the system?
|
http://www.fourmilab.ch/random/ works for me.
sudo apt-get install ent
head -c 1M /dev/urandom > /tmp/out
ent /tmp/out
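If ent is not available, you can compute a rough Shannon-entropy estimate over byte frequencies yourself. Note this is only a sanity check (equivalent to ent's "Entropy" line), not a cryptographic-quality randomness test:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Estimated bits of entropy per byte, from byte frequencies."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A good random source scores very close to 8 bits per byte;
# highly repetitive data scores near 0.
with open("/dev/urandom", "rb") as f:
    sample = f.read(1 << 20)  # 1 MiB
print(f"{shannon_entropy(sample):.4f} bits/byte")
```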
| Tool for measuring entropy quality? |
1,320,067,648,000 |
I find that in order to re-mount a USB stick, I have to physically disconnect it, and then re-connect it. How can I do this without such tiring physical action?
|
From my experience in Ubuntu, when you "eject" a USB stick from within Nautilus, the device actually disappears from the system. I'm not sure why this is, but neither Nautilus nor the command line can get it back. I guess the logic is that once you eject a USB stick you don't want it back, but are going to disconnect it.
The way I work around this (when needed) is to use umount instead of Nautilus. You could also just call sync to flush the filesystem buffers to the disk.
Just found a thread which has more info : http://ubuntuforums.org/showthread.php?t=1477247
So basically either a) Rebuild nautilus from source without that patch (and keep it up to date when you update your system...) or b) use another file manager (at least when unmounting ^^).
| How to re-mount a USB stick after unmounting from Nautilus without disconnecting it? |
1,320,067,648,000 |
I am upgrading the internal SATA hard drive on my laptop from a 40G drive to a 160G drive. I have a Linux/Ubuntu desktop which has a SATA card. I would actually like to do the same thing for a couple CentOS & FreeBSD boxes at work, and it seems this would have the same solution.
I've heard that I can use DD to mirror the 40G partition to the 160G drive, or that I can save the 40G partition as an image on my local system, and then copy that 40G image to the 160G drive.
Can anyone describe how I may do this? Do I need any other utilities, such as gparted?
|
Your first task would be to connect both disks to an existing Linux system or connect the new disk to the original system.
You must be very careful since it is very simple to copy the blank disk on top of the good disk!
To end up with the boot sectors and all, you would do something like:
dd if=/dev/hdx of=/dev/hdy
Where hdx is your 40G disk and hdy is your 160G disk. You will notice there are no partition numbers like /dev/hdx1. This copies the entire disk, partition info and all.
Your new disk will look just like the old disk, with only 40G allocated. It should boot right up when placed back in your laptop. Hopefully you used LVM? Otherwise, hopefully you did not use all the partitions? Getting past this point requires a lot more info.
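The same style of invocation can be rehearsed safely on ordinary files before pointing it at real devices. The /tmp paths below are stand-ins for the source and target disks; the larger block size (bs=) and a verification step are worth carrying over to the real run (status=none needs a reasonably recent GNU dd):

```shell
# Create a 4 MiB stand-in for the source disk, clone it, verify the copy.
dd if=/dev/urandom of=/tmp/src.img bs=1M count=4 2>/dev/null
dd if=/tmp/src.img of=/tmp/dst.img bs=4M conv=noerror,sync status=none
cmp /tmp/src.img /tmp/dst.img && echo "clone verified"
```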
Another solution is to dump each individual partition. This requires a lot more situation awareness since you will need to recreate the boot information.
All of this is best used for cloning computers, not upgrading hard disks. It is much better to restore to a new installation using your backups.
| How can I use DD to migrate data from an old drive to a new drive? |
1,320,067,648,000 |
Security researchers have published on Project Zero a new pair of vulnerabilities called Spectre and Meltdown, allowing a program to steal information from the memory of other programs. It affects the Intel, AMD and ARM architectures.
This flaw can be exploited remotely by visiting a website running malicious JavaScript. Technical details can be found on the Red Hat website and from the Ubuntu security team.
Information Leak via speculative execution side channel attacks (CVE-2017-5715, CVE-2017-5753, CVE-2017-5754 a.k.a. Spectre and Meltdown)
It was discovered that a new class of side channel attacks impact most processors, including processors from Intel, AMD, and ARM. The attack allows malicious userspace processes to read kernel memory and malicious code in guests to read hypervisor memory. To address the issue, updates to the Ubuntu kernel and processor microcode will be needed. These updates will be announced in future Ubuntu Security Notices once they are available.
Example Implementation in JavaScript
As a proof-of-concept, JavaScript code was written that, when run in the Google Chrome browser, allows JavaScript to read private memory from the process in which it runs.
My system seems to be affected by the Spectre vulnerability. I have compiled and executed this proof-of-concept (spectre.c).
System information:
$ uname -a
4.13.0-0.bpo.1-amd64 #1 SMP Debian 4.13.13-1~bpo9+1 (2017-11-22) x86_64 GNU/Linux
$ cat /proc/cpuinfo
model name : Intel(R) Core(TM) i3-3217U CPU @ 1.80GHz
$ gcc --version
gcc (Debian 6.3.0-18) 6.3.0 20170516
How to mitigate the Spectre and Meltdown vulnerabilities on Linux systems?
Further reading: Using Meltdown to steal passwords in real time.
Update
Using the Spectre & Meltdown Checker after switching to the 4.9.0-5 kernel, following @Carlos Pasqualini's answer, because a security update is available to mitigate CVE-2017-5754 on Debian Stretch:
CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
* Checking count of LFENCE opcodes in kernel: NO (only 31 opcodes found, should be >= 70)
> STATUS: VULNERABLE (heuristic to be improved when official patches become available)
CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
* Mitigation 1
* Hardware (CPU microcode) support for mitigation: NO
* Kernel support for IBRS: NO
* IBRS enabled for Kernel space: NO
* IBRS enabled for User space: NO
* Mitigation 2
* Kernel compiled with retpoline option: NO
* Kernel compiled with a retpoline-aware compiler: NO
> STATUS: VULNERABLE (IBRS hardware + kernel support OR kernel with retpoline are needed to mitigate the vulnerability)
CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
* Kernel supports Page Table Isolation (PTI): YES
* PTI enabled and active: YES
> STATUS: NOT VULNERABLE (PTI mitigates the vulnerability)
Update Jan 25, 2018
The spectre-meltdown-checker script is now officially packaged by Debian; it is available for Debian Stretch through the backports repository, and in Buster and Sid.
Update 05/22/2018
Speculative Store Bypass (SSB) – also known as Variant 4
Systems with microprocessors utilizing speculative execution and speculative execution of memory reads before the addresses of all prior memory writes are known may allow unauthorized disclosure of information to an attacker with local user access via a side-channel analysis.
Rogue System Register Read (RSRE) – also known as Variant 3a
Systems with microprocessors utilizing speculative execution and that perform speculative reads of system registers may allow unauthorized disclosure of system parameters to an attacker with local user access via a side-channel analysis.
Edit July 27, 2018
NetSpectre: Read Arbitrary Memory over Network
In this paper, we present NetSpectre, a new attack based on
Spectre variant 1, requiring no attacker-controlled code on the
target device, thus affecting billions of devices. Similar to a local
Spectre attack, our remote attack requires the presence of a Spectre
gadget in the code of the target. We show that systems containing
the required Spectre gadgets in an exposed network interface or API
can be attacked with our generic remote Spectre attack, allowing to
read arbitrary memory over the network. The attacker only sends
a series of crafted requests to the victim and measures the response
time to leak a secret value from the victim’s memory.
|
Alan Cox shared a link from AMD's blog:
https://www.amd.com/en/corporate/speculative-execution
Variant One: Bounds Check Bypass
Resolved by software / OS updates to be made available by system
vendors and manufacturers. Negligible performance impact expected.
Variant Two: Branch Target Injection
Differences in AMD architecture mean there is a near zero risk of
exploitation of this variant. Vulnerability to Variant 2 has not been
demonstrated on AMD processors to date.
Variant Three: Rogue Data Cache Load
Zero AMD vulnerability due to AMD architecture differences.
It would be good to have third-party confirmation of these AMD statements, though.
The 'mitigation' on affected systems requires a new kernel and a reboot, but many distributions have not yet released packages with the fixes:
https://www.cyberciti.biz/faq/patch-meltdown-cpu-vulnerability-cve-2017-5754-linux/
Debian:
https://security-tracker.debian.org/tracker/CVE-2017-5715
https://security-tracker.debian.org/tracker/CVE-2017-5753
https://security-tracker.debian.org/tracker/CVE-2017-5754
Other sources of information I found:
https://lists.bufferbloat.net/pipermail/cerowrt-devel/2018-January/011108.html
https://www.reddit.com/r/Amd/comments/7o2i91/technical_analysis_of_spectre_meltdown/
| How to mitigate the Spectre and Meltdown vulnerabilities on Linux systems? |
1,320,067,648,000 |
What sets the size of the tmpfs? (On my machine it resides in /dev/shm)
I can see its entry in /etc/fstab, but no notation of its size.
When checking with df -h, it seems to be half the size of the physical memory installed in the system.
Is this the default behavior?
Also, what happens if it gets full? Does it expand dynamically, forcing other running programs into swap? Does tmpfs itself move into the swap partition?
Finally, what takes priority in memory, tmpfs or applications? I.e., if I have tmpfs sufficiently full (like 40% of physical memory) and I have programs that require 70% of physical memory, which one gets priority?
|
What sets the size of the tmpfs? (On my machine it resides in /dev/shm) I can see its entry in /etc/fstab, but no notation of its size.
The kernel documentation covers this underneath the mount options:
size: The limit of allocated bytes for this tmpfs instance. The default is half of your physical RAM without swap. If you oversize your tmpfs instances the machine will deadlock
(Emphasis mine)
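That default can be overridden with the size mount option. For example, a sketch of an /etc/fstab entry capping /dev/shm at 2G (the figure is arbitrary); the same limit can be applied to a running system with sudo mount -o remount,size=2G /dev/shm:

```
tmpfs  /dev/shm  tmpfs  defaults,size=2G  0  0
```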
Also, what happens if it gets full?
As referenced above if you've committed too much to tmpfs your machine will deadlock. Otherwise (if it's just reached its hard limit) it returns ENOSPC just like any other filesystem.
Finally, what takes priority in memory, tmpfs or applications? I.e., if I have tmpfs sufficiently full (like 40% of physical memory) and I have programs that require 70% of physical memory, which one gets priority?
It's similar to the contention between programs. The pages most used will tend to be in physical memory while the least used pages will tend to be swapped out.
If you need to ensure the pages are always in physical memory you can use ramfs which is similar but doesn't enforce a size limit and doesn't swap.
| What sets the size of tmpfs? What happens when its full? |
1,320,067,648,000 |
I have this issue with a Lenovo ThinkCentre Edge. Its keyboard has an Fn key, which acts in my Ubuntu (with Fluxbox) as if it were always "active/pressed".
I can't use the standard F1-F12 keys unless I hold down this stupid key. You see, I'm a programmer, so it's a real pain for me.
So I decided to remap the function keys with xev and xmodmap.
I remapped F1-F3 and up to this point everything was fine, but F4 does some kind of window minimization. When I run xev and hit F4, I don't get a reply from the program with a keycode and such; instead the window is minimized, and when I maximize the window again there is no response from the key.
Important info: The function of Fn key can't be disabled in the BIOS.
So the question is: Do you have ANY idea how to solve my mystery?
EDIT:
# content of .fluxbox/keys
# click on the desktop to get menus
OnDesktop Mouse1 :HideMenus
OnDesktop Mouse2 :WorkspaceMenu
OnDesktop Mouse3 :RootMenu
# scroll on the desktop to change workspaces
OnDesktop Mouse4 :PrevWorkspace
OnDesktop Mouse5 :NextWorkspace
# scroll on the toolbar to change current window
OnToolbar Mouse4 :PrevWindow {static groups} (iconhidden=no)
OnToolbar Mouse5 :NextWindow {static groups} (iconhidden=no)
# alt + left/right click to move/resize a window
OnWindow Mod1 Mouse1 :MacroCmd {Raise} {Focus} {StartMoving}
OnWindowBorder Move1 :StartMoving
OnWindow Mod1 Mouse3 :MacroCmd {Raise} {Focus} {StartResizing NearestCorner}
OnLeftGrip Move1 :StartResizing bottomleft
OnRightGrip Move1 :StartResizing bottomright
# alt + middle click to lower the window
OnWindow Mod1 Mouse2 :Lower
# control-click a window's titlebar and drag to attach windows
OnTitlebar Control Mouse1 :StartTabbing
# double click on the titlebar to shade
OnTitlebar Double Mouse1 :Shade
# left click on the titlebar to move the window
OnTitlebar Mouse1 :MacroCmd {Raise} {Focus} {ActivateTab}
OnTitlebar Move1 :StartMoving
# middle click on the titlebar to lower
OnTitlebar Mouse2 :Lower
# right click on the titlebar for a menu of options
OnTitlebar Mouse3 :WindowMenu
# alt-tab
Mod1 Tab :NextWindow {groups} (workspace=[current])
Mod1 Shift Tab :PrevWindow {groups} (workspace=[current])
# cycle through tabs in the current window
Control Tab :NextTab
Control Shift Tab :PrevTab
# go to a specific tab in the current window
Mod4 1 :Tab 1
Mod4 2 :Tab 2
Mod4 3 :Tab 3
Mod4 4 :Tab 4
Mod4 5 :Tab 5
Mod4 6 :Tab 6
Mod4 7 :Tab 7
Mod4 8 :Tab 8
Mod4 9 :Tab 9
# open a terminal
Mod1 F1 :Exec x-terminal-emulator
# open a dialog to run programs
Mod1 F2 :Exec fbrun
# volume settings, using common keycodes
# if these don't work, use xev to find out your real keycodes
176 :Exec amixer sset Master,0 1+
174 :Exec amixer sset Master,0 1-
160 :Exec amixer sset Master,0 toggle
# current window commands
Mod1 F4 :Close
Mod1 F5 :Kill
# open the window menu
Mod1 space :WindowMenu
# exit fluxbox
Control Mod1 Delete :Exit
# change to previous/next workspace
Control Mod1 Left :PrevWorkspace
Control Mod1 Right :NextWorkspace
# change to a specific workspace
Control F1 :Workspace 1
Control F2 :Workspace 2
Control F3 :Workspace 3
Control F4 :Workspace 4
# personal
Mod4 d :ShowDesktop
Mod4 m :Maximize
Mod4 f :Exec firefox
Mod4 u :Exec unison-gtk
Mod4 e :Exec eclipse
Mod4 t :Exec thunderbird
Mod4 q :Exec qutim
Mod4 s :Exec skype
Ubuntu is 12.04 LTS, kernel
3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
|
Press Fn + Num Lock to disable it.
| Switch Fn key state |
1,320,067,648,000 |
I have a few thousand files that are individually GZip compressed (passing of course the -n flag so the output is deterministic). They then go into a Git repository. I just discovered that for 3 of these files, Gzip doesn't produce the same output on macOS vs Linux. Here's an example:
macOS
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | shasum -a 256
0ac378465b576991e1c7323008efcade253ce1ab08145899139f11733187e455 -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip --fast -n | shasum -a 256
6e145c6239e64b7e28f61cbab49caacbe0dae846ce33d539bf5c7f2761053712 -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip -n | shasum -a 256
3562fd9f1d18d52e500619b4a5d5dfa709f5da8601b9dd64088fb5da8de7b281 -
$ gzip --version
Apple gzip 272.250.1
Linux
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | shasum -a 256
0ac378465b576991e1c7323008efcade253ce1ab08145899139f11733187e455 -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip --fast -n | shasum -a 256
10ac8b80af8d734ad3688aa6c7d9b582ab62cf7eda6bc1a0f08d6159cad96ddc -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip -n | shasum -a 256
cbf249e3a35f62a4f3b13e2c91fe0161af5d96a58727d17cf7a62e0ac3806393 -
$ gzip --version
gzip 1.6
Copyright (C) 2007, 2010, 2011 Free Software Foundation, Inc.
Copyright (C) 1993 Jean-loup Gailly.
This is free software. You may redistribute copies of it under the terms of
the GNU General Public License <http://www.gnu.org/licenses/gpl.html>.
There is NO WARRANTY, to the extent permitted by law.
Written by Jean-loup Gailly.
How is this possible? I thought the GZip implementation was completely standard?
UPDATE: Just to confirm that macOS and Linux versions do produce the same output most of the time, both OSes output the same hash for:
$ echo "Vive la France" | gzip --fast -n | shasum -a 256
af842c0cb2dbf94ae19f31c55e05fa0e403b249c8faead413ac2fa5e9b854768 -
|
Note that Deflate, the compression algorithm used by gzip, does not define a unique compressed output for a given input. To elaborate: for some data, there is more than one possible compressed output, depending on the algorithmic implementation and the parameters used. So there's no guarantee at all that Apple gzip and gzip 1.6 will return the same compressed output. These outputs are all valid gzip streams; the standard just guarantees that every one of these possible outputs will decompress to the same original data.
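You can see this with a single gzip binary by varying only the compression level: the two compressed outputs are different byte streams, yet both are valid and round-trip to identical data. A quick sketch:

```shell
# Same input, two valid but different gzip streams:
seq 1 1000 | gzip -1 -n | sha256sum
seq 1 1000 | gzip -9 -n | sha256sum
# Both decompress to the identical original:
seq 1 1000 | gzip -9 -n | gunzip | sha256sum
seq 1 1000 | sha256sum
```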
| GZip doesn't produce the same compressed result on macOS vs Linux |
1,320,067,648,000 |
While I was learning about cpu load, I came to know that it depends on the number of cores. If I have 2 cores then load 2 will give 100% cpu utilization.
So I tried to find out the cores. (I already know that the system has 2 cores and 4 threads, so 2 virtual cores per physical core; check here about the processor.) So I ran cat /proc/cpuinfo
Which gave me
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 69
model name : Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz
stepping : 1
microcode : 0x17
cpu MHz : 774.000
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
bogomips : 3591.40
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 69
model name : Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz
stepping : 1
microcode : 0x17
cpu MHz : 1600.000
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 2
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
bogomips : 3591.40
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:
processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 69
model name : Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz
stepping : 1
microcode : 0x17
cpu MHz : 800.000
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 2
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
bogomips : 3591.40
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 69
model name : Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz
stepping : 1
microcode : 0x17
cpu MHz : 774.000
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 2
apicid : 3
initial apicid : 3
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
bogomips : 3591.40
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:
Now I am totally confused. It shows 4 processors, with 2 cpu cores.
Can anyone explain this output?
Once my CPU load was 3.70. Is this the maximum load? Still, at that time the CPU was at <50%.
What about turbo boost? Are all cores turbo boosted, or only the physical ones?
Any method in Ubuntu to get the current CPU frequency, to see whether the processor is on turbo boost or not?
Load at 3.70 was about 100%, but CPU usage wasn't 100% because of IO response time. This does not mean that the IO device will be at maximum speed, but the IO device will be 100% busy, which sometimes affects applications using IO (e.g. music may break up).
|
The words “CPU”, “processor” and “core” are used in somewhat confusing ways. They refer to the processor architecture. A core is the smallest independent unit that implements a general-purpose processor; a processor is an assemblage of cores (on some ARM systems, a processor is an assemblage of clusters which themselves are assemblages of cores). A chip can contain one or more processors (x86 chips contain a single processor, in this sense of the word processor).
Hyperthreading means that some parts of a core are duplicated. A core with hyperthreading is sometimes presented as an assemblage of two “virtual cores” — meaning not that each core is virtual, but that the plural is virtual because these are not actually separate cores and they will sometimes have to wait while the other core is making use of a shared part.
As far as software is concerned, there is only one concept that's useful almost everywhere: the notion of parallel threads of execution. So in most software manuals, the terms CPU and processor are used to mean any one piece of hardware that executes program code. In hardware terms, this means one core, or one virtual core with hyperthreading.
Thus top shows you 4 CPUs, because you can have 4 threads executing at the same time. /proc/cpuinfo has 4 entries, one for each CPU (in that sense). The processor numbers (which are the number of the cpuNUMBER entries in /sys/devices/system/cpu) correspond to these 4 threads.
/proc/cpuinfo is one of the few places where you get information about what hardware implements these threads of execution:
physical id : 0
siblings : 4
core id : 0
cpu cores : 2
means that cpu0 is one of 4 threads inside physical component (processor) number 0, and that's in core 0 among 2 in this processor.
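For a quick count without reading /proc/cpuinfo entry by entry, nproc reports the same number of threads of execution (lscpu, from util-linux, additionally summarises the socket/core/thread layout if you want the full topology):

```shell
# Threads of execution ("CPUs" in top's sense), counted two ways:
nproc
grep -c '^processor' /proc/cpuinfo
```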
| Number of processors in /proc/cpuinfo |
1,320,067,648,000 |
It seems to me a swap file is more flexible.
|
A swap file is more flexible but also more fallible than a swap partition. A filesystem error could damage the swap file. A swap file can be a pain for the administrator, since the file can't be moved or deleted. A swap file can't be used for hibernation. A swap file was slightly slower in the past, though the difference is negligible nowadays.
The advantage of a swap file is not having to decide the size in advance. However, under Linux, you still can't resize a swap file online: you have to unregister it, resize, then reregister (or create a different file and remove the old one). So there isn't that much benefit to a swap file under Linux, compared to a swap partition. It's mainly useful when you temporarily need more virtual memory, rather than as a permanent fixture.
| Why does Linux use a swap partition rather than a file? |
1,320,067,648,000 |
When using a tty login shell by entering Ctrl-Alt-F1 from an Ubuntu 12.04 install on a laptop the keyboard seems overly sensitive and if my finger lingers for a moment on a button I end up with repeats of the same letter. Is there a way to adjust keyboard sensitivity that would influence the keyboard response when accessing a login shell from a tty instance?
|
It is called the 'keyboard auto repeat rate' and you can set it with kbdrate. Mine is set to:
$ sudo kbdrate
Typematic Rate set to 10.9 cps (delay = 250 ms)
You can set same with:
$ sudo kbdrate -r 10.9 -d 250
Typematic Rate set to 10.9 cps (delay = 250 ms)
Check the manual page for exact options:
man kbdrate
Unsure where the default setting is done, but /etc/rc.local, your .bash_profile, .profile or .bashrc sounds like a good place.
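For example, a minimal /etc/rc.local sketch to make the setting persistent — assuming your distribution still executes rc.local at boot, and using the rate and delay values from above:

```shell
#!/bin/sh -e
# /etc/rc.local -- run once at the end of multi-user boot
kbdrate -s -r 10.9 -d 250
exit 0
```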
| Adjusting keyboard sensitivity in a command line terminal? |
1,320,067,648,000 |
I'm trying to force a newly created user to change a password at the first time login using ssh. For security reasons I want to give him a secure password until he logs in for the first time. I did the following so far:
useradd -s /bin/bash -m -d /home/foo foo
passwd foo
Doing chage -d 0 foo only gives me the error Your account has expired; please contact your system administrator on ssh login.
|
Change the age of the password to 0 days. Syntax:
chage -d 0 {user-name}
In this case:
chage -d 0 foo
This works for me over ssh as well.
| How do I force a user to change a password at the first time login using ssh? |
1,320,067,648,000 |
I used the 'useradd' command to create a new account, but I did so without specifying the password. Now, when the user tries to log in, it asks him for a password. If I didn't set it up initially, how do I set the password now?
|
Easiest way to do this from the command line is to use the passwd command with root privileges.
passwd username
From man 1 passwd
NAME
passwd - update user's authentication token
SYNOPSIS
passwd [-k] [-l] [-u [-f]] [-d] [-n mindays] [-x maxdays]
[-w warndays] [-i inactivedays] [-S] [--stdin] [username]
DESCRIPTION
The passwd utility is used to update user's authentication token(s).
After you set the user password, you can force the user to change it on next login using the chage command (also with root privileges) which expires the password.
chage -d 0 username
When the user successfully authenticates with the password you set, the user will automatically be prompted to change it. After a successful password change, the user will be disconnected, forcing re-authentication with the new password.
See man 1 chage for more information on password expiry.
| How do I set the password of a new user after the account has already been created? |
1,320,067,648,000 |
I'm working on a small control panel for my server. I need a command that will say if httpd is running or stopped.
Will probably be using the same code for other services as well.
|
Most people run their httpd (Apache, Nginx, etc) through an init system. That's almost certainly the case if you've installed from a package. Almost all of these init systems have a method for working out if it's running. In my case I'm using nginx, which ships a SysV-style init script that accepts a status argument, like so:
$ /etc/init.d/nginx status
* nginx is running
Obviously if you're running a different httpd, script or init system, you're going to have a slightly different syntax but unless you're manually launching the httpd yourself (which feels like the worst idea in the world), you're probably using a nice, managed start-up script that will allow you to query the status.
slm's answer has more about this sort of init querying but the problem with trusting that is it only really tells you if a process is still running. Your httpd's main process could be running but in some way deadlocked. It makes a lot of sense to skip simple init tests and move on to behavioural tests.
One thing we know about httpds is they listen. Usually on port *:80, but if yours doesn't, you can adapt the following code. Here I'm just awking the output of netstat to see if it's listening on the right port.
$ sudo netstat -ntlp | awk '$4=="0.0.0.0:80"'
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 2079/nginx
We could also check which process is running too to make sure the right httpd is running. We could do all sorts of checks. Depends how paranoid you want to be :)
But even that is only a reflection of an httpd. Want to really test it? Well let's test it.
$ wget --spider -S "http://localhost" 2>&1 | awk '/HTTP\// {print $2}'
200
I'm just looking at the response code (200 means "A-Okay!") but again, we could dig in and actually test the output to make sure it's being generated correctly.
But even this isn't that thorough. You're checking localhost and it's reporting 200, nothing wrong? What if beavers chewed through the network cable that supplies the httpd (but not the rest of the system)? Then what?! You're reporting uptime when you're actually down. Few things look more unprofessional than incorrect status data.
So let's talk to an external server (ideally on a completely different connection, in another galaxy far, far away) and ask it to query our server:
$ ssh tank 'wget --spider -S "http://bert" 2>&1' | awk '/HTTP\// {print $2}'
200
By this point, any issues reported are either in-app issues (which can have their own error-handling and -reporting), or they're at the client's end.
A combination of these tests can help nail down where the issue is too.
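Those layers can be wrapped into one small sketch. Everything here is an assumption to adjust: the process name, the port, the URL, and the use of pgrep/ss instead of the netstat shown above:

```shell
# Layered health check: process -> listening port -> HTTP response.
check_httpd() {
    name=$1; port=$2; url=$3
    pgrep -x "$name" >/dev/null 2>&1 \
        || { echo "DOWN: no $name process"; return 1; }
    ss -ntl 2>/dev/null | grep -q ":$port " \
        || { echo "DOWN: nothing listening on :$port"; return 1; }
    status=$(wget --spider -S "$url" 2>&1 | awk '/HTTP\// { s = $2 } END { print s }')
    [ "$status" = "200" ] && echo "UP" || { echo "DOWN: HTTP status $status"; return 1; }
}
# usage: check_httpd nginx 80 http://localhost/
```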
| How to find out if httpd is running or not via command line? |
1,320,067,648,000 |
Based on what I have read, when a terminal is in raw mode, the characters are not processed by the terminal driver, but are sent straight through.
I set the terminal in raw mode using the command stty raw, and I noticed that the output is indented to the right each time until there is no more room. This is what I mean:
Why is this behavior happening?!
|
One of the stty settings (onlcr) tells the terminal driver to convert newline (which is actually ASCII line-feed) to carriage-return plus line-feed.
Unix-like systems just write a newline to end lines, letting the terminal driver do the right thing (convert newline to carriage-return plus line-feed).
Carriage-return "goes left" and line-feed "goes down".
When you set the terminal to raw mode, newline will no longer be converted to carriage-return plus line-feed. Lacking the carriage-returns, you get that staircase effect.
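You can confirm that programs emit bare line feeds (and rely on the driver's onlcr translation to add the carriage returns) by dumping the bytes with od:

```shell
# printf emits only \n -- there is no \r anywhere in the stream:
printf 'one\ntwo\n' | od -c
```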
| Unexpected indentation behaviour when I set the terminal to raw mode – why is this happening? |