Right now I have the following:
# this function is meant for future script expansions
# its purpose is clear, i.e. to clean up some temp files
# now, it is doing nothing, just a special null command
cleanup_on_signal() { :; }
# define functions to handle signals
# treat them as errors with appropriate messages
# example calls:
# kill -15 this_script_name # POSIX, all shells compatible
# kill -TERM this_script_name # Bash and alike - newer shells
signal_handler_HUP() { cleanup_on_signal; print_error_and_exit "\ntrap()" "Caught SIGHUP (1).\n\tClean-up finished.\n\tTerminating. Bye!"; }
signal_handler_INT() { cleanup_on_signal; print_error_and_exit "\ntrap()" "Caught SIGINT (2).\n\tClean-up finished.\n\tTerminating. Bye!"; }
signal_handler_QUIT() { cleanup_on_signal; print_error_and_exit "\ntrap()" "Caught SIGQUIT (3).\n\tClean-up finished.\n\tTerminating. Bye!"; }
signal_handler_ABRT() { cleanup_on_signal; print_error_and_exit "\ntrap()" "Caught SIGABRT (6).\n\tClean-up finished.\n\tTerminating. Bye!"; }
signal_handler_TERM() { cleanup_on_signal; print_error_and_exit "\ntrap()" "Caught SIGTERM (15).\n\tClean-up finished.\n\tTerminating. Bye!"; }
# use the above functions as signal handlers;
# note that the SIG* constants are undefined in POSIX,
# and numbers are to be used for the signals instead
trap 'signal_handler_HUP' 1; trap 'signal_handler_INT' 2; trap 'signal_handler_QUIT' 3; trap 'signal_handler_ABRT' 6; trap 'signal_handler_TERM' 15
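The five near-identical handlers and `trap` calls above can also be generated in a loop; a sketch in POSIX sh (which has no arrays, hence the NAME:NUMBER list), assuming `cleanup_on_signal` and `print_error_and_exit` are defined as in the script:

```shell
for sig_spec in HUP:1 INT:2 QUIT:3 ABRT:6 TERM:15; do
    name=${sig_spec%%:*}
    num=${sig_spec#*:}
    # define signal_handler_<NAME> equivalent to the handlers above
    eval "signal_handler_${name}() {
        cleanup_on_signal
        print_error_and_exit \"\\ntrap()\" \"Caught SIG${name} (${num}).\\n\\tClean-up finished.\\n\\tTerminating. Bye!\"
    }"
    trap "signal_handler_${name}" "$num"
done
```

This keeps the handlers and the POSIX numeric trap registrations in a single place.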
I want the script to terminate tidily on shutdown, which right now it does.
But a colleague suggested that, on CTRL+C, the script should ask a question instead of quitting to the shell, which got me wondering:
I don't want to turn off the machine to find out (I don't do that often anyway), so:
What signal is sent to running programs / scripts on shutdown?
|
On shutdown, running processes are first told to stop by init (from sendsigs on old implementations, according to @JdeBP) or by systemd.
The remaining processes, if any, are sent a SIGTERM. The ones that ignore SIGTERM, or do not finish in time, are shortly thereafter sent a SIGKILL by init/systemd.
Those actions are meant to guarantee as stable and clean a shutdown as possible.
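The TERM-then-KILL sequence can be sketched as a tiny shell helper for a single PID (`term_then_kill` is a made-up name for illustration, not anything init or systemd provides):

```shell
# Emulate the shutdown sequence against one process: polite SIGTERM,
# a grace period with 1-second checks, then SIGKILL as a last resort.
term_then_kill() {
    pid=$1
    grace=${2:-5}
    kill -TERM "$pid" 2>/dev/null || return 0    # polite request first
    i=0
    while [ "$i" -lt "$grace" ]; do
        kill -0 "$pid" 2>/dev/null || return 0   # already gone, done
        sleep 1
        i=$((i + 1))
    done
    kill -KILL "$pid" 2>/dev/null || true        # hard stop after the grace period
}
```

Real init systems do this for whole process groups at once, but the logic per process is the same.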
Out of curiosity, see the report of a related (old) systemd bug:
Bug 1352264 - systemd immediately sends SIGKILL after SIGTERM during shutdown
systemd immediately sends SIGKILL after SIGTERM during shutdown,
there's no window of opportunity for processes to terminate
Also from shutdown.c/main():
disable_coredumps();
log_info("Sending SIGTERM to remaining processes...");
broadcast_signal(SIGTERM, true, true, arg_timeout);
log_info("Sending SIGKILL to remaining processes...");
broadcast_signal(SIGKILL, true, false, arg_timeout);
Also, from the sysvinit 2.94 sources (init.c), here is the code around a SIGTERM round. If any processes were sent a SIGTERM, init tests each second, for 5 seconds, whether any of them remain. It leaves the wait early if one of those tests finds no processes left, or sends SIGKILL to the remaining processes after the 5 seconds are up.
switch(round) {
case 0: /* Send TERM signal */
        if (talk)
                initlog(L_CO,
                        "Sending processes configured via /etc/inittab the TERM signal");
        kill(-(ch->pid), SIGTERM);
        foundOne = 1;
        break;
case 1: /* Send KILL signal and collect status */
        if (talk)
                initlog(L_CO,
                        "Sending processes configured via /etc/inittab the KILL signal");
        kill(-(ch->pid), SIGKILL);
        break;
}
talk = 0;
}
/*
 * See if we have to wait 5 seconds
 */
if (foundOne && round == 0) {
        /*
         * Yup, but check every second if we still have children.
         */
        for(f = 0; f < sleep_time; f++) {
                for(ch = family; ch; ch = ch->next) {
                        if (!(ch->flags & KILLME)) continue;
                        if ((ch->flags & RUNNING) && !(ch->flags & ZOMBIE))
                                break;
                }
                if (ch == NULL) {
                        /*
                         * No running children, skip SIGKILL
                         */
                        round = 1;
                        foundOne = 0; /* Skip the sleep below. */
                        break;
                }
                do_sleep(1);
        }
}
}
/*
 * Now give all processes the chance to die and collect exit statuses.
 */
if (foundOne) do_sleep(1);
for(ch = family; ch; ch = ch->next)
        if (ch->flags & KILLME) {
                if (!(ch->flags & ZOMBIE))
                        initlog(L_CO, "Pid %d [id %s] seems to hang", ch->pid,
                                ch->id);
                else {
                        INITDBG(L_VB, "Updating utmp for pid %d [id %s]",
                                ch->pid, ch->id);
                        ch->flags &= ~RUNNING;
                        if (ch->process[0] != '+')
                                write_utmp_wtmp("", ch->id, ch->pid, DEAD_PROCESS, NULL);
                }
        }
/*
* Both rounds done; clean up the list.
*/
| What signal is sent to running programs / scripts on shutdown? |
Related to this.
I'd like to take advantage of an OS switch to upgrade to BTRFS.
BTRFS claims to offer a lot (data-loss resiliency, self-healing if RAID, checksumming of metadata and data, compression, snapshots). But it's slow when used with fsync-intensive programs such as dpkg (I know about eatmydata and the crappy apt-btrfs-snapshot programs), and I won't set up a RAID :p.
EXT4 allows metadata checksumming only and doesn't compress data.
In 6 years, I had to reinstall my OS twice because of HDD corruption (after flight trips). The first made the laptop unbootable; the second bunch of corruptions was identified thanks to a corrupted film and a subsequent md5sum check of the OS binaries. (SMART tells me the disk is sane.) The laptop currently behaves quite strangely. I don't know whether the hardware or the software is to blame, but I suspect the hardware (it all began right after a flight, once again).
Would you advise to switch to BTRFS for a laptop because of data compression and check-summing or should I stick with EXT4?
(I don't care about which is "best" relative to whatever variable but I have almost no experience with BTRFS and would like some feedback)
EDIT:
Let's be clearer:
BTRFS is still flagged as experimental, I know, but SUSE says it shouldn't be anymore. So does Oracle (I know who Oracle is). And a bunch of distributions already offer BTRFS at installation, and most of them are planning to switch to it in the next few months.
Two facts:
Backups of corrupted data are worthless. I don't understand why I seem to be the only one to bother about this. Isn't that common sense? In the meanwhile:
Stop telling me I should do backups: I already do.
Stop implying backups are enough to keep my data safe, unless you are willing to give me TBs of free space to store years' worth of backups.
A corrupted file does not imply that Linux will complain. So:
Don't assume your system/data are sane just because the OS is booting.
I hope you understand that I prefer (meta)data checksumming to an over-engineered and bloated piece of software that would inconveniently do half as good a job as BTRFS at checking data integrity.
Is that more clear now that I am not asking for which FS is "better"? The question is, given that I regularly do backups, is BTRFS still too experimental to be used for its data-integrity checking functions or should I stick to EXT4?
|
I agree with vonbrand: btrfs is not yet at the maturity level of ext*, XFS or JFS, to name a few. I would not use it on a laptop with precious data unless I had a reliable backup that can also be done on the go.
Btrfs can detect corruption, but it won't do anything more than report the detection unless you have an available uncorrupted copy of the same data, which means you either need RAID or duplication of the data on the volume.
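For the detection part, btrfs' built-in scrub is the relevant tool; a minimal sketch (`/mountpoint` is a placeholder, and without RAID or DUP profiles a scrub can only report data corruption, not repair it):

```shell
# Walk all data and metadata on the filesystem, verifying checksums.
btrfs scrub start /mountpoint
# Check progress and any checksum errors found so far.
btrfs scrub status /mountpoint
```

Running this periodically (e.g. from cron) is how checksum errors usually get noticed before a file is actually read.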
That said, I am considering using it (using RAID-1) for one machine, but I also do have Crashplan running on this machine!
For a long time, I have been using JFS on my laptop. One reason was the lower CPU usage compared to XFS or ext3 when doing file operations. I never verified whether it reduced power consumption as well, but that was my assumption. I found JFS pretty stable and safe, and never lost data while using it.
| Should a laptop user switch from ext4 to btrfs? |
Could someone direct me to a command to measure TLB misses on Linux, please? Is it okay to consider (or approximate) minor page faults as TLB misses?
|
You can use perf to access the hardware performance counters:
$ perf stat -e dTLB-load-misses,iTLB-load-misses /path/to/command
e.g. :
$ perf stat -e dTLB-load-misses,iTLB-load-misses /bin/ls > /dev/null
Performance counter stats for '/bin/ls':
5,775 dTLB-load-misses
1,059 iTLB-load-misses
0.001897682 seconds time elapsed
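As for the second part of the question: minor page faults are a poor proxy for TLB misses. A minor fault is serviced by the kernel when a page is already resident but not yet mapped into the process, while most TLB misses are resolved by the hardware page walker without raising any fault at all, and are far more frequent. perf can count both kinds of event in one run (the exact hardware event names available depend on your CPU):

```shell
perf stat -e dTLB-load-misses,iTLB-load-misses,minor-faults /bin/ls > /dev/null
```

Comparing the counts from a run like this usually makes the difference in magnitude obvious.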
| Command to measure TLB misses on LINUX? |
Asked on serverfault but didn't get enough attention, so reposted here, with the hope some people here know the answer.
There is another question discussing unmounting rbind mounts, but the solution has an unwanted effect. Consider the following directory layout:
.
├── A_dir
│ └── mount_b
├── B_dir
│ └── mount_c
└── C_dir
Now I bind C_dir to B_dir/mount_c and rbind B_dir to A_dir/mount_b:
[hidden]$ sudo mount --bind C_dir B_dir/mount_c
[hidden]$ sudo mount --rbind B_dir A_dir/mount_b
[hidden]$ mount | grep _dir | wc -l
3
Now umount A_dir/mount_b will fail, which is not surprising. According to answers everywhere on the web, we need to umount A_dir/mount_b/mount_c first and then umount A_dir/mount_b. However, umount A_dir/mount_b/mount_c will also unmount B_dir/mount_c, which is unwanted:
[hidden]$ sudo umount A_dir/mount_b/mount_c
[hidden]$ mount | grep _dir | wc -l
1
Now my question is: how do I unmount A_dir/mount_b while leaving B_dir unaffected, i.e. so that B_dir/mount_c is still bound to C_dir?
EDIT: this problem doesn't seem to appear on Ubuntu. More specifically, it works fine on my Ubuntu 14.04 but not on Fedora 23 or CentOS 7. Why is there a difference, and what's the workaround for Fedora and CentOS?
EDIT: some more information on the actual problem that I am trying to solve. I tried to create a sandbox and used --rbind to mount /dev and /proc into the sandbox. When destroying the sandbox, it seems I can't cleanly destroy it, because unmounting <sandbox-root>/dev/pts on FC23 and CentOS 7 also unmounts /dev/pts, after which my shell and SSH connections hang and I have to reboot the machine. That's why I am asking whether there is a way to unmount --rbind mounts without affecting submounts.
|
I found the solution myself. I just need to use --make-rslave so that changes in A_dir/mount_b do not propagate back to B_dir:
sudo mount --make-rslave A_dir/mount_b
sudo umount -R A_dir/mount_b
See mount man page section The shared subtree operations.
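Replaying the question's layout end to end, a sketch of the full sequence (run as root):

```shell
mount --bind C_dir B_dir/mount_c
mount --rbind B_dir A_dir/mount_b
mount --make-rslave A_dir/mount_b   # stop propagation back to B_dir
umount -R A_dir/mount_b             # recursively unmount the rbind tree
mount | grep _dir | wc -l           # the B_dir/mount_c bind should remain
```

The --make-rslave step is what prevents the recursive unmount from reaching back into B_dir.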
| Unmount a rbind mount without affecting the original mount |
I've actually got two scenarios to apply this:
Multiseat Desktop: two network connections both with internet gateways and two accounts doing bandwidth-intensive tasks on each. I want to split them up so one account only uses eth0 and the second account only uses eth1.
Server: I have two IPs on a server and I want to make sure the mail user only sends email from the second IP (eth0:1 alias)
The second can probably be IPTabled (I just don't know how) to route email traffic through that interface but the first will be dealing with all sorts of traffic so needs to be user-based. If there is a user-based solution, I could apply this in both places.
|
You'll want to use the iptables owner module and perhaps some clever packet mangling.
owner
    This module attempts to match various characteristics of the packet
    creator, for locally-generated packets. It is only valid in the
    OUTPUT chain, and even then some packets (such as ICMP ping
    responses) may have no owner, and hence never match.

--uid-owner userid
    Matches if the packet was created by a process with the given
    effective (numerical) user id.

--gid-owner groupid
    Matches if the packet was created by a process with the given
    effective (numerical) group id.

--pid-owner processid
    Matches if the packet was created by a process with the given
    process id.

--sid-owner sessionid
    Matches if the packet was created by a process in the given session
    group.
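For the mail-server case in the question, a minimal sketch: source-NAT everything generated by the mail user to the alias address (192.0.2.2 is a placeholder for the eth0:1 IP; the owner match applies only to locally-generated traffic):

```shell
# Packets created by processes running as "mail" leave with the second IP.
iptables -t nat -A POSTROUTING -m owner --uid-owner mail \
    -j SNAT --to-source 192.0.2.2
```

The multiseat case needs the same owner match per account, typically combined with a packet mark and an `ip rule`/routing-table pair per interface.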
| Can I limit a user (and their apps) to one network interface? |
I created a test service under /etc/systemd/system, which is the correct path in which to create custom unit files.
[root@apollo system]# cat sample.service
[Unit]
Description=This is my test service
Wants=chronyd.service
After=chronyd.service
[Service]
Type=forking
ExecStart=/root/sample.sh
[Install]
WantedBy=multi-user.target chronyd.service
#RequiredBy=multi-user.target chronyd.service
#Alias=xyz
[root@apollo system]# pwd
/etc/systemd/system
[root@apollo system]#
I made sure systemd is aware by running "systemctl daemon-reload". I was also able to stop/start the service.
When I tried to mask it, I got this error:
[root@apollo system]# systemctl mask sample.service
Failed to execute operation: File exists
[root@apollo system]#
That is because systemd is trying to create a symlink using this command:
ln -s /dev/null /etc/systemd/system/sample.service
Since sample.service already exists inside /etc/systemd/system, the command will fail unless systemd used "ln -fs".
Does that mean we cannot mask any unit files we create under /etc/systemd/system?
I tried moving sample.service to /usr/lib/systemd/system, and then I was able to mask it, because systemd could create the symlink under /etc/systemd/system without any hindrance.
Has anybody experienced this? Do you think this is a bug?
|
There is no way to mask services whose unit files live in /etc/systemd/system without first removing the file from there. This is intentional design.
You can disable the service by using systemctl disable servicename.service which will have the same effect as masking it in many cases.
The post by the author of systemd Three Levels of Off has more detail on the differences between stop, disable and mask in systemd.
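If masking such a service really is required, the unit file under /etc/systemd/system has to go first; a sketch (destructive, since it deletes your local unit file, so back it up elsewhere if you may want it again):

```shell
systemctl stop sample.service
rm /etc/systemd/system/sample.service
systemctl daemon-reload
systemctl mask sample.service   # now free to create the /dev/null symlink
```

After this, `systemctl status sample.service` should report the unit as masked.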
| How can we mask service whose unit file is located under /etc/systemd/system? |
When launching an application through the command line I successfully use:
gourmet --gourmet-directory $HOME/my/custom/path/
But it does not work when trying to replicate this behaviour on a .desktop file with:
Exec=gourmet --gourmet-directory $HOME/my/custom/path/ %F
I am probably missing something very basic here, but I cannot get my head around this. Any help would be much appreciated.
|
The Exec key is not parsed by a shell, so environment variables such as $HOME are not expanded in it. Wrapping the command in an explicit shell gives you that expansion:
Exec=sh -c "gourmet --gourmet-directory $HOME/my/custom/path/ %F"
should work.
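Putting it together, a minimal sketch of a complete .desktop file using that Exec line (the Name is made up for illustration):

```ini
[Desktop Entry]
Type=Application
Name=Gourmet (custom data dir)
Exec=sh -c "gourmet --gourmet-directory $HOME/my/custom/path/ %F"
```

Here sh performs the $HOME expansion that the desktop launcher itself would not.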
| How to pass argument in .desktop file |
What's the difference between the txqueuelen setting that can be applied with either:
ifconfig eth4 txqueuelen 5000
ip link set eth4 txqueuelen 5000
And the tx ring size setting that can be applied with:
ethtool -G eth4 tx 4096
How do these relate to the global /proc/sys/net/core/wmem* settings?
I'm on RHEL6.
|
The net.core.wmem_default and wmem_max settings control the initial and maximum sizes of TX socket buffers in bytes. While the queue itself is just a linked list of skb pointers, the kernel also keeps track of the total byte-size consumed by the skb's as they're added and removed from the socket buffer. The wmem_default sysctl sets the default initial ceiling for new sockets (net/core/sock.c:sock_init_data()). Applications are allowed to increase the size of their sockets' buffers and wmem_max is the ceiling for that functionality (net/core/sock.c:sock_setsockopt()).
When a packet has been removed from a socket buffer and finds its way through the kernel networks stack, it's placed on a transmission queue for an interface to wait to be loaded onto the NIC itself. The txqueuelen set by the ifconfig or ip commands is number of frames allowed per kernel transmission queue for the queuing discipline (net/sched/sch_generic.c:pfifo_fast_enqueue()).
Finally, ethtool -G uses ioctl to set the number of ring entries for the ring buffer on the NIC itself.
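To inspect all three layers discussed above on a running system (eth4 is the interface from the question; substitute your own):

```shell
sysctl net.core.wmem_default net.core.wmem_max   # socket buffer byte limits
ip link show eth4                                # the "qlen" field is txqueuelen (frames)
ethtool -g eth4                                  # NIC ring entries (-g reads, -G sets)
```

Note the units differ at each layer: bytes for the socket buffers, frames for the qdisc queue, and descriptor-ring entries on the NIC.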
| Difference between txqueuelen and ethtool tx |
I am beginner in device driver programming.
I don't get the difference between device drivers and device files in Linux.
Can anyone explain the difference?
|
A device driver is a piece of software that operates or controls a particular type of device. On modern, monolithic kernel operating systems these are typically part of the kernel. Many monolithic kernels, including Linux, have a modular design, allowing for executable modules to be loaded at runtime. Device drivers commonly utilize this feature, although nothing prevents the device drivers to be compiled into the kernel image.
A device file is an interface for a device driver that appears in a file system as if it were an ordinary file. In Unix-like operating systems, these are usually found under the /dev directory and are also called device nodes. A device file can represent a character device, which emits a stream of data one character at a time, or a block device, which allows random access to blocks of data.
Device nodes are created by the mknod system call. The kernel resource exposed by the device node is identified by a major and minor number. Typically the major number identifies the device driver and the minor number identifies a particular device the driver controls.
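To make the major/minor pair concrete, here is how it shows up in ls -l, using /dev/null (character device 1:3 on Linux) as an example; permissions and date will differ on your system:

```shell
ls -l /dev/null
# sample output: crw-rw-rw- 1 root root 1, 3 Feb  9 10:00 /dev/null
# The leading "c" marks a character device, and "1, 3" (major 1, minor 3)
# appears where a regular file's size would be.
# Creating such a node by hand would be (as root): mknod mynull c 1 3
```

Block devices show a leading "b" instead, e.g. in the listing for /dev/sda.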
What the device file appears to contain depends on what the device driver exposes through it. For instance, the character device file which represents the mouse, /dev/input/mice, exposes the movement of the mouse as a character stream, whereas a block device file representing a hard disk, such as /dev/sda, exposes the addressable regions of storage on the device. Some device files also take input, allowing user-space applications to communicate with the device by writing to its device file.
| Difference between Device file and device drivers |
What is the resolution of a jiffy in the Linux kernel?
According to the current timer source (cat /sys/devices/system/clocksource/clocksource0/current_clocksource), Linux uses the TSC, which has nanosecond resolution.
According to http://lxr.free-electrons.com/source/include/linux/jiffies.h, a jiffy is not smaller than 1us, but can be larger.
Is there a way to determine its current resolution?
|
If you take a look at the man page man 7 time
The value of HZ varies across kernel versions and hardware platforms.
On i386 the situation is as follows: on kernels up to and
including 2.4.x, HZ was 100, giving a jiffy value of 0.01 seconds;
starting with 2.6.0, HZ was raised to 1000, giving a jiffy of 0.001
seconds. Since kernel 2.6.13, the HZ value is a kernel
configuration parameter and can be 100, 250 (the default) or 1000,
yielding a jiffies value of, respectively, 0.01, 0.004, or 0.001
seconds. Since kernel 2.6.20, a further frequency is available: 300, a
number that divides evenly for the common video frame rates (PAL, 25
HZ; NTSC, 30 HZ).
The times(2) system call is a special case. It reports times with a
granularity defined by the kernel constant USER_HZ. User-space
applications can determine the value of this constant using
sysconf(_SC_CLK_TCK).
You can inquire the CLK_TCK constant:
$ getconf CLK_TCK
100
Note that this reports USER_HZ, the granularity used by times(2) and the /proc statistics, which is fixed at 100 on Linux regardless of the kernel's tick rate. The kernel's actual HZ, and hence the jiffy resolution, is the compile-time configuration parameter described in the man page excerpt above.
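To see the tick rate the running kernel was actually built with, you can inspect the kernel configuration (assuming your distribution ships it under /boot, as most do):

```shell
grep 'CONFIG_HZ=' /boot/config-"$(uname -r)"
# CONFIG_HZ=250 would mean a jiffy of 1/250 s = 4 ms
```

If /boot/config-* is absent, the same information is sometimes available via /proc/config.gz on kernels built with CONFIG_IKCONFIG_PROC.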
References
How does USER_HZ solve the jiffy scaling issue?
time.h - time types
| what is a resolution of jiffie in Linux Kernel |
1,549,798,909,000 |
I'm a newcomer to Unix and Linux, and I've been trying to get up to speed on everything. One of the guides I've used is the "Unix and Linux System Administration Handbook"
It's a pretty great book, and I'm enjoying reading through it, but I'm really confused by all the things that are on the cover. While this may not be a typical question for Unix & Linux Stack Exchange, I don't think it is necessarily a bad question so I'm going to go out on a limb and ask:
What do all the crazy things on the cover of this book represent?
Attached is an image of the book and a reference key. Thanks for the history lesson.
1. Flag
2. Flag (Finland's, for Linus Torvalds?)
3. Bird / Cake
4. Guy in a lab coat with a baseball bat
5. Gnome
6. Clam
7. Python
8. Cowboy and two cats
9. Penguin
10. Gorilla
11. Old dude on another boat sailing away and apparently flipping off this boat
12. Two guys carved in wood with a shield
13. Octopus / Monster
14. Filing cabinet
15. Clock
16. Indian
17. Girl with a book
18. Computer using a cannon
19. Window frame
20. Lady, fishing pole, and boot
21. Apple Core
22. Less and More
23. Bar of Soap / Can of Spam
24. Periscope
25. Heart / Valentine
26. Monster with Maracas
ps: What is the significance of the ship?
|
The artist Lisa Haney has provided an explanation on her blog. [Click through because the back cover has more...]
Some of the more colourful include:
6. Bash and Perl & Shell
9. The Linux penguin forcing the Windows gorilla to walk the plank
11. Evi Nemeth making a gesture
| What do all the pictures on the front of the "Unix and Linux System Administration Handbook" represent? [duplicate] |
When I try to switch to root using sudo -i I get the error /var/tmp/sclDvf3Vx: line 8: -i: command not found... However, su - works, which I will continue to use. I'm by no means a Linux system administrator, so the environment is still pretty foggy to me. I guess my questions are:
Why is the error being thrown?
What's the difference between the two commands?
Why would you use one over the other?
Update:
I'm using CentOS version: CentOS release 6.6 (Final)
Here's the output from some commands I was asked to run, in the comments below.
type sudo: sudo is /opt/centos/devtoolset-1.1/root/usr/bin/sudo
sudo -V: /var/tmp/sclIU7gkA: line 8: -V: command not found
grep '^root:' /etc/passwd: root:x:0:0:root:/root:/bin/bash
Update:
This was added to my non-root user's ~/.bashrc a while back because I needed C++11 support. When I comment it out and re-ssh in, I can run sudo -i just fine without any errors.
if [ "$(gcc -dumpversion)" != "4.7.2" ]; then
scl enable devtoolset-1.1 bash
fi
|
From the comments and your further investigations it looks like your devtoolset is modifying the PATH. Unfortunately that includes what appears to be an old or broken sudo command.
It would be worth trying to modify the devtoolset include in your .bashrc like this, and then logging back in again:
if [ "$(gcc -dumpversion)" != "4.7.2" ]; then
scl enable devtoolset-1.1 bash
PATH=/usr/bin:$PATH # We need a working sudo
fi
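After logging back in, it is worth confirming that the PATH change took effect and the system sudo now wins the lookup:

```shell
type sudo    # should now resolve to /usr/bin/sudo
sudo -V      # should print a version banner instead of "command not found"
```

Calling /usr/bin/sudo by its full path is also a quick way to sidestep the broken wrapper without changing PATH at all.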
| sudo -i returns an error |
I'm hoping to get experience-based suggestions on how to go about debugging suspend-to-RAM issue. Advice specific to my situation(detailed below) would be great, but I am also interested in general advice about how to debug such issues.
The problem:
Often, when I attempt to suspend my machine, it gets stuck in a "not suspended but not awake" state. Often the screen will be completely black, but sometimes it will show the following error message:
GLib-WARNING **: getpwuid_r(): failed due to unknown user id (0)
Additionally, this state is accompanied by the fans kicking into high gear. The only way to get out of it is to manually power off the laptop.
Some Information
$ uname -a
Linux baltar 2.6.35-22-generic #34-Ubuntu SMP Sun Oct 10 09:26:05 UTC 2010 x86_64 GNU/Linux
$ lsb_release -a
Distributor ID: Ubuntu
Description: Ubuntu 10.10
Release: 10.10
Codename: maverick
I've taken a look at /var/log/dmesg and /var/log/pm-suspend.log, but I don't know what I'm looking for and nothing stands out. I'm unsure whether it is related, but I did find a lot of the following in /var/log/kern.log:
EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600
|
Do you have an Intel graphics chipset? I was getting what sounds like the same problem on my ThinkPad X200s running Ubuntu 10.10, and this workaround (from 2008!) fixed it for me:
http://ubuntuforums.org/showpost.php?p=6105510&postcount=12
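As a general debugging technique, and assuming your kernel was built with CONFIG_PM_DEBUG (check your distribution's config): /sys/power/pm_test lets you fake individual stages of suspend, which helps narrow down whether the freezer, a device driver, or the platform code is hanging:

```shell
cat /sys/power/pm_test                 # lists the available test modes
echo devices | sudo tee /sys/power/pm_test
echo mem | sudo tee /sys/power/state   # runs suspend only up to the device stage, then resumes
dmesg | tail -n 50                     # the last messages often name the driver that hangs
```

Stepping through the modes one at a time (freezer, devices, platform, ...) usually isolates the failing stage without a single hard power-off.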
| How can I debug a Suspend-to-RAM issue on Linux? |
Let's imagine that I'm installing RPM packages A, B and C, in that order. And suddenly, in the middle of installing B, there is a power cut.
1) Regarding the state after turning the machine back on: what happens to this transaction? Will it be resumed? Or will RPM remove all packages and files from that transaction?
2) Regarding user actions: does RPM require user action to do the above things, or does it check automatically at computer start?
RPM transactions are described mainly in terms of dependency errors, or errors while the computer is still running...
|
This is, in many ways, a too broad question, but here are some facts:
downloaded packages via yum or dnf are cached until a yum clean packages or dnf clean packages operation removes them.
downloaded packages via rpm will sit there until manually removed (unless downloaded in an ephemeral /tmp filesystem, in which case they will be lost after a reboot)
Yet, the answer depends on several things:
were you in the middle of a yum or dnf transaction, or was it a direct rpm command? For the former case, yum-complete-transaction will attempt to finish all pending actions. For the latter case, it again depends on the exact stage the installation had reached when the power went out. You can always try to run rpm --force -Uvh $package to reinstall a package regardless of its current state. The worst-case scenario here would be a broken rpm package.
are your hypothetical packages one or more of: grub, kernel, initramfs, dracut, lvm or any package that would give you access to your root filesystem? in this case, the most probable result is an unbootable system that needs to be repaired by other means, e.g. PXE booting into a systemrescue image.
The amount of different cases that could happen depending on the packages involved and the dependencies among them makes it impossible to know beforehand what exactly would happen.
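For the yum case specifically, a recovery sketch (the package path is a placeholder):

```shell
yum-complete-transaction          # finish or clean up the interrupted transaction
yum check                         # report any remaining dependency problems
rpm --force -Uvh /path/to/B.rpm   # last resort: reinstall B regardless of its state
```

If the system no longer boots, the same commands can be run from a rescue environment with --root pointed at the mounted installation.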
| What happens to RPM transaction when it is interrupted in the middle? |
The file /etc/udev/rules.d/70-persistent-net.rules is auto-generated on a Linux system with udev, if it does not exist, during reboot. But I would like to know how to create this rules file (with a command) without rebooting the server.
I was Googling around for a while and found that the rules file is generated by this script:
/lib/udev/write_net_rules
However, it is impossible to run this script from the command line, since (I assume) it expects to be started by udev, with certain environment variables set. Starting it manually prints the error message "missing $INTERFACE". Even if I set the env variable INTERFACE=eth0 before starting the script, it still prints the error "missing valid match". Not to mention that I have two interfaces (eth0 and eth1) and want the rules file generated for both.
I was also thinking to trigger udev events like this, hoping it will start the script from udev itself, but nothing changes:
udevadm trigger --type=devices --action=change
So, does anybody know how to regenerate the persistent net rules in file /etc/udev/rules.d/70-persistent-net.rules without reboot?
|
According to the man page, --action=change is the default value for udevadm:
-c, --action=ACTION
    Type of event to be triggered. The default value is change.
Therefore, you should try --action=add instead. It should help:
/sbin/udevadm trigger --type=devices --action=add
| How to regenerate 70-persistent-net.rules without reboot? |
This is a really weird one and all the research I've done so far isn't panning out.
I'm trying to connect from CentOS 7.5.1804 to a share on Windows Server 2008 R2 (no snickering, and let's stay on topic please). This server:
has not been promoted to a domain controller
resides on a flat network
Everyone has read/write to the share (I changed this for troubleshooting)
the share is named MyShare
When I run this command from Linux:
smbclient -L <IP> -U Administrator
I get this:
Sharename Type Comment
--------- ---- -------
ADMIN$ Disk Remote Admin
C$ Disk Default share
IPC$ IPC Remote IPC
MyShare Disk
Users Disk
Reconnecting with SMB1 for workgroup listing.
Connection to <IP> failed (Error NT_STATUS_RESOURCE_NAME_NOT_FOUND)
Failed to connect with SMB1 -- no workgroup available
Weird. It throws an error but still lists all the shares. Googling "NT_STATUS_RESOURCE_NAME_NOT_FOUND" hasn't yielded a lot of info.
Since the share was found, I pressed on with:
mount -v -t cifs //<IP>/MyShare /mnt -o username=Administrator
It returns this:
mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
So I read the man page and the question can not use mount.cifs: mount error(2): No such file or directory, and started thinking I need to specify the SMB version or the NTLM level.
I tried this:
mount -v -t cifs //<IP>/MyShare /mnt -o username=Administrator, vers=2.0
and
mount -v -t cifs //<IP>/MyShare /mnt -o username=Administrator, sec=ntlmv2
and they both error out because of incorrect syntax... but that's what was supplied as an example on that webpage and in the man page!
Any suggestions how to get the mount command working would be greatly appreciated. Thanks!
|
Try creating a new folder:
mkdir /media/MGoBlue93/cifsShare
And mount to it instead. I think this issue is related to permissions, and that you do not have any to mount to /mnt.
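Separately, regarding the syntax errors the question ran into: mount's -o argument is a single comma-separated list, so there must be no space after the comma. A sketch of the intended commands:

```shell
mount -v -t cifs //<IP>/MyShare /mnt -o username=Administrator,vers=2.0
mount -v -t cifs //<IP>/MyShare /mnt -o username=Administrator,sec=ntlmv2
```

With the space, the shell passes vers=2.0 as a separate argument, which mount then rejects as bad syntax.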
| Linux to Windows - can list smb shares but cannot connect |
I'm currently testing gpg --gen-key on a Linux VM. Unfortunately, this software seems to rely on /dev/random to gather entropy and politely asks the user to manually type screens and screens of input so it may eventually end up generating a key, and I've found no command-line parameter to tell it to use another file as its entropy source (the guy in this video encounters the very same issue...).
However, the user should be free to choose /dev/urandom instead, since there is nothing wrong with it. The distinction is there mainly as a remnant of older PRNG algorithms which were weaker from a cryptographic point of view. For instance, while the NetBSD manpage grants that the distinction may still be useful at a very early boot stage, it describes it as "folklore" and an "imaginary theory that defends only against fantasy threat models". Not everybody agrees with either the amount of entropy required by this command or with the idea that entropy is something which is actually consumed, as stated in the GPG manpage ("PLEASE, don't use this command unless you know what you are doing, it may remove precious entropy from the system!").
I've read about people installing the rngd daemon and configuring it to use /dev/urandom as an entropy source to feed /dev/random, but I find such a practice rather dirty.
I tried to workaround the problem in the FreeBSD way by removing /dev/random and linking it to /dev/urandom instead:
rm /dev/random
ln -s /dev/urandom /dev/random
I see this as a setting telling "I trust /dev/urandom as entropy source".
I feared I would encounter some kind of error, but this seems to provide the expected result, since the command now returns successfully and immediately.
My question is: are there any known, practical, adverse side-effects of linking /dev/random to /dev/urandom on Linux systems, as is done by default on FreeBSD? Or could one set this up permanently (in a script at the end of the boot process, for instance) in case of repeated issues with /dev/random blocking some service?
|
See Myths about urandom; there is no known attack on /dev/urandom that would not also be an attack on /dev/random. The main problem a Linux system has is when it is cloned and run as several VMs without resetting the saved entropy pool after cloning. That is a corner case tangential to what you want.
| Is it wrong to link /dev/random to /dev/urandom on Linux? |
I know this question has been asked before, but I do not accept the answer, "you can clearly see custom additions". When I add ppa's (which I have not done in years), I hit a key on my keyboard labeled "Enter" which allows me to add an empty line before the new entry (I would even add an explanatory comment, but I am a tech writer, so ....). I like my sources.conf clean and neat.
/etc/apt/sources.list.d
Means I have half a dozen files to parse instead of just one.
AFAIK, there is "absolutely" no advantage in having one configuration file vs 6 (for sake of argument, maybe you have 3 or even 2, doesn't matter ... 1 still beats 2).
Can somebody please come up with a rational advantage, "you can clearly see custom additions" is a poor man's excuse.
I must add, I love change, however, ONLY when there are benefits introduced by the change.
Edit after first response:
It allows new installations that need their own repos to not have to search a flat file to ensure that it is not adding duplicate entries.
Now, they have to search a directory for dupes instead of a flat file. Unless they assume admins don't change things ...
It allows a system administrator to easily disable (by renaming) or remove (by deleting) a repository set without having to edit a monolithic file.
The admin has to grep the directory to find the appropriate file to rename; before, he would search ONE file and comment out a line, a sed one-liner for "almost" any admin.
It allows a package maintainer to give a simple command to update repository locations without having to worry about inadvertently changing the configuration for unrelated repositories.
I do not understand this one; I "assume" the package maintainer knows the URL of his repository. Again, he has to sed a directory instead of a single file.
|
On a technical level, as someone who has had to handle these changes in a few large and popular system info tools, basically it comes down to this:
For sources.list.d/
# to add
if [[ ! -e /etc/apt/sources.list.d/some_repo.list ]];then
echo 'some repo line for apt' > /etc/apt/sources.list.d/some_repo.list
fi
# to delete
if [[ -e /etc/apt/sources.list.d/some_repo.list ]];then
rm -f /etc/apt/sources.list.d/some_repo.list
fi
Note that unless they are also doing the same check as below, if you had commented out a repo line, these tests would be wrong. If they are doing the same check as below, then it's the same exact complexity, except carried out over many files, not one. Also, unless they are checking ALL possible files, they can, and often do, add a duplicate item, which then makes apt complain, until you delete one of them.
For sources.list
# to add. Respect commented out lines. Bonus points for uncommenting
# line instead of adding a new line
if [[ -z $( grep -E '\s*[^#]\s*some repo line for apt' /etc/apt/sources.list ) ]];then
echo 'some repo line for apt' >> /etc/apt/sources.list
fi
# to delete. Delete whether commented out or not. Bonus for not
# deleting if commented out, thus respecting the user's wishes
sed -i '/.*some repo line for apt.*/d' /etc/apt/sources.list
The Google Chrome devs didn't check for the presence of Google Chrome sources, relying on the exact file name their Chrome package would create to be present. In all other cases, they would create a new file in sources.list.d named exactly the way they wanted.
Seeing what sources you have is, of course, not so pretty with the split layout, since you can't get easier to read and maintain than:
cat /etc/apt/sources.list
So this was basically done for the purpose of automated updates, and to provide easy single commands you could give to users, as far as I can tell. For users, it means that they have to read many files instead of 1 file to see if they have a repo added, and for apt, it means it has to read many files instead of one file as well.
In the real world, if you were going to do this well, you would have to support checks against all the files, regardless of what they are named, and then test whether the action to be carried out is actually required.
However, if you were not going to do it well, you'd just ignore the checks to see if the item is somewhere in sources, and just check for the file name. I believe that's what most automated stuff does, but since in the end, I simply had to check everything so I could list it and act based on if one of those files matched, the only real result was making it a lot more complicated.
Bulk Edits
Given that I run many servers, I'd be tempted to script a nightly job that loops through /etc/apt/sources.list.d/, first checks that each item is not in sources.list already, adds it to sources.list if it is missing, and then deletes the sources.list.d file in either case.
Since there is NO negative to using only sources.list (on the contrary, you gain simplicity and ease of maintenance), adding something like that might not be a bad idea, particularly given creative random actions by sysadmins.
As noted in the above comment, inxi -r will neatly print out per file the active repos, but will not of course edit or alter them, so that would be only half the solution. If it's many distributions, it's a pain learning how each does it, that's for sure, and randomness certainly is the rule rather than the exception sadly.
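In practice, both layouts can be inspected with one command. A minimal sketch (assuming the standard Debian/Ubuntu paths) that lists every active deb line from the monolithic file and the drop-in directory together:

```shell
# Print all uncommented "deb" lines from both locations.
# -h omits filenames, -s silences errors for files that don't exist,
# and "|| true" keeps the exit status clean when nothing matches.
grep -hsE '^[[:space:]]*deb' /etc/apt/sources.list /etc/apt/sources.list.d/*.list || true
```

This is just the lowest-common-denominator version of what tools like inxi -r do per file.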
| What is the benefit of /etc/apt/sources.list.d over /etc/apt/sources.list |
1,549,798,909,000 |
I need to hide some sensitive arguments to a program I am running, but I don't have access to the source code. I am also running this on a shared server so I can't use something like hidepid because I don't have sudo privileges.
Here are some things I have tried:
export SECRET=[my arguments], followed by a call to ./program $SECRET, but this doesn't seem to help.
./program `cat secret.txt` where secret.txt contains my arguments, but the almighty ps is able to sniff out my secrets.
Is there any other way to hide my arguments that doesn't involve admin intervention?
|
As explained here, Linux puts a program's arguments in the program's data space, and keeps a pointer to the start of this area. This is what is used by ps and so on to find and show the program arguments.
Since the data is in the program's space, the program can manipulate it. Doing this without changing the program itself involves loading a shim with a main() function that will be called before the real main of the program. This shim can copy the real arguments to a new space, then overwrite the original arguments so that ps will just see NUL bytes.
The following C code does this.
/* https://unix.stackexchange.com/a/403918/119298
* capture calls to a routine and replace with your code
* gcc -Wall -O2 -fpic -shared -ldl -o shim_main.so shim_main.c
* LD_PRELOAD=/.../shim_main.so theprogram theargs...
*/
#define _GNU_SOURCE /* needed to get RTLD_NEXT defined in dlfcn.h */
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <dlfcn.h>
typedef int (*pfi)(int, char **, char **);
static pfi real_main;
/* copy argv to new location */
char **copyargs(int argc, char** argv){
char **newargv = malloc((argc+1)*sizeof(*argv));
char *from,*to;
int i,len;
for(i = 0; i<argc; i++){
from = argv[i];
len = strlen(from)+1;
to = malloc(len);
memcpy(to,from,len);
memset(from,'\0',len); /* zap old argv space */
newargv[i] = to;
argv[i] = 0;
}
newargv[argc] = 0;
return newargv;
}
static int mymain(int argc, char** argv, char** env) {
fprintf(stderr, "main argc %d\n", argc);
return real_main(argc, copyargs(argc,argv), env);
}
int __libc_start_main(pfi main, int argc,
char **ubp_av, void (*init) (void),
void (*fini)(void),
void (*rtld_fini)(void), void (*stack_end)){
static int (*real___libc_start_main)() = NULL;
if (!real___libc_start_main) {
char *error;
real___libc_start_main = dlsym(RTLD_NEXT, "__libc_start_main");
if ((error = dlerror()) != NULL) {
fprintf(stderr, "%s\n", error);
exit(1);
}
}
real_main = main;
return real___libc_start_main(mymain, argc, ubp_av, init, fini,
rtld_fini, stack_end);
}
It is not possible to intervene on main(), but you can intervene on the standard C library function __libc_start_main, which goes on to call main. Compile this file shim_main.c as noted in the comment at the start, and run it as shown. I've left a printf in the code so you can check that it is actually being called. For example, run
LD_PRELOAD=/tmp/shim_main.so /bin/sleep 100
then do a ps and you will see a blank command and args being shown.
There is still a small amount of time that the command args may be visible. To avoid this, you could, for example, change the shim to read your secret from a file and add it to the args passed to the program.
| Hide arguments to program without source code |
1,549,798,909,000 |
I am trying to clone a 500 GB SSD to a 1TB SSD. For some reason, it keeps failing when the data being copied reaches 8GB. This is the third 1TB SSD I've tried this on and they all get stuck at the same place. I ran the following command:
dd if=/dev/sda of=/dev/sdb bs=1024k status=progress
I've also tried to clone the drive using Clonezilla, which fails at the same spot. I used GParted to reformat the drive and set it to an EXT4 file system but it still gets stuck at the same spot. sda is internal and sdb is plugged in externally.
The error I'm getting says:
7977443328 bytes (8.0 GB, 7.4 GB) copied, 208s, 38.4 MB/s
dd: error reading '/dev/sda': Input/output error
7607+1 records in
7607+1 records out
Thanks to @roaima for the answer below. I was able to run ddrescue and it copied most of the data over. I took the internal SSD out and connected both the new and old SSDs to a CentOS box via USB3. I ran the following:
ddrescue -v /dev/sdb /dev/sdc tmp --force
It ran for over 15 hours. It stopped overnight. But the good thing is it picked back up where it left off when I ran the command again.
I used screen so that I wouldn't be locked into a single session the second time around :) . I used Ctrl+c to exit the ddrescue command after 99.99% of the data was rescued since it was hanging there for hours. I was able to boot from the new drive and it booted right up. Here is the state where I exited the ddrescue:
Initial status (read from mapfile)
rescued: 243778 MB, tried: 147456 B, bad-sector: 0 B, bad areas: 0
Current status
ipos: 474344 MB, non-trimmed: 1363 kB, current rate: 0 B/s
ipos: 474341 MB, non-trimmed: 0 B, current rate: 0 B/s
opos: 474341 MB, non-scraped: 522752 B, average rate: 8871 kB/s
non-tried: 0 B, bad-sector: 143360 B, error rate: 0 B/s
rescued: 500107 MB, bad areas: 123, run time: 8h 1m 31s
pct rescued: 99.99%, read errors: 354, remaining time: 14h 31m
time since last successful read: 6m 7s
Scraping failed blocks... (forwards)^C
Interrupted by user
Hopefully this helps others. I think my old drive was starting to fail. Hopefully no data was lost. Now on to resizing the LUKS partition :)
|
The error is, dd: error reading '/dev/sda': Input/output error, which tells you that the problem is reading the source disk and not writing to the destination. You can replace the destination disk as many times as you like and it won't resolve the issue of reading the source.
Instead of using dd, consider rescuing the data off the disk before it dies completely. Either copy the files using something like rsync or cp, or take an image copy with ddrescue.
ddrescue -v /dev/sda /dev/sdb /some/path/not/on/sda_or_sdb
The last parameter points to a relatively small temporary file (the map file) that is on neither /dev/sda nor /dev/sdb. It could be on an external USB memory stick if you have nothing else.
The ddrescue command understands that a source disk may be faulty. It reads relatively large blocks at a time until it hits an error, and at that point it marks the section for closer inspection and smaller copy attempts. The map file is used to allow for restarts and continuations in the event that your source disk locks up and the system has to be restarted. It'll do its best to copy everything it can.
Once you've copied the disk, your /dev/sdb will appear to have partitions corresponding only to the original disk's size. You can use fdisk or gparted/parted to fix that up afterwards.
If you had an error copying data you should first use one of the fsck family to check and fix the partitions. For example, e2fsck -f /dev/sdb1.
| Why does my dd Full Disk Copy Keep failing at 8 GB? |
1,549,798,909,000 |
Also, it seems that Linux has a much friendlier user interface.
Has Unix been trying to "keep up"?
|
Let me throw in some more arguments why the transition is rather slow (but there is definitely one):
First of all, it is sometimes very difficult for customers to switch from one UNIX vendor to another. Even if you jump from, let's say, SuSE to RedHat, there are plenty of things that differ from an administrator's point of view. When going from AIX (or HP/UX or Solaris ...) to any Linux, things differ even more. As a customer you have to check whether it pays off to migrate your environment.
Normally there's a whole bunch of 3rd party software involved and it's not a trivial task to verify if everything is available for the target environment. If software has to be replaced due to the OS migration it has to be checked whether it is compatible with the existing company framework.
If self-developed software is involved, the SW has to be ported. Often this fails immediately at step #1: The target OS has not all the needed libraries or the used development framework.
Also it's not very cheap to train the SysOp and SysEng teams to a new platform. Years of experience may be rendered worthless (depending on the depth of experience), new best practices have to be (re-)evaluated and some SysEngs may even leave the company because they want to go on with their *NIX derivate instead of switching.
The total cost of a migration is immense in large environments. You may easily calculate 1-2 years of planning, implementation, UAT, stability tests and disaster tests - all involving a lot of people (all of whom want to be paid) who are drawn away from their daily tasks.
Considering all this, one may understand why companies stay with their current vendor and prefer just to upgrade existing environments. From what I've experienced, new systems get their chance when it comes to building up new environments.
But after all: there aren't many closed-source Unices left out there. AIX, HP/UX and Solaris are the big vendors left (OS X if you count desktop systems in). Come to think of it, I don't even know if IRIX is still alive...
I've removed some already-written sentences about that user-interface claim before hitting the post button, as this would end up in a flame war :-)
| Why is Unix still used if Linux is based off of it, and Linux is free? |
1,549,798,909,000 |
In Linux, every single entity is considered a FILE. If I do vim <dir-name> then vim will open the directory's content in its editor, because it does not differentiate between files and directories.
But today while working, I encountered a thing, which I am curious to know about.
I planned to open a file from nested directory
vim a/b/c/d/file
But instead of vim, I typed
cd a/b/c/d/
and hit TAB twice, but completion showed only the directories available inside "d", not the files.
Doesn't the cd command honour "everything is a file"? Or am I missing something?
|
The "Everything is a file" phrase defines the architecture of the operating system. It means that everything in the system from processes, files, directories, sockets, pipes, ... is represented by a file descriptor abstracted over the virtual filesystem layer in the kernel. The virtual filesytem is an interface provided by the kernel. Hence the phrase was corrected to say "Everything is a file descriptor". Linus Torvalds himself corrected it again a bit more precisely: "Everything is a stream of bytes".
However, every "file" has also an owner and permissions you may know from regular files and directories. Therefore classic Unix tools like cat, ls, ps, ... can query all those "files" and it's not needed to invent other special mechanisms, than just the plain old tools, which all use the read() system call. For example in Microsofts OS-family there are multiple different read() system calls (I heard about 15) for any file types and every of them is a bit different. When everything is a file, then you don't need that.
To your question: of course there are different file types. In Linux there are 7 file types. The directory is one of them. But the utilities can distinguish them from each other. For example, the completion function of the cd command (when you press TAB) only lists directories, because the stat() system call (see man 2 stat) returns a struct with a field called st_mode. The POSIX standard defines what that field can contain:
S_ISREG(m) is it a regular file?
S_ISDIR(m) directory?
S_ISCHR(m) character device?
S_ISBLK(m) block device?
S_ISFIFO(m) FIFO (named pipe)?
S_ISLNK(m) symbolic link? (Not in POSIX.1-1996.)
S_ISSOCK(m) socket? (Not in POSIX.1-1996.)
The cd command completion function just displays "files" where the S_ISDIR flag is set.
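You can see this distinction from the shell as well. GNU stat prints the file type it derives from st_mode (the paths below are just scratch examples):

```shell
# Create one directory and one empty file, then ask stat for their types.
mkdir -p /tmp/demo_dir
touch /tmp/demo_file
stat -c '%n: %F' /tmp/demo_dir /tmp/demo_file
# /tmp/demo_dir: directory
# /tmp/demo_file: regular empty file
```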
| Everything is a file? |
1,549,798,909,000 |
I'm trying to change the order of lines in a specific pattern, working with a file of many lines (e.g. 99 lines). For every three lines, I would like the second line to become the third line, and the third to become the second.
EXAMPLE.
1- Input:
gi_1234
My cat is blue.
I have a cat.
gi_5678
My dog is orange.
I also have a dog.
...
2- Output:
gi_1234
I have a cat.
My cat is blue.
gi_5678
I also have a dog.
My dog is orange.
...
|
Using awk and integer maths:
awk 'NR%3 == 1 { print } NR%3 == 2 { delay=$0 } NR%3 == 0 { print; print delay; delay=""} END { if(length(delay) != 0 ) { print delay } }' /path/to/input
The modulus operator performs integer division and returns the remainder, so for each line, it will return the sequence 1, 2, 0, 1, 2, 0 [...]. Knowing that, we just save the input on lines where the modulus is 2 for later -- to wit, just after printing the input when it's zero.
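For comparison, the same swap can be done with sed's hold space. This sketch assumes the line count is an exact multiple of three:

```shell
# p: print line 1; h: stash line 2 in the hold space;
# p: print line 3; g then p: retrieve and print line 2.
printf 'gi_1234\nMy cat is blue.\nI have a cat.\n' |
sed -n 'p;n;h;n;p;g;p'
```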
| Change the order of lines in a file |
1,549,798,909,000 |
Can anyone explain in details what is going on with the following. Let's imagine I am mounting a directory with noexec option as follows:
mount -o noexec /dev/mapper/fedora-data /data
So to verify this I ran mount | grep data:
/dev/mapper/fedora-data on /data type ext4 (rw,noexec,relatime,seclabel,data=ordered)
Now within /data I'm creating a simple script called hello_world as follows:
#!/bin/bash
echo "Hello World"
whoami
So I made the script executable with chmod u+x hello_world (this will however have no effect on a file system mounted with the noexec option) and I tried running it:
# ./hello_world
-bash: ./hello_world: Permission denied
However, prepending bash to the command yields:
# bash hello_world
Hello World
root
So then I created a simple hello_world.c with the following contents:
#include <stdio.h>
int main()
{
printf("Hello World\n");
return 0;
}
Compiled it using cc -o hello_world hello_world.c
Now running:
# ./hello_world
-bash: ./hello_world: Permission denied
So I tried to run it using
/lib64/ld-linux-x86-64.so.2 hello_world
The error:
./hello_world: error while loading shared libraries: ./hello_world: failed to map segment from shared object: Operation not permitted
This is expected, since ldd returns the following:
ldd hello_world
ldd: warning: you do not have execution permission for `./hello_world'
not a dynamic executable
On another system where noexec mount option doesn't apply I see:
ldd hello_world
linux-vdso.so.1 (0x00007ffc1c127000)
libc.so.6 => /lib64/libc.so.6 (0x00007facd9d5a000)
/lib64/ld-linux-x86-64.so.2 (0x00007facd9f3e000)
Now my question is this: why does running a bash script on a file system with the noexec option work, but not a compiled C program? What is happening under the hood?
|
What's happening in both cases is the same: to execute a file directly, the execute bit needs to be set, and the filesystem can't be mounted noexec. But these things don't stop anything from reading those files.
When the bash script is run as ./hello_world and the file isn't executable (either no exec permission bit, or noexec on the filesystem), the #! line isn't even checked, because the system doesn't even load the file. The script is never "executed" in the relevant sense.
In the case of bash ./hello_world, well, The noexec filesystem option just plain isn't as smart as you'd like it to be. The bash command that's run is /bin/bash, and /bin isn't on a filesystem with noexec. So, it runs no problem. The system doesn't care that bash (or python or perl or whatever) is an interpreter. It just runs the command you gave (/bin/bash) with the argument which happens to be a file. In the case of bash or another shell, that file contains a list of commands to execute, but now we're "past" anything that's going to check file execute bits. That check isn't responsible for what happens later.
Consider this case:
$ cat hello_world | /bin/bash
… or for those who do not like Pointless Use of Cat:
$ /bin/bash < hello_world
The "shbang" #! sequence at the beginning of a file is just some nice magic for doing effectively the same thing when you try to execute the file as a command. You might find this LWN.net article helpful: How programs get run.
| Executing a bash script or a c binary on a file system with noexec option |
1,549,798,909,000 |
I'm trying to change the password that is asked for when running sudo in Ubuntu. Running sudo passwd or sudo passwd root does give me the two new-password prompts and it successfully changes the password.
But then I can still use my old password when running sudo again for something else. I do have a user with the exact same password but I don’t know if that makes a difference. I enabled the root user and I can see the new password does work with the root user account.
So the root password is changed but not the password for sudo.
How do I change the sudo password?
|
You're changing root's password. sudo wants your user's password.
To change it, try plain passwd, without arguments or running it through sudo.
Alternately, you can issue:
$ sudo passwd <your username>
| Changing root password does not change sudo password |
1,549,798,909,000 |
How can I print the numerical ASCII values of each character in a text file. Like cat, but showing the ASCII values only... (hex or decimal is fine).
Example output for a file containing the word Apple (with a line feed) might look like:
065 112 112 108 101 013 004
|
The standard command for that is od, for octal dump (though with options, you can change from octal to decimal or hexadecimal...):
$ echo Apple | od -An -vtu1
65 112 112 108 101 10
Note that it outputs the byte value of every byte in the file. It has nothing to do with ASCII or any other character set.
If the file contains a A in a given character set, and you would like to see 65, because that's the byte used for A in ASCII, then you would need to do:
< file iconv -f that-charset -t ascii | od -An -vtu1
to first convert that file to ASCII and then dump the corresponding byte values. For instance, Apple<LF> in EBCDIC-UK would be 193 151 151 147 133 37 (301 227 227 223 205 045 in octal).
$ printf '\301\227\227\223\205\045' | iconv -f ebcdic-uk -t ascii | od -An -vtu1
65 112 112 108 101 10
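For hexadecimal rather than decimal output, change the type specifier from u1 (unsigned decimal bytes) to x1:

```shell
echo Apple | od -An -vtx1
#  41 70 70 6c 65 0a
```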
| How do I print the (numerical) ASCII values of each character in a file? |
1,549,798,909,000 |
We want to sum the first numbers that we get from du
du -b /tmp/*
6 /tmp/216c6f99-6671-4865-b8bc-7205f5388752_resources
668669 /tmp/hadoop7887078727316788325.tmp
6 /tmp/hadoop-hdfs
42456 /tmp/hive
32786 /tmp/hsperfdata_hdfs
6 /tmp/hsperfdata_hive
32786 /tmp/hsperfdata_root
262244 /tmp/hsperfdata_yarn
so final sum will be
sum=6+668669+6+42456+32786+6+32786+262244
echo $sum
How we can do it by awk or perl one liners?
|
In AWK:
{ sum += $1 }
END { print sum }
So
du -b /tmp/* | awk '{ sum += $1 } END { print sum }'
Note that the result won't be correct if the directories under /tmp have subdirectories themselves, because du reports cumulative totals for directories, so the space in subdirectories would be counted more than once.
du -s will calculate the sum for you correctly (on all subdirectories and files in /tmp, including hidden ones):
du -sb /tmp
and du -c will calculate the sum of the listed directories and files, correctly too:
du -cb /tmp/*
| sum all numbers from "du" |
1,549,798,909,000 |
When I wanted to create a hard link in my /home directory in root mode, Linux showed the following error message:
ln: failed to create hard link ‘my_sdb’ => ‘/dev/sda1’: Invalid cross-device link
The above error message is shown below:
# cd /home/user/
# ln /dev/sda1 my_sdb
But I could only create a hard link in the /dev directory, and it was not possible in other directories.
Now, I want to know: how can I create a hard link to an existing device file (like sdb1) in the /home directory (or another directory)?
|
But I could only create a hard link in the /dev directory and it was not possible in other directories.
As shown by the error message, it is not possible to create a hard link across different filesystems; you can create only soft (symbolic) links.
For instance, if your /home is in a different partition than your root partition, you won't be able to hard link /tmp/foo to /home/user/.
Now, as @RichardNeumann pointed out, /dev is usually mounted as a devtmpfs filesystem. See this example:
[dr01@centos7 ~]$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/centos_centos7-root 46110724 3792836 42317888 9% /
devtmpfs 4063180 0 4063180 0% /dev
tmpfs 4078924 0 4078924 0% /dev/shm
tmpfs 4078924 9148 4069776 1% /run
tmpfs 4078924 0 4078924 0% /sys/fs/cgroup
/dev/sda1 1038336 202684 835652 20% /boot
tmpfs 815788 28 815760 1% /run/user/1000
Therefore you can only create hard links to files in /dev within /dev.
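A symbolic link, by contrast, works fine across filesystems, since it only stores a path rather than referencing an inode (the target below is illustrative):

```shell
# Soft link from /tmp to a device node: allowed across filesystems.
ln -sf /dev/sda1 /tmp/my_sdb
ls -l /tmp/my_sdb   # ... /tmp/my_sdb -> /dev/sda1
```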
| Why I can't create a hard link from device file in other than /dev directory? |
1,549,798,909,000 |
So if I've got a variable
VAR='10 20 30 40 50 60 70 80 90 100'
and echo it out
echo "$VAR"
10 20 30 40 50 60 70 80 90 100
However, further down the script I need to reverse the order of this variable so it shows as something like
echo "$VAR" | <code to reverse it>
100 90 80 70 60 50 40 30 20 10
I tried using rev and it literally reversed everything so it came out as
echo "$VAR" | rev
001 09 08 07 06 05 04 03 02 01
|
On GNU systems, the reverse of cat is tac:
$ tac -s" " <<< "$VAR " # Please note the added final space.
100 90 80 70 60 50 40 30 20 10
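On systems without GNU tac, a portable awk equivalent might be:

```shell
VAR='10 20 30 40 50 60 70 80 90 100'
# Walk the fields from last to first; print the first field with a newline.
echo "$VAR" | awk '{ for (i = NF; i > 1; i--) printf "%s ", $i; print $1 }'
# 100 90 80 70 60 50 40 30 20 10
```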
| Reversing a variable's contents by words |
1,549,798,909,000 |
I installed Arch a couple of days ago. Just realized the date/time were off by a day and one hour.
I changed it using timedatectl set-time. Then I used hwclock --systohc to set the hardware clock. After that I was not able to access some sites like Gmail because of HTTPS certificate errors. I tried changing the time back but it did not work.
I rebooted and then had problems because the partitions had been mounted at a different time, so I used fsck /dev/sda on my partitions and was able to boot up. Right now the clock is not a problem but I really need to check my mail. I had to use Facebook to log in to Stack Exchange, cringe.
Help?
This is what Gmail's error page say:
The server's security certificate is not yet valid!
You attempted to reach gmail.com, but the server presented a certificate that is not yet valid. No information is available to indicate whether that certificate can be trusted. Chromium cannot reliably guarantee that you are communicating with gmail.com and not an attacker. Your computer's clock is currently set to Tuesday, January 10, 2012 12:14:47 PM. Does that look right? If not, you should correct your system's clock and then refresh this page.
You cannot proceed because the website operator has requested heightened security for this domain.
|
I used the ntp solution in this article. Updated against a time server.
I was getting an error at first: you have to stop the ntp service before using ntpdate against a time server. If it can't find a server you have to specify one; in my case I used sudo ntpdate 0.us.pool.ntp.org. That did it.
| I messed up my system clock in Arch Linux |
1,549,798,909,000 |
Assume that we have two disks, one master SATA and one master ATA. How will they show up in /dev?
|
Depending on your SATA driver and your distribution's configuration, they might show up as /dev/hda and /dev/hdb, or /dev/hda and /dev/sda, or /dev/sda and /dev/sdb. Distributions and drivers are moving towards having everything hard disk called sd?, but PATA drivers traditionally used hd? and a few SATA drivers also did.
The device names are determined by the udev configuration. For example, on Ubuntu 10.04, the following lines from /lib/udev/rules.d/60-persistent-storage.rules make all ATA hard disks appear as /dev/sd* and all ATA CD drives appear as /dev/sr*:
# ATA devices with their own "ata" kernel subsystem
KERNEL=="sd*[!0-9]|sr*", ENV{ID_SERIAL}!="?*", SUBSYSTEMS=="ata", IMPORT{program}="ata_id --export $tempnode"
# ATA devices using the "scsi" subsystem
KERNEL=="sd*[!0-9]|sr*", ENV{ID_SERIAL}!="?*", SUBSYSTEMS=="scsi", ATTRS{vendor}=="ATA", IMPORT{program}="ata_id --export $tempnode"
| Names for ATA and SATA disks in Linux |
1,549,798,909,000 |
we have 100% on /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg08_root 20G 20G 132K 100% /
so when I do lvextend we get the following errors
# lvextend -L+5G /dev/mapper/vg08_root
Couldn't create temporary archive name.
Volume group "vg00" metadata archive failed.
how to resolve this?
|
You may be able to circumvent the space requirement for this operation by disabling the metadata backup with the -A|--autobackup option:
lvextend -An -L+5G /dev/mapper/vg08_root
If you do this, follow the operation with a vgcfgbackup to capture the new state.
Post-mortem note:
Since the ultimate goal was to expand the logical volume and resize the encapsulated filesystem, a one-step operation could have been used:
lvextend -An -L+5G --resizefs /dev/mapper/vg08_root
In this case, the filesystem type would have been automatically deduced, avoiding the attempt to use resize2fs in lieu of xfs_growfs.
| LVM + Couldn't create temporary archive name |
1,549,798,909,000 |
In order to ssh into my work computer from home, let's call it C I have to do the following:
ssh -t user@B ssh C
B is a server that I can connect to from home but C can only be connected to from B. This works fine.
If I want to copy a file that is on C to my home computer using scp, what command do I need from my home computer?
|
I’d suggest the following in your .ssh/config:
Host C
User user
ProxyCommand ssh -W %h:%p user@B
It's much safer if host B is untrusted, and it works for scp and sftp.
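With that stanza in place, the intermediate hop becomes invisible to the file-transfer tools; the remote path below is purely illustrative:

```shell
# copy a file from C to the current directory; ssh tunnels through B
scp C:/path/to/file .

# sftp works the same way
sftp C
```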
| How to scp via an intermediate machine? [duplicate] |
1,549,798,909,000 |
I have a data list, like
12345
23456
67891
-20000
200
600
20
...
Assume the size of this data set (i.e. the number of lines in the file) is N. I want to randomly draw m lines from this data file. Therefore, the output should be two files: one containing those m lines of data, and the other containing the remaining N-m lines.
Is there a way to do that using a Linux command?
|
This might not be the most efficient way but it works:
shuf <file> > tmp
head -n $m tmp > out1
tail -n +$(( m + 1 )) tmp > out2
With $m containing the number of lines.
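A quick sanity check of the approach, using seq as stand-in data (file names here are arbitrary):

```shell
cd "$(mktemp -d)"                     # scratch directory
seq 100 > data                        # stand-in data set, N = 100
m=10
shuf data > tmp
head -n "$m" tmp > out1               # m randomly drawn lines
tail -n +"$(( m + 1 ))" tmp > out2    # the other N - m lines
wc -l out1 out2                       # 10 and 90 lines respectively
```

Sorting and concatenating out1 and out2 reproduces the original data, so no line is lost or duplicated.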
| Randomly draw a certain number of lines from a data file |
1,549,798,909,000 |
When I visited the kernel.org website to download the latest Linux kernel, I noticed a package named 2.6.37-rc5 in the repository. What is the meaning of the "rc5" at the end?
|
Release Candidate.
By convention, whenever an update for a program is almost ready, the test version is given a rc number. If critical bugs are found, that require fixes, the program is updated and reissued with a higher rc number. When no critical bugs remain, or no additional critical bugs are found, then the rc designation is dropped.
| Meaning of "rc5" in "linux kernel 2.6.37-rc5" |
1,549,798,909,000 |
I have a number of servers to SSH into, and some of them, being behind different NATs, may require an SSH tunnel. Right now I'm using a single VPS for that purpose. When that number reaches 65535 - 1023 = 64512, and the VPS runs out of ports to attach tunnels to, do I spin up another VPS, or do I simply attach an additional IP address to the existing VPS?
In other words, is the 65535 limit per Linux machine, or per network interface? This answer seems to say it's per IP address in general, and per IPv4 address specifically. So does a 5-tuple mean that introducing a new IP address creates a fresh set of tuples, therefore resetting the limit? And if that's how IPv4 behaves, is it different for IPv6?
|
You most certainly can have multiple processes listening on the same port if they are bound to different IPs.
Here's a demonstration using nc:
% nc -l 127.0.0.1 1234 &
[1] 24985
% nc -l 192.168.1.178 1234 &
[2] 24988
% netstat -an | grep 1234
tcp4 0 0 192.168.1.178.1234 *.* LISTEN
tcp4 0 0 127.0.0.1.1234 *.* LISTEN
As you see, I started nc twice in listen mode, one bound to 127.0.0.1, the other to 192.168.1.178 (which happen to be two of the IP addresses on that computer), both using port 1234.
netstat then shows two listening sockets.
I made the test on macOS, but on Linux you could add -p to netstat to show the two distinct processes. On macOS you can use lsof -nP to show the same thing.
Note that since you are opening a "hole" in a security layer, you probably don't want to bind to an externally reachable (public) IP address, otherwise anyone can connect to that IP+port and reach the remote system which apparently needed to be protected.
You should use only loopback IP addresses (127.0.0.1, 127.0.0.2...) or private IP addresses on a private network reachable only by trusted systems.
For completeness, let's specify that an active TCP connection is defined by a 4-tuple (local IP, local port, remote IP, remote port), but a listening socket is indeed defined only by local IP and port. Connections established to that socket will get the full 4-tuple.
| Can I have a single server listen on more than 65535 ports by attaching an IPv4 address |
1,586,636,159,000 |
I'm looking for the commands that will tell me the allocation quantum on drives formatted with ext4 vs btrfs.
Background: I am using a backup system that allows users to restore individual files. This system just uses rsync and has no server-side software, backups are not compressed. The result is that I have some 3.6TB of files, most of them small.
It appears that for my data set storage is much less efficient on a btrfs volume under LVM than it is on a plain old ext4 volume, and I suspect this has to do with the minimum file size, and thus the block size, but I have been unable to figure out how to get those sizes for comparison purposes. The btrfs wiki says that it uses the "page size" but there's nothing I've found on obtaining that number.
|
You'll want to look at the data block allocation size, which is the minimum block that any file can allocate. Large files consist of multiple blocks. And there's always some "waste" at the end of large files (or all small files) where the final block isn't filled entirely, and therefore unused.
As far as I know, every popular Linux filesystem uses 4K blocks by default because that's the default pagesize of modern CPUs, which means that there's an easy mapping between memory-mapped files and disk blocks. I know for a fact that BTRFS and Ext4 default to the page size (which is 4K on most systems).
On ext4, just use tune2fs to check your block size, as follows (change /dev/sda1 to your own device path):
[root@centos8 ~]# tune2fs -l /dev/sda1 |grep "^Block size:"
Block size: 4096
[root@centos8 ~]#
On btrfs, use the following command to check your block size (change /dev/mapper/cr_root to your own device path, this example simply uses a typical encrypted BTRFS-on-LUKS path):
sudo btrfs inspect-internal dump-super -f /dev/mapper/cr_root | grep "^sectorsize"
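A filesystem-agnostic cross-check (assuming GNU coreutils) is stat in file-system mode, where %S is the fundamental block size reported by statfs:

```shell
# works on any mounted filesystem, no device path or root needed
stat -f -c 'block size: %S bytes' /
```

This should agree with the tune2fs or btrfs output for the filesystem in question.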
| How do I determine the block size for ext4 and btrfs filesystems? |
1,586,636,159,000 |
Will # dd if=/dev/zero of=/dev/sda wipe out a pre-existing partition table?
Or is it the other way around, i.e, does
# fdisk /dev/sda g (for GPT)
wipe out the zeros written by /dev/zero?
|
Will dd if=/dev/zero of=/dev/sda wipe out a pre-existing partition table?
Yes, the partition table is in the first part of the drive, so writing over it will destroy it. That dd will write over the whole drive if you let it run (so it will take quite some time).
Something like dd bs=512 count=50 if=/dev/zero of=/dev/sda would be enough to overwrite the first 50 sectors, including the MBR partition table and the primary GPT. Though at least according to Wikipedia, GPT has a secondary copy of the partition table at the end of the drive, so overwriting just the part in the head of the drive might not be enough.
(You don't have to use dd, though. head -c10000 /dev/zero > /dev/sda or cat /bin/ls > /dev/sda would have the same effect.)
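As a safe illustration of how little needs zeroing — and why the backup GPT at the end matters — the same dd invocations can be rehearsed on a scratch image file (disk.img here is a stand-in for /dev/sda; the 34/33 sector counts assume the common 128-entry GPT layout):

```shell
truncate -s 10M disk.img                 # scratch image standing in for the disk
sectors=$((10 * 1024 * 1024 / 512))
# primary GPT: protective MBR + header + 32 entry sectors = 34 sectors
dd if=/dev/zero of=disk.img bs=512 count=34 conv=notrunc status=none
# backup GPT: 32 entry sectors + header = 33 sectors at the very end
dd if=/dev/zero of=disk.img bs=512 seek=$((sectors - 33)) count=33 conv=notrunc status=none
```

On a real device, `wipefs -a` or `sgdisk -Z` handle both copies for you without sector arithmetic.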
does fdisk /dev/sda g (for GPT) wipe out the zeros written by /dev/zero?
Also yes (provided you save the changes).
(However, the phrasing in the title is just confusing, /dev/zero in itself does not do anything any more than any regular storage does.)
| Will dd if=/dev/zero of=/dev/sda wipe out a pre-existing partition table? |
1,586,636,159,000 |
I have a text file in this format:
####################################
KEY2
VAL21
VAL22
VAL23
VAL24
####################################
KEY1
VAL11
VAL12
VAL13
VAL14
####################################
KEY3
VAL31
VAL32
VAL33
VAL34
I want sort this file by KEY line and keep next 4 lines with it in result so sorted result should be:
####################################
KEY1
VAL11
VAL12
VAL13
VAL14
####################################
KEY2
VAL21
VAL22
VAL23
VAL24
####################################
KEY3
VAL31
VAL32
VAL33
VAL34
is there a way to do this ?
|
msort(1) was designed to be able to sort files with multi-line records. It has an optional gui, as well as a normal and usable-for-humans command line version. (At least, humans that like to read manuals carefully and look for examples...)
AFAICT, you can't use an arbitrary pattern for records, and fixed-size records are measured in bytes (not characters or lines), so neither fits here as-is. msort does, however, have a -b option for records that are blocks of lines separated by blank lines.
You can transform your input into a format that will work with -b pretty easily, by putting a blank line before every ###... (except the first one).
By default, it prints statistics on stderr, so at least it's easy to tell when it didn't sort because it thought the entire input was a single record.
msort works on your data. The sed command prepends a newline to every #+ line except for line 1. -w sorts the whole record (lexicographically). There are options for picking what part of a record to use as a key, but I didn't need them.
I also left out stripping the extra newlines.
$ sed '2,$ s/^#\+/\n&/' unsorted.records | msort -b -w 2>/dev/null
####################################
KEY1
VAL11
VAL12
VAL13
VAL14
####################################
KEY2
VAL21
VAL22
VAL23
VAL24
####################################
KEY3
VAL31
VAL32
VAL33
VAL34
I didn't have any luck with -r '#' to use that as the record separator. It thought the whole file was one record.
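If installing msort isn't an option, standard tools can handle this particular shape too — a hedged sketch, assuming every record is exactly 6 lines (separator plus KEY plus 4 values) and that '|' never occurs in the data (the separator line is shortened here for brevity):

```shell
# sample input in the question's shape
printf '%s\n' '####' KEY2 VAL21 VAL22 VAL23 VAL24 \
              '####' KEY1 VAL11 VAL12 VAL13 VAL14 > records.txt
# join each 6-line record onto one line, sort on the KEY field, unfold again
paste -d'|' - - - - - - < records.txt | sort -t'|' -k2,2 | tr '|' '\n'
```

paste with six `-` arguments reads stdin round-robin, producing one joined line per record, so after sorting the KEY1 block comes out first.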
| Sort text files with multiple lines as a row |
1,586,636,159,000 |
I've found this website; it has zip files (links on the main page) with all the artworks. Some of them have an .ans extension and they look like ANSI escape codes used on Linux/Unix, but when I open one of them using cat in the XFce terminal it produces garbage (but in color). They don't look like the image gallery.
The first line of the main artwork from the link looks like this (copied from Emacs):
[0;1m[30mthere is no substitute [0;33mÜܲ[1;43m°±²²[40mÛ[43mÛ²±[0;33mÝ ßÜ[1;43m²²²[40mÛÛ²[40m[K
file(1) reports the type as DOS, but they could simply have been created on Windows.
When searching for ANSI art I also found this website that has zip files containing only files with an .ans extension and they also don't render properly on Linux (gallery on page 2).
My questions are:
what type of encoding is this, for what computer?
do I need a special viewer to see it on Linux terminal?
do you know if this type of artwork was created for Linux/Unix terminals? I've only found ASCII art.
is it possible to convert it to be viewed on Linux terminals?
|
These are ANSI escape codes, but you’re running into three issues:
the character encoding, as you suspect — most of these files are in CP437, so you need to convert them:
iconv -f CP437
(use the -t option if you need to specify the target encoding; by default iconv will match the current locale’s character encoding);
the colour scheme — these files typically assume something similar to the CGA/EGA/VGA colour scheme used on PCs; terminal emulators generally allow you to choose a colour scheme (or redefine colours manually), for example GNOME Terminal has a “Linux console” built-in scheme which works well for ANSI art;
the screen size — most ANSI art assumes a screen width of 80 columns and expects to wrap around there.
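The encoding issue is easy to demonstrate in isolation — the bytes 0xDB, 0xB1, 0xB2 are the block and shade characters that dominate ANSI art, and iconv maps them to the intended glyphs (assuming your iconv build knows the CP437 charset, as glibc's does):

```shell
# three typical ANSI-art bytes, decoded from CP437 to UTF-8
printf '\333\261\262' | iconv -f CP437 -t UTF-8; echo
```

Read raw in a UTF-8 terminal those same bytes are invalid sequences, which is exactly the "garbage (but in color)" the question describes.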
Once you fix all that, you don’t need a special viewer; here’s a screenshot showing the output of aa-neurodancer.ans in GNOME Terminal, after converting the character encoding:
The bottom of the screenshot shows the file’s SAUCE record:
SAUCE version 00
title: “Neurodancer”
author: “Antsy Atheist”
date: August 13, 2018
file size: 0x1A65, 6757 bytes
data type: character
file type: ANSi
width: 80
height: 23
font: IBM VGA
(Ansilove can decode SAUCE records for you.)
| What type of encoding do these ANSI artworks use? |
1,586,636,159,000 |
Which format (Mac or DOS) should I use on Linux PCs/Clusters?
I know the difference:
DOS format uses "carriage return" (CR or \r) then "line feed" (LF or \n).
Mac format uses "carriage return" (CR or \r)
Unix uses "line feed" (LF or \n)
I also know how to select the option:
Alt+M for Mac format
Alt+D for DOS format
But there is no UNIX format.
Then save the file with Enter.
|
Use neither: enter a filename and press Enter, and the file will be saved with the default Unix line-endings (which is what you want on Linux).
If nano tells you it’s going to use DOS or Mac format (which happens if it loaded a file in DOS or Mac format), i.e. you see
File Name to Write [DOS Format]:
or
File Name to Write [Mac Format]:
press Alt+D or Alt+M respectively to deselect DOS or Mac format, which effectively selects the default Unix format.
| GNU nano 2: DOS Format or Mac Format on Linux |
1,586,636,159,000 |
Linux Mint tells me I only have 622 MB of free disk space, but there should be some gigabytes left.
Looking at the partitions I am told that there are about ten gigabytes unused. I googled the problem and didn't find a solution but I did find the hint that I should check the disk usage with df -h.
sudo df -h /home
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p8 189G 178G 622M 100% /home
The output doesn't make any sense to me: The difference between Size and Used is 11GB, but it only shows 622M as Available.
The SSD isn't old, so I wouldn't expect such a discrepancy.
What should I do?
|
If the filesystem is ext4, there are reserved blocks, mostly to help avoid fragmentation, that are available only to the root user. This setting can be changed live using tune2fs (not all settings can be changed this way while the filesystem is mounted):
-m reserved-blocks-percentage
Set the percentage of the filesystem which may only be allocated by privileged processes. Reserving some number of filesystem blocks
for use by privileged processes is done to avoid filesystem
fragmentation, and to allow system daemons, such as syslogd(8), to
continue to function correctly after non-privileged processes are
prevented from writing to the filesystem. Normally, the default
percentage of reserved blocks is 5%.
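The numbers in the question line up with that 5% default — on a 189 GiB filesystem the reservation alone accounts for essentially all of the "missing" space (a back-of-the-envelope check, not an exact block count):

```shell
# 5% of the 189G filesystem from the df output above
awk 'BEGIN { printf "5%% of 189G = %.2fG reserved\n", 189 * 0.05 }'
```

That ~9.45 GiB, plus filesystem metadata overhead, explains the gap between Size − Used and Avail.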
So if you want to lower the reservation to 1% (~ 2GB) thus getting access to ~ 8GB of no more reserved space, you can do this:
sudo tune2fs -m 1 /dev/nvme0n1p8
Note: the -m option actually accepts a decimal number as parameter. You can use -m 0.1 to reserve only about ~200MB (and access most of those previously unavailable 10GB). You can also use the -r option instead to reserve directly by blocks. It's probably not advised to have 0 reserved blocks.
| Disk usage confusion: 10G missing on Linux home partition on SSD |
1,586,636,159,000 |
I try to run Android from SD-card. This card is prepared. There are partitions: boot(FAT32), rootfs(ext4), system(ext4), cache(ext4) and usedata(ext4). Boot partitions has files to run u-boot: MLO, u-boot.bin and uImage. To run it I use commands
mmcinit 0
fatload mmc 0 0x80000000 uImage
setenv bootargs 'console=ttyO2,115200n8 mem=456M@0x80000000 mem=512M@0xA0000000 init=/init vram=10M omapfb.vram=0:4M androidboot.console=ttyO2 root=/dev/mmcblk1p2 rw rootwait rootfstype=ext4'
bootm 0x80000000
Then I see Linux start. But after a few seconds, at the step of loading rootfs, I see an error message
[ 4.015655] EXT4-fs (mmcblk1p2): couldn't mount RDWR because of unsupported optional features (400)
[ 4.036499] sd 0:0:0:0: [sda] Attached SCSI removable disk
[ 4.079986] List of all partitions:
[ 4.083801] b300 31162368 mmcblk0 driver: mmcblk
[ 4.089660] b301 128 mmcblk0p1 f9f21f00-a8d4-5f0e-9746-594869aec34e
[ 4.097839] b302 256 mmcblk0p2 f9f21f01-a8d4-5f0e-9746-594869aec34e
[ 4.106018] b303 128 mmcblk0p3 f9f21f02-a8d4-5f0e-9746-594869aec34e
[ 4.114288] b304 16384 mmcblk0p4 f9f21f03-a8d4-5f0e-9746-594869aec34e
[ 4.122436] b305 16 mmcblk0p5 f9f21f04-a8d4-5f0e-9746-594869aec34e
[ 4.130676] b306 8192 mmcblk0p6 f9f21f05-a8d4-5f0e-9746-594869aec34e
[ 4.138916] b307 8192 mmcblk0p7 f9f21f06-a8d4-5f0e-9746-594869aec34e
[ 4.147094] 103:00000 524288 mmcblk0p8 f9f21f07-a8d4-5f0e-9746-594869aec34e
[ 4.155334] 103:00001 262144 mmcblk0p9 f9f21f08-a8d4-5f0e-9746-594869aec34e
[ 4.163574] 103:00002 30342128 mmcblk0p10 f9f21f09-a8d4-5f0e-9746-594869aec34e
[ 4.171874] b310 2048 mmcblk0boot1 (driver?)
[ 4.177734] b308 2048 mmcblk0boot0 (driver?)
[ 4.183593] b318 15179776 mmcblk1 driver: mmcblk
[ 4.189453] b319 102400 mmcblk1p1 00000000-0000-0000-0000-000000000000
[ 4.197692] b31a 10240 mmcblk1p2 00000000-0000-0000-0000-000000000000
[ 4.205932] b31b 1 mmcblk1p3 00000000-0000-0000-0000-000000000000
[ 4.214141] b31d 262144 mmcblk1p5 00000000-0000-0000-0000-000000000000
[ 4.222351] b31e 13228032 mmcblk1p6 00000000-0000-0000-0000-000000000000
[ 4.230682] b31f 1572864 mmcblk1p7 00000000-0000-0000-0000-000000000000
[ 4.238891] No filesystem could mount root, tried: ext4
[ 4.244812] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(179,26)
[ 4.254089] CPU1: stopping
I don't know why it happens.
How can I solve this problem?
|
The error "EXT4-fs : couldn't mount RDWR because of unsupported optional features (400)" is due to a version mismatch between the tool that formatted the partition (mkfs.ext4) and the kernel that mounts it.
You have two options:
a) Either you have to upgrade the mounter program using a newer distro inside the SD-card.
b) or you have to backup the files, reformat the SD-card with the same distro (the same ext4 versions) you are doing the mounting, and after the reformat copy the files again to the SD-card.
In the second option, care must be taken with the original ext4 options the formatter has put, trying to consider the same options at reformat. Note also that a reformat of partitions doesn't need a repartition of the whole device, so the boot MBR would not be altered.
| Linux - couldn't mount RDWR because of unsupported optional features (400) |
1,586,636,159,000 |
I have a bash script which uses rsync to backup files in Archlinux. I noticed that rsync failed to copy a file from /sys, while cp worked just fine:
# rsync /sys/class/net/enp3s1/address /tmp
rsync: read errors mapping "/sys/class/net/enp3s1/address": No data available (61)
rsync: read errors mapping "/sys/class/net/enp3s1/address": No data available (61)
ERROR: address failed verification -- update discarded.
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1052) [sender=3.0.9]
# cp /sys/class/net/enp3s1/address /tmp ## this works
I wonder why does rsync fail, and is it possible to copy the file with it?
|
Rsync has code which specifically checks if a file is truncated during read and gives this error — ENODATA. I don't know why the files in /sys have this behavior, but since they're not real files, I guess it's not too surprising. There doesn't seem to be a way to tell rsync to skip this particular check.
I think you're probably better off not rsyncing /sys and using specific scripts to cherry-pick out the particular information you want (like the network card address).
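The truncation rsync trips over is easy to see: sysfs attribute files advertise a nominal size (typically 4096 bytes) via stat, but reading them yields far fewer bytes, and rsync flags that mismatch as an error. Shown here with the loopback interface, assuming /sys is mounted:

```shell
f=/sys/class/net/lo/mtu
stat -c 'advertised size: %s bytes' "$f"           # typically reports 4096
printf 'actual content:  %s bytes\n' "$(wc -c < "$f")"  # only a few bytes
```

cp and cat just read until EOF, so they never notice the discrepancy.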
| Why does rsync fail to copy files from /sys in Linux? |
1,586,636,159,000 |
I want to run a command on Linux in a way that it cannot create or open any files to write. It should still be able to read files as normal (so an empty chroot is not an option), and still be able to write to files already open (especially stdout).
Bonus points if writing files to certain directories (i.e. the current directory) is still possible.
I’m looking for a solution that is process-local, i.e. does not involve configuring things like AppArmor or SELinux for the whole system, nor root privileges. It may involve installing their kernel modules, though.
I was looking at capabilities and these would have been nice and easy, if there were a capability for creating files. ulimit is another approach that would be convenient, if it covered this use case.
|
It seems that the right tool for this job is seccomp. Based on the sync-ignoring code by Bastian Blank, I came up with this relatively small program that causes all its children to be unable to open a file for writing:
/*
* Copyright (C) 2013 Joachim Breitner <[email protected]>
*
* Based on code Copyright (C) 2013 Bastian Blank <[email protected]>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
* ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#define _GNU_SOURCE 1
#include <errno.h>
#include <fcntl.h>
#include <seccomp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#define filter_rule_add(action, syscall, count, ...) \
if (seccomp_rule_add(filter, action, syscall, count, ##__VA_ARGS__)) abort();
static int filter_init(void)
{
scmp_filter_ctx filter;
if (!(filter = seccomp_init(SCMP_ACT_ALLOW))) abort();
if (seccomp_attr_set(filter, SCMP_FLTATR_CTL_NNP, 1)) abort();
filter_rule_add(SCMP_ACT_ERRNO(EACCES), SCMP_SYS(open), 1, SCMP_A1(SCMP_CMP_MASKED_EQ, O_WRONLY, O_WRONLY));
filter_rule_add(SCMP_ACT_ERRNO(EACCES), SCMP_SYS(open), 1, SCMP_A1(SCMP_CMP_MASKED_EQ, O_RDWR, O_RDWR));
return seccomp_load(filter);
}
int main(__attribute__((unused)) int argc, char *argv[])
{
if (argc <= 1)
{
fprintf(stderr, "usage: %s COMMAND [ARG]...\n", argv[0]);
return 2;
}
if (filter_init())
{
fprintf(stderr, "%s: can't initialize seccomp filter\n", argv[0]);
return 1;
}
execvp(argv[1], &argv[1]);
if (errno == ENOENT)
{
fprintf(stderr, "%s: command not found: %s\n", argv[0], argv[1]);
return 127;
}
fprintf(stderr, "%s: failed to execute: %s: %s\n", argv[0], argv[1], strerror(errno));
return 1;
}
Here you can see that it is still possible to read files:
[jojo@kirk:1] Wed, der 06.03.2013 um 12:58 Uhr Keep Smiling :-)
> ls test
ls: cannot access test: No such file or directory
> echo foo > test
bash: test: Permission denied
> ls test
ls: cannot access test: No such file or directory
> touch test
touch: cannot touch 'test': Permission denied
> head -n 1 no-writes.c # reading still works
/*
It does not prevent deleting files, or moving them, or other file operations besides opening, but that could be added.
A tool that enables this without having to write C code is syscall_limiter.
| How to prevent a process from writing files |
1,586,636,159,000 |
How to get a list of all disks, like this?
/dev/sda
/dev/sdb
|
ls (shows individual partitions though)
# ls /dev/sd*
/dev/sda /dev/sda1
ls (just disks, ignore partitions)
# ls /dev/sd*[a-z]
/dev/sda
fdisk
# fdisk -l 2>/dev/null |awk '/^Disk \//{print substr($2,0,length($2)-1)}'
/dev/xvda
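On systems with util-linux, lsblk gives the same list without pattern-matching device names; a hedged sketch (output obviously depends on the machine, and containers may show nothing):

```shell
# -d: whole devices only (no partitions), -n: no header line
lsblk -d -n -o NAME,TYPE | awk '$2 == "disk" { print "/dev/" $1 }'
```

Unlike the /dev/sd* glob, this also catches NVMe, virtio, and other naming schemes.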
| Get simple list of all disks [duplicate] |
1,586,636,159,000 |
I ran the following iptables commands to create a blacklist rule but used the wrong port:
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j SSH_WHITELIST
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 --rttl --name SSH -j ULOG --ulog-prefix SSH_brute_force
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 --rttl --name SSH -j DROP
How can I undo the above and then redo it for a different port?
|
use iptables -D ... to delete the entries.
iptables -D INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH
iptables -D INPUT -p tcp --dport 22 -m state --state NEW -j SSH_WHITELIST
iptables -D INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 --rttl --name SSH -j ULOG --ulog-prefix SSH_brute_force
iptables -D INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 --rttl --name SSH -j DROP
| Undo iptables modification |
1,586,636,159,000 |
The standard way of making new processes in Linux is that the memory footprint of the parent process is copied and that becomes the environment of the child process until execv is called.
What memory footprint are we talking about, the virtual (what the process requested) or the resident one (what is actually being used)?
Motivation: I have a device with limited swap space and an application with a big difference between virtual and resident memory footprint. The application can't fork due to lack of memory and would like to see if trying to reduce the virtual footprint size would help.
|
In modern systems none of the memory is actually copied just because a fork system call is used. It is all marked read only in the page table such that on first attempt to write a trap into kernel code will happen. Only once the first process attempt to write will the copying happen.
This is known as copy-on-write.
However it may be necessary to keep track of committed address space as well. If no memory or swap is available at the time the kernel has to copy a page, it has to kill some process to free memory. This is not always desirable, so it is possible to keep track of how much memory the kernel has committed to.
If the kernel would commit to more than the available memory + swap, it can give an error code on attempt to call fork. If enough is available the kernel will commit to the full virtual size of the parent for both processes after the fork.
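For the questioner's motivation — a big process failing to fork — the relevant knob is Linux's commit-accounting policy. Checking (and, as root, relaxing) it is often the first step before trying to shrink the virtual footprint:

```shell
# 0 = heuristic overcommit (default), 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory
```

With mode 2, fork of a large process can fail even though copy-on-write means little memory would actually be used; mode 0 or 1 lets the fork through. Alternatively, posix_spawn() or vfork()+exec avoid duplicating the address-space commitment at all.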
| When a process forks is its virtual or resident memory copied? |
1,586,636,159,000 |
As far as I know the device drivers are located in the Linux kernel. For example let's say a GNU/Linux distro A has the same kernel version as a GNU/Linux distro B. Does that mean that they have the same hardware support?
|
The short answer is no.
The driver support for the same kernel version is configurable at compile time and also allows for module loading. The actual devices supported in a distro therefore depend on the included compiled in device drivers, compiled loadable modules for devices and actual installed modules.
There are also devices not included in the kernel per se that a distro might ship.
I have not run into problems lately, but when I started with Linux at home I went with SuSE, although they had the same, or similar, kernel versions as RedHat, SuSE included ISDN drivers and packages "out of the box" (that was back 1998).
| Do different distros (but same kernel ver) have same hardware support |
1,586,636,159,000 |
I want to check connectivity between 2 servers (i.e. if ssh will succeed).
The main idea is to check the shortest way between server-a and server-b using a list of middle servers (for example if I'm on dev server and I want to connect to prod server - usually a direct ssh will fail).
Because this can take a while, I prefer not to use SSH - rather I prefer to check first if I can connect and if so then try to connect through SSH.
Some possible routes to get the idea:
server-a -> server-b
server-a -> middle-server-1 -> server-b
server-a -> middle-server-6 -> server-b
server-a -> middle-server-3 -> middle-server-2 -> server-b
Hope you understand what I'm looking for?
|
For checking server connectivity you have 4 tools at your disposal.
ping
This will check whether the servers you're attempting to connect through are reachable, but it won't be able to tell whether middle-server-1 can reach server-b, for example.
You can gate how long ping will attempt to ping another server through the use of the count switch (-c). Limiting it to 1 should suffice.
$ ping -c 1 skinner
PING skinner (192.168.1.3) 56(84) bytes of data.
64 bytes from skinner (192.168.1.3): icmp_req=1 ttl=64 time=5.94 ms
--- skinner ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 5.946/5.946/5.946/0.000 ms
You can check the status of this command through the use of this variable, $?. If it has the value 0 then it was successful, anything else and a problem occurred.
$ echo $?
0
traceroute
Another command you can use to check connectivity is traceroute.
$ traceroute skinner
traceroute to skinner (192.168.1.3), 30 hops max, 60 byte packets
1 skinner (192.168.1.3) 0.867 ms 0.859 ms 0.929 ms
Again this tool will not show connectivity through one server to another (same issue as ping), but it will show you the path through the network that you're taking to get to another server.
ssh
ssh can be used in BatchMode to test connectivity. With BatchMode=yes you'll attempt to connect to another server using only public/private keys, never prompting for a username/password. This typically speeds things up quite a bit.
$ ssh -o "BatchMode=yes" skinner
You can construct a rough one liner that will check for connectivity to a server:
$ ssh -q -o "BatchMode=yes" skinner "echo 2>&1" && echo $host SSH_OK || echo $host SSH_NOK
SSH_OK
If it works you'll get a SSH_OK message, if it fails you'll get a SSH_NOK message.
An alternative to this method is to also include the ConnectTimeout option. This will keep the ssh client from hanging for a long time. Something like ConnectTimeout=5 is typically acceptable. For example:
$ ssh -o BatchMode=yes -o ConnectTimeout=5 skinner echo ok 2>&1
ok
If it fails it will look something like this:
$ ssh -o BatchMode=yes -o ConnectTimeout=5 mungr echo ok 2>&1
ssh: connect to host 192.168.1.2 port 22: No route to host
It will also set the return status:
$ echo $?
255
telnet
You can use this test to see if an ssh server is accessible on another server using just a basic telnet:
$ echo quit | telnet skinner 22 2>/dev/null | grep Connected
Connected to skinner.
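If neither nc nor telnet is installed, bash itself can do a raw TCP probe via its /dev/tcp redirection — a bash-ism, not a real device file (the host "skinner" and port 22 below are placeholders; substitute your own):

```shell
# probe TCP reachability without ssh, nc, or telnet
probe() {
    if timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo "$1:$2 reachable"
    else
        echo "$1:$2 unreachable"
    fi
}
probe 127.0.0.1 1   # port 1 is almost certainly closed, so this reports unreachable
```

Chained through the middle servers with `ssh middle-server-1 'probe ...'`, this answers the "can hop X reach hop Y" question that plain ping from server-a cannot.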
| Method to check connectivity to other server |
1,586,636,159,000 |
I have access to an 8-core node of a Linux cluster. When logged in to the node, I can see a list of processors using this command:
more /proc/cpuinfo
In my 8-core node, the processors are numbered from 0 to 7. Each processor is an Intel Xeon CPU (E5430 @ 2.66GHz).
Now suppose I call the program foo with some arguments args:
foo args
The program foo takes a long time to execute (hours or days, for example). Having called foo, is it possible to determine the particular processor (i.e., 0 to 7) on which foo is running? The top program shows me the process ID and similar information, but I don't see the processor number. Is such information available?
|
ps can give you that information if you ask for the psr column (or use the -F flag which includes it).
Ex:
$ ps -F $$
UID PID PPID C SZ RSS PSR STIME TTY STAT TIME CMD
me 6415 6413 0 5210 2624 2 18:52 pts/0 SN 0:00 -su
Or:
$ ps -o pid,psr,comm -p $$
PID PSR COMMAND
6415 0 bash
My shell was running on CPU 2 when I ran the first command, on CPU 0 when I ran the second. Beware that processes can change CPUs very, very quickly so the information you actually see is, essentially, already stale.
Some more info in this Super User question's answers:
Linux: command to know the processor number in which a process is loaded?
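ps reads the PSR column from /proc; the same number is field 39 ("processor", per proc(5)) of /proc/PID/stat, so you can fetch it directly. Caveat: this naive whitespace split assumes the comm field contains no spaces, which holds for a shell:

```shell
# field 39 of /proc/PID/stat is the CPU the task last ran on
awk '{ print "last ran on CPU", $39 }' "/proc/$$/stat"
```

As with ps, the value can be stale the instant you read it, since the scheduler may migrate the task at any time.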
| Determining the particular processor on which a process is running |
1,586,636,159,000 |
Yes, I've seen that there's already a similar question, but I came across kill -- -0 and was wondering what -- is doing?
|
In the UNIX/Linux world, two consecutive dashes mark the end of options. For example, if you want to search for the string -n with grep you should use a command like:
grep -- -n file
If you instead want only the names of the files containing -n, you should use
grep -l -- -n file
So the command kill -- -0 tries to send a signal to the process group with ID 0, i.e. the caller's own process group (a leading minus denotes a process group rather than a single process ID).
| What does kill -- -0 do? [duplicate] |
1,586,636,159,000 |
$ k=v p &
[1] 3028
is there any way for p to change the contents of /proc/3028/environ to not mention k=v while p is still running?
|
On Linux, you can overwrite the value of the environment strings on the stack.
So you can hide the entry by overwriting it with zeros or anything else:
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char* argv[], char* envp[]) {
char cmd[100];
while (*envp) {
if (strncmp(*envp, "k=", 2) == 0)
memset(*envp, 0, strlen(*envp));
envp++;
}
sprintf(cmd, "cat /proc/%u/environ", getpid());
system(cmd);
return 0;
}
Run as:
$ env -i a=foo k=v b=bar ./wipe-env | hd
00000000 61 3d 66 6f 6f 00 00 00 00 00 62 3d 62 61 72 00 |a=foo.....b=bar.|
00000010
the k=v has been overwritten with \0\0\0.
Note that setenv("k", "", 1) to overwrite the value won't work as in that case, a new "k=" string is allocated.
If you've not otherwise modified the k environment variable with setenv()/putenv(), then you should also be able to do something like this to get the address of the k=v string on the stack (well, of one of them):
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char* argv[]) {
char cmd[100];
char *e = getenv("k");
if (e) {
e -= strlen("k=");
memset(e, 0, strlen(e));
}
sprintf(cmd, "cat /proc/%u/environ", getpid());
system(cmd);
return 0;
}
Note however that it removes only one of the k=v entries received in the environment. Usually, there is only one, but nothing is stopping anyone from passing both k=v1 and k=v2 (or k=v twice) in the env list passed to execve(). That has been the cause of security vulnerabilities in the past such as CVE-2016-2381. It could genuinely happen with bash prior to shellshock when exporting both a variable and function by the same name.
In any case, there will always be a small window during which the env var string has not been overridden yet, so you may want to find another way to pass the secret information to the command (like a pipe for instance) if exposing it via /proc/pid/environ is a concern.
Also note that contrary to /proc/pid/cmdline, /proc/pid/environ is only accessible to processes with the same euid or to root (or to root only if the euid and ruid of the process differ, it would seem).
You can hide that value from them in /proc/pid/environ, but they may still be able to get any other copy you've made of the string in memory, for instance by attaching a debugger to it.
See https://www.kernel.org/doc/Documentation/security/Yama.txt for ways to prevent at least non-root users from doing that.
| change /proc/PID/environ after process start |
1,586,636,159,000 |
Once, I was installing some kernel patches & something went wrong on a live server where we had hundreds of clients. Only one kernel was there in the system. So, the server was down for some time, and using a live CD, we got the system up & running & did the further repairing work.
Now my question: Is it a good idea to have 2 versions of the kernel, so that if the kernel is corrupted we can always reboot with the other available kernel? Please let me know.
Also, is it possible to have 2 versions of the same kernel? So that I can choose another kernel when there is kernel corruption?
Edited:
My Server Details:
2.6.32-431.el6.x86_64
CentOS release 6.5 (Final)
How can I have a copy of this same kernel, so that when my kernel corrupts, I can start the backup kernel?
|
Both RedHat- and Debian-based distributions keep several versions of the kernel when you install a new one using yum or apt-get by default. That is considered a good practice and is done exactly for the case you describe: if something goes wrong with the latest kernel you can always reboot and in GRUB choose to boot using one of the previous kernels.
In RedHat distros you control the number of kernels to keep with the installonly_limit setting in /etc/yum.conf. On my fresh CentOS 7 install it defaults to 5.
Also if on RedHat you're installing a new kernel from an RPM package you should use rpm -ivh, not rpm -Uvh: the former will keep the older kernel in place while the latter will replace it.
Debian keeps old kernels but doesn't automatically remove them. If you need to free up your boot partition you have to remove old kernels manually (remember to leave at least one of the previous kernels). To list all kernel-installing and kernel-headers packages use dpkg -l | egrep "linux-(im|he)".
Answering your question -- Also, is it possible to have 2 versions of the same kernel? -- Yes, it is possible. I can't check it on CentOS 6.5 right now, but on CentOS 7 I was able to yield the desired result by just duplicating the kernel-related files in /boot and rebuilding the grub menu:
cd /boot
# Duplicate kernel files;
# "3.10.0-123.el7" is a substring in the name of the current kernel
ls -1 | grep "3.10.0-123.el7" | { while read i; \
do cp $i $(echo $i | sed 's/el7/el7.backup/'); done; }
# Backup the grub configuration, just in case
cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.backup
# Rebuild grub configuration
grub2-mkconfig -o /boot/grub2/grub.cfg
# At this point you can reboot and see that a new kernel is available
# for you to choose in GRUB menu
| Is it good to have multiple version of Linux Kernel? |
1,586,636,159,000 |
Question: What does /dev/disk/by-pathdescribe? And where is this documented?
Going through the meaning of what is displayed in the folders /dev/disk/by- I've got that far, and I wonder is this correct?
by-id → based upon the serial number of the hardware devices
by-label → Whatever name was set manually for this disk
by-path → ?!
by-uuid → Universal Unique Identifier: a uniquely created string to identify the disk [done so through the system]
[Note: I work on GNU/Linux Debian 7, Crunchbang, if this matters…]
|
The /dev mount point is a devtmpfs filesystem and is managed entirely by udev.
So for details we have to look at the udev configuration.
Two udev rules typically handle this:
$ grep -ri '/dev/disk' /usr/lib/udev/rules.d/
/usr/lib/udev/rules.d/60-persistent-storage.rules:# persistent storage links: /dev/disk/{by-id,by-uuid,by-label,by-path}
/usr/lib/udev/rules.d/13-dm-disk.rules:# These rules create symlinks in /dev/disk directory.
60-persistent-storage.rules mentions
# by-path (parent device path)
ENV{DEVTYPE}=="disk", DEVPATH!="*/virtual/*", IMPORT{builtin}="path_id"
ENV{DEVTYPE}=="disk", ENV{ID_PATH}=="?*", SYMLINK+="disk/by-path/$env{ID_PATH}"
ENV{DEVTYPE}=="partition", ENV{ID_PATH}=="?*", SYMLINK+="disk/by-path/$env{ID_PATH}-part%n"
Finally, ID_PATH is a unique identifier for a device based on its physical hardware location / connection (e.g. something like ID_PATH=pci-0000:02:04.0-scsi-0:0:0:0).
ID_PATH comes from builtin udev program called path_id (eg. for /sys/block/sdc)
$ udevadm test-builtin path_id /sys/block/sdc
calling: test-builtin
=== trie on-disk ===
tool version: 204
file size: 5632867 bytes
header size 80 bytes
strings 1260755 bytes
nodes 4372032 bytes
load module index
ID_PATH=pci-0000:00:14.0-usb-0:1:1.0-scsi-0:0:0:0
ID_PATH_TAG=pci-0000_00_14_0-usb-0_1_1_0-scsi-0_0_0_0
We can relate it to
drwxr-xr-x 6 root root 0 Aug 15 02:30 /sys/devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1:1.0/
Ultimately, if anybody is interested in details consult the source code
http://cgit.freedesktop.org/systemd/systemd/tree/src/udev/udev-builtin-path_id.c
| understanding /dev/disk/by- folders |
1,586,636,159,000 |
I am trying to add a public key to a server but I don't want to restart the sshd service for it to take effect. The reason is that restarting the ssh service seems to be disruptive for other users who could use the ssh service at that time. Most documentation suggest to add a public key to $HOME/.ssh/authorized_keys and then to restart the sshd service (systemctl restart sshd). The OS of interest is Linux.
My questions are:
Is the restart of sshd needed?
If sshd is restarted, is there a service outage at that time?
Is there a way to set up passwordless auth using ssh without needing to restart the sshd service after adding new public keys to $HOME/.ssh/authorized_keys?
|
Is the restart of sshd needed?
Not usually. Linux distributions usually ship with a default configuration that allows public key authentication, so you usually don't even have to edit the configuration to enable it, and so restarting is unnecessary. Even in the case that you had to do something with sshd_config, you'd only have to restart it once after editing that file, not after each subsequent edit of the authorized_keys file.
Note that you don't even have to restart sshd. From man sshd:
sshd rereads its configuration file when it receives a hangup signal, SIGHUP, by executing itself with the name and options it was started with, e.g. /usr/sbin/sshd.
And the typical systemd service for sshd recognizes this, so you can do systemctl reload sshd instead.
If sshd is restarted, is there a service outage at that time?
Depends on your definition of service outage. A simple restart of sshd will not kill existing ssh connections, but new connections wouldn't be accepted until sshd finishes restarting.
| Add key to authorized_users without needing to restart sshd |
1,586,636,159,000 |
I have a hard drive, which should go to standby automatically after 30 or 60 minutes.
I tried (3 minutes for testing):
# hdparm -S 36 /dev/sda
/dev/sda:
setting standby to 36 (3 minutes)
And it didn't work, even when there were no access for more than 5 minutes. Now I thought about some process accessing data, so I tested
# hdparm -y /dev/sda
/dev/sda:
issuing standby command
Drive went to standby and kept sleeping, as you can lookup with
# hdparm -C /dev/sda;date
/dev/sda:
drive state is: standby
Touching some file in the mountpoint woke it up as you would expect it.
Why isn't the automatic suspend working? As far as I understood, it should even turn off the hard drive independently of the OS, as long as there is no access.
|
The actual problem was smartd, which regularly checked the values of the device, even when it was in standby mode.
I solved it by disabling smartd and running tests with smartctl from time to time.
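An alternative to disabling smartd entirely: its configuration file supports a power-mode check, so polls can be skipped while the disk sleeps. The directive is documented in smartd.conf(5); the device path below is illustrative.

```
# /etc/smartd.conf -- monitor /dev/sda, but with "-n standby" skip the
# check whenever the disk is already in standby, so smartd never spins
# it up; the ",q" suffix suppresses the log entry for each skipped check
/dev/sda -a -n standby,q
```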
| Hard disk not going to standby automatically |
1,586,636,159,000 |
If I read the ext4 documentation correctly, starting from Linux 3.8 it should be possible to store data directly in the inode in the case of a very small file.
I was expecting such a file to have a size of 0 blocks, but it is not the case.
# creating a small file
printf "abcde" > small_file
# checking size of file in bytes
stat --printf='%s\n' small_file
5
# number of 512-byte blocks used by file
stat --printf='%b\n' small_file
8
I would expect this last number here to be 0. Am I am missing something?
|
To enable inline data in ext4, you'll need to use e2fsprogs 1.43 or later. Support for inline data was added in March 2014 to the Git repository but was only released in May 2016.
Once you have that, you can run mke2fs -O inline_data on an appropriate device to create a new filesystem with inline data support; this will erase all your data. It's apparently not yet possible to activate inline data on an existing filesystem (at least, tune2fs doesn't support it).
Now create a small file, and run debugfs on the filesystem. cd to the appropriate directory, and run stat smallfile; you'll get something like
Inode: 32770 Type: regular Mode: 0644 Flags: 0x10000000
Generation: 2302340561 Version: 0x00000000:00000001
User: 1000 Group: 1000 Size: 6
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 0
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x553731e9:330badf8 -- Wed Apr 22 07:30:17 2015
atime: 0x553731e9:330badf8 -- Wed Apr 22 07:30:17 2015
mtime: 0x553731e9:330badf8 -- Wed Apr 22 07:30:17 2015
crtime: 0x553731e9:330badf8 -- Wed Apr 22 07:30:17 2015
Size of extra inode fields: 28
Extended attributes:
system.data (0)
Size of inline data: 60
As you can see the data was stored inline. This can also be seen using df; before creating the file:
% df -i /mnt/new
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vg--large--mirror-inline 65536 12 65524 1% /mnt/new
% df /mnt/new
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg--large--mirror-inline 1032088 1280 978380 1% /mnt/new
After creating the file:
% echo Hello > smallfile
% ls -l
total 1
-rw-r--r-- 1 steve steve 6 Apr 22 07:35 smallfile
% df -i /mnt/new
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vg--large--mirror-inline 65536 13 65523 1% /mnt/new
% df /mnt/new
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg--large--mirror-inline 1032088 1280 978380 1% /mnt/new
The file is there, it uses an inode but the storage space available hasn't changed.
| How to use the new ext4 inline data feature? (storing data directly in the inode) |
1,586,636,159,000 |
I would like to know what is the difference between a Library call and a System call in Linux. Any pointers for a good understanding of the concepts behind both will be greatly appreciated.
|
There's not really such a thing as a "library call". You can call a function that's linked to a shared library. And that just means that the library path is looked up at runtime to determine the location of the function to call.
System calls are low level kernel calls handled by the kernel.
| What is the difference between a Library call and a System call in Linux? |
1,586,636,159,000 |
I was using one Linux server with CentOS7 installed for testing and installing some tools. And now I don't remember how many packages I installed.
I want to remove all that packages so my server would be like new as it was. I don't want to search for every package and remove one by one.
Is there any way to remove them with just only one command?
|
List all the packages in reverse order of their installation date into a file:
rpm -qa --last >list
You'll get lines like
atop-2.1-1.fc22.x86_64 Wed Apr 13 07:35:27 2016
telnet-server-0.17-60.fc22.x86_64 Mon Apr 11 20:10:43 2016
mhddfs-0.1.39-3.fc22.x86_64 Sat Apr 9 21:26:06 2016
libpcap-devel-1.7.3-1.fc22.x86_64 Fri Apr 8 09:40:43 2016
Choose the cutoff date that applies to you and delete all the lines that follow it. Give the remaining lines to yum to remove, after removing the date part. Eg
sudo yum remove $(awk '{print $1}' <list)
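The awk step can be checked against the sample lines shown above (inlined here so the snippet is self-contained):

```shell
# Keep only the first field (the package name), dropping the date columns
printf '%s\n' \
  'atop-2.1-1.fc22.x86_64 Wed Apr 13 07:35:27 2016' \
  'telnet-server-0.17-60.fc22.x86_64 Mon Apr 11 20:10:43 2016' |
  awk '{print $1}'
# prints:
# atop-2.1-1.fc22.x86_64
# telnet-server-0.17-60.fc22.x86_64
```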
| Remove completely all packages I installed? |
1,586,636,159,000 |
A lot of Linux programs state that the config file(s) location is distribution dependent. I was wondering how the different distributions do this. Do they actually modify the source code? Is there build parameters that sets these locations? I have searched for this but cannot find any information. I know it's out there, I just can't seem to find it. What is the "Linux way" in regards to this?
|
It depends on the distribution and the original ('upstream') source.
With most autoconf- and automake-using packages, it is possible to specify the directory where the configuration files will be looked for using the --sysconfdir parameter. Other build systems (e.g., CMake) have similar options. If the source package uses one of those build systems, then the packager can easily specify the right parameters, and no patches are required. Even if they don't (e.g., because the upstream source uses some home-grown build system), it's often still possible to specify some build configuration to move the config files to a particular location without having to patch the upstream source.
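As a sketch, the knobs a packager typically passes to an autoconf-style build look like this (the paths shown are the conventional FHS choices, not something the tools mandate):

```
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
make
make install DESTDIR=/tmp/pkgroot   # staged install into the packaging root
```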
If that isn't the case, then often the distribution will indeed have to add patches to the source to make it put files in what they consider to be the 'right' location. In most cases, distribution packagers will then write a patch which will allow the source to be configured in the above sense, so that they can send the patch to the upstream maintainers, and don't have to keep maintaining/updating it. This is the case for configuration file locations, but also for other things, like the bin/sbin executables (the interpretation of what is a system administrator's command differs between distributions), the location where to write documentation, and so on.
Side note: if you maintain some free software, please make it easy for packagers to talk to you. Otherwise we have to maintain such patches for no particularly good reason...
| How do different distributions modify the locations of config files for programs? |
1,586,636,159,000 |
I have a kernel in which one initramfs is embedded.
I want to extract it.
I got the output x86 boot sector when I do file bzImage
I have System.map file for this kernel image.
Is there any way to extract the embedded initramfs image from this kernel with or without the help of System.map file ?
The interesting string found in System map file is: (Just in case it helps)
57312:c17fd8cc T __initramfs_start
57316:c19d7b90 T __initramfs_size
|
There is some information about this in the gentoo wiki: https://wiki.gentoo.org/wiki/Custom_Initramfs#Salvaging
It recommends the usage of binwalk which works exceedingly well.
I'll give a quick walk-through with an example:
first extract the bzImage file with binwalk:
> binwalk --extract bzImage
DECIMAL HEXADECIMAL DESCRIPTION
--------------------------------------------------------------------------------
0 0x0 Microsoft executable, portable (PE)
18356 0x47B4 xz compressed data
9772088 0x951C38 xz compressed data
I ended up with three files: 47B4, 47B4.xz and 951C38.xz
> file 47B4
47B4: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, BuildID[sha1]=aa47c6853b19e9242401db60d6ce12fe84814020, stripped
Now lets run binwalk again on 47B4:
> binwalk --extract 47B4
DECIMAL HEXADECIMAL DESCRIPTION
--------------------------------------------------------------------------------
0 0x0 ELF, 64-bit LSB executable, AMD x86-64, version 1 (SYSV)
9818304 0x95D0C0 Linux kernel version "4.4.6-gentoo (root@host) (gcc version 4.9.3 (Gentoo Hardened 4.9.3 p1.5, pie-0.6.4) ) #1 SMP Tue Apr 12 14:55:10 CEST 2016"
9977288 0x983DC8 gzip compressed data, maximum compression, from Unix, NULL date (1970-01-01 00:00:00)
<snip>
This came back with a long list of found paths and several potentially interesting files. Lets have a look.
> file _47B4.extracted/*
<snip>
_47B4.extracted/E9B348: ASCII cpio archive (SVR4 with no CRC)
file E9B348 is an (already decompressed) cpio archive, just what we are looking for! Bingo!
To unpack the uncompressed cpio archive (your initramfs!) in your current directory just run
> cpio -i < E9B348
That was almost too easy. binwalk is absolutely the tool you are looking for. For reference, I was using v2.1.1 here.
| extract Embedded initramfs |
1,586,636,159,000 |
My system was running slow recently and I checked htop to identify resource consumption. The RES column showed 213M, which is quite normal for Chrome. I was surprised after looking at the VIRT column: Google Chrome was taking 1.1T !!!
I killed Chrome and opened it again, and still it was using 1.1T of VIRT memory. Any pointers would be helpful on whether such a high VIRT is abnormal and needs to be fixed.
Laptop Hardware details.
Processor Intel® Core™ i3-4005U CPU @ 1.70GHz × 4
Graphics NVD7 / Intel® HD Graphics 4400 (HSW GT2)
Memory 7.7 GiB
Disk Capacity 740.2 GB
// uname -srvmpio
Linux 5.13.0-41-generic #46~20.04.1-Ubuntu SMP Wed Apr 20 13:16:21 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
// Google Chrome version
Version 102.0.5005.61 (Official Build) (64-bit)
|
Please disregard VIRT. I've never used or seen anyone use or pay attention to it ever.
It basically means nothing. No idea why top/htop still show it.
Mugurel Sumanariu once wrote about it:
VIRT stands for the virtual size of a process, which is the sum of memory it is actually using, memory it has mapped into itself (for instance the video card’s RAM for the X server), files on disk that have been mapped into it (most notably shared libraries), and memory shared with other processes. VIRT represents how much memory the program is able to access at the present moment.
(On a system where memory overcommit is disabled it could mean something but you wouldn't want to use such a system).
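On Linux the same two numbers can be read per process straight from /proc (VmSize is what top/htop call VIRT, VmRSS is RES):

```shell
# Virtual size vs. resident set of the current shell; VmSize is normally
# much larger because it counts mappings that may never touch RAM
grep -E '^Vm(Size|RSS):' /proc/$$/status
```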
| Why Google Chrome is reserving Terabytes scale virtual memory? |
1,586,636,159,000 |
This is something I haven't been able to find much info on so any help would be appreciated.
My understanding is thus. Take the following file:
-rw-r----- 1 root adm 69524 May 21 17:31 debug.1
The user phil cannot access this file:
phil@server:/var/log$ head -n 1 debug.1
cat: debug.1: Permission denied
If phil is added to the adm group, it can:
root@server:~# adduser phil adm
Adding user `phil' to group `adm' ...
Adding user phil to group adm
Done.
phil@server:/var/log$ head -n 1 debug.1
May 21 11:23:15 server kernel: [ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5.1-0-g8936dbb-20141113_115728-nilsson.home.kraxel.org 04/01/2014
If, however, a process is started whilst explicitly setting the user:group to phil:phil it cannot read the file. Process started like this:
nice -n 19 chroot --userspec phil:phil / sh -c "process"
If the process is started as phil:adm, it can read the file:
nice -n 19 chroot --userspec phil:adm / sh -c "process"
So the question really is:
What is special about running a process with a specific user/group combo that prevents the process being able to access files owned by supplementary groups of that user and is there any way around this?
|
A process runs with a uid and a gid. Both have permissions assigned to them. You could call chroot with a userspec of a user and a group where the user is actually not in that group. The process would then be executed with the user's uid and the given group's gid.
See an example. I have a user called user, and he is in the group student:
root@host:~$ id user
uid=10298(user) gid=20002(student) groups=20002(student)
I have a file as follows:
root@host:~$ ls -l file
-rw-r----- 1 root root 9 Mai 29 13:39 file
He cannot read it:
user@host:~$ cat file
cat: file: Permission denied
Now, I can execute the cat process in the context of the user user AND the group root. Now, the cat process has the necessary permissions:
root@host:~$ chroot --userspec user:root / sh -c "cat file"
file contents
It's interesting to see what id says:
root@host:~$ chroot --userspec user:root / sh -c "id"
uid=10298(user) gid=0(root) groups=20002(student),0(root)
Hm, but the user user is not in that group (root). Where does id get its information from? If called without arguments, id uses the system calls getuid(), getgid() and getgroups(). So the process context of id itself is printed. That is the context we altered with --userspec.
When called with an argument, id just determines the group assignments of the user:
root@host:~$ chroot --userspec user:root / sh -c "id user"
uid=10298(user) gid=20002(student) groups=20002(student)
To your question:
What is special about running a process with a specific user/group
combo that prevents the process being able to access files owned by
supplementary groups of that user and is there any way around this?
You can set the security process context that is needed to solve whatever task the process needs to do. Every process has a uid and gid set under which it runs. Normally the process "takes" the calling user's uid and gid as its context. By "takes" I mean the kernel does this; otherwise it would be a security problem.
So, it's actually not the user that has no permission to read the file, it's the process's (cat's) permissions. But the process runs with the uid/gid of the calling user.
So you don't have to be in a specific group for a process to run with your uid and the gid of that group.
| How do Linux permissions work when a process is running as a specific group? |
1,586,636,159,000 |
My bash shell will no longer change directory with cd. I noticed it earlier when working and found that any new shells I opened (terminal or xterm etc) would be stuck in the home directory and could not get out (already open terminals continued to work fine).
[~]$ pwd
/home/sys/dave
[~]$ cd /
[~]$ cd Documents/
[~]$ pwd
/home/sys/dave
[~]$ type cd
cd is a shell builtin
[~]$ alias
alias l.='ls -d .* --color=auto'
alias ll='ls -l --color=auto'
alias ls='ls --color=auto'
alias vi='vim'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'
I thought it must be some weirdness I didn't have time to deal with such as a handler out of memory (having checked that cd wasn't aliased and using the builtin version).
So I (yes, I know) rebooted the machine.
Fresh boot, exactly the same problem.
CSH on the other hand works fine, so immediately after the snippet above:
[~]$ csh
[~]$ cd /
[/]$ pwd
/
[/]$ cd ~/Documents/
[~/Documents]$ pwd
/home/sys/dave/Documents
[~/Documents]$
I haven't installed anything new or performed any updates in the last few days and it was working fine until late this evening.
Ideas/assistance/HELP much appreciated!
** UPDATE **
So digging around I found this line in .bashrc
export PROMPT_COMMAND="cd"
If I unset PROMPT_COMMAND then everything works as normal.
But... WTF. I didn't put this line in the .bashrc and everything was working perfectly until tonight. Should I just comment it out, manually unset it, or just burn the computer as a witch?
|
Setting PROMPT_COMMAND to cd is a pretty common prank, if you didn't set it, and you're the only user, then yes, you've been compromised.
If friends have access though, this is a prank I've seen numerous times, talk with them.
| Bash No Longer Changes Directory |
1,586,636,159,000 |
I'm working on a system with multiple NVIDIA GPUs. I would like disable / make-disappear one of my GPUs, but not the others; without rebooting; and so that I can later re-enable it.
Is this possible?
Notes:
Assume I have root (though a non-root solution for users which have permissions for the device files is even better).
In case it matters, the distribution is either SLES 12 or SLES 15, and - don't ask me why :-(
|
Disabling:
The following disables a GPU, making it invisible, so that it's not on the list of CUDA devices you can find (and it doesn't even take up a device index)
nvidia-smi -i 0000:xx:00.0 -pm 0
nvidia-smi drain -p 0000:xx:00.0 -m 1
where xx is the PCI device ID of your GPU. You can determine that using lspci | grep NVIDIA or nvidia-smi.
The device will still be visible with lspci after running the commands above.
Re-enabling:
nvidia-smi drain -p 0000:xx:00.0 -m 0
the device should now be visible
Problems with this approach
This may fail to work if you are not root; or in some scenarios I can't yet characterize.
Haven't yet checked what happens to processes which are actively using the GPU as you do this.
The syntax is baroque and confusing. NVIDIA - for shame, you need to make it simpler to disable GPUs.
| How can I disable (and later re-enable) one of my NVIDIA GPUs? |
1,586,636,159,000 |
In our Linux box we have USB -> serial device which was always identified as
/dev/ttyACM0. So I've written an application and until yesterday, everything worked fine. But suddenly (yeah, during the remote presentation ...) the device stopped working. After quick research, I found that the connection changed to /dev/ttyACM1. It was a little untimely, but now I have a problem - how to unambiguously identify my device? Like, for example, the storage drive could be initialized using UUID although the /dev/sd** has changed. Is there some way to do that for serial devices?
Now I use a stupid workaround:
for(int i = 0; i < 10; i++)
{
    m_port = std::string("/dev/ttyACM") + (char)('0' + i);
    m_fd = open(m_port.c_str(), O_RDWR | O_NOCTTY | O_NDELAY);
    if(m_fd >= 0)
        break; // stop at the first port that opens successfully
}
The link to the device we use.
|
Since we are talking USB devices and assuming you have udev, you could setup some udev rules.
I guess, and this is just a wild guess, somebody or something unplugged/removed the device and plugged it back in/added the device again, which bumps up the number.
Now, first you need vendor and product id's:
$ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 011: ID 0403:6001 FTDI FT232 USB-Serial (UART) IC
Next, you need the serial number (in case you have several):
# udevadm info -a -n /dev/ttyUSB1 | grep '{serial}' | head -n1
ATTRS{serial}=="A6008isP"
Now, lets create a udev rule:
UDEV rules are usually scattered into many files in /etc/udev/rules.d. Create a new file called 99-usb-serial.rules and put the following lines in there; I have three devices, each with a different serial number:
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="A6008isP", SYMLINK+="MySerialDevice"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="A7004IXj", SYMLINK+="MyOtherSerialDevice"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="FTDIF46B", SYMLINK+="YetAnotherSerialDevice"
ls -l /dev/MySerialDevice
lrwxrwxrwx 1 root root 7 Nov 25 22:12 /dev/MySerialDevice -> ttyUSB1
If you do not use the serial number, any device from that vendor with the same chip will get the same symlink, and only one can be plugged in at any given time.
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="MySerialDevice"
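To script the serial-number lookup itself, the value can be parsed out of the udevadm output; here a captured sample line stands in for a live device:

```shell
# Extract the value between the quotes of an ATTRS{serial}=="..." line
printf 'ATTRS{serial}=="A6008isP"\n' |
  sed -n 's/.*ATTRS{serial}=="\([^"]*\)".*/\1/p'
# prints: A6008isP
```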
Taken from here
| Consistent Linux device enumeration |
1,586,636,159,000 |
I have root access to my local server. Some days ago, my colleague created a user on that server, giving me the username and password, but the user has minimized permissions. For instance, the user can't even create a file under its own home directory.
Is there any concept about "the permissions of a user"? If there is, how do I check/modify it?
|
It may be the case that your colleague, while creating the account, created the home directory "by hand" which resulted in it being owned by root. Try running the following as root:
chown -R username ~username
chgrp -R $(id -gn username) ~username
Where username is the name of the problematic account.
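The $(id -gn username) substitution just expands to the user's primary group name; for the current user it can be checked directly:

```shell
# Primary group *name* of the invoking user (what chgrp needs),
# as opposed to "id -g", which prints the numeric gid
id -gn
```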
Edit
If this turns out to be your problem, to avoid this happening in the future, you want to add the -m switch to the useradd command line used to create the user account. This ensures that the user's selected home directory is created if it doesn't exist. This creates the home directory with the "right" ownership and permissions so you don't face this kind of issue.
Edit 2
The chgrp command added above will change group ownership of the entire home directory of username to username's primary group. Depending on your environment, this may not be exactly what you want and you'll possibly need to change group ownership of specific sub-directories inside the home-directory "manually", thereby setting different group ownership for different sub-directories. This is usually not the case for personal computers, but since you mentioned "a colleague", I'm assuming we're talking about a networked office environment, in which case group ownership is important for shared directories.
| How do I know a specified user's permissions on Linux with root access? |
1,586,636,159,000 |
Say I plug in several USB drives which don't get automatically mounted. How can I find out which device file belongs to which physical device, so I can mount it for example?
I'm running Mac OS X but I rather like an answer that works on all (or at least the most popular) Unix systems. I had this problem with Linux in the past.
|
Using udev:
You can get useful information querying udev (on systems that use it - almost all desktop-type Linuxes for sure). For instance, if you want to know which attached drive is associated with /dev/sdb, you can use:
udevadm info --query=property --name=sdb
It will show you a list of properties of that device, including the serial (ID_SERIAL_SHORT). Having that information, you can look at the output of lsusb -v and find out things like the manufacturer and product name.
A shorter path to do this would be
udevadm info --query=property --name=sdb | grep "\(MODEL_ID\|VENDOR_ID\)"
and see the line with matching $ID_VENDOR_ID:$ID_MODEL_ID in the much shorter output of lsusb.
Another useful option is udevadm monitor. Use it if you'd like know which device node is created at the point of attaching the device. So first run
udevadm monitor --udev --subsystem-match=block
And then connect the device. You'll see the device names of the detected block devices (disks/partitions) printed at the end of each output line.
A practical example shell function:
Here's a function you can put in your .bashrc (or .zshrc) :
listusbdisks ()
{
[[ "x$1" == "x-v" ]] && shift && local VERBOSE=-v
for dsk in ${@-/dev/sd?}
do
/sbin/udevadm info --query=path --name="$dsk" | grep --colour=auto -q usb || continue
echo "===== device $dsk is:"
( eval $(/sbin/udevadm info --query=property --name="$dsk" | grep "\(MODEL\|VENDOR\)_ID")
[ "$ID_VENDOR_ID:$ID_MODEL_ID" == ":" ] && echo "Unknown" || \
lsusb $VERBOSE -d "$ID_VENDOR_ID:$ID_MODEL_ID"
)
grep -q "$dsk" /proc/mounts && echo "----- DEVICE IS MOUNTED ----"
echo
done
}
Use it like this :
listusbdisks - to recognize all /dev/sdx devices;
listusbdisks sdb or listusbdisks /dev/sdb or listusbdisks sdb sdc - to get info about certain devices only;
listusbdisks -v [optional devices as above] - to show verbose outputs of lsusb
[Edit]: Added some functionality like querying many devices, checking mounts and control verbosity of lsusb.
| If I connect a physical device, how can I ever know which device file belongs to it? |
1,586,636,159,000 |
I have a laptop (~5 year old HP compaq nc6400 running Fedora Linux) that I use most of the time as a desktop machine. It is plugged into a docking station with its lid closed and connected through that by DVI cable to a large external LCD display.
For various reasons (login greeter appears on closed display, limited graphics card cannot do 3D to both displays at once) I would like to prevent the laptop's integrated display panel being used by X at all. While docked and on my desk (which is how I use it about 97% of the time) I would like it to simply not use the integrated laptop panel. Booting is not a particular problem, as by default everything is mirrored between the two displays. Also, I don't mind a 'manual' solution, such that I have to undo settings on those rare occasions when I am using the laptop away from my desk.
Once logged in I can configure Gnome so that it only uses the external monitor and the laptop panel is marked "off", however this has no effect on the initial auto-configured state of X and the pre-login greeter display. Surprisingly the laptop does not appear to have a lid sensor, so opening or closing the lid does not appear to trigger any events. I can use xrandr -display :0 --output LVDS1 --off --output DVI1 --auto on a separate VC before login, but this is still after the fact: X has already started, discovered both displays, and decided to use them.
I tried configuring Xorg by creating a file /etc/X11/xorg.conf.d/01-turn-off-laptop-display.conf which contains:
Section "Monitor"
Identifier "laptop panel"
Option "Monitor-LVDS1" "laptop panel"
Option "Enable" "no"
EndSection
Section "Monitor"
Identifier "big display"
Option "Monitor-DVI1" "big display"
EndSection
Section "Screen"
Identifier "main"
Device "Default"
Monitor "big display"
EndSection
However that did not have a useful effect.
The video card is Intel 945GM:
[dan@khorium ~]$ sudo lspci -v -s 0:2
00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03) (prog-if 00 [VGA controller])
Subsystem: Hewlett-Packard Company Device 30ad
Flags: bus master, fast devsel, latency 0, IRQ 16
Memory at f4600000 (32-bit, non-prefetchable) [size=512K]
I/O ports at 4000 [size=8]
Memory at e0000000 (32-bit, prefetchable) [size=256M]
Memory at f4680000 (32-bit, non-prefetchable) [size=256K]
Expansion ROM at <unassigned> [disabled]
Capabilities: [90] MSI: Enable- Count=1/1 Maskable- 64bit-
Capabilities: [d0] Power Management version 2
Kernel driver in use: i915
Kernel modules: i915
00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
Subsystem: Hewlett-Packard Company Device 30ad
Flags: bus master, fast devsel, latency 0
Memory at f4700000 (32-bit, non-prefetchable) [size=512K]
Capabilities: [d0] Power Management version 2
The machine has been running various versions of Fedora Linux (x86_64) since about version 10/11). I'm currently trying Fedora 15 beta (which includes Gnome 3), but the problem has existed in previous OS releases.
|
I was able to achieve the desired aim with the following xorg.conf:
Section "Monitor"
Identifier "laptop panel"
Option "ignore" "true"
EndSection
Section "Monitor"
Identifier "big display"
EndSection
Section "Device"
Identifier "onboard"
Option "Monitor-LVDS1" "laptop panel"
Option "Monitor-DVI1" "big display"
EndSection
the critical element being Option "Ignore" "true". I might be able to simplify this further, but it works. I don't yet know what will happen when/if I use the laptop away from the external display; possibly X will exit with an error. That's not a perfect solution, but I can move the configuration out of the way in that event.
| how do I prevent Xorg using my Linux laptop's display panel? |
1,586,636,159,000 |
I tried to grow my LVM (on luks) root partition with
lvresize -L +5G -r /dev/vg/lv-root
and found that the file system wouldn't grow because it was mounted.
Now I found this
https://ubuntuforums.org/showthread.php?t=1537569
that says I should boot from something else, and do
resize2fs /dev/vg/lv-root <size>
My question is: can I omit the size and just let the filesystem
fill the partition (which was successfully enlarged before)?
I'd try it, but I'm afraid of messing things up.
Using (up to date) Arch and the filesystem is ext4.
|
You can resize it without rebooting, doing:
lvextend -r -l+100%FREE /dev/vg/lv-root
if you only have 5GB free on the volume group vg
or
lvextend -r -L+5G /dev/vg/lv-root
This command adds the free space from the volume group vg to the logical volume lv-root and extends it; with -r it also resizes the underlying filesystem at the same time, so there is no need to boot from something else.
As for lvresize, I think you have an extra space in the command. The command should be:
lvresize -L+5G -r /dev/vg/lv-root
| Growing LVM root |
1,586,636,159,000 |
I'm using uclinux and I want to find out which processes are using the serial port. The problem is that I have no lsof or fuser.
Is there any other way I can get this information?
|
This one-liner should help:
ls -l /proc/[0-9]*/fd/* | grep /dev/ttyS0
Replace ttyS0 with the actual port name.
Example output:
lrwx------ 1 root dialout 64 Sep 12 10:30 /proc/14683/fd/3 -> /dev/ttyUSB0
That means the process with PID 14683 has /dev/ttyUSB0 open as file descriptor 3.
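If you'd rather not parse ls output, the same idea can be sketched with readlink on the /proc/PID/fd entries. This is only an illustrative sketch: the demo opens a throwaway temp file from a background child (instead of a real serial port) and then locates its holder:

```shell
#!/bin/sh
# Demo: find which processes hold a given file open, by readlink-ing
# every /proc/PID/fd entry. The target here is a temp file we open
# ourselves; substitute /dev/ttyS0 (or similar) in real use.
target=$(mktemp)
( exec 3< "$target"; sleep 2 ) &   # background child holds the file on fd 3
sleep 1                            # give it time to open the file
found=""
for fd in /proc/[0-9]*/fd/*; do
    [ "$(readlink "$fd" 2>/dev/null)" = "$target" ] || continue
    pid=${fd#/proc/}; pid=${pid%%/*}
    found="$found $pid"
    echo "pid $pid has $target open (fd ${fd##*/})"
done
rm -f "$target"
```

Unlike the ls | grep one-liner, this copes with whitespace in paths and gives you the PID directly.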
| How to find processes using serial port |
1,586,636,159,000 |
I'm just trying to understand the modinfo output that describes a kernel module. For instance, in the case of the module i915, the output looks like this:
$ modinfo i915
filename: /lib/modules/4.2.0-1-amd64/kernel/drivers/gpu/drm/i915/i915.ko
license: GPL and additional rights
description: Intel Graphics
author: Intel Corporation
[...]
firmware: i915/skl_dmc_ver1.bin
alias: pci:v00008086d00005A84sv*sd*bc03sc*i*
[...]
depends: drm_kms_helper,drm,video,button,i2c-algo-bit
intree: Y
vermagic: 4.2.0-1-amd64 SMP mod_unload modversions
parm: modeset:Use kernel modesetting [KMS] (0=DRM_I915_KMS from .config, 1=on, -1=force vga console preference [default]) (int)
[...]
I'm able to understand some of the fields, but I have no idea what the following mean:
firmware
alias
intree
vermagic
Does anyone know how to interpret them?
|
firmware:
firmware: i915/skl_dmc_ver1.bin
Many devices need two things to run properly: a driver and a firmware. The driver requests the firmware from the filesystem under /lib/firmware. This is a special file needed by the hardware, not an executable for the host CPU. The driver then does what it needs to do to load the firmware into the device, and the firmware programs the hardware inside the device.
alias:
alias: pci:v00008086d00005A84sv*sd*bc03sc*i*
This can be split up into parts, keyed by the leading characters:
v00008086: v stands for the vendor ID, which identifies a hardware manufacturer. That list is maintained by the PCI Special Interest Group. Your number 0x8086 stands for "Intel Corporation".
d00005A84: d stands for the device ID, which is selected by the manufacturer. This ID is usually paired with the vendor ID to make a unique 32-bit identifier for a hardware device. There is no official list, and I wasn't able to find an Intel device ID list to look up that number.
sv*, sd*: The subsystem vendor ID and subsystem device ID are for further identification of a device (* indicates that it will match anything).
bc03: The base class. It defines what kind of device it is: IDE interface, Ethernet controller, USB controller, and so on. bc03 stands for Display controller. You may recognize these from the output of lspci, which maps the number to the device class name.
sc*: A subclass of the base class.
i*: The programming interface.
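To pull those fields back out of an alias string in a script, a small sed sketch works (assuming the pci:vXXXXdXXXX… layout shown above; these patterns are illustrative, not a general modalias parser):

```shell
#!/bin/sh
# Extract vendor ID, device ID and base class from a PCI modalias string.
alias_str='pci:v00008086d00005A84sv*sd*bc03sc*i*'
vendor=$(printf '%s\n' "$alias_str" | sed -n 's/^pci:v\([0-9A-F]*\)d.*/\1/p')
device=$(printf '%s\n' "$alias_str" | sed -n 's/^pci:v[0-9A-F]*d\([0-9A-F]*\)sv.*/\1/p')
class=$(printf '%s\n' "$alias_str" | sed -n 's/.*bc\([0-9A-F]*\)sc.*/\1/p')
echo "vendor=0x$vendor device=0x$device baseclass=0x$class"
```

With the alias above this prints vendor=0x00008086 device=0x00005A84 baseclass=0x03, i.e. Intel, device 0x5A84, class 03 (display controller).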
intree:
intree: Y
All kernel modules start their development out-of-tree. Once a module gets accepted for inclusion, it becomes an in-tree module. A module without that flag (i.e. with it set to N) could taint the kernel.
vermagic:
vermagic: 4.2.0-1-amd64 SMP mod_unload modversions
When a module is loaded, the strings in the vermagic value are checked against the running kernel; if they don't match, you will get an error and the kernel refuses to load the module. You can overcome that by using the --force flag of modprobe. Naturally, these checks exist for your protection, so using this option is dangerous.
| How to understand the modinfo output? |
1,425,396,741,000 |
I often come across the situation when developing, where I am running a binary file, say a.out in the background as it does some lengthy job. While it's doing that, I make changes to the C code which produced a.out and compile a.out again. So far, I haven't had any problems with this. The process which is running a.out continues as normal, never crashes, and always runs the old code from which it originally started.
However, say a.out was a huge file, maybe comparable to the size of the RAM. What would happen in this case? And say it linked to a shared object file, libblas.so, what if I modified libblas.so during runtime? What would happen?
My main question is - does the OS guarantee that when I run a.out, then the original code will always run normally, as per the original binary, regardless of the size of the binary or .so files it links to, even when those .o and .so files are modfied during runtime?
I know there are these questions that address similar issues:
https://stackoverflow.com/questions/8506865/when-a-binary-file-runs-does-it-copy-its-entire-binary-data-into-memory-at-once
What happens if you edit a script during execution?
How is it possible to do a live update while a program is running?
These have helped me understand a bit more about this, but I don't think they ask exactly what I want, which is a general rule for the consequences of modifying a binary during execution.
|
While the Stack Overflow question seemed to be enough at first, I understand, from your comments, why you may still have a doubt about this. To me, this is exactly the kind of critical situation involved when the two UNIX subsystems (processes and files) communicate.
As you may know, UNIX systems are usually divided into two subsystems: the file subsystem, and the process subsystem. Now, unless it is instructed otherwise through a system call, the kernel should not have these two subsystems interact with one another. There is however one exception: the loading of an executable file into a process' text regions. Of course, one may argue that this operation is also triggered by a system call (execve), but this is usually known to be the one case where the process subsystem makes an implicit request to the file subsystem.
Because the process subsystem naturally has no way of handling files (otherwise there would be no point in dividing the whole thing in two), it has to use whatever the file subsystem provides to access files. This also means that the process subsystem is submitted to whatever measure the file subsystem takes regarding file edition/deletion. On this point, I would recommend reading Gilles' answer to this U&L question. The rest of my answer is based on this more general one from Gilles.
The first thing that should be noted is that internally, files are only accessible through inodes. If the kernel is given a path, its first step will be to translate it into an inode to be used for all other operations. When a process loads an executable into memory, it does so through its inode, which has been provided by the file subsystem after translation of a path. Inodes may be associated with several paths (links), and programs may only delete links. In order to delete a file and its inode, userland must remove all existing links to that inode, and ensure that it is completely unused. When these conditions are met, the kernel will automatically delete the file from disk.
If you have a look at the replacing executables part of Gilles' answer, you'll see that depending on how you edit/delete the file, the kernel will react/adapt differently, always through a mechanism implemented within the file subsystem.
If you try strategy one (open/truncate to zero/write or open/write/truncate to new size), you'll see that the kernel won't bother handling your request. You'll get an error 26: Text file busy (ETXTBSY). No consequences whatsoever.
If you try strategy two, the first step is to delete your executable. However, since it is being used by a process, the file subsystem will kick in and prevent the file (and its inode) from being truly deleted from disk. From this point, the only way to access the old file's content is to do it through its inode, which is what the process subsystem does whenever it needs to load new data into text sections (internally, there is no point in using paths, except when translating them into inodes). Even though you've unlinked the file (removed all its paths), the process can still use it as if you'd done nothing. Creating a new file with the old path doesn't change anything: the new file will be given a completely new inode, which the running process has no knowledge of.
Strategies 2 and 3 are safe for executables as well: although running executables (and dynamically loaded libraries) aren't open files in the sense of having a file descriptor, they behave in a very similar way. As long as some program is running the code, the file remains on disk even without a directory entry.
Strategy three is quite similar since the mv operation is an atomic one. This will probably require the use of the rename system call, and since processes can't be interrupted while in kernel mode, nothing can interfere with this operation until it completes (successfully or not). Again, there is no alteration of the old file's inode: a new one is created, and already-running processes will have no knowledge of it, even if it's been associated with one of the old inode's links.
With strategy 3, the step of moving the new file to the existing name removes the directory entry leading to the old content and creates a directory entry leading to the new content. This is done in one atomic operation, so this strategy has a major advantage: if a process opens the file at any time, it will either see the old content or the new content — there's no risk of getting mixed content or of the file not existing.
Recompiling a file: when using gcc (and the behaviour is probably similar for many other compilers), you are using strategy 2. You can see that by running strace on your compiler's processes:
stat("a.out", {st_mode=S_IFREG|0750, st_size=8511, ...}) = 0
unlink("a.out") = 0
open("a.out", O_RDWR|O_CREAT|O_TRUNC, 0666) = 3
chmod("a.out", 0750) = 0
The compiler detects that the file already exists through the stat and lstat system calls.
The file is unlinked. Here, while it is no longer accessible through the name a.out, its inode and contents remain on disk, for as long as they are being used by already-running processes.
A new file is created and made executable under the name a.out. This is a brand new inode, and brand new contents, which already-running processes don't care about.
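You can watch strategies 1 and 2 in action from the shell. This sketch uses a copy of sleep as a stand-in executable; the in-place overwrite is refused with ETXTBSY, while the unlink-and-recreate route (what gcc does) succeeds:

```shell
#!/bin/sh
# Demo: writing to a busy executable vs. replacing it via unlink.
dir=$(mktemp -d); cd "$dir" || exit 1
cp "$(command -v sleep)" prog
./prog 5 & runner=$!
sleep 1                                 # let the child exec prog
if cp "$(command -v sleep)" prog 2>/dev/null
then overwrite=ok; else overwrite=refused; fi
rm prog                                 # unlink: the inode stays while ./prog runs
if cp "$(command -v sleep)" prog 2>/dev/null
then replace=ok; else replace=refused; fi
echo "in-place overwrite: $overwrite, unlink-and-recreate: $replace"
kill "$runner" 2>/dev/null
```

The first cp fails because it opens the busy file for writing with truncation; the second succeeds because the old inode was only unlinked, not overwritten.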
Now, when it comes to shared libraries, the same behaviour will apply. As long as a library object is used by a process, it will not be deleted from disk, no matter how you change its links. Whenever something has to be loaded into memory, the kernel will do it through the file's inode, and will therefore ignore the changes you made to its links (such as associating them with new files).
| Modifying binary during execution |
1,425,396,741,000 |
I use Nagios and check_mk to monitor some servers. I get several warnings about the mount options of a couple of servers. The message is: 'OK - missing: seclabel'.
I can't find documentation about seclabel. The only thing I can find about it is that it probably has to do with selinux. Maybe I could just add the seclabel to the mount options but I'd like to know what it does and why it's there first.
So my question is, what is the seclabel mount option for?
|
seclabel is an indicator, added by the SELinux code, that the filesystem is using xattrs for labels and that it supports label changes by setting the xattrs.
You shouldn't add seclabel on your own; it should normally be added automatically by SELinux if it's enabled.
I would try to find a way to ignore that Nagios message if you don't need SELinux.
| What does the 'seclabel' mount option do? |
1,425,396,741,000 |
Without unplugging my keyboard I'd like to disable it from the terminal; I was hoping that this could be done using rmmod but based on my currently loaded modules it doesn't look like it is possible.
Does anyone have any ideas?
|
There are pretty good directions on doing it here, titled: Disable / enable keyboard and mouse in Linux.
Example
You can list the devices with this command.
$ xinput --list
"Virtual core pointer" id=0 [XPointer]
"Virtual core keyboard" id=1 [XKeyboard]
"Keyboard2" id=2 [XExtensionKeyboard]
"Mouse2" id=3 [XExtensionKeyboard]
And disable the keyboard with this:
$ xinput set-int-prop 2 "Device Enabled" 8 0
And enable it with this one:
$ xinput set-int-prop 2 "Device Enabled" 8 1
This only works for disabling the keyboard through X, so it won't work on a system that isn't running X.
List of properties
You can use this command to get a list of all the properties for a given device:
$ xinput --list-props 2
Device 'Virtual core keyboard':
Device Enabled (124): 1
Coordinate Transformation Matrix (126): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
| How to disable keyboard? |
1,425,396,741,000 |
I believe that if there is any output from a cron job it is mailed to the user the job belongs to. I think you can also add something like MAILTO=user@example.com at the top of the cron file to change where the output is sent.
Can I set an option so that cron jobs system-wide will be emailed to root instead of to the user who runs them? (i.e. so that I don't have to set this in each user's cron file)
|
Check the /etc/crontab file and set MAILTO=root in there. You might also need it in /etc/rc.
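For illustration, the top of a system crontab might then look like this (a sketch; the run-parts line is just a typical distribution default):

```
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root

# m h dom mon dow user command
17 * * * * root cd / && run-parts /etc/cron.hourly
```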
crond seems to accept a MAILTO variable. I'm not completely sure, but it's worth trying to change the environment variable for crond before it is started, e.g. in /etc/sysconfig/crond or in the /etc/rc.d/init.d/crond script, which sources the former file.
Example:
[centos@centos scripts]$ strings /usr/sbin/crond | grep -i mail
ValidateMailRcpts
MailCmd
cron_default_mail_charset
usage: %s [-n] [-p] [-m <mail command>] [-x [
CRON_VALIDATE_MAILRCPTS
mailed %d byte%s of output but got status 0x%04x
[%ld] no more grandchildren--mail written?
MAILTO
/usr/sbin/sendmail
mailcmd too long
[%ld] closing pipe to mail
MAIL
| Can I change the default mail recipient on cron jobs? |
1,425,396,741,000 |
I've always heard that the target of a shebang line (e.g. #!/bin/bash) must be a binary executable, not a script. And this is still true for many OSes (e.g. MacOS). But I was surprised to see that this is not true on Linux, where up to 4 levels of scripts can be used, with the fourth script referencing a binary executable in its shebang line. However, if 5 levels of scripts are used, then the program will fail with the error Too many levels of symbolic links.
See the LWN article "How programs get run" and the following code which was not shown in that article.
$ cat wrapper2
#!./wrapper
When did this change occur (assuming that at some point it was not allowed)?
|
According to Sven Mascheck (who's generally reliable and well-informed):
interpreter itself as #! script
or: can you nest #!?
(…)
Linux since 2.6.27.9 and Minix accept this.
(…)
see the kernel patch
(patch to be applied to 2.6.27.9) and especially see binfmt_script.c which contains the important parts.
Linux allows at most BINPRM_MAX_RECURSION, that is, 4 levels of nesting.
Note that this recursion concerns both indirect execution mechanisms that Linux implements: #! scripts, and executable formats registered through binfmt_misc. So for example you can have a script with a #! line that points to an interpreter written in bytecode which gets dispatched to a foreign-architecture binary which gets dispatched via Qemu, and that counts for 3 levels of nesting.
Sven Mascheck also notes that no BSD supports nested shebang, but that some shells will take over if the kernel returns an error.
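A quick sketch to see the nesting in action on Linux (file names are made up; this needs a kernel >= 2.6.27.9 and an exec-capable temp directory):

```shell
#!/bin/sh
# wrapper is a shell script; wrapper2 names wrapper as its interpreter.
dir=$(mktemp -d); cd "$dir" || exit 1
printf '#!/bin/sh\necho "wrapper got: $1"\n' > wrapper
printf '#!%s/wrapper\n' "$dir" > wrapper2
chmod +x wrapper wrapper2
out=$(./wrapper2)   # kernel expands this to: /bin/sh $dir/wrapper ./wrapper2
echo "$out"
```

The kernel rewrites argv at each level: wrapper receives the original path ./wrapper2 as its first argument, just as an interpreter binary would.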
| Shebang can reference a script in Linux |
1,425,396,741,000 |
I tried adding cap_sys_admin permissions to user myroot.
For this, I added these lines to /etc/security/capabilities:
cap_sys_admin myroot
none *
and this line to /etc/pam.d/su:
auth required pam_cap.so
But user myroot doesn't have these permissions.
What can I do to add these permissions to my user?
|
I believe the file is called /etc/security/capability.conf, not /etc/security/capabilities. I was able to get this working like so:
$ cat /etc/security/capability.conf
cap_sys_admin user1
And then adding pam_cap.so to PAM. NOTE: It's imperative that pam_cap.so come before the pam_rootok.so line.
$ cat /etc/pam.d/su
#%PAM-1.0
auth optional pam_cap.so
auth sufficient pam_rootok.so
...
...
Example
Here with the above in place if I run the following su command:
$ su - user1
I can verify this user's capabilities:
$ capsh --print
Current: = cap_sys_admin+i
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,35,36
Securebits: 00/0x0/1'b0
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
uid=1001(user1)
gid=1001(user1)
groups=1001(user1)
The key line in that output:
Current: = cap_sys_admin+i
Packages
This was done on a CentOS 7.x box. I had these packages installed pertaining to capabilities:
$ rpm -qa | grep libcap
libcap-ng-utils-0.7.5-4.el7.x86_64
libcap-2.22-9.el7.x86_64
libcap-ng-0.7.5-4.el7.x86_64
They provide the following useful tools when dealing with capabilities:
$ rpm -ql libcap-ng-utils | grep /bin/
/usr/bin/captest
/usr/bin/filecap
/usr/bin/netcap
/usr/bin/pscap
$ rpm -ql libcap | grep /sbin/
/usr/sbin/capsh
/usr/sbin/getcap
/usr/sbin/getpcaps
/usr/sbin/setcap
NOTE: See the respective man pages for these tools if you need more info on their usage.
References
Is it possible to configure Linux capabilities per user? [closed]
Linux Kernel Capabilities Explained
How can I get capabilities to work with su?
Restricting and granting capabilities
| How do you add `cap_sys_admin` permissions to user in CentOS 7? |
1,425,396,741,000 |
Will a VNC server work without X Server installed? I know vnc works with X Server, but what about without it?
|
No, you'll typically need X installed on the server you're remoting into with VNC, since it is merely displaying an X desktop back from that server.
In computing, Virtual Network Computing (VNC) is a graphical desktop sharing system that uses the Remote Frame Buffer protocol (RFB) to remotely control another computer. It transmits the keyboard and mouse events from one computer to another, relaying the graphical screen updates back in the other direction, over a network.
This bit might be what confuses people:
Note that the machine the VNC server is running on does not need to have a physical display. In the normal method of operation a viewer connects to a port on the server (default port 5900).
When they mention "Display" they're talking about a physical monitor. The remote server still requires that X be installed and configured so that GUI desktops can be run.
What about Xvnc, X11vnc, and vncserver?
Xvnc
Xvnc is an X11 server that you can run standalone, but it still requires a desktop environment or window manager to be useful; otherwise, when you launch it, you'll be presented with just a black window. So Xvnc doesn't technically require X to be installed, since it contains its own X server.
So Xvnc is really two servers in one. To the applications it is an X server, and to the remote VNC users it is a VNC server. By convention we have arranged that the VNC server display number will be the same as the X server display number, which means you can use eg. snoopy:2 to refer to display 2 on machine 'snoopy' in both the X world and the VNC world.
Normally you will start Xvnc using the vncserver script, which is designed to simplify the process, and which is written in Perl. You will probably want to edit this to suit your preferences and local conditions. We recommend using vncserver rather than running Xvnc directly, but Xvnc has essentially the same options as a standard X server, with a few extensions. Running Xvnc -h will display a list.
$ export DISPLAY=localhost:1.0
$ /usr/bin/Xvnc :1 -ac -auth "/root/.Xauthority" \
-geometry "1200x700" -depth 8 -rfbwait 120000 \
-rfbauth /root/.vnc/passwd 2> /root/.vnc/ServerDaemon.log &
$ /bin/sleep 10
$ /usr/bin/fvwm 2> /root/.vnc/fvwm.log &
x11vnc
Whereas Xvnc contains its own X server, x11vnc does not. It's a VNC server that integrates with an already running X server, Xvnc, or Xvfb. It does have the unique feature of being able to connect to anything that has a framebuffer.
excerpt
x11vnc keeps a copy of the X server's frame buffer in RAM. The X11 programming interface XShmGetImage is used to retrieve the frame buffer pixel data. x11vnc compares the X server's frame buffer against its copy to see which pixel regions have changed (and hence need to be sent to the VNC viewers.)
excerpt
It allows remote access from a remote client to a computer hosting an X Window session and the x11vnc software, continuously polling the X server's frame buffer for changes. This allows the user to control their X11 desktop (KDE, GNOME, XFCE, etc.) from a remote computer either on the user's own network, or from over the Internet as if the user were sitting in front of it. x11vnc can also poll non-X11 frame buffer devices, such as webcams or TV tuner cards, iPAQ, Neuros OSD, the Linux console, and the Mac OS X graphics display.
x11vnc does not create an extra display (or X desktop) for remote control. Instead, it uses the existing X11 display shown on the monitor of a Unix-like computer in real time, unlike other Linux alternatives such as TightVNC Server. However, it is possible to use Xvnc or Xvfb to create a 'virtual' extra display, and have x11vnc connect to it, enabling X-11 access to headless servers.
vncserver
vncserver is just a frontend Perl script that helps ease the complexity of setting up VNC + X on remote servers that you'll be using VNC to connect to.
vncserver is used to start a VNC (Virtual Network Computing) desktop. vncserver is a Perl script which simplifies the process of starting an Xvnc server. It runs Xvnc with appropriate options and starts a window manager on the VNC desktop.
References
Virtual Network Computing - Wikipedia
| VNC Server without X Window System |
1,425,396,741,000 |
Occasionally some processes on my GNU/Linux desktop (such as gv and gnash) use up the physical memory and cause thrashing. Since these processes aren't important, I want them to be automatically killed if they use too much memory.
I think the /etc/security/limits.conf file and the -v option could be used for this. The question is whether it limits the amount of available memory per process of a particular user, or the sum for all the processes of a user. I would also like to ask how to make changes to that file take effect without rebooting.
|
There's also the ulimit mechanism. There's a system call (in Linux, it's a C library function) ulimit(3) and a Bash builtin ulimit. Type ulimit -a to see all the things you can limit. To see the current virtual memory limit, say ulimit -v. You can set it by saying ulimit -v INTEGER-KILOBYTES.
Running ulimit changes things for your current shell, and you can only select a value smaller than the current one. To run a command with limited virtual memory, you can just use a Bash sub-shell:
( ulimit -v 131072; some-app )
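A quick way to see the limit bite (a sketch: it uses perl for the allocation, and the exact thresholds are system-dependent):

```shell
#!/bin/sh
# The same 64 MiB allocation succeeds normally but fails inside a
# subshell whose virtual memory is capped at 16 MiB.
alloc='$x = "a" x (64 * 1024 * 1024)'
if perl -e "$alloc" 2>/dev/null; then uncapped=ok; else uncapped=failed; fi
if ( ulimit -v 16384 2>/dev/null && perl -e "$alloc" 2>/dev/null )
then capped=ok; else capped=failed; fi
echo "uncapped: $uncapped, capped: $capped"
```

Because the limit is set inside a subshell, the parent shell's own limits are untouched.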
| How to limit available virtual memory per process [duplicate] |
1,425,396,741,000 |
A few years ago, a coworker came up with an elegant solution for a watchdog program. The program ran on Windows and used Windows Event objects to monitor the process handles (PIDs) of several applications. If any one of the processes terminated unexpectedly, its process handle would no longer exist and his watchdog would immediately be signaled. The watchdog would then take an appropriate action to "heal" the system.
My question is, how would you implement such a watchdog on Linux? Is there a way for a single program to monitor the PIDs of many others?
|
The traditional, portable, commonly-used way is that the parent process watches over its children.
The basic primitives are the wait and waitpid system calls. When a child process dies, the parent process receives a SIGCHLD signal, telling it it should call wait to know which child died and its exit status. The parent process can instead choose to ignore SIGCHLD and call waitpid(-1, &status, WNOHANG) at its convenience.
To monitor many processes, you would either spawn them all from the same parent, or invoke them all through a simple monitoring process that just calls the desired program, waits for it to terminate and reports on the termination (in shell syntax: myprogram; echo myprogram $? >>/var/run/monitor-collector-pipe). If you're coming from the Windows world, note that having small programs doing one specialized task is a common design in the Unix world, the OS is designed to make processes cheap.
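That wrapper idea can be sketched in a few lines of shell (job names are made up): each child is wrapped so that its exit status is reported to a collector file the watchdog can act on:

```shell
#!/bin/sh
# Each wrapped job appends "name status" to the report when it dies.
report=$(mktemp)
( sh -c 'exit 3'; echo "job-a $?" >> "$report" ) &   # a job that fails
( sh -c 'exit 0'; echo "job-b $?" >> "$report" ) &   # a job that succeeds
wait                          # the parent is told when its children die
sort "$report"                # a real watchdog would parse this instead
```

In a real watchdog you would replace the collector file with a FIFO and restart or escalate based on the reported status.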
There are many process monitoring (also called supervisor) programs that can report when a process dies and optionally restart it and far more besides: Monit, Supervise, Upstart, …
| Linux: Writing a watchdog to monitor multiple processes |
1,425,396,741,000 |
Let's assume you have a pipeline like the following:
$ a | b
If b stops processing stdin, after a while the pipe fills up, and writes, from a to its stdout, will block (until either b starts processing again or it dies).
If I wanted to avoid this, I could be tempted to use a bigger pipe (or, more simply, buffer(1)) like so:
$ a | buffer | b
This would simply buy me more time, but in the end a would eventually stop.
What I would love to have (for a very specific scenario that I'm addressing) is to have a "leaky" pipe that, when full, would drop some data (ideally, line-by-line) from the buffer to let a continue processing (as you can probably imagine, the data that flows in the pipe is expendable, i.e. having the data processed by b is less important than having a able to run without blocking).
To sum it up I would love to have something like a bounded, leaky buffer:
$ a | leakybuffer | b
I could probably implement it quite easily in any language, I was just wondering if there's something "ready to use" (or something like a bash one-liner) that I'm missing.
Note: in the examples I'm using regular pipes, but the question equally applies to named pipes
While I awarded the answer below, I also decided to implement the leakybuffer command because the simple solution below had some limitations: https://github.com/CAFxX/leakybuffer
|
The easiest way would be to pipe the stream through some program that sets its output nonblocking.
Here is simple perl oneliner (which you can save as leakybuffer) which does so:
so your a | b becomes:
a | perl -MFcntl -e \
'fcntl STDOUT,F_SETFL,O_NONBLOCK; while (<STDIN>) { print }' | b
What it does is read the input and write it to the output (same as cat(1)), but the output is nonblocking - meaning that if a write fails, it returns an error and loses data, while the process continues with the next line of input, as we conveniently ignore the error. The process is kind of line-buffered, as you wanted, but see the caveat below.
you can test with for example:
seq 1 500000 | perl -w -MFcntl -e \
'fcntl STDOUT,F_SETFL,O_NONBLOCK; while (<STDIN>) { print }' | \
while read a; do echo $a; done > output
you will get output file with lost lines (exact output depends on the speed of your shell etc.) like this:
12768
12769
12770
12771
12772
12773
127775610
75611
75612
75613
You can see where the shell lost lines after 12773, but also an anomaly: perl didn't have enough buffer space for 12774\n but did for 1277, so it wrote just that -- and so the next number, 75610, does not start at the beginning of a line, making it a little ugly.
That could be improved upon by having perl detect when a write did not succeed completely, and then later try to flush the remainder of the line while ignoring new lines coming in, but that would complicate the perl script much more, so it is left as an exercise for the interested reader :)
Update (for binary files):
If you are not processing newline-terminated lines (like log files or similar), you need to change the command slightly, or perl will consume large amounts of memory (depending on how often newline characters appear in your input):
perl -w -MFcntl -e 'fcntl STDOUT,F_SETFL,O_NONBLOCK; while (read STDIN, $_, 4096) { print }'
it will work correctly for binary files too (without consuming extra memory).
Update2 - nicer text file output:
Avoiding output buffers (syswrite instead of print):
seq 1 500000 | perl -w -MFcntl -e \
'fcntl STDOUT,F_SETFL,O_NONBLOCK; while (<STDIN>) { syswrite STDOUT,$_ }' | \
while read a; do echo $a; done > output
seems to fix problems with "merged lines" for me:
12766
12767
12768
16384
16385
16386
(Note: one can verify on which lines output was cut with: perl -ne '$c++; next if $c==$_; print "$c $_"; $c=$_' output oneliner)
| “Leaky” pipes in linux |
1,425,396,741,000 |
I wrote main.c in Linux:
int main()
{
while (1){}
}
When I compile and start it, I can pmap it:
# pmap 28578
28578: ./a.out
0000000000400000 4K r-x-- /root/a.out
0000000000600000 4K r---- /root/a.out
0000000000601000 4K rw--- /root/a.out
00007f87c16c2000 1524K r-x-- /lib/libc-2.11.1.so
00007f87c183f000 2044K ----- /lib/libc-2.11.1.so
00007f87c1a3e000 16K r---- /lib/libc-2.11.1.so
00007f87c1a42000 4K rw--- /lib/libc-2.11.1.so
00007f87c1a43000 20K rw--- [ anon ]
00007f87c1a48000 128K r-x-- /lib/ld-2.11.1.so
00007f87c1c55000 12K rw--- [ anon ]
00007f87c1c65000 8K rw--- [ anon ]
00007f87c1c67000 4K r---- /lib/ld-2.11.1.so
00007f87c1c68000 4K rw--- /lib/ld-2.11.1.so
00007f87c1c69000 4K rw--- [ anon ]
00007fff19b82000 84K rw--- [ stack ]
00007fff19bfe000 8K r-x-- [ anon ]
ffffffffff600000 4K r-x-- [ anon ]
total 3876K
The total (3876K) matches the VIRT column in the output of top. Now where is the text segment? At 400000, 600000 and 601000, right? Where can I read an explanation of what is where? man pmap did not help.
|
The text segment is the mapping at 0x400000 - it's marked 'r-x' for readable and executable. The mapping at 0x600000 is read-only, so that's almost certainly the ".rodata" section of the executable file. GCC puts C string literals into a read-only section. The mapping at 0x601000 is 'rw-', so that's probably the famed heap. You could have your executable malloc() 1024 bytes and print out the address to see for sure.
You might get a little bit more information by finding the PID of your process, and doing: cat /proc/$PID/maps - on my Arch laptop, that gives some extra info. It's running a 3.12 kernel, so it also has /proc/$PID/numa_maps, and catting that might give a small insight, too.
Other things to run on the executable file: nm and objdump -x. The former can give you an idea of where various things lie in the memory map, so you can see what's in the 0x400000 section vs the other sections. objdump -x shows you ELF file headers among lots of other things, so you can see all the sections, complete with section names and whether they're mapped in at run time or not.
As far as finding a written explanation of "what is where", you'll have to do things like google for "ELF FILE memory layout". Be aware that the ELF file format can support more exotic memory layouts than commonly get used. GCC and Gnu ld and glibc all make simplifying assumptions about how an executable file gets laid out and then mapped into memory at run time. Lots of web pages exist that purport to document this, but only apply to older versions of Linux, older versions of GCC or glibc, or only apply to x86 executables. If you don't have it, get the readelf command. If you can write C programs, create your own version of objdump -x or readelf to become familiar with how executable files work, and what's in them.
| The meaning of output of pmap |
1,425,396,741,000 |
I want to check if my Linux kernel is preemptive or non-preemptive.
How can I check this using a command, something such as uname -a?
|
Whether a kernel is preemptive or not depends on what you want to preempt, as in the Linux kernel, there are various things that can have preemption enabled/disabled separately.
If your kernel has CONFIG_IKCONFIG and CONFIG_IKCONFIG_PROC enabled, you can find out your preemption configuration through /proc/config.gz (if you don't have this, some distributions ship the kernel config in /boot instead):
$ gzip -cd /proc/config.gz | grep PREEMPT
CONFIG_TREE_PREEMPT_RCU=y
CONFIG_PREEMPT_RCU=y
CONFIG_PREEMPT_NOTIFIERS=y
# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y
CONFIG_PREEMPT_COUNT=y
# CONFIG_DEBUG_PREEMPT is not set
# CONFIG_PREEMPT_TRACER is not set
If you have CONFIG_IKCONFIG, but not CONFIG_IKCONFIG_PROC, you can still get it out of the kernel image with extract-ikconfig.
| How can I check my kernel preemption configuration? |
1,425,396,741,000 |
I have access to a Ubuntu Linux node at my institution. The nodes are shared among the group, but typically I am the only person who uses this particular node.
I am running a calculation in parallel on all 8 CPUs on this node. My calculation runs, but when I view the active processes using top, I see an additional process that says user man and command mandb. This mandb command seems to be running every time I look at top, and it appears to take up a fairly appreciable amount of CPU power (6 %CPU) and memory (2.5 %MEM), according to top.
When I look around on the internet, it seems that:
mandb is used to initialise or manually update index database caches that are usually maintained by man.
Why, then, does mandb run all the time on this node? (I don't have this problem on other nodes within my institution's cluster, according to top on other nodes.) Why would mandb need to run all the time, since I am not currently looking at manuals?
Is this process likely to be a phantom process that I can safely terminate using kill?
|
It isn't normal for mandb to run continuously. It is typical to run mandb once a day in a cron job, to perform maintenance tasks such as updating an index of installed man pages and building or trimming a cache of formatted man pages. The daily job should run in a few seconds, perhaps a few minutes if you have a lot of man pages and a slow disk. If the job runs for longer than that, there's something wrong.
6% CPU isn't high, but the process may be doing disk I/O. 2.5% of the memory on a cluster node sounds high. It's likely that the job is misconfigured and looking where it shouldn't be, or that there's a bug in the mandb program, or that there's a hardware failure causing mandb to become stuck.
You can watch the cron scripts in /etc/crontab or /etc/cron.*/* (the exact location is distribution-dependent; /etc/cron.daily/man-db and /etc/cron.weekly/man-db are likely locations). You can see what invoked mandb by looking at the process more closely: run pstree | less and search for the mandb process. Running ps ww 12345 (where 12345 is the PID of the offending process) will show the complete command line.
This is something that you might be able to diagnose on your own, but not fix without root permissions. If you do have root permissions, you can safely kill the mandb process (use the command sudo pkill mandb or su -c 'pkill mandb', depending on how you become root). In any case, contact your system administrator and explain the symptoms. Give all the information you can (such as what program invoked mandb and with what arguments).
| On Ubuntu Linux, is it normal for mandb to run continuously (apparently in the background)? |
1,425,396,741,000 |
By default when I login to my Arch linux box in a tty, there is a timeout after I type my username but before I type my password.
So it goes like this
Login: mylogin <enter>
Password:
(+ 60 seconds)
Login:
As you can see, if I don't type the password it recycles the prompt -- I want it to wait indefinitely for my password instead of recycling the login prompt.
Is this possible?
It seems like the --timeout option to agetty would be what I want. However, I tried adding this flag in the getty files in /usr/lib/systemd/system/ (the option is not used by default), and rebooting -- it seemed to have no effect.
|
agetty calls login after reading in the user name, so any timeout when reading the password is done by login.
To change this, edit /etc/login.defs and change the LOGIN_TIMEOUT value.
#
# Max time in seconds for login
#
LOGIN_TIMEOUT 60
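The edit can be scripted; the sketch below runs against a scratch copy (apply the same sed to /etc/login.defs as root). Setting the value to 0 should disable the alarm entirely, i.e. wait indefinitely - an assumption worth verifying against login.defs(5) on your system:

```shell
# Rehearse the edit on a throwaway copy before touching /etc/login.defs
printf 'LOGIN_TIMEOUT\t\t60\n' > /tmp/login.defs.demo
sed -i 's/^LOGIN_TIMEOUT.*/LOGIN_TIMEOUT\t\t0/' /tmp/login.defs.demo
cat /tmp/login.defs.demo    # LOGIN_TIMEOUT now reads 0
```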
| change tty login timeout - ArchLinux |
1,425,396,741,000 |
I've been searching to find a way to send a command to a detached screen session. So far, so good. This is what I've come up with:
$ screen -S test -p 0 -X stuff 'command\n'
This command works as it should. But, I would like the output from it too, echoed straight in front of my eyes (no need for a .log file or something, I just want the output).
Using the screen -L command is not an option.
|
Use a first in first out pipe:
mkfifo /tmp/test
Use a redirect operator. Redirect command's output to /tmp/test for example like this:
screen -S test -p 0 -X stuff 'command >/tmp/test\n'
Then in another shell
tail -f /tmp/test
Note you may also want to redirect error messages using the 2>&1 operator.
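The FIFO plumbing itself can be tried standalone, without screen at all (paths and the test string are arbitrary):

```shell
# Minimal FIFO round-trip: a background reader and a foreground writer
rm -f /tmp/demo.fifo /tmp/demo.out
mkfifo /tmp/demo.fifo
cat /tmp/demo.fifo > /tmp/demo.out &    # reader blocks until a writer opens the FIFO
echo "load is 0.42" > /tmp/demo.fifo    # writer; closing its end sends EOF to cat
wait
cat /tmp/demo.out
```

Note that both ends block until the other side opens the FIFO, which is why the reader (here, tail -f in the recipe above) should be started first in practice.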
Example
As requested in the comments, let's assume we have a php script accepting user input and printing the server load on the input of "status":
# cat test.php
<?php
$fp=fopen("php://stdin","r");
while($line=stream_get_line($fp,65535,"\n"))
{
if ($line=="status") {echo "load is stub";}
}
fclose($fp);
?>
You create two fifos:
# mkfifo /tmp/fifoin /tmp/fifoout
You call a screen:
screen
In another console, let's call it console 2 you find out the name of your screen:
# screen -ls
There is a screen on:
8023.pts-3.tweedleburg (Attached)
1 Socket in /var/run/screens/S-root.
In console 2 you send the command to the screen:
# screen -S 8023.pts-3.tweedleburg -p 0 -X stuff 'php test.php </tmp/fifoin >/tmp/fifoout\n'
you see the command appearing in the screen. Now in console 2 you can send commands to your php process:
echo "status" >/tmp/fifoin
and read from it:
# cat /tmp/fifoout
load is stub
| Send command to detached screen and get the output |
1,425,396,741,000 |
Local: Linux Mint 15 - Olivia
/proc/version: Linux version 3.8.0-19-generic (buildd@allspice) (gcc version 4.7.3 (Ubuntu/Linaro 4.7.3-1ubuntu1) )
ssh -V: OpenSSH_6.1p1 Debian-4, OpenSSL 1.0.1c 10 May 2012
sshfs -V: SSHFS version 2.4
FUSE library version: 2.9.0
fusermount version: 2.9.0
using FUSE kernel interface version 7.18
Remote: Ubuntu 12.04.3 LTS
/proc/version: Linux version 3.10.9-xxxx-std-ipv6-64 ([email protected]) (gcc version 4.7.2 (Debian 4.7.2-5) )
ssh -V: OpenSSH_5.9p1 Debian-5ubuntu1.1, OpenSSL 1.0.1 14 Mar 2012
I'm trying to set up a password-less mount of a remote server using sshfs and fuse. The remote server is running on a non standard port and I will be using a ssh key pair to authenticate.
When successful I will be repeating this for three more remote servers each with different keys so I do need to be able to specify which key maps to which remote server.
I based my modifications off this tutorial
The public key is in remote:authorized_keys
I have added my local user to the fuse group.
I have edited my local ~/.ssh/config to have (per server):
Host [server_ip]
Port = [port]
IdentityFile = "~/.ssh/[private_key]"
User = "[user]"
Whenever I try to mount the remote server locally I get prompted for the remote user's password (not my private key's password). The remote user has a long randomly generated password that I'd like to not have to save or remember and so keys is how I want to do this.
I can connect through ssh (combined with the ~/.ssh/config file) using the command ssh [ip] so I know that the config file can be read correctly as I am asked for my key's passphrase not the remote user's.
To even attempt to connect to the remote server I have to manually specify the full connection details in the command: sshfs [user]@[ip]:[remote_path] [local_path] -p [port]
What I've tried so far:
ssh-add /path/to/key (successful addition)
Specifying PreferredAuthentication = publickey in ~/.ssh/config
sshfs -o IdentityFile=/path/to/key user@ip:/ /my/mnt/dir
sshfs user@ip:/ /my/mnt/dir -o IdentityFile=/path/to/key
temp rename of key to default of id_rsa
sshfs -F ~/.ssh/config
Is there a remote or local configuration file that I'm overlooking? Some switch or option that I need to include in the call to sshfs (tried -F) to force it to read and use my ssh config?
Output of ssh -v -p [port] [user]@[remote_ip]
OpenSSH_6.1p1 Debian-4, OpenSSL 1.0.1c 10 May 2012
debug1: Reading configuration data /home/[me]/.ssh/config
debug1: /home/[me]/.ssh/config line 2: Applying options for [remote_ip]
debug1: /home/[me]/.ssh/config line 24: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to [remote_ip] [[remote_ip]] port [port].
debug1: Connection established.
debug1: identity file /home/[me]/.ssh/[private_key] type 2
debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
debug1: identity file /home/[me]/.ssh/[private_key]-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1.1
debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1.1 pat OpenSSH_5*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.1p1 Debian-4
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 [email protected]
debug1: kex: client->server aes128-ctr hmac-md5 [email protected]
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: [key]
debug1: checking without port identifier
debug1: Host '[remote_ip]' is known and matches the ECDSA host key.
debug1: Found key in /home/[me]/.ssh/known_hosts:7
debug1: found matching key w/out port
debug1: ssh_ecdsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Offering DSA public key: /home/[me]/.ssh/[private_key]
debug1: Server accepts key: pkalg ssh-dss blen 433
debug1: Enabling compression at level 6.
debug1: Authentication succeeded (publickey).
Authenticated to [remote_ip] ([[remote_ip]]:[port]).
debug1: channel 0: new [client-session]
debug1: Requesting [email protected]
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LANG = en_GB.UTF-8
debug1: Sending env LC_CTYPE = en_GB.UTF-8
Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.10.9-xxxx-std-ipv6-64 x86_64)
Edit:
I found the problem. I was trying to mount the remote location to /mnt/new_dir using sudo. If I mount to a location within my local home then it works. sshfs -p [port] [user]@[ip]:/ /home/[me]/tmp/mount.
I have now done a sudo chown root:fuse /mnt/new_dir and sudo chmod 774 /mnt/new_dir and I believe that all's working as intended.
Are there any security issues with this set up that I need to be aware of? (My own user and root are the only members of of the fuse group.
|
If you're using sudo then you're likely using root's credentials to mount, which I do not believe is what you want. I probably wouldn't do what you're asking, i.e. mounting to /mnt as user1 and accessing as user2. It's going to get complicated with groups & user permissions. If you truly want to mount a directory to /mnt to share, then you really should be mounting it at the system level for all users, using autofs.
Automounting
There are 3 methods that I'm aware of for automounting a mount such as this.
autosshfs – Per user SSHFS automount using user’s SSH config and keys.
autofs - provide access to file systems on demand
afuse is an automounting file system implemented in user-space using FUSE.
| sshfs will not use ~/.ssh/config (on Linux Mint 15) |
1,425,396,741,000 |
When I press Tab Tab after _ in the terminal, Bash suggests 206 possibilities. I tried to run one of them, _git_rm, but nothing happened. What are they?
Here is a screenshot:
|
These functions whose name begins with an underscore are part of the programmable completion engine. Bash follows zsh's convention here, where the function that generates completions for somecommand is called _somecommand, and if that function requires auxiliary functions, they are called _somecommand_stuff.
These completion functions typically do nothing useful or raise an error if you call them manually: they're intended to be called from the completion engine.
This follows on a fairly widespread practice in various programming languages to use a leading underscore to indicate that a function or variable is in some way internal to a library and not intended for the end-user (or end-programmer).
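You can see the convention in action by registering a toy completion the same way the engine does (the names _hello and hello below are invented for the demo; run in bash):

```shell
# Define a generator, register it, and show that calling it by hand does nothing visible
bash -c '
_hello() { COMPREPLY=(world); }   # generator the completion engine would call
complete -F _hello hello          # tell bash to use it for the command "hello"
complete -p hello                 # print the registration
_hello                            # manual call: sets COMPREPLY, produces no output
'
```

The `complete -p somecommand` form is also how you find out which underscore function serves a real command once its completions are loaded.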
| What's those underscore commands? |
1,425,396,741,000 |
How can I change a remote host primary IP address without getting disconnected at all (without being in a "no IP addr" state).
The matter is poorly discussed on the Internet (according to my research). The best resource I found is a little bit tricky.
EXAMPLE : change 10.0.0.11/24 to 10.0.0.15/24
1. ssh [email protected]
2. ip addr add 10.0.0.15/24 dev eth0
3. logout
4. ssh [email protected]
5. ip addr del 10.0.0.11/24 dev eth0
Problem: The last command removes both IP addresses and the connection is lost, because 10.0.0.11 is the primary address and deleting it also removes its secondary addresses (to which 10.0.0.15 belongs).
I know I could "cheat" by adding 10.0.0.11/25 (instead of 24). However, I think it is theoretically possible to do this properly.
What do you think?
|
You need to set the promote_secondaries option on the interface, or on all interfaces:
echo 1 > /proc/sys/net/ipv4/conf/eth0/promote_secondaries
or
sysctl net.ipv4.conf.eth0.promote_secondaries=1
Change eth0 to all to have it work on all interfaces.
This option has been in since 2.6.12.
I tested this with a dummy interface and it worked there.
| Change remote host IP address without losing control (Linux) |
1,425,396,741,000 |
So I'm trying to get a handle on how Linux's mount namespace works. So, I did a little experiment and opened up two terminals and ran the following:
Terminal 1
root@goliath:~# mkdir a b
root@goliath:~# touch a/foo.txt
root@goliath:~# unshare --mount -- /bin/bash
root@goliath:~# mount --bind a b
root@goliath:~# ls b
foo.txt
Terminal 2
root@goliath:~# ls b
foo.txt
How come the mount is visible in Terminal 2? Since it is not part of the mount namespace I expected the directory to appear empty here. I also tried passing -o shared=no and using --make-private options with mount, but I got the same result.
What am I missing and how can I make it actually private?
|
If you are on a systemd-based distribution with a util-linux version less than 2.27, you will see this unintuitive behavior. This is because CLONE_NEWNS propagates flags such as shared depending on a setting in the kernel. This setting is normally private, but systemd changes this to shared. As of util-linux 2.27, a patch was made that changes the default behaviour of the unshare command to use private as the default propagation behaviour, so as to be more intuitive.
Solution
If you are on a systemd system with util-linux prior to version 2.27, you must remount the root filesystem after running the unshare command:
# unshare --mount -- /bin/bash
# mount --make-private -o remount /
If you are on a systemd system with util-linux version 2.27 or later, it should work as expected in the example you gave in your question, verbatim, without the need to remount. If not, pass --propagation private to the unshare command to force the propagation of the mount namespace to be private.
| Why is my bind mount visible outside its mount namespace? |
1,425,396,741,000 |
Can someone explain to me how umask affects the default mask of newly created files if ACLs are activated? Is there some documentation about this?
Example:
$ mkdir test_dir && cd test_dir
$ setfacl -m d:someuser:rwx -m u:someuser:rwx . # give access to some user
$ getfacl .
# file: .
# owner: myUsername
# group: myGroup
user::rwx
user:someuser:rwx
group::---
mask::rwx
other::---
default:user::rwx
default:user:someuser:rwx
default:group::---
default:mask::rwx
default:other::---
$ umask # show my umask
077
$ echo "main(){}" > x.c # minimal C program
$ make x # build it
cc x.c -o x
$ getfacl x
# file: x
# owner: myUsername
# group: myGroup
user::rwx
user:someuser:rwx #effective:rw-
group::---
mask::rw-
other::---
I would expect mask:rwx. Actually after setting umask to e.g. 027 I get the expected behavior.
|
I found this example, titled: ACL and MASK in linux. In this article the following examples are demonstrated which I think help to understand how ACL's and umask interact with each other.
Background
When a file is created on a Linux system the default permissions 0666 are applied whereas when a directory is created the default permissions 0777 are applied.
example 1 - file
Suppose we set our umask to 077 and touch a file. We can use strace to see what's actually happening when we do this:
$ umask 077; strace -eopen touch testfile 2>&1 | tail -1; ls -l testfile
open("testfile", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666) = 3
-rw-------. 1 root root 0 Sep 4 15:25 testfile
In this example we can see that the system call open() is made with the permissions 0666, however when the umask 077 is then applied by the kernel the following permissions are removed (---rwxrwx) and we're left with rw------- aka 0600.
example - 2 directory
The same concept can be applied to directories, except that instead of the default permissions being 0666, they're 0777.
$ umask 022; strace -emkdir mkdir testdir; ls -ld testdir
mkdir("testdir", 0777) = 0
drwxr-xr-x 2 saml saml 4096 Jul 9 10:55 testdir
This time we're using the mkdir command, which calls the mkdir() system call. In the above example we can see that the mkdir() system call was made with the default permissions 0777 (rwxrwxrwx). This time, with a umask of 022, the permissions ----w--w- are removed, so we're left with 0755 (rwxr-xr-x) when the directory is created.
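The same arithmetic from both examples can be read back directly, without strace (this sketch assumes GNU stat; paths are arbitrary):

```shell
# Create a file under umask 077 and a directory under umask 022, then show the modes
rm -rf /tmp/um_demo && mkdir /tmp/um_demo
( umask 077; touch /tmp/um_demo/f )   # 0666 & ~077 = 0600
( umask 022; mkdir /tmp/um_demo/d )   # 0777 & ~022 = 0755
stat -c '%n %a' /tmp/um_demo/f /tmp/um_demo/d
```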
example 3 (Applying default ACL)
Now let's create a directory and demonstrate what happens when the default ACL is applied to it along with a file inside it.
$ mkdir acldir
$ sudo strace -s 128 -fvTttto luv setfacl -m d:u:nginx:rwx,u:nginx:rwx acldir
$ getfacl --all-effective acldir
# file: acldir
# owner: saml
# group: saml
user::rwx
user:nginx:rwx #effective:rwx
group::r-x #effective:r-x
mask::rwx
other::r-x
default:user::rwx
default:user:nginx:rwx #effective:rwx
default:group::r-x #effective:r-x
default:mask::rwx
default:other::r-x
Now let's create the file, aclfile:
$ strace -s 128 -fvTttto luvly touch acldir/aclfile
# view the results of this command in the log file "luvly"
$ less luvly
Now get permissions of newly created file:
$ getfacl --all-effective acldir/aclfile
# file: acldir/aclfile
# owner: saml
# group: saml
user::rw-
user:nginx:rwx #effective:rw-
group::r-x #effective:r--
mask::rw-
other::r--
Notice the mask, mask::rw-. Why isn't it mask::rwx just like when the directory was created?
Check the luvly log file to see what default permissions were used for the file's creation:
$ less luvly |grep open |tail -1
10006 1373382808.176797 open("acldir/aclfile", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666) = 3 <0.000060>
This is where it gets a little confusing. With the mask set to rwx when the directory was created, you'd expect the same behavior for the creation of the file, but it doesn't work that way. It's because the kernel is calling the open() function with the default permissions of 0666.
To summarize
Files won't get execute permission (masking or effective). Doesn't matter which method we use: ACL, umask, or mask & ACL.
Directories can get execute permissions, but it depends on how the masking field is set.
The only way to set execute permissions for a file which is under ACL permissions is to manually set them using chmod.
References
acl man page
| How does umask affect ACLs? |
1,425,396,741,000 |
I am working in a lab with three Ubuntu systems, and I would like to cross-mount some filesystems via NFS. However, while the systems have some of the same usernames, the UIDs and GIDs don't match, because the three systems were set up separately. When I mount an NFS filesystem from one system to another, the ownership shows up wrong. For example, if UID 1000 is alice on server1 and the same UID, 1000, is bob on server2, then when server1 mounts server2's exported filesystem, bob's files appear to be owned by alice.
So is there any way to make NFS (v4) convert UIDs between servers via their associated user names? Googling for this, I've seen lots of references to Kerberos, LDAP, or NIS, which seems like massive overkill for such a simple task, and might not be possible since these systems are not centrally-managed. This link seems to indicate that what I ask is impossible. Is it correct?
Edit: I've tried every configuration for /etc/idmapd.conf that I can think of or find on the internet, and while the idmapd process is clearly running, so far I have not seen any evidence that NFS is making any attempt to use it at all, and it has never had any effect whatsoever on the user ID's reported on NFS mounts.
|
With no centralized user administration, the "best" way I see is for you to force all servers to use the same GID and UID for each user.
Now ... I'm only talking about files and/or directories.
What I would do in this case is:
Register each UID and GID currently in use.
Edit /etc/passwd and /etc/group and match the groups on all servers. Preferably to new UIDs and GIDs so the next step will be faster
Run this (it will take some time):
find / -group <OLD_GID> -exec chgrp <NEW_GID> '{}' \+
find / -user <OLD_UID> -exec chown <NEW_UID> '{}' \+
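Before sweeping the whole filesystem, the find/chown pattern can be rehearsed safely on a scratch tree. In the sketch below we "remap" our own UID to itself - a harmless no-op that exercises the exact syntax without root (path is arbitrary):

```shell
# Dry-run of the ownership sweep: map our own UID to itself on a scratch tree
mkdir -p /tmp/idmap_demo && touch /tmp/idmap_demo/file
OLD_UID=$(id -u) NEW_UID=$(id -u)
find /tmp/idmap_demo -user "$OLD_UID" -exec chown "$NEW_UID" '{}' +
find /tmp/idmap_demo -user "$NEW_UID" -type f   # confirms the file matches the new UID
```

For the real run, consider adding -xdev so the sweep does not cross into the NFS mounts themselves.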
| How can I do NFSv4 UID mapping across systems with UID mismatches? |
1,425,396,741,000 |
I've read two separate ways of increasing the allowed open file count (I'm attempting to modify for root, if it matters).
One way is to update the settings in /etc/security/limits.conf with something like:
* soft nofile 500000
* hard nofile 500000
root soft nofile 500000
root hard nofile 500000
To make settings for the active shell, it looks like you can just do ulimit -n 500000, which wouldn't require a reboot or to logout/login, but may require restarting services (?).
The other option is to update /etc/sysctl.conf:
echo 'fs.file-max = 500000' >> /etc/sysctl.conf
To make settings for the active shell, we can do sysctl -p, and verify with sysctl fs.file-max.
So my question is, what's the difference? Is there one? I'm on Ubuntu 14.04.2 LTS
|
The difference is the scope, and how it's applied. Open file limits set via sysctls apply to the entire system, whereas limits set via /etc/security/limits.conf apply only to things that meet the criteria specified there. The other primary difference is that /etc/security/limits.conf limits are applied via ulimit, and thus can be changed more readily, while the sysctl limit is essentially setting up a memory allocation limit in the kernel itself.
As a general rule, you almost always want to use /etc/security/limits.conf, even if you're setting global limits with the wildcard match there, as it is a bit more reliable, and things usually fail more gracefully when hit with ulimit restrictions than hitting kernel memory allocation limits.
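A quick way to see both layers side by side on a live system (Linux-specific /proc paths):

```shell
# Per-process vs system-wide open file limits
ulimit -n                     # per-process soft limit (limits.conf / ulimit layer)
cat /proc/sys/fs/file-max     # system-wide ceiling (the sysctl fs.file-max layer)
cat /proc/sys/fs/file-nr      # allocated handles, unused handles, and the max
```

If file-nr's first number approaches file-max, you are hitting the kernel limit; if individual processes fail with "too many open files" well below that, the ulimit layer is the one in play.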
| What's the difference between setting open file limits in /etc/sysctl.conf vs /etc/security/limits.conf? |
1,425,396,741,000 |
Up until Fedora 14 I was successfully using cdctl to enable/disable the CD/DVD eject button on my laptop (Thinkpad T410). Sadly it has stopped working now.
I've consulted the methods discussed in these 2 questions:
disable cd/dvd button on linux laptop (ubuntu)
Disable the DVD eject button on a Thinkpad running Linux
None of which have worked for me. So I turn back to cdctl to see if we can't fix what's broken with it, since it's worked for so long.
Debugging the issue
So starting with cdctl switches I notice that most things seem to work just fine.
Examples
These things work.
ejects the drive
$ cdctl -e
list capabilities
$ cdctl -k
Tray close : 1
Tray open : 1
Can disable eject : 1
Selectable spin speed : 1
Is a jukebox : 0
Is multisession capable: 1
Can read the MCN (UPC) : 1
Can report media change: 1
Can play audio discs : 1
Can do a hard reset : 1
Can report drive status: 1
According to that list cdctl even thinks that it can enable/disable the eject button.
Can disable eject : 1
So I continue on with debugging the issue.
Debugging cdctl
So I figure lets do an strace on cdctl to see if it can shed some light on what's going on.
$ strace cdctl -o1
...
brk(0) = 0x1371000
open("/dev/cdrom", O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory)
open("/dev/cd", O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory)
open("/dev/scd0", O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory)
open("/dev/sr0", O_RDONLY|O_NONBLOCK) = 3
ioctl(3, CDROM_LOCKDOOR, 0x1) = 0
close(3) = 0
exit_group(0) = ?
+++ exited with 0 +++
Curiously it seems like cdctl thinks it's disabling the button.
$ strace cdctl -o1
ioctl(3, CDROM_LOCKDOOR, 0x1) = 0
$ strace cdctl -o0
ioctl(3, CDROM_LOCKDOOR, 0) = 0
NOTE: If I understand this right, the return of a 0 means it was successful.
One thing that caught my eye here was the list of devices that cdctl is attempting to interact with. So I thought "what if I try these devices with eject"?
eject command
One of the other commands I used to use years ago was the eject command to interact with the CD/DVD device. I noticed that this command also now has a similar named switch:
$ eject --help
-i, --manualeject <on|off> toggle manual eject protection on/off
Example
$ eject -i 1 /dev/sr0
eject: CD-Drive may NOT be ejected with device button
$ eject -i 0 /dev/sr0
eject: CD-Drive may be ejected with device button
So eject too thinks that it's disabling the button, yet it isn't either. Using strace here I see the same system calls:
$ strace eject -i 1 /dev/sr0 |& grep ioctl
ioctl(3, CDROM_LOCKDOOR, 0x1) = 0
$ strace eject -i 0 /dev/sr0 |& grep ioctl
ioctl(3, CDROM_LOCKDOOR, 0) = 0
So now I'm wondering if UDEV or something else is potentially blocking or taking ownership of the device?
Thoughts?
|
Thanks to @Affix's answer which gave me the right direction to head, I've figured out the solution to the problem.
The problem is definitely caused by UDEV as you've guessed. The issue is this line that is in most UDEV files related to the cdrom drive.
Example
On Fedora 19 there is the following file, /usr/lib/udev/rules.d/60-cdrom_id.rules. In this file is the following line which is co-opting the eject button for CD/DVD devices.
ENV{DISK_EJECT_REQUEST}=="?*", RUN+="cdrom_id --eject-media $devnode", GOTO="cdrom_end"
You can work around the issue and disable UDEV's ability to co-opt the eject button by doing the following:
Make a copy of the file 60-cdrom_id.rules
$ sudo cp /usr/lib/udev/rules.d/60-cdrom_id.rules /etc/udev/rules.d/.
Edit this copied version of the file and comment out the line containing the string, DISK_EJECT_REQUEST.
$ sudoedit /etc/udev/rules.d/60-cdrom_id.rules
Save the file and the change should be noticeable immediately!
The above solution fixes the problem for both eject and cdctl. So now the following commands work as expected:
lock the drive
$ eject -i on /dev/sr0
eject: CD-Drive may NOT be ejected with device button
-or-
$ cdctl -o1
unlock the drive
$ eject -i off /dev/sr0
eject: CD-Drive may be ejected with device button
-or-
$ cdctl -o0
| How can I disable the button of my CD/DVD drive? |
1,425,396,741,000 |
I miss using a clicky keyboard at work. It's a fairly quiet office, so I'm stuck using a nearly silent keyboard. The upshot is that I can wear headphones. Is there something in Linux or X that can respond to all keyboard events with a nice, sharp click, giving me that audio feedback? Before you think I'm crazy, I know some high-end keyboards even have speakers in them to reproduce this click for those who like the audio feedback. I'm looking for something at the operating system level.
|
After asking myself "why not check out the apt cache?", I came up with a great solution!
[0][~]apt search key sound
bucklespring - Nostalgia bucklespring keyboard sound
bucklespring-data - Nostalgia bucklespring keyboard sound - sound files
soundkonverter - audio converter frontend for KDE
[0][~]sudo apt install bucklespring
[0][~]apropos bucklespring
buckle (1) - Nostalgia bucklespring keyboard sound
[0][~]which buckle
/usr/games/buckle
[0][272][~]buckle -h
bucklespring version 1.4.0
usage: buckle [options]
options:
-d DEVICE use OpenAL audio device DEVICE
-f use a fallback sound for unknown keys
-g GAIN set playback gain [0..100]
-m CODE use CODE as mute key (default 0x46 for scroll lock)
-h show help
-l list available openAL audio devices
-p PATH load .wav files from directory PATH
-s WIDTH set stereo width [0..100]
-v increase verbosity / debugging
As you can see in the help message, everything is optional, so you can just fork it into the background with zero configuration, as I did:
[0][~]buckle&
[4] 1522
[0][~]Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
It's working! (The JACK messages above are only warnings from the audio library probing for a JACK server; it falls back to another backend and the sound still plays.)
| Is there something that will generate keyboard's click sounds? |
1,425,396,741,000 |
My understanding is that hard drives and SSDs implement some basic error correction inside the drive, and most RAID configurations e.g. mdadm will depend on this to decide when a drive has failed to correct an error and needs to be taken offline. However, this depends on the storage being 100% accurate in its error diagnosis. That's not so, and a common configuration like a two-drive RAID-1 mirror will be vulnerable: suppose some bits on one drive are silently corrupted and the drive does not report a read error. Thus, file systems like btrfs and ZFS implement their own checksums, so as not to trust buggy drive firmwares, glitchy SATA cables, and so on.
Similarly, RAM can also have reliability problems and thus we have ECC RAM to solve this problem.
My question is this: what's the canonical way to protect the Linux swap file from silent corruption / bit rot not caught by drive firmware on a two-disk configuration (i.e. using mainline kernel drivers)? It seems to me that a configuration that lacks end-to-end protection here (such as that provided by btrfs) somewhat negates the peace of mind brought by ECC RAM. Yet I cannot think of a good way:
btrfs does not support swapfiles at all. You could set up a loop device from a btrfs file and make a swap on that. But that has problems:
Random writes don't perform well: https://btrfs.wiki.kernel.org/index.php/Gotchas#Fragmentation
The suggestion there to disable copy-on-write will also disable checksumming - thus defeating the whole point of this exercise. Their assumption is that the data file has its own internal protections.
ZFS on Linux allows using a ZVOL as swap, which I guess could work: http://zfsonlinux.org/faq.html#CanIUseaZVOLforSwap - however, from my reading, ZFS is normally memory-hungry, and configuring it for a swap-only role sounds like nontrivial work. I think this is not my first choice. Why you would have to use an out-of-tree kernel module just to have reliable swap is beyond me - surely there is a way to accomplish this with most modern Linux distributions / kernels in this day and age?
There was actually a thread on a Linux kernel mailing list with patches to enable checksums within the memory manager itself, for exactly the reasons I discuss in this question: http://thread.gmane.org/gmane.linux.kernel/989246 - unfortunately, as far as I can tell, the patch died and never made it upstream for reasons unknown to me. Too bad, it sounded like a nice feature. On the other hand, if you put swap on a RAID-1 - if the corruption is beyond the ability of the checksum to repair, you'd want the memory manager to try to read from the other drive before panicking or whatever, which is probably outside the scope of what a memory manager should do.
In summary:
RAM has ECC to correct errors
Files on permanent storage have btrfs to correct errors
Swap has ??? <--- this is my question
|
We trust the integrity of the data retrieved from swap because the storage hardware has checksums, CRCs, and such.
In one of the comments above, you say:
true, but it won't protect against bit flips outside of the disk itself
"It" meaning the disk's checksums here.
That is true, but SATA uses 32-bit CRCs for commands and data. Thus, you have a 1 in 4 billion chance of corrupting data undetectably between the disk and the SATA controller. That means that a continuous error source could introduce an error as often as every 125 MiB transferred, but a rare, random error source like cosmic rays would cause undetectable errors at a vanishingly small rate.
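A quick way to see how sensitive a 32-bit CRC is to bit flips is the POSIX cksum utility (its CRC-32 variant is not SATA's exact polynomial or framing, just the same 32-bit width):

```shell
# 'd' (0x64) and 'e' (0x65) differ in exactly one bit, so the second
# string models a single bit flip in transit.
printf 'hello world' | cksum
printf 'hello worle' | cksum
# The two 32-bit CRCs differ, so the flip is detected and the frame
# would be re-transferred; only about 1 in 2^32 random corruption
# patterns can slip through undetected.
```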
Realize also that if you've got a source that causes an undetected error at a rate anywhere near one per 125 MiB transferred, performance will be terrible because of the high number of detected errors requiring re-transfer. Monitoring and logging will probably alert you to the problem in time to avoid undetected corruption.
As for the storage medium's checksums, every SATA (and before it, PATA) disk uses per-sector checksums of some kind. One of the characteristic features of "enterprise" hard disks is larger sectors protected by additional data integrity features, greatly reducing the chance of an undetected error.
Without such measures, there would be no point to the spare sector pool in every hard drive: the drive itself could not detect a bad sector, so it could never swap fresh sectors in.
In another comment, you ask:
if SATA is so trustworthy, why are there checksummed file systems like ZFS, btrfs, ReFS?
Generally speaking, we aren't asking swap to store data long-term. The limit on swap storage is the system's uptime, and most data in swap doesn't last nearly that long, since most data that goes through your system's virtual memory system belongs to much shorter-lived processes.
On top of that, uptimes have generally gotten shorter over the years, what with the increased frequency of kernel and libc updates, virtualization, cloud architectures, etc.
Furthermore, most data in swap is inherently disused in a well-managed system, being one that doesn't run itself out of main RAM. In such a system, the only things that end up in swap are pages that the program doesn't use often, if ever. This is more common than you might guess. Most dynamic libraries that your programs link to have routines in them that your program doesn't use, but they had to be loaded into RAM by the dynamic linker. When the OS sees that you aren't using all of the program text in the library, it swaps it out, making room for code and data that your programs are using. If such swapped-out memory pages are corrupted, who would ever know?
Contrast this with the likes of ZFS where we expect the data to be durably and persistently stored, so that it lasts not only beyond the system's current uptime, but also beyond the life of the individual storage devices that comprise the storage system. ZFS and such are solving a problem with a time scale roughly two orders of magnitude longer than the problem solved by swap. We therefore have much higher corruption detection requirements for ZFS than for Linux swap.
ZFS and such differ from swap in another key way here: we don't RAID swap filesystems together. When multiple swap devices are in use on a single machine, it's a JBOD scheme, not like RAID-0 or higher. (e.g. macOS's chained swap files scheme, Linux's swapon, etc.) Since the swap devices are independent, rather than interdependent as with RAID, we don't need extensive checksumming because replacing a swap device doesn't involve looking at other interdependent swap devices for the data that should go on the replacement device. In ZFS terms, we don't resilver swap devices from redundant copies on other storage devices.
All of this does mean that you must use a reliable swap device. I once used a $20 external USB HDD enclosure to rescue an ailing ZFS pool, only to discover that the enclosure was itself unreliable, introducing errors of its own into the process. ZFS's strong checksumming saved me here. You can't get away with such cavalier treatment of storage media with a swap file. If the swap device is dying, and is thus approaching that worst case where it could inject an undetectable error every 125 MiB transferred, you simply have to replace it, ASAP.
The overall sense of paranoia in this question devolves to an instance of the Byzantine generals problem. Read up on that, ponder the 1982 date on the academic paper describing the problem to the computer science world, and then decide whether you, in 2019, have fresh thoughts to add to this problem. And if not, then perhaps you will just use the technology designed by three decades of CS graduates who all know about the Byzantine Generals Problem.
This is well-trod ground. You probably can't come up with an idea, objection, or solution that hasn't already been discussed to death in the computer science journals.
SATA is certainly not utterly reliable, but unless you are going to join academia or one of the kernel development teams, you are not going to be in a position to add materially to the state of the art here. These problems are already well in hand, as you've already noted: ZFS, btrfs, ReFS... As an OS user, you simply have to trust that the OS's creators are taking care of these problems for you, because they also know about the Byzantine Generals.
It is currently not practical to put your swap file on top of ZFS or Btrfs, but if the above doesn't reassure you, you could at least put it atop xfs or ext4. That would be better than using a dedicated swap partition.
| Silent disk errors and reliability of Linux swap |
1,425,396,741,000 |
Possible Duplicate:
What is the number between file permission and owner in ls -l command output?
I've been using Linux for years now and I'm embarrassed to say that until now I didn't notice that I have no idea what the second column of ls -l means:
-r--r--r-- 1 roic develop1 roic 685 2012-10-11 14:15 API.h
^
In this example - 1. What does it stand for?
|
It is the number of hard links the file has. A link is simply another name for the same file (more precisely, for the same inode on disk).
Links come in two kinds, hard and soft (symbolic); this column counts hard links.
Try the following:
ln file1 file2   # ln creates file2 as a hard link to file1
ls -l file1
This will now show 2 instead of 1, because file1 has two names: file1 and file2.
The same file can now be used under either of the two names.
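The same demonstration end to end, as a shell sketch (using a throwaway directory so nothing in your working tree is touched):

```shell
cd "$(mktemp -d)"               # throwaway scratch directory
touch file1
ls -l file1 | awk '{print $2}'  # prints 1: the inode has one name

ln file1 file2                  # create a hard link (a second name)
ls -l file1 | awk '{print $2}'  # prints 2: same inode, two names
```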
Arun
| What's the second column of ls -l say? [duplicate] |
1,425,396,741,000 |
Perhaps I'm misunderstanding what KVM is capable of, but the ability to add/remove hardware on the VM seems to imply I can add a serial port that then acts as a terminal.
So, my questions are:
Which settings are best for the guest FreeBSD distribution? (There are many!)
How do I access said terminal from my Linux host?
|
I can now answer my own question based on Stefan's comment and the two linked articles:
https://askubuntu.com/questions/1733/what-reason-could-prevent-console-output-from-virsh-c-qemu-system-console-gu
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=507650#29
Here is the solution:
You need not edit anything in the host configuration, provided it already has the default serial device pointing to a pty.
Ensure the guest kernel's boot parameters have this appended: serial=tty0 console=ttyS0,115200n8. This is usually achieved by editing /boot/grub/menu.lst.
Configure /etc/inittab by appending the line T0:S12345:respawn:/sbin/getty -hL ttyS0 115200 vt100 to launch a getty and give you the login prompt.
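Concretely, the two guest-side edits above look roughly like this (a sketch assuming legacy GRUB and a SysV-style inittab, as on distributions of that era; the kernel line shown is illustrative, and your root device and paths may differ):

```
# /boot/grub/menu.lst -- append the serial parameters to the kernel line
kernel /vmlinuz ro root=/dev/vda1 serial=tty0 console=ttyS0,115200n8

# /etc/inittab -- spawn a getty on the serial port for the login prompt
T0:S12345:respawn:/sbin/getty -hL ttyS0 115200 vt100
```

After rebooting the guest, virsh console <guest-name> on the host should attach to ttyS0 and show the login prompt.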
I can confirm this works for me using a fedora system (albeit I did have to set enforcing=0 as an additional parameter because the system in question is fedora rawhide running SELinux MLS).
I think from there I can probably work out how to do the same for freebsd.
Thanks Stefan!
| How do I connect a serial terminal to a KVM instance? |