As opposed to editing /etc/hostname, or wherever is relevant?
There must be a good reason (I hope) - in general I much prefer the "old" way, where everything was a text file. I'm not trying to be contentious - I'd really like to know, and to decide for myself if it's a good reason.
Thanks.
|
Background
hostnamectl is part of systemd, and provides a proper API for dealing with setting a server's hostnames in a standardized way.
$ rpm -qf $(type -P hostnamectl)
systemd-219-57.el7.x86_64
Previously, each distro that did not use systemd had its own method for doing this, which made for a lot of unnecessary complexity.
DESCRIPTION
hostnamectl may be used to query and change the system hostname and
related settings.
This tool distinguishes three different hostnames: the high-level
"pretty" hostname which might include all kinds of special characters
(e.g. "Lennart's Laptop"), the static hostname which is used to
initialize the kernel hostname at boot (e.g. "lennarts-laptop"), and the
transient hostname which is a default received from network
configuration. If a static hostname is set, and is valid (something
other than localhost), then the transient hostname is not used.
Note that the pretty hostname has little restrictions on the characters
used, while the static and transient hostnames are limited to the
usually accepted characters of Internet domain names.
The static hostname is stored in /etc/hostname, see hostname(5) for
more information. The pretty hostname, chassis type, and icon name are
stored in /etc/machine-info, see machine-info(5).
Use systemd-firstboot(1) to initialize the system host name for mounted
(but not booted) system images.
hostnamectl also pulls a lot of disparate data together into a single place:
$ hostnamectl
Static hostname: centos7
Icon name: computer-vm
Chassis: vm
Machine ID: 1ec1e304541e429e8876ba9b8942a14a
Boot ID: 37c39a452464482da8d261f0ee46dfa5
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-693.21.1.el7.x86_64
Architecture: x86-64
The info here comes from /etc/*release, uname -a, etc., including the hostname of the server.
What about the files?
Incidentally, everything is still in files; hostnamectl is merely simplifying how we interact with these files, so we don't have to know every location.
As proof of this, you can use strace -s 2000 hostnamectl and see which files it pulls from:
$ strace -s 2000 hostnamectl |& grep ^open | tail -5
open("/lib64/libattr.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
open("/proc/self/stat", O_RDONLY|O_CLOEXEC) = 3
open("/etc/machine-id", O_RDONLY|O_NOCTTY|O_CLOEXEC) = 4
open("/proc/sys/kernel/random/boot_id", O_RDONLY|O_NOCTTY|O_CLOEXEC) = 4
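You can also check the correspondence between the "file view" and the "kernel view" directly, without strace (a quick sketch; on a system without /etc/hostname the second command simply says so):

```shell
# The kernel's current hostname:
cat /proc/sys/kernel/hostname
# The static hostname, straight from the file that hostnamectl manages:
cat /etc/hostname 2>/dev/null || echo "(no /etc/hostname on this system)"
```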
systemd-hostnamed.service?
The astute observer will notice in the above strace output that not all of the files show up. hostnamectl is actually interacting with a service, systemd-hostnamed.service, which in turn does the actual "interacting" with most of the files that admins would be familiar with, such as /etc/hostname.
Therefore, when you run hostnamectl, you're getting details from that service. It's an on-demand service, so you won't see it running all the time, only while hostnamectl runs. You can see it if you run a watch command and then run hostnamectl a few times:
$ watch "ps -eaf|grep [h]ostname"
root 3162 1 0 10:35 ? 00:00:00 /usr/lib/systemd/systemd-hostnamed
The source for it is here: https://github.com/systemd/systemd/blob/master/src/hostname/hostnamed.c and if you look through it, you'll see the references to /etc/hostname etc.
References
systemd/src/hostname/hostnamectl.c
systemd/src/hostname/hostnamed.c
hostnamectl
systemd-hostnamed.service
| What's the point of the hostnamectl command? |
Why are most Linux programs written in C? Why are they not written with C++, which is newer?
|
There have been many discussions about this. Mainly, the reason is a philosophical one.
C was invented as a simple language for system development (not so much application development). There are many arguments for using C++, but there are about as many for not using C++ and sticking to C.
In the end, it's a historical issue. Most application code is written in C because most kernel code is written in C, and since back then most software was written in C, people tend to stick with the original language.
At this point, someone might ask "OK, so why is the kernel written in C and not ported to C++?". This has been discussed on kerneltrap some time ago. One nice explanation that can be quoted from this thread is a response by yoshi314 (quoting directly):
that's because nearly every c++ app needs a separate c++ standard library to operate. so they would have to port it to kernel, and expect an extra overhead everywhere.
c++ is more complex language and that means that compiler creates more complex code from it. because of that, finding that a problem stems from compiler bug,rather than code error is easier in c.
also c language is more barebone, and it's easier to follow its assembly representation, which is often easy to predict.
c++ is more versatile, but c is more suited for lowlevel or embedded stuff.
On the other hand, "most Linux programs" is quite misleading. Take a look at graphical applications: Python is gaining more and more ground, especially in GUI environments on Linux. Much the same thing is happening with Windows and .NET.
| Why are most Linux programs written in C? |
PulseAudio is always running on my system, and it always instantly restarts if it crashes or I kill it. However, I never actually start PulseAudio.
I have checked /etc/init.d/ and /etc/X11/Xsession.d/, and I have checked systemctl list-units -a, and PulseAudio is nowhere to be found.
How come PulseAudio seemingly magically starts by itself without me ever running it, and how does it instantly restart when it dies?
I'm using Debian 8 (jessie) with xinit and the i3 window manager, and PulseAudio 5.
|
It seems any process linking to the libpulse* family of shared objects, whether started before or after running X and the i3 window manager, may implicitly autospawn the PulseAudio server under your user account, as a byproduct of attempting to interface with the audio subsystem. PulseAudio creator Lennart Poettering seems to confirm this in a 2015-05-29 email to the systemd-devel mailing list:
"pulseaudio is generally not a system service but a user service.
Unless your user session is fully converted to be managed by systemd
too (which is unlikely) systemd is hence not involved at all with
starting it.
"PA is usually started from the session setup script or service. In
Gnome that's gnome-session, for example. It's also auto-spawned
on-demand if the libraries are used and note that it is missing."
For example, on Debian Stretch (Testing), web browser IceWeasel links to two libpulse* shared objects: 1) libpulsecommon-7.1.so; and 2) libpulse.so.0.18.2:
k@bucket:~$ ps -ef | grep iceweasel
k 17318 1 5 18:58 tty2 00:00:15 iceweasel
k 17498 1879 0 19:03 pts/0 00:00:00 grep iceweasel
k@bucket:~$ sudo pmap 17318 | grep -i pulse
00007fee08377000 65540K rw-s- pulse-shm-2442253193
00007fee0c378000 65540K rw-s- pulse-shm-3156287926
00007fee11d24000 500K r-x-- libpulsecommon-7.1.so
00007fee11da1000 2048K ----- libpulsecommon-7.1.so
00007fee11fa1000 4K r---- libpulsecommon-7.1.so
00007fee11fa2000 8K rw--- libpulsecommon-7.1.so
00007fee121af000 316K r-x-- libpulse.so.0.18.2
00007fee121fe000 2044K ----- libpulse.so.0.18.2
00007fee123fd000 4K r---- libpulse.so.0.18.2
00007fee123fe000 4K rw--- libpulse.so.0.18.2
You may see which running processes link to libpulse*. For example, first get a list of libpulse* shared objects, then run lsof on each (note: this comes from Debian Stretch (Testing), so your output may differ):
sudo find / -type f -name "*libpulse*"
*snip*
/usr/lib/x86_64-linux-gnu/pulseaudio/libpulsedsp.so
/usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
/usr/lib/x86_64-linux-gnu/libpulse.so.0.18.2
/usr/lib/x86_64-linux-gnu/libpulse-simple.so.0.1.0
/usr/lib/x86_64-linux-gnu/libpulse-mainloop-glib.so.0.0.5
/usr/lib/libpulsecore-7.1.so
/usr/lib/ao/plugins-4/libpulse.so
sudo lsof /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
gnome-she 864 Debian-gdm mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
gnome-set 965 Debian-gdm mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
gnome-set 1232 k mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
gnome-she 1286 k mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
chrome 2730 k mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
pulseaudi 18356 k mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
To tell these processes not to autospawn PulseAudio, edit ~/.config/pulse/client.conf and add the line
autospawn = no
PulseAudio and its libraries generally respect that setting.
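A minimal sketch of making that change from the shell (the path is the PulseAudio per-user default; it is idempotent, so re-running it does no harm):

```shell
# Ensure the per-user PulseAudio client config exists and disables autospawn.
mkdir -p ~/.config/pulse
grep -qx 'autospawn = no' ~/.config/pulse/client.conf 2>/dev/null \
  || printf 'autospawn = no\n' >> ~/.config/pulse/client.conf
# Show the resulting setting:
grep 'autospawn' ~/.config/pulse/client.conf
```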
The libpulse* linking by running processes may also indicate why PulseAudio respawns so quickly. The FreeDesktop.org page, "Running PulseAudio", seems to confirm this:
"...typically some background application will immediately reconnect,
causing the server to get immediately restarted."
You seem to indicate you start the i3 window manager via the console (by running xinit) and do not use a display manager or desktop environment. The rest of this answer details info for those that do use GNOME, KDE, and so forth.
ADDITIONAL INFO, FOR GNOME/KDE AUTOSTART
Package PulseAudio (5.0-13), in Debian Jessie (Stable) amd64, installs the following four system files:
/etc/xdg/autostart/pulseaudio-kde.desktop
/etc/xdg/autostart/pulseaudio.desktop
/usr/bin/start-pulseaudio-x11
/usr/bin/start-pulseaudio-kde
Some graphical session managers automatically run FreeDesktop.org autostart scripts on user login. The PulseAudio autostart script, in turn, tells graphical session managers to run the appropriate PulseAudio startup script:
/usr/bin/start-pulseaudio-x11
/usr/bin/start-pulseaudio-kde
These scripts call PulseAudio client /usr/bin/pactl to load PulseAudio modules, which spawns the PulseAudio server as a byproduct (note: if you have autospawn set to "no", pactl respects that and will not autospawn PulseAudio server).
More detail, at the FreeDesktop.org page "Running PulseAudio".
In addition, some display managers in other distributions may start PulseAudio (for example, SDDM on Arch Linux, though maintainers may have resolved this by now).
| How does PulseAudio start? |
I'm writing a program that will test programs written by students. I'm afraid that I can't trust them and I need to make sure that it won't end up badly for the computer running it.
I was thinking about making some crash test user with limited access to system resources and run programs as that user, but from what I have found on the net so far, making a virtual system would be the safest option...
Can someone help me with choosing the right approach? Security is a big concern for me. On the other hand, I don't want a solution that is overkill and waste much time trying to learn something I don't really need.
|
A virtual machine can give you the highest security without a reboot, but the lowest performance.
Another option, for even higher security than a virtual machine: boot a "live" CD/DVD/pendrive without access to the hard drive (temporarily disable the HDD in BIOS; if you can't, at least do not mount the drive / unmount it, if mounted automatically - but this is much less secure)
A Docker container is a somewhat less secure alternative to a full virtual machine. Probably the crucial difference (in terms of security) between the two is that systems running in Docker actually use the kernel of your host system.
There are programs such as isolate that will create a special, secured environment - this is generally called a sandbox - those are typically chroot-based, with additional supervision - find one that fits you.
A simple chroot will be the least secure (especially in regard to executing programs), though maybe a little faster, but... you'll need to build or copy a whole separate root tree and use bind mounts for /dev etc. (see Note 1 below!). So in general, this approach cannot be recommended, especially if you can use a more secure, and often easier to set up, sandbox environment.
Note 0: To the aspect of a "special user", like the nobody account: This gives hardly any security, much less than even a simple chroot. A nobody user can still access files and programs that have read and execute permissions set for other. You can test it with su -s /bin/sh -c 'some command' nobody. And if you have any configuration/history/cache file accessible to anybody (by a mistake or minor security hole), a program running with nobody's permissions can access it, grep for confidential data (like "pass=" etc.) and in many ways send it over the net or whatever.
Note 1: As Gilles pointed in a comment below, a simple chroot environment will give very little security against exploits aiming at privilege escalation. A sole chroot makes sense security-wise, only if the environment is minimal, consisting of security-confirmed programs only (but there still remains the risk of exploiting potential kernel-level vulnerabilities), and all the untrusted programs running in the chroot are running as a user who does not run any process outside the chroot. What chroot does prevent against (with the restrictions mentioned here), is direct system penetration without privilege escalation. However, as Gilles noted in another comment, even that level of security might get circumvented, allowing a program to break out of the chroot.
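As a rough illustration of the resource-limiting side of sandboxing (this is not real isolation; it only caps CPU time, file size, and wall-clock time, using the shell's ulimit builtin and GNU coreutils' timeout):

```shell
# Run an untrusted command in a subshell with resource caps.
run_limited() {
    (
        ulimit -t 5        # max CPU seconds
        ulimit -f 2048     # max file size a process may create (512-byte blocks)
        timeout 10 "$@"    # wall-clock limit (GNU coreutils)
    )
}

run_limited echo "student program ran"
```

Real containment of a hostile program still needs a VM, a container, or a dedicated sandbox such as isolate; limits like these only stop runaway resource use.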
| Execution of possibly harmful program on Linux |
I am using Putty, Suse box and Vim 7.2 combo for editing and I want to remap Ctrl + Arrow keypresses to a particular task. But for some reason, Vim ignores the shortcut, goes into insert mode, and inserts character D (for Ctrl + ←) or character C (for Ctrl + →).
Which part of my keyboard/terminal configuration is to blame and how to fix it?
|
Figure out exactly what escape sequence your terminal sends for Ctrl+arrow by typing Ctrl+V, Ctrl+arrow in insert mode: this will insert the leading ESC character (shown as ^[ in vim) literally, followed by the rest of the escape sequence. Then tell vim about these escape sequences with something like
map <ESC>[5D <C-Left>
map <ESC>[5C <C-Right>
map! <ESC>[5D <C-Left>
map! <ESC>[5C <C-Right>
I seem to recall that Putty has a default setting for Application Cursor Keys mode that's inconvenient (I forget why); you might want to toggle this setting first.
Note that although escape sequences vary between terminals, conflicts (i.e. an escape sequence that corresponds to different keys in different terminals) are rare, so there's no particular need to try to apply the mappings only on a particular terminal type.
| How to fix Ctrl + arrows in Vim? |
According to Wikipedia (which could be wrong)
When a fork() system call is issued, a copy of all the pages corresponding to the parent process is created, loaded into a separate memory location by the OS for the child process. But this is not needed in certain cases. Consider the case when a child executes an "exec" system call (which is used to execute any executable file from within a C program) or exits very soon after the fork(). When the child is needed just to execute a command for the parent process, there is no need for copying the parent process' pages, since exec replaces the address space of the process which invoked it with the command to be executed.
In such cases, a technique called copy-on-write (COW) is used. With this technique, when a fork occurs, the parent process's pages are not copied for the child process. Instead, the pages are shared between the child and the parent process. Whenever a process (parent or child) modifies a page, a separate copy of that particular page alone is made for that process (parent or child) which performed the modification. This process will then use the newly copied page rather than the shared one in all future references. The other process (the one which did not modify the shared page) continues to use the original copy of the page (which is now no longer shared). This technique is called copy-on-write since the page is copied when some process writes to it.
It seems that when either of the processes tries to write to the page a new copy of the page gets allocated and assigned to the process that generated the page fault. The original page gets marked writable afterwards.
My question is: what happens if the fork() gets called multiple times before any of the processes made an attempt to write to a shared page?
|
Nothing in particular happens. All the processes share the same set of pages, and each one gets its own private copy of a page when (and only when) it wants to modify that page; the kernel reference-counts each shared page, so the mechanism works the same no matter how many forks have occurred.
| How does copy-on-write in fork() handle multiple fork? |
I want to be able to start zsh with a custom rc file similar to the command: bash --rc-file /path/to/file
If this is not possible, then is it possible to start zsh, run source /path/to/file, then stay in the same zsh session?
Note: The command zsh --rcs /path/to/file does not work, at least not for me...
EDIT: In its entirety I wish to be able to do the following:
ssh to a remote server "example.com", run zsh, source my configuration located at /path/to/file, all in 1 command. This is where I've struggled, especially because I'd rather not write over any configuration files on the remote machine.
|
From the man pages:
STARTUP/SHUTDOWN FILES
Commands are first read from /etc/zshenv; this cannot be overridden. Subsequent behaviour is modified by the RCS and GLOBAL_RCS options; the former affects all startup files, while the second only affects global startup files (those shown here with a path starting with a /). If one of the options is unset at any point, any subsequent startup file(s) of the corresponding type will not be read. It is also possible for a file in $ZDOTDIR to re-enable GLOBAL_RCS. Both RCS and GLOBAL_RCS are set by default.
Commands are then read from $ZDOTDIR/.zshenv. If the shell is a login shell, commands are read from /etc/zprofile and then $ZDOTDIR/.zprofile. Then, if the shell is interactive, commands are read from /etc/zshrc and then $ZDOTDIR/.zshrc. Finally, if the shell is a login shell, /etc/zlogin and $ZDOTDIR/.zlogin are read.
When a login shell exits, the files $ZDOTDIR/.zlogout and then /etc/zlogout are read. This happens with either an explicit exit via the exit or logout commands, or an implicit exit by reading end-of-file from the terminal. However, if the shell terminates due to exec'ing another process, the logout files are not read. These are also affected by the RCS and GLOBAL_RCS options. Note also that the RCS option affects the saving of history files, i.e. if RCS is unset when the shell exits, no history file will be saved.
If ZDOTDIR is unset, HOME is used instead. Files listed above as being in /etc may be in another directory, depending on the installation.
As /etc/zshenv is run for all instances of zsh, it is important that it be kept as small as possible. In particular, it is a good idea to put code that does not need to be run for every single shell behind a test of the form `if [[ -o rcs ]]; then ...' so that it will not be executed when zsh is invoked with the `-f' option.
so you should be able to set the environment variable ZDOTDIR to a new directory to get zsh to look for a different set of dotfiles.
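For example (a sketch; /path/to/file is your custom rc, as in the question), you can stage a throwaway config directory and point zsh at it:

```shell
# Create a temporary ZDOTDIR containing only the rc we want zsh to read:
export ZDOTDIR="$(mktemp -d)"
printf 'source /path/to/file\n' > "$ZDOTDIR/.zshrc"
ls -A "$ZDOTDIR"
# Then start the shell; it reads $ZDOTDIR/.zshrc instead of ~/.zshrc:
#   zsh -i
```

For the ssh case, the same idea works by exporting ZDOTDIR in the remote command before exec'ing zsh, so nothing on the remote machine is overwritten.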
As the man page suggests, RCS and GLOBAL_RCS are not paths to rc files, as you are attempting to use them, but rather options you can enable or disable. So, for instance, the flag --rcs will enable the RCS option, causing zsh to read from rc files. You can use the following command-line flags to zsh to enable or disable RCS or GLOBAL_RCS:
--globalrcs
--rcs
-d equivalent to --no-globalrcs
-f equivalent to --no-rcs
To answer your other question:
is it possible to start zsh, run "source /path/to/file", then stay in
the same zsh session?
Yes, this is pretty easy according to the above directions. Just run zsh -d -f and then source /path/to/zshrc.
| Start zsh with a custom zshrc |
I want to force a disk partition to read only mode and keep it read-only for more than 30 minutes.
What I have tried:
mount -o remount,ro (partition-identifier) (mount-point) -t (filesystem)
Issue: This gave device busy error as some processes were using the partition. I don't want to kill the processes using the disk. I want to simulate the disk suddenly going read-only when the processes are still using it.
Used magic sysrq key, like below
echo u > /proc/sysrq-trigger
Issue: This will make all the disk partitions read-only (although device is busy). But after 20-30 minutes the machine is rebooting itself. Some machines are rebooting immediately once this command is executed. Not sure what is causing this reboot yet. I don't want the machine to reboot itself and need to keep the disk in read-only mode for 30+ minutes.
Question:
Is there any better way I can force a single disk partition to read-only and sustain it in that state for half an hour and bring it back to read-write mode without causing any reboot in the process?
|
You normally can't remount a filesystem as read-only if processes have a file on it that's open for writing, or if it contains a file that's deleted but still open. Similarly, you can't unmount a filesystem that has any file open (or similar uses of files such as a process having its current directory there, a running executable, etc.).
You can use umount -l to release the mount point and prevent the opening of further files, but keep the filesystem mounted and keep processes that already have files open running normally.
I can't think of a generic way to force a filesystem to be remounted read-only when it shouldn't be. However, if the filesystem is backed by a block device, you can make the block device read-only, e.g.
echo 1 >/sys/block/dm-4/ro
echo 1 >/sys/block/sda/sda2/ro
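To confirm the effect from the filesystem side, you can read the mount options straight from /proc/mounts (shown here for the root filesystem; substitute your own mount point). The kernel lists rw or ro as the first option:

```shell
# Print "ro" or "rw" depending on how / is currently mounted.
awk '$2 == "/" { split($4, o, ","); print o[1]; exit }' /proc/mounts
```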
echo u > /proc/sysrq-trigger is a rather extreme way to force remounting as read-only, because it affects all filesystems. It's meant as a last-ditch method to leave the filesystem in a clean state just before rebooting.
Remounting a filesystem as read-only does not cause a reboot. Whatever is causing the reboot is not directly related to remounting the partition read-only. Maybe it's completely unrelated, or maybe the remount triggers a bug in an application which causes it to spin, overheating a defective or overclocked processor until the machine reboots. You need to track down the cause of the reboot.
| Remount a busy disk to read-only mode |
I have an NTFS partition (containing a Windows installation from which I dual boot) that I would like to permanently mount from my Linux installation. Problem is, I can't figure out what the best/right/correct mount point for the NTFS partition is. Obviously, it shouldn't be mounted as /home, /usr, etc. (any of the standard mount points for filesystems) because it's not part of the Linux system. I do want it to be permanently mounted, though; and this raises the question, where do I mount it? Here are the mount point possibilities I've come up with:
/media/windows
This one makes a lot of sense because it would be right alongside auto-mounted devices, but according to the filesystem standard, /media/ is really for removable media, so it doesn't seem quite right to put my permanently mounted, internal partition next to auto-mounted, removable ones. I'm leaning toward this option the most, but only because it is less incongruent than the others.
/mnt/windows
This one also seems pretty logical, but again, the standard (and other things I've read) indicate that subdirectory mountpoints are generally discouraged here. Plus, I do actually mount filesystems temporarily in /mnt/ on occasion (as the standard intended it), so this one looks like it would get in the way of regular system use.
/windows
I really don't like the idea of adding another top-level directory to my filesystem, if I can avoid it. It doesn't feel right. An upside to this one, though, is that it is very easily accessible and doesn't get in the way of anything else (i.e. automounting partitions in /media/ or temporary mounts in /mnt/).
/home/[my username]/filesystems/windows
I don't like this idea because the partition is decidedly system-specific, not user-specific, so shoving it in a home directory seems not right.
Which of these options is the "right" one—or is there an alternative I didn't list here?
For clarity, note that I am running Linux (Arch Linux in particular), so any recommendations should probably be based on Linux's idea of filesystem organization rather than BSD's, for example.
|
First and foremost, this is going to depend solely on your architecture, and customs.
I, for instance, mount things like this under /mnt. I know people who create top-level directories, and people who put this stuff in /home. It all depends on what you're comfortable with. There is no distinct standard on this anymore; the architecture of the system has changed, and you have varying views now on things that used to be 'gospel'. Things like /usr/local or /opt/share, rpm or source... you get the drift.
Secondly, if you re-read your link at pathname.com, you'll notice the paragraph under /media that states:
Rationale
Historically there have been a number of other different places used to mount removeable media such as /cdrom, /mnt or /mnt/cdrom. Placing the mount points for all removeable media directly in the root directory would potentially result in a large number of extra directories in /. Although the use of subdirectories in /mnt as a mount point has recently been common, it conflicts with a much older tradition of using /mnt directly as a temporary mount point.
So personally, I advocate /mnt/windows or some iteration of that. It keeps the top level dir free, and is simple and intuitive. When I'm looking through or auditing a system, that's where I look for mounts right off the bat.
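To make such a mount permanent, an /etc/fstab entry along these lines does the job (a sketch: the UUID, mount point and uid/gid values are placeholders to adapt, and it assumes the ntfs-3g driver):

```text
# /etc/fstab -- find the real UUID with: blkid /dev/sdXN
UUID=0123-ABCD  /mnt/windows  ntfs-3g  defaults,uid=1000,gid=1000  0  0
```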
| What's the most "correct" mount point for a permanent NTFS partition? |
Under Linux, we usually use the "filter" table to do common filtering:
iptables --table filter --append INPUT --source 1.2.3.4 --jump DROP
iptables --table filter --append INPUT --in-interface lo --jump ACCEPT
According to the netfilter flow chart below, the packets first travel through the "raw" table:
So we can write:
iptables --table raw --append PREROUTING --source 1.2.3.4 --jump DROP
iptables --table raw --append PREROUTING --in-interface lo --jump ACCEPT
the packets are handled sooner, without the need to go through conntrack+mangle+nat+routing. So slightly less CPU/memory is used (in turn slightly offset by the fact that the iptable_raw module has to be loaded)
only one rule is needed in case the box is also a router (won't be suitable for every rule, obviously), because there is no need to add the same rule to both the filter INPUT and FORWARD chains
I did only rapid tests, and this works perfectly well.
The documentation I found always describes the raw table as being for specific use cases, but none of it gives even the smallest justification.
Question: is there any reason not to use the raw table, apart from dogmatic ones?
|
From man iptables:
raw: This table is used mainly for configuring exemptions from connection
tracking in combination with the NOTRACK target. It registers at the
netfilter hooks with higher priority and is thus called before
ip_conntrack, or any other IP tables.
It provides the following built-in chains:
- PREROUTING (for packets arriving via any network interface)
- OUTPUT (for packets generated by local processes)
Analysis:
So, the raw table comes before conntrack, and it was designed with the objective of setting the NOTRACK mark on packets that you do not wish to track in netfilter.
The -j targets are not restricted to NOTRACK only, so yes, you can filter packets in the raw table with the benefit of less CPU/memory consumption.
Most often, servers don't need to keep track of all connections. You only need tracking if you filter packets in iptables based on previously established connections. Servers that serve a simple purpose, with only port 80 (and maybe 21) open, don't require that. In those instances, you can disable connection tracking.
However, if you're trying to run a NAT router, things get slightly complicated. In order to NAT something, you need to keep track of those connections so you can deliver packets from the outside network to the internal network.
If a whole connection is set with NOTRACK, then you will not be able to track related connections either, conntrack and nat helpers will simply not work for untracked connections, nor will related ICMP errors do. You will have to open up for these manually in other words. When it comes to complex protocols such as FTP and SCTP and others, this can be very hard to manage.
Use cases:
One example would be if you have a heavily trafficked router that you want to firewall the incoming and outgoing traffic on, but not the routed traffic. Then, you could set the NOTRACK mark to ignore the forwarded traffic and save processing power.
Another example where NOTRACK can be used: if you have a highly trafficked web server, you could set up a rule that turns off tracking for port 80 on all the locally owned IP addresses, or the ones that are actually serving web traffic. You could then enjoy stateful tracking on all other services, except for web traffic, which might save some processing power on an already overloaded system.
Example --> running-a-semi-stateless-linux-router-for-private-network
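The web-server case above would look something like this (a sketch; it needs root, and assumes all web traffic is on port 80):

```shell
# Skip connection tracking for plain HTTP in both directions:
iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
iptables -t raw -A OUTPUT     -p tcp --sport 80 -j NOTRACK
```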
Conclusion:
There isn't a strong reason not to use the raw table, but there are some reasons to take care when using the NOTRACK target in it.
| netfilter/iptables: why not using the raw table? |
I have a laptop with Debian on it, and I am going to sell this laptop.
Would it suffice to erase the Debian installation before selling it to completely clean up my laptop from my personal data, and if yes how can I uninstall Debian (so that there isn't any operating system on the laptop)?
|
This nixCraft post explains how to erase a hard disk:
The secure removal of data is not as easy as you may think. When you
delete a file using the default commands of the operating system (for
example “rm” in Linux/BSD/MacOS/UNIX or “del” in DOS or emptying the
recycle bin in WINDOWS) the operating system does NOT delete the file,
the contents of the file remains on your hard disk. The only way to
make recovering of your sensitive data nearly impossible is to
overwrite (“wipe” or “shred”) the data with several defined patterns.
For erasing hard disk permanently, you can use the standard dd
command. However, I recommend using shred command or wipe command or
scrub command.
Warning: Check that the correct drive or partition has been targeted. Targeting the wrong drive or partition will result in data loss. Under no circumstances can we be held responsible for total or partial data loss, so please be careful with disk names. YOU HAVE BEEN WARNED!
Erase disk permanently using a live Linux cd
First, download a knoppix Live Linux CD or SystemRescueCd
live CD.
Next, burn the live CD and boot your laptop or desktop from it. You can now wipe any disk, including one running Windows, Linux, Mac OS X or another Unix-like system.
1. How do I use the shred command?
Shred was originally designed to delete a file securely: it deletes
the file by first overwriting it to hide its contents. However, the same
command can be used to erase a hard disk. For example, if your hard
drive is named /dev/sda, then type the following command:
# shred -n 5 -vz /dev/sda
Where,
-n 5: Overwrite 5 times instead of the default (25 times).
-v : Show progress.
-z : Add a final overwrite with zeros to hide shredding.
The command is the same for an IDE hard disk hda (a PC's first hard
disk connected to IDE):
# shred -n 5 -vz /dev/hda
Note: Comment from @Gilles
Replace shred -n 5 by shred -n 1 or by cat /dev/zero. Multiple passes are not useful unless your hard disk uses 1980s technology.
In this example, use shred with /dev/urandom as the source of random
data:
# shred -v --random-source=/dev/urandom -n1 /dev/DISK/TO/DELETE
# shred -v --random-source=/dev/urandom -n1 /dev/sda
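Before pointing shred at a real disk, the flags can be tried safely on a throwaway file; this practice run is a sketch added here for illustration, not part of the original instructions:

```shell
# Practice on a scratch file rather than a disk.
f=$(mktemp)
printf 'secret data' > "$f"
shred -n 1 -vz "$f"          # one random pass, then a final pass of zeros
od -An -c "$f" | head -n 1   # the contents are now all zero bytes
rm -f "$f"
```

The same flags then apply unchanged when the target is a block device.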
2. How to use the wipe command
You can use the wipe command to delete any file, including whole disks:
# wipe -D /path/to/file.doc
3. How to use the scrub command
You can use a disk scrubbing program such as scrub. It overwrites hard
disks, files, and other devices with repeating patterns intended to
make recovering data from these devices more difficult. Although
physical destruction is unarguably the most reliable method of
destroying sensitive data, it is inconvenient and costly. For certain
classes of data, organizations may be willing to do the next best
thing, which is scribble on all the bytes until retrieval would require
heroic efforts in a lab. scrub implements several different
algorithms. The syntax is:
# scrub -p nnsa|dod|bsi|old|fastold|gutmann|random|random2 fileNameHere
To erase /dev/sda, enter:
# scrub -p dod /dev/sda
4. Use dd command to securely wipe disk
Wiping a disk is done by writing new data over every single bit.
The dd command can be used as follows:
# dd if=/dev/urandom of=/dev/DISK/TO/WIPE bs=4096
Wipe a /dev/sda disk, enter:
# dd if=/dev/urandom of=/dev/sda bs=4096
5. How do I securely wipe drive/partition using a randomly-seeded AES cipher from OpenSSL?
You can also use the openssl and pv commands to securely erase the disk.
First, get the total /dev/sda disk size in bytes:
# blockdev --getsize64 /dev/sda
399717171200
Next, type the following command to wipe a /dev/sda disk:
# openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt </dev/zero | pv -bartpes 399717171200 | dd bs=64K of=/dev/sda
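To see what the cipher stage of that pipeline produces without touching a disk, you can send a small amount of the keystream to a scratch file (the 1 MiB size and the /tmp path here are illustrative):

```shell
# Generate 1 MiB of AES-256-CTR keystream into a scratch file.
key=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d '\n')
head -c 1048576 /dev/zero \
    | openssl enc -aes-256-ctr -pass pass:"$key" -nosalt 2>/dev/null \
    > /tmp/keystream.bin
wc -c < /tmp/keystream.bin   # 1048576
```

Because CTR is a stream cipher, the output is exactly as long as the input, which is what makes it suitable for filling a whole device.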
6. How to use badblocks command to securely wipe disk
The syntax is:
# badblocks -c BLOCK_SIZE_HERE -wsvf /dev/DISK/TO/WIPE
# badblocks -wsvf /dev/DISK/TO/WIPE
# badblocks -wsvf /dev/sda
| Erasing a Linux laptop |
1,319,998,769,000 |
Possible Duplicate:
Can I identify my RAM without shutting down linux?
I'd like to know the type, size, and model. But I'd like to avoid having to shut down and open the machine.
|
Check out this How do I detect the RAM memory chip specification from within a Linux machine question.
This tool might help:
http://www.cyberciti.biz/faq/check-ram-speed-linux/
$ sudo dmidecode --type 17 | more
Sample output:
# dmidecode 2.9
SMBIOS 2.4 present.
Handle 0x0018, DMI type 17, 27 bytes
Memory Device
Array Handle: 0x0017
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 2048 MB
Form Factor: DIMM
Set: None
Locator: J6H1
Bank Locator: CHAN A DIMM 0
Type: DDR2
Type Detail: Synchronous
Speed: 800 MHz (1.2 ns)
Manufacturer: 0x2CFFFFFFFFFFFFFF
Serial Number: 0x00000000
Asset Tag: Unknown
Part Number: 0x5A494F4E203830302D3247422D413131382D
Handle 0x001A, DMI type 17, 27 bytes
Memory Device
Array Handle: 0x0017
Error Information Handle: Not Provided
Total Width: Unknown
Data Width: Unknown
Size: No Module Installed
Form Factor: DIMM
Set: None
Locator: J6H2
Bank Locator: CHAN A DIMM 1
Type: DDR2
Type Detail: None
Speed: Unknown
Manufacturer: NO DIMM
Serial Number: NO DIMM
Asset Tag: NO DIMM
Part Number: NO DIMM
Alternatively, both newegg.com and crucial.com, among other sites, have memory upgrade advisors/scanners that I've used regularly under Windows. Some of them were web-based at some point, so you could try that, or if you could possibly boot into Windows (even temporarily) it might help.
Not sure what the results would be under a Windows VM, and unfortunately I am currently running Linux in a VM under Windows 7, so I can't reliably test this myself.
I realize that this doesn't necessarily give you exactly what you asked for, but perhaps it will be of use nonetheless.
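If dmidecode is unavailable or you lack root, the total installed size (though not the module type, speed, or model) is always readable from /proc/meminfo:

```shell
# Total physical memory as seen by the kernel; no root required.
grep '^MemTotal:' /proc/meminfo
awk '/^MemTotal:/ { printf "%.1f GiB\n", $2 / 1024 / 1024 }' /proc/meminfo
```

This reports the kernel's usable total, which can be slightly below the physically installed amount.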
| How to find information about my RAM? [duplicate] |
1,319,998,769,000 |
bash won't source .bashrc from an interactive terminal unless I manually run bash from a terminal:
$ bash
or manually source it:
$ source ./.bashrc
or by running:
$ st -e bash
Here's some useful output I hope:
$ echo $TERM
st-256color
$ echo $SHELL
/bin/sh
$ readlink /bin/sh
bash
$ shopt login_shell
login_shell off
I'm on CRUX Linux 3.0 and I use dwm and st. I've tried using .bash_profile and .profile with no success.
Any ideas?
|
Why would it source it? Your default shell is not bash, but sh:
$ echo $SHELL
/bin/sh
In most modern systems, sh is a symlink to a basic shell. On my Debian for example:
$ ls -l /bin/sh
lrwxrwxrwx 1 root root 4 Aug 1 2012 /bin/sh -> dash
In your case, sh is a link to bash but, as explained in man bash:
If bash is invoked with the name sh, it tries to mimic the startup
behavior of historical versions of sh as closely as possible, while
conforming to the POSIX standard as well. [...] When invoked as an
interactive shell with the name sh, bash looks for the variable ENV,
expands its value if it is defined, and uses the expanded value as
the name of a file to read and execute. Since a shell invoked as sh
does not attempt to read and execute commands from any other startup
files, the --rcfile option has no effect.
and
--norc   Do not read and execute the system-wide initialization file
/etc/bash.bashrc and the personal initialization file ~/.bashrc
if the shell is interactive. This option is on by default if
the shell is invoked as sh.
So, since your default shell is sh, .bashrc is not read. Just set your default shell to bash using chsh -s /bin/bash.
| Bash doesn't read .bashrc unless manually started |
1,319,998,769,000 |
I find that under my root directory, there are some directories that have the same inode number:
$ ls -aid */ .*/
2 home/ 2 tmp/ 2 usr/ 2 var/ 2 ./ 2 ../ 1 sys/ 1 proc/
I only know that the directories' names are kept in the parent directory, and their data is kept in the inode of the directories themselves.
I'm confused here.
This is what I think happens when I trace the pathname /home/user1:
First, I read inode 2, the root directory, which contains the list of directory entries.
There I find the name home paired with inode 2.
So do I go back to the disk and read inode 2 again?
And is that where I find the name user1?
|
They're on different devices.
If we look at the output of stat, we can also see the device the file is on:
# stat / | grep Inode
Device: 801h/2049d Inode: 2 Links: 24
# stat /opt | grep Inode
Device: 803h/2051d Inode: 2 Links: 5
So those two are on separate devices/filesystems. Inode numbers are only unique within a filesystem so there is nothing unusual here. On ext2/3/4 inode 2 is also always the root directory, so we know they are the roots of their respective filesystems.
The combination of device number + inode is likely to be unique over the whole system. (There are filesystems that don't have inodes in the traditional sense, but I think they still have to fake some sort of a unique identifier in their place anyway.)
The device numbers there appear to be the same as those shown on the device nodes, so /dev/sda1 holds the filesystem where / is on:
# ls -l /dev/sda1
brw-rw---- 1 root disk 8, 1 Sep 21 10:45 /dev/sda1
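A quick way to see both halves of a file's identity at once is stat's format strings; the same inode number paired with different device numbers refers to unrelated files:

```shell
# Device number + inode number together identify a file; inode alone does not.
for p in / /proc; do
    stat -c 'dev=%d ino=%i  %n' "$p"
done
```

On a typical system / and /proc report different dev values, so their inode numbers may coincide without conflict.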
| Why do the directories /home, /usr, /var, etc. all have the same inode number (2)? |
1,319,998,769,000 |
So I have a Palm Pre (original P100EWW) model that I enabled developer mode on, and installed a Debian Squeeze chroot. Works great. I have plans to use this for ANYTHING (bittorrent peer, web server) but a phone.
I noticed if I do a cat /dev/urandom > /dev/fb0 it actually writes random pixels to the screen until a No space left on device error is generated. Awesome, now I can use the display.
So what kind of utilities are there that will either A) let me use /dev/fb0 as a console I can output text to, or B) render text on /dev/fb0 from the command line?
I don't know about recompiling the kernel for this yet (I'd love to eventually strip WebOS off entirely and turn this into a minimal ARM server) so userspace tools if they exist is what I'm asking about. Also would prefer to render directly to /dev/fb0 and not use X.
|
To use the framebuffer as console you need the fbdev module. You may have to recompile your kernel.
You may also be interested in the DirectFB project, which is a library that makes using the framebuffer easier. There are also applications and GUI environments written for it already.
| How to use /dev/fb0 as a console from userspace, or output text to it |
1,319,998,769,000 |
I have 2 questions.
During Linux installation we specify memory space for 2 mount points - root and swap. Are there any other mount points created without the user's notice?
Is this statement correct: "mounting comes into the picture only when dealing with different partitions. i.e, you cannot mount, say, /proc unless it's a different partition"?
|
There are misconceptions behind your questions.
Swap is not mounted.
Mounting isn't limited to partitions.
Partitions
A partition is a slice¹ of disk space that's devoted to a particular purpose. Here are some common purposes for partitions.
A filesystem, i.e. files organized as a directory tree and stored in a format such as ext2, ext3, FFS, FAT, NTFS, …
Swap space, i.e. disk space used for paging (and storing hibernation images).
Direct application access. Some databases store their data directly on a partition rather than on a filesystem to gain a little performance. (A filesystem is a kind of database anyway.)
A container for other partitions. For example, a PC extended partition, or a disk slice containing BSD partitions, or an LVM physical volume (containing eventually logical volumes which can themselves be considered partitions), …
Filesystems
Filesystems present information in a hierarchical structure. Here are some common kinds of filesystems:
Disk-backed filesystems, such as ext2, ext3, FFS, FAT, NTFS, …
The backing need not be directly on a disk partition, as seen above. For example, this could be an LVM logical volume, or a loop mount.
Memory-backed filesystems, such as Solaris and Linux's tmpfs.
Filesystems that present information from the kernel, such as proc and sysfs on Linux.
Network filesystems, such as NFS, Samba, …
Application-backed filesystems, of which FUSE has a large collection. Application-backed filesystems can do just about anything: make an FTP server appear as a filesystem, give an alternate view of a filesystem where file names are case-insensitive or converted to a different encoding, show archive contents as if they were directories, …
Mounting
Unix presents files in a single hierarchy, usually called “the filesystem” (but in this answer I'll not use the word “filesystem” in this sense to keep confusion down). Individual filesystems must be grafted onto that hierarchy in order to access them.³
You make a filesystem accessible by mounting it. Mounting associates the root directory of the filesystem you're mounting with an existing directory in the file hierarchy. A directory that has such an association is known as a mount point.
For example, the root filesystem is mounted at boot time (before the kernel starts any process²) to the / directory.
The proc filesystem over which some unix variants such as Solaris and Linux expose information about processes is mounted on /proc, so that /proc/42/environ designates the file /42/environ on the proc filesystem, which (on Linux, at least) contains a read-only view of the environment of process number 42.
If you have a separate filesystem e.g. for /home, then /home/john/myfile.txt designates the file whose path is /john/myfile.txt from the root of the home filesystem.
Under Linux, it's possible for the same filesystem to be accessible through more than one path, thanks to bind mounts.
A typical Linux system has many mounted filesystems. (This is an example; different distributions, versions and setups will lead to different filesystems being mounted.)
/: the root filesystem, mounted before the kernel loads the first process. The bootloader tells the kernel what to use as the root filesystem (it's usually a disk partition but could be something else such as an NFS export).
/proc: the proc filesystem, with process and kernel information.
/sys: the sysfs filesystem, with information about hardware devices.
/dev: an in-memory filesystem where device files are automatically created by udev based on available hardware.
/dev/pts: a special-purpose filesystem containing device files for running terminal emulators.
/dev/shm: an in-memory filesystem used for internal purposes by the system's standard library.
Depending on what system components you have running, you may see other special-purpose filesystems such as binfmt_misc (used by the foreign executable file format kernel subsystem), fusectl (used by FUSE), nfsd (used by the kernel NFS server), …
Any filesystem explicitly mentioned in /etc/fstab (and not marked noauto) is mounted as part of the boot process.
Any filesystem automatically mounted by HAL (or equivalent functionality) following the insertion of a removable device such as a USB key.
Any filesystem explicitly mounted with the mount command.
¹ Informally speaking here.
² Initrd and such are beyond the scope of this answer.
³ This is unlike Windows, which has a separate hierarchy for each filesystem, e.g. c: or \\hostname\sharename.
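The set of filesystems currently grafted into the hierarchy can be listed at any time from /proc/mounts (findmnt, where available, presents the same information as a tree):

```shell
# Print mount point and filesystem type for each mounted filesystem.
awk '{ print $2, $3 }' /proc/mounts | head
```

On a typical system this shows the disk-backed root alongside proc, sysfs, devtmpfs, and the other special-purpose filesystems described above.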
| What mount points exist on a typical Linux system? |
1,319,998,769,000 |
I'm getting the following error from sudo:
$ sudo ls
sudo: /etc/sudoers is owned by uid 1000, should be 0
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
Of course I can't chown it back to root without using sudo. We don't have a password on the root account either.
I honestly don't know how the system got into this mess, but now it's up to me to resolve it.
Normally I would boot into recovery mode, but the system is remote and only accessible over a VPN while booted normally. For the same reason, booting from a live CD or USB stick is also impractical.
The system is Ubuntu 16.04 (beyond EOL, don't ask), but the question and answers are probably more general.
|
The procedure described here (which may itself be an imperfect copy of this Ask Ubuntu answer) performed the miracle. I'm copying it here, and adding some more explanations.
Procedure
Open two SSH sessions to the target server.
In the first session, get the PID of bash by running:
echo $$
In the second session, start the authentication agent with:
pkttyagent --process 29824
Use the PID obtained in step 1.
Back in the first session, run:
pkexec chown root:root /etc/sudoers /etc/sudoers.d -R
Enter the password at the password prompt in the second session.
Explanation
Similar to sudo, pkexec allows an authorized user to execute a program as another user, typically root. It uses polkit for authentication; in particular, the org.freedesktop.policykit.exec action is used.
This action is defined in /usr/share/polkit-1/actions/org.freedesktop.policykit.policy:
<action id="org.freedesktop.policykit.exec">
<description>Run programs as another user</description>
<message>Authentication is required to run a program as another user</message>
<defaults>
<allow_any>auth_admin</allow_any>
<allow_inactive>auth_admin</allow_inactive>
<allow_active>auth_admin</allow_active>
</defaults>
</action>
auth_admin means that an administrative user is allowed to perform this action. Who qualifies as an administrative user?
On this particular system (Ubuntu 16.04), that is configured in /etc/polkit-1/localauthority.conf.d/51-ubuntu-admin.conf:
[Configuration]
AdminIdentities=unix-group:sudo;unix-group:admin
So any user in the group sudo or admin can use pkexec.
On a newer system (Arch Linux), it's in /usr/share/polkit-1/rules.d/50-default.rules:
polkit.addAdminRule(function(action, subject) {
return ["unix-group:wheel"];
});
So here, everyone in the wheel group is an administrative user.
In the pkexec manual page, it states that if no authentication agent is found for the current session, pkexec uses its own textual authentication agent, which appears to be pkttyagent. Indeed, if you run pkexec without first starting the pkttyagent process, you are prompted for a password in the same shell but it fails after entering the password:
polkit-agent-helper-1: error response to PolicyKit daemon: GDBus.Error:org.freedesktop.PolicyKit1.Error.Failed: No session for cookie
This appears to be an old bug in polkit that doesn't seem to be getting any traction. More discussion.
The trick of using two shells is merely a workaround for this issue.
| How to restore a broken sudoers file without being able to use sudo? |
1,319,998,769,000 |
How do I silently extract files, without displaying status?
|
man unzip:
-q perform operations quietly (-qq = even quieter). Ordinarily
unzip prints the names of the files it's extracting or testing,
the extraction methods, any file or zipfile comments that may be
stored in the archive, and possibly a summary when finished with
each archive. The -q[q] options suppress the printing of some
or all of these messages.
| Unix system(“unzip archive.zip”) Extracting Zip Files Silently |
1,319,998,769,000 |
I'm working on an embedded system with the busybox version of dd. I'm trying to test an erase of the drive from an outside utility; however, dd does not read from the disk again after the erase, but shows me the cached data.
I've narrowed it down to dd: when I do an initial dd, see the data, restart my system to flush the cache, do the erase, and then run dd again, it comes up with all zeros.
However, if I do dd on factory settings, erase the drive, and do dd again without restarting, it won't show me all zeros until a restart.
I've read in the GNU manpage that dd supports the iflag opt, with a nocache flag, but busybox does not support that option so that's out of the question.
My question is how can I force dd to read from the disk again rather than from cache?
|
You could try
sync
echo 3 > /proc/sys/vm/drop_caches
which drops all sorts of caches.
For details see /usr/src/linux/Documentation/sysctl/vm.txt on drop_caches.
Note: the question was about busybox dd which did not support iflag=direct at the time. It was added in busybox v1.33.0 (2020-12-29), see busybox dd: support for O_DIRECT i/o. See the other answers for usage examples.
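With a dd that does support it (GNU dd, or busybox 1.33 and later), a cache-bypassing read looks like the sketch below. Note that O_DIRECT needs block-aligned sizes and is refused by some filesystems (tmpfs, for example), so treat this as an illustration rather than a universal recipe:

```shell
# Read a file back through O_DIRECT, bypassing the page cache.
f=$(mktemp ./direct-test.XXXXXX)
dd if=/dev/zero of="$f" bs=4096 count=1 2>/dev/null
dd if="$f" of=/dev/null iflag=direct bs=4096 2>/dev/null && echo "direct read ok"
rm -f "$f"
```

Against a block device (dd if=/dev/sda iflag=direct ...) the same flag makes each run hit the disk instead of the cache.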
| Force dd not to cache or not to read from cache |
1,319,998,769,000 |
An answer to the question "Allowing a regular user to listen to a port below 1024", specified giving an executable additional permissions using setcap such that the program could bind to ports smaller than 1024:
setcap 'cap_net_bind_service=+ep' /path/to/program
What is the correct way to undo these permissions?
|
To remove capabilities from a file use the -r flag
setcap -r /path/to/program
This will result in the program having no capabilities.
| Unset `setcap` additional capabilities on executable |
1,319,998,769,000 |
Is it possible to make Linux kernel completely ignore the floppy disk controller? I do not have the drive but obviously my motherboard does contain the controller. I would like to disable the /dev/fd0 device node somehow to avoid Thunar and other tools detecting it and probing it.
|
On Ubuntu, the floppy driver is loaded as a module. You can blacklist this module so it doesn't get loaded:
echo "blacklist floppy" | sudo tee /etc/modprobe.d/blacklist-floppy.conf
sudo rmmod floppy
sudo update-initramfs -u
Immediately and upon rebooting, the floppy driver should be banished for good.
| Linux, disable /dev/fd0 (floppy) |
1,319,998,769,000 |
Is there a Unix/Linux equivalent of Process Monitor, whether GUI or CUI?
If it makes a difference, I'm looking at Ubuntu, but if there's an equivalent for other systems (Mac, other Linux variants like Fedora, etc.) then knowing any of those would be useful too.
Edit:
Process Monitor is for monitoring system calls (such as file creation or writes), while Process Explorer is for monitoring process status (which is like System Monitor). I'm asking for the former, not the latter. :-)
|
The console standby for this is top, but there are alternatives like my favorite htop that give you a little more display flexibility and allow you a few more operations on the processes.
A less interactive view that is better for use in scripts would be the ps program and all its relatives.
Edit: Based on your clarified question, you might note that strace handles watching system calls made by a given process including all read-write operations and os function calls. You can activate it on the command line before the program you want to track or attach to a running process by hitting s on a process selected in htop.
| Process Monitor equivalent for Linux? |
1,319,998,769,000 |
This is mostly out of curiosity; I'm trying to understand how event handling works
at a low level, so please don't just point me to software that will do it for me.
If for example I want to write a program in C/C++ that reacts to mouse clicks,
I assume I need to use a system call to hook some function to the kernel,
or maybe you need to just constantly check the status of the mouse, I don't know.
I assume it's possible since just about everything is possible in C/C++, being so low level, I'm mostly interested in how it works, even though I'll probably never have to implement it myself.
The question is how it works in Linux: are there certain system calls, C libraries, etc.?
|
If you're writing a real-world program that uses the mouse in Linux, you're most likely writing an X application, and in that case you should ask the X server for mouse events. Qt, GTK, and libsdl are some popular C libraries that provide functions for accessing mouse, keyboard, graphics, timers, and other features needed to write GUI programs. Ncurses is a similar library for terminal applications.
But if you're exploring your system, or you can't use X for whatever reason, here is how it works at the kernel interface.
A core idea in the UNIX philosophy is that "everything is a file". More specifically, as many things as possible should be accessible through the same system calls that you use to work with files. And so the kernel interface to the mouse is a device file. You open() it, optionally call poll() or select() on it to see if there's incoming data, and read() to read the data.
In pre-USB times, the specific device file was often a serial port, e.g. /dev/ttyS0, or a PS/2 port, /dev/psaux. You talked to the mouse using whatever hardware protocol was built into the mouse. These days, the /dev/input/* subsystem is preferred, as it provides a unified, device-independent way of handling many different input devices. In particular, /dev/input/mice will give you events from any mouse attached to your system, and /dev/input/mouseN will give you events from a particular mouse. In most modern Linux distributions, these files are created dynamically when you plug in a mouse.
For more information about exactly what you would read or write to the mouse device file, you can start with input/input.txt in the kernel documentation. Look in particular at sections 3.2.2 (mousedev) and 3.2.4 (evdev), and also sections 4 and 5.
| How do mouse events work in linux? |
1,319,998,769,000 |
I have a Debian Wheezy server that's been running for a while with an encrypted drive. The password for the encrypted drive (/dev/sda5) was lost when my encrypted password file was corrupted.
I'd like to be able to reboot this server, but that will of course require that password. Since the drive is clearly in a decrypted state, is there a way to change the password without knowing the old one?
cryptsetup luksChangeKey /dev/sda5 requires the password of the volume.
I could of course rsync everything off and rebuild, but I'd like to avoid that. I looked through memory (#cat /dev/mem | less), but was unable to find it (which is a very good thing!).
|
Yes, you can do this by accessing the master key while the volume is decrypted.
The quick and dirty to add a new passphrase:
device=/dev/sda5
volume_name=foo
cryptsetup luksAddKey $device --master-key-file <(dmsetup table --showkeys $volume_name | awk '{ print $5 }' | xxd -r -p)
device and volume_name should be set appropriately.
volume_name is the name of the decrypted volume, the one you see in /dev/mapper.
Explanation:
LUKS volumes encrypt their data with a master key. Each passphrase you add simply stores a copy of this master key encrypted with that passphrase. So if you have the master key, you simply need to use it in a new key slot.
Lets tear apart the command above.
$ dmsetup table --showkeys $volume_name
This dumps a bunch of information about the actively decrypted volume. The output looks like this:
0 200704 crypt aes-xts-plain64 53bb7da1f26e2a032cc9e70d6162980440bd69bb31cb64d2a4012362eeaad0ac 0 7:2 4096
Field #5 is the master key.
$ dmsetup table --showkeys $volume_name | awk '{ print $5 }' | xxd -r -p
Not going to show the output of this as it's binary data, but what this does is grab the master key for the volume, and then convert it into raw binary data which is needed later.
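To see what that last conversion step does, run it on a short hex string (these bytes are just the first four from the sample output above, not a real key):

```shell
# xxd -r -p turns printable hex into raw bytes; od displays them again.
printf '53bb7da1' | xxd -r -p | od -An -tx1
```

The full master key goes through the same round trip, just 32 or 64 bytes at a time.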
$ cryptsetup luksAddKey $device --master-key-file <(...)
This is telling cryptsetup to add a new key to the volume. Normally this action requires an existing key, however we use --master-key-file to tell it we want to use the master key instead.
The <(...) is shell process substitution. It basically executes everything inside, sends the output to a pipe, and then substitutes the <(...) with a path to that pipe.
So the whole command is just a one-liner to condense several operations.
| Change password on a LUKS filesystem without knowing the password |
1,319,998,769,000 |
Routing table entries have an attribute scope. I would like to know how the change from global to link (or the other way round) affects the network system.
|
Suppose we have a NIC configured with 3 IPs with different scopes:
14: ens160: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether 36:ee:4c:d0:90:3a brd ff:ff:ff:ff:ff:ff
inet 172.22.0.1/24 scope host ens160
inet 172.21.0.1/24 scope link ens160
inet 172.20.0.1/24 scope global ens160
Suppose we have this route for ens160 in the route table:
172.20.0.0/24 dev ens160 proto kernel scope link src 172.20.0.1
As we see, there is a scope setting on the NIC and in the route.
If a route has src specified, as in this case, Linux completely ignores the
scope settings both in the route and in the NIC settings, and simply uses
src ip = 172.20.0.1 in packets flowing out of the NIC.
Suppose we have another route:
4.4.4.4 scope link
If src ip is not specified in the route, Linux looks at what scope the route has;
in our case scope = link. Linux then goes to the NIC settings and searches for an IP
with the same scope; in our case the IP with scope=link is 172.21.0.1/24.
So for dst ip = 4.4.4.4, Linux will use src ip = 172.21.0.1.
If scope is not specified in a route, it means scope = global.
Example:
35.35.35.35 dev ens160
Next, let's look at the default route:
default via 172.16.102.1 dev ens160 onlink
It does not have a scope specified, which means scope = global.
As the default route does not have src specified, Linux will search
ens160 for an IP with scope=global and use it as the src ip.
Next, suppose a route has one scope and the NIC IP has another scope.
Example:
NIC
14: vasya2: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether 36:ee:4c:d0:90:3a brd ff:ff:ff:ff:ff:ff
inet 172.22.0.1/24 scope host vasya2
a route
14.14.14.14 scope link
What happens when we ping 14.14.14.14?
The route has scope=link but the NIC only has an IP with scope=host.
The point is that an IP with scope=host can serve as backend only for a route
with scope=host; in other cases Linux cannot use such an IP. So Linux will use
src ip = 0.0.0.0 for dst ip 14.14.14.14.
(It actually also depends on whether the NIC is a real physical one or, for instance,
a dummy; if the NIC is a dummy, Linux will use some other IP from another NIC that
has scope=global.)
General rule: if a route does not have src specified, then
an IP with scope=host can serve as backend only for a route with scope=host;
an IP with scope=link can serve as backend for a route with scope=host or scope=link;
an IP with scope=global can serve as backend for a route with any scope.
I'm quite surprised by such an uncomfortable architecture.
If you want to forget about all this scope stuff, just use the src field
in the route table entries.
| What is the interface scope (global vs. link) used for? |
1,319,998,769,000 |
rdesktop and xfreerdp are both linux clients for RDP.
However, from their respective websites it is not clear what the advantages/drawbacks of using one over the other are.
I found one post, which indicated that xfreerdp has more features than rdesktop.
But what are these extra features?
How is the performance (or responsiveness) and clipboard support in both of them?
I am looking to use an RDP client (on Linux Mint 17) to connect to a few Windows computers (Win 7 and 8) and Linux servers running xrdp.
|
FreeRDP (xfreerdp, whose Debian package name is freerdp-x11) was, in 2015, considerably less used than rdesktop according to the Debian Popularity Contest stats, in part because it was so much newer:
#rank name inst vote old recent no-files (maintainer)
1429 rdesktop 56497 4281 41399 10775 42 (Laszlo Boszormenyi)
3056 freerdp-x11 14232 1389 9845 2992 6 (Mike Gabriel)
However, as of 2020, that is no longer true:
#rank name inst vote old recent no-files (maintainer)
4439 freerdp-x11 11869 582 10856 426 5 (Not in sid)
4597 rdesktop 11099 1191 9443 458 7 (Laszlo Boszormenyi)
7319 freerdp2-x11 3891 704 1500 1686 1 (Debian Remote Maintainers)
The old freerdp-x11 package, removed from Debian in Feb 2018, outranks the older rdesktop while its replacement, freerdp2-x11, still has some catching up to do. I assume the smaller install count is the result of fewer people actually needing this Windows-only solution or perhaps a hint of xpra and other next-gen solutions taking over.
According to FreeRDP on Wikipedia,
FreeRDP was forked in 2009 from rdesktop with the aim of modularizing the code, addressing various issues, and implementing new features.
... but Wikipedia's list of features do not break out which came from rdesktop and which are "new." The FreeRDP 1.0 release announcement (Jan 2012) did offer this list of new features, which presumably are not also available on rdesktop:
RemoteFX
Both encoder and decoder
SSE2 and NEON optimization
NSCodec
RemoteApp
Working, minor glitches
Multimedia Redirection
ffmpeg support
Network Level Authentication (NLA)
NTLMv2
Certificate validation
FIPS-compliant RDP security
new build system (cmake)
added official logo and icon
FreeRDP also has a server (listed as experimental in the 1.0 release) while rdesktop does not.
| What are the differences between rdesktop and xfreerdp? |
1,319,998,769,000 |
I'm running a Linux OS that was built from scratch. I'd like to save the kernel message buffer (dmesg) to a file that will remain persistent between reboots.
I've tried running syslogd but it just opened a new log file, /var/log/messages, with neither the existing kernel message buffer, nor any new messages the kernel generated after syslogd was launched.
How can the kernel message buffer be saved to a persistent log file?
|
You need to look at either /etc/rsyslog.conf or /etc/syslog.conf. If you have a line early on such as:
*.* -/var/log/syslog
Everything, including the stuff from dmesg, should go to that file. To target it better:
kern.* -/var/log/dmesg
If that fails for some reason, you could periodically (e.g. via cron):
dmesg > /var/log/dmesg
Depending on how big the dmesg buffer is (this is compiled into the kernel, or set via the log_buf_len parameter) and how long your system has been up, that will keep a record of the kernel log since it started.
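For instance, the periodic approach could be a crontab entry like the following (the ten-minute interval and the target path are just examples):

```
# hypothetical root crontab entry: snapshot the kernel ring buffer every 10 minutes
*/10 * * * * dmesg > /var/log/dmesg
```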
If you want to write the dmesg output continuously to a file use the -w (--follow) flag.
dmesg --follow > mydmesg.log
| How can dmesg content be logged into a file? |
1,319,998,769,000 |
What is the maximum supportable RAM by Linux? Assume that hardware is/supports 64-bit. Among all Linux distros.
Does it go up to 16 exabytes, or is it limited like with Windows which I believe is 192 gigabytes?
|
Red Hat Enterprise Linux (RHEL)
These are probably a good basis, looking at RHEL6's capabilities, they're covered here, titled: Red Hat Enterprise Linux 6 technology capabilities and limits.
NOTE: [5] The architectural limits are based on the capabilities of the Red Hat Enterprise Linux kernel and the physical hardware. Red Hat Enterprise Linux 6 limit is based on 46-bit physical memory addressing. Red Hat Enterprise Linux 5 limit is based on 40-bit physical memory addressing. All system memory should be balanced across NUMA nodes in a NUMA-capable system.
Kernel docs
Also if you take a look at the kernel docs, Documentation/x86/x86_64/mm.txt:
Virtual memory map with 4 level page tables:
0000000000000000 - 00007fffffffffff (=47 bits) user space, different per mm
So 2^47 bytes = 128 TiB
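As a quick sanity check on that arithmetic (1 TiB = 2^40 bytes):

```python
# A 47-bit virtual address space is 2**47 bytes; expressed in TiB:
print(2**47 // 2**40)  # → 128
```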
| What is maximum RAM supportable by Linux? |
1,319,998,769,000 |
This has baffled me for a few weeks now. I have a Kyocera network printer set up in CUPS, and whenever I try to print to it I seem to end up with n² as many copies as I request. That is,
I try to print 2 copies of a document and I get 4
I try to print 5 copies of a document and I get 25
I try to print 60 copies of a document unattended, it runs out of paper, and I wander around the building depositing the extra copies in many recycling bins so as not to implicate myself too directly as the culprit
I cannot begin to imagine how to diagnose this, but besides being mildly amusing it does mean that to get my desired 60 copies of a document I have to go to some esoteric lengths (e.g. print 7 copies, print 3 copies, print 1 copy two times) which was amusing at first but has quickly gotten old.
So I am posting here in the hopes that someone can reassure me that I am not crazy, and hope that maybe someone might have experienced this before and know of a way to fix it?
I am printing a PDF from Document Viewer 3.18.2
|
FWIW, I had the very same issue with a Brother QL-1050 label printer under Debian Sid. It was not an application bug as suggested in comments, but a CUPS/driver issue. You can confirm this by running lp or lpr and seeing if they are affected as well:
lp -d YOURPRINTER -n 2 /some/file.pdf
lpr -P YOURPRINTER -# 2 /some/file.pdf
I managed to solve the problem by editing /usr/lib/cups/filter/brother_lpdwrapper_ql1050, and modifying the line
CUPSOPTION=`echo "$5 Copies=$4" | sed -e …
into
CUPSOPTION=`echo "$5" | sed -e …
(Copies=1 also works).
I guess the number of copies was fed in twice somehow.
There must be a similar file for your printer, and though I guess the name and definition of CUPSOPTION may vary, those options are probably defined there.
| CUPS prints n² as many copies as I want |
1,319,998,769,000 |
Is there a way to back out of all SSH connections and close PuTTY in "one shot"? I work in Windows 7 and use PuTTY to SSH to various Linux hosts.
An example of the way I find myself working:
SSH to host1 with PuTTY...
banjer@host1:~> #...doin some work...ooh! need to go check something on host8...
banjer@host1:~> ssh host8
banjer@host8:~> #...doin some work...OK time for lunch. lets close putty...
banjer@host8:~> exit
banjer@host1:~> exit
Putty closes.
Per above, any way to get from host8 to closing PuTTY in one shot? Sometimes I find myself up to 5 or 10 hosts deep. I realize I can click the X to close the PuTTY window, but I like to make sure my SSH connections get closed properly by using the exit command. I also realize I'm asking for tips on how to increase laziness. I'll just write it off as "how can I be more efficient".
|
Try using the ssh connection termination escape sequence.
In the ssh session, enter ~. (tilde dot). You won't see the characters when you type them, but the session will terminate immediately.
$ ~.
$ Connection to me.myhost.com closed.
From man 1 ssh
The supported escapes (assuming the default ‘~’) are:
~. Disconnect.
~^Z Background ssh.
~# List forwarded connections.
~& Background ssh at logout when waiting for forwarded
connection / X11 sessions to terminate.
~? Display a list of escape characters.
~B Send a BREAK to the remote system (only useful for SSH protocol
version 2 and if the peer supports it).
~C Open command line. Currently this allows the addition of port
forwardings using the -L, -R and -D options (see above). It also
allows the cancellation of existing remote port-forwardings using
-KR[bind_address:]port. !command allows the user to execute a
local command if the PermitLocalCommand option is enabled in
ssh_config(5). Basic help is available, using the -h option.
~R Request rekeying of the connection (only useful for SSH protocol
version 2 and if the peer supports it).
| exit out of all SSH connections in one command and close PuTTY |
1,319,998,769,000 |
What exactly is a "stable" Linux distribution and what are the (practical) consequences of using an "unstable" distribution?
Does it really matter for casual users (i.e. not sysadmins) ?
I've read this and this but I haven't got a clear answer yet.
"Stable" in Context:
I've seen words and phrases like "Debian Stable" and "Debian Unstable" and things like "Debian is more stable than Ubuntu".
|
In the context of Debian specifically, and more generally when many distributions describe themselves, stability isn’t about day-to-day lack of crashes, it’s about the stability of the interfaces provided by the distribution, both programming interfaces and user interfaces. It’s better to think of stable v. development distributions than stable v. “unstable” distributions.
A stable distribution is one where, after the initial release, the kernel and library interfaces won’t change. As a result, third parties can build programs on top of the distribution, and expect them to continue working as-is throughout the life of the distribution. A stable distribution provides a stable foundation for building more complex systems. In RHEL, whose base distribution moves even more slowly than Debian, this is described explicitly as API and ABI stability. This works forwards as well as backwards: thus, a binary built on Debian 10.5 should work as-is on 10.9 but also on the initial release of Debian 10. (This is one of the reasons why stable distributions never upgrade the C library in a given release.)
This is a major reason why bug fixes (including security fixes) are rarely done by upgrading to the latest version of a given piece of software, but instead by patching the version of the software present in the distribution to fix the specific bug only. Keeping a release consistent also allows it to be considered as a known whole, with a better-defined overall behaviour than in a constantly-changing system; minimising the extent of changes made to fix bugs helps keep the release consistent.
Stability as defined for distributions also affects users, but not so much through program crashes etc.; rather, users of rolling distributions or development releases of distributions (which is what Debian unstable and testing are) have to regularly adjust their uses of their computers because the software they use undergoes major upgrades (for example, bumping LibreOffice). This doesn’t happen inside a given release stream of a stable distribution. This could explain why some users might perceive Debian as more stable than Ubuntu: if they track non-LTS releases of Ubuntu, they’ll get major changes every six months, rather than every two years in Debian.
Programs in a stable distribution do end up being better tested than in a development distribution, but the goal isn't for the development distribution to contain more bugs than the stable distribution: after all, packages in the development distribution are always supposed to be good enough for the next release. Bugs are found and fixed during the stabilisation process leading to a release though, and they can also be found and fixed throughout the life of a release. But minor bugs are more likely to be fixed in the development distribution than in a stable distribution.
In Debian, packages which are thought to cause issues go to “experimental”, not “unstable”.
| What does it mean for a Linux distribution to be stable and how much does it matter for casual users? |
1,319,998,769,000 |
Why is Perl installed by default with most Linux distributions?
|
The answer is/isn't sexy, depending on your point of view.
Perl is very useful. Lots of the system utilities are written in or depend on perl. Most systems won't operate properly if Perl is uninstalled.
A few years ago FreeBSD went through a lot of effort to remove Perl as a dependency for the base system. It wasn't an easy task.
| Why is Perl installed by default with most Linux distributions? |
1,319,998,769,000 |
Maybe I haven't had enough coffee yet today, but I can't remember or think of any reason why /proc/PID/cmdline should be world-readable - after all, /proc/PID/environ isn't.
Making it readable only by the user (and maybe the group. and root, of course) would prevent casual exposure of passwords entered as command-line arguments.
Sure, it would affect other users running ps and htop and the like - but that's a good thing, right? That would be the point of not making it world-readable.
|
I suspect the main, and perhaps only, reason is historical — /proc/.../cmdline was initially world-readable, so it remains that way for backwards compatibility. cmdline was added in 0.98.6, released on December 2, 1992, with mode 444; the changelog says
- /proc filesystem extensions. Based on ideas (and some code) by
Darren Senn, but mostly written by yours truly. More about that
later.
I don’t know when “later” was; as far as I can tell, Darren Senn’s ideas are lost in the mists of time.
environ is an interesting counter-example to the backwards compatibility argument: it started out world-readable, but was made readable only by its owner in 1.1.85. I haven’t found the changelog for that so I don’t know what the reasoning was.
The overall accessibility and visibility of /proc/${pid} (including /proc/${pid}/cmdline) can be controlled using proc’s hidepid mount option, which was added in version 3.3 of the kernel. The gid mount option can be used to give full access to a specific group, e.g. so that monitoring processes can still see everything without running as root.
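As a sketch, hidepid can be applied via an fstab entry for /proc; hidepid=2 hides other users' /proc/${pid} directories entirely (hidepid=1 merely denies access to their contents), and the gid value 1001 below is a hypothetical monitoring group:

```
# /etc/fstab — restrict /proc/<pid> visibility; members of gid 1001 keep full access
proc  /proc  proc  defaults,hidepid=2,gid=1001  0  0
```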
| Is there any reason why /proc/*/cmdline is world-readable? |
1,319,998,769,000 |
Currently, when I want to change owner/group recursively, I do this:
find . -type f -exec chown <owner>.<group> {} \;
find . -type d -exec chown <owner>.<group> {} \;
But that can take several minutes for each command. I heard that there was a way to do this so that it changes all the files at once (much faster), instead of one at a time, but I can't seem to find the info. Can that be done?
|
Use chown's recursive option:
chown -R owner:group * .[^.]*
Specifying both * and .[^.]* will match all the files and directories that find would. The recommended separator nowadays is : instead of .. (As pointed out by justins, using .* is unsafe since it can be expanded to include . and .., resulting in chown changing the ownership of the parent directory and all its subdirectories.)
If you want to change the current directory's ownership too, this can be simplified to
chown -R owner:group .
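If you do want to stick with find (e.g. to filter by type or pattern), the slowness in the question comes from `-exec ... \;` forking one chown per file; terminating with `+` instead batches many paths into each chown invocation. A runnable sketch, using the current user and group so the command is safe to try (substitute the real owner and group):

```shell
# "+" passes as many paths as fit into each chown call,
# instead of one chown process per file as with "\;"
find . -exec chown "$(id -un):$(id -gn)" {} +
```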
| A quicker way to change owner/group recursively? [duplicate] |
1,319,998,769,000 |
Sometimes I need to look up certain words through all the manual pages. I am aware of apropos, but if I understand its manual right, it restricts search to the descriptions only.
Each manual page has a short description available within it. apropos searches the descriptions for instances of keyword.
For example, if I look up a word like 'viminfo', I get no results at all...
$ apropos viminfo
viminfo: nothing appropriate.
... although this word exists in a later section of the manual of Vim (which is installed on my system).
-i {viminfo}
When using the viminfo file is enabled, this option sets the filename to use, instead of the default "~/.vim‐
info". This can also be used to skip the use of the .viminfo file, by giving the name "NONE".
So how can I look up a word through every section of every manual?
|
From man man:
-K, --global-apropos
Search for text in all manual pages. This is a brute-force
search, and is likely to take some time; if you can, you should
specify a section to reduce the number of pages that need to be
searched. Search terms may be simple strings (the default), or
regular expressions if the --regex option is used.
This directly opens the manpage (vim, then ex, then gview, ...) for me, so you could add another option, like -w to get an idea of which manpage will be displayed.
$ man -wK viminfo
/usr/share/man/man1/vim.1.gz
/usr/share/man/man1/vim.1.gz
/usr/share/man/man1/gvim.1.gz
/usr/share/man/man1/gvim.1.gz
/usr/share/man/man1/run-one.1.gz
/usr/share/man/man1/gvim.1.gz
/usr/share/man/man1/gvim.1.gz
/usr/share/man/man1/run-one.1.gz
/usr/share/man/man1/run-one.1.gz
...
| How to search the whole manual pages on Linux? |
1,319,998,769,000 |
I would like to have multiple NICs (eth0 and wlan0) in the same subnet and to serve as a backup for the applications on the host if one of the NICs fail. For this reason I have created an additional routing table. This is how /etc/network/interfaces looks:
iface eth0 inet static
address 192.168.178.2
netmask 255.255.255.0
dns-nameserver 8.8.8.8 8.8.4.4
post-up ip route add 192.168.178.0/24 dev eth0 src 192.168.178.2
post-up ip route add default via 192.168.178.1 dev eth0
post-up ip rule add from 192.168.178.2/32
post-up ip rule add to 192.168.178.2/32
iface wlan0 inet static
wpa-conf /etc/wpa_supplicant.conf
wireless-essid xyz
address 192.168.178.3
netmask 255.255.255.0
dns-nameserver 8.8.8.8 8.8.4.4
post-up ip route add 192.168.178.0/24 dev wlan0 src 192.168.178.3 table rt2
post-up ip route add default via 192.168.178.1 dev wlan0 table rt2
post-up ip rule add from 192.168.178.3/32 table rt2
post-up ip rule add to 192.168.178.3/32 table rt2
That works for connecting to the host: I can still SSH into it if one of the interfaces fails. However, the applications on the host cannot initialize a connection to the outside world if eth0 is down. That is my problem.
I have researched that topic and found the following interesting information:
When a program initiates an outbound connection it is normal for it to
use the wildcard source address (0.0.0.0), indicating no preference as
to which interface is used provided that the relevant destination
address is reachable. This is not replaced by a specific source
address until after the routing decision has been made. Traffic
associated with such connections will not therefore match either of
the above policy rules, and will not be directed to either of the
newly-added routing tables. Assuming an otherwise normal
configuration, it will instead fall through to the main routing table.
http://www.microhowto.info/howto/ensure_symmetric_routing_on_a_server_with_multiple_default_gateways.html
What I want is for the main route table to have more than one default gateway (one on eth0 and one on wlan0) and to go to the default gateway via eth0 by default and via wlan0 if eth0 is down.
Is that possible? What do I need to do to achieve such a functionality?
|
Solved it myself.
There seems to be very little information about the networking stuff that you can do with Linux, so I have decided to document and explain my solution in detail. This is my final setup:
3 NICs: eth0 (wire), wlan0 (built-in wifi, weak), wlan1 (usb wifi
adapter, stronger signal than wlan0)
All of them on a single subnet,
each of them with their own IP address.
eth0 should be used for both incoming and outgoing traffic by default.
If eth0 fails then wlan1 should be used.
If wlan1 fails then wlan0 should be used.
First step:
Create a new route table for every interface in /etc/iproute2/rt_tables. Let's call them rt1, rt2 and rt3
#
# reserved values
#
255 local
254 main
253 default
0 unspec
#
# local
#
#1 inr.ruhep
1 rt1
2 rt2
3 rt3
Second step: Network configuration in /etc/network/interfaces. This is the main part and I'll try to explain as much as I can:
auto eth0 wlan0
allow-hotplug wlan1
iface lo inet loopback
iface eth0 inet static
address 192.168.178.99
netmask 255.255.255.0
dns-nameserver 8.8.8.8 8.8.4.4
post-up ip route add 192.168.178.0/24 dev eth0 src 192.168.178.99 table rt1
post-up ip route add default via 192.168.178.1 dev eth0 table rt1
post-up ip rule add from 192.168.178.99/32 table rt1
post-up ip rule add to 192.168.178.99/32 table rt1
post-up ip route add default via 192.168.178.1 metric 100 dev eth0
post-down ip rule del from 0/0 to 0/0 table rt1
post-down ip rule del from 0/0 to 0/0 table rt1
iface wlan0 inet static
wpa-conf /etc/wpa_supplicant.conf
wireless-essid xyz
address 192.168.178.97
netmask 255.255.255.0
dns-nameserver 8.8.8.8 8.8.4.4
post-up ip route add 192.168.178.0/24 dev wlan0 src 192.168.178.97 table rt2
post-up ip route add default via 192.168.178.1 dev wlan0 table rt2
post-up ip rule add from 192.168.178.97/32 table rt2
post-up ip rule add to 192.168.178.97/32 table rt2
post-up ip route add default via 192.168.178.1 metric 102 dev wlan0
post-down ip rule del from 0/0 to 0/0 table rt2
post-down ip rule del from 0/0 to 0/0 table rt2
iface wlan1 inet static
wpa-conf /etc/wpa_supplicant.conf
wireless-essid xyz
address 192.168.178.98
netmask 255.255.255.0
dns-nameserver 8.8.8.8 8.8.4.4
post-up ip route add 192.168.178.0/24 dev wlan1 src 192.168.178.98 table rt3
post-up ip route add default via 192.168.178.1 dev wlan1 table rt3
post-up ip rule add from 192.168.178.98/32 table rt3
post-up ip rule add to 192.168.178.98/32 table rt3
post-up ip route add default via 192.168.178.1 metric 101 dev wlan1
post-down ip rule del from 0/0 to 0/0 table rt3
post-down ip rule del from 0/0 to 0/0 table rt3
If you type ip rule show you should see the following:
0: from all lookup local
32756: from all to 192.168.178.98 lookup rt3
32757: from 192.168.178.98 lookup rt3
32758: from all to 192.168.178.99 lookup rt1
32759: from 192.168.178.99 lookup rt1
32762: from all to 192.168.178.97 lookup rt2
32763: from 192.168.178.97 lookup rt2
32766: from all lookup main
32767: from all lookup default
This tells us that traffic incoming or outgoing from the IP address "192.168.178.99" will use the rt1 route table. So far so good. But traffic that is locally generated (for example you want to ping or ssh from the machine to somewhere else) needs special treatment (see the big quote in the question).
The first four post-up lines in /etc/network/interfaces are straightforward and explanations can be found on the internet, the fifth and last post-up line is the one that makes magic happen:
post-up ip route add default via 192.168.178.1 metric 100 dev eth0
Note how we haven't specified a route-table for this post-up line. If you don't specify a route table, the information will be saved in the main route table that we saw in ip rule show. This post-up line puts a default route in the "main" route table that is used for locally generated traffic that is not a response to incoming traffic. (For example an MTA on your server trying to send an e-mail.)
The three interfaces all put a default route in the main route table, albeit with different metrics. Let's take a look at the main route table with ip route show:
default via 192.168.178.1 dev eth0 metric 100
default via 192.168.178.1 dev wlan1 metric 101
default via 192.168.178.1 dev wlan0 metric 102
192.168.178.0/24 dev wlan0 proto kernel scope link src 192.168.178.97
192.168.178.0/24 dev eth0 proto kernel scope link src 192.168.178.99
192.168.178.0/24 dev wlan1 proto kernel scope link src 192.168.178.98
We can see that the main route table has three default routes, albeit with different metrics. The highest priority is eth0, then wlan1 and then wlan0 because lower metric numbers indicate a higher priority. Since eth0 has the lowest metric this is the default route that is going to be used for as long as eth0 is up. If eth0 goes down, outgoing traffic will switch to wlan1.
With this setup we can type ping 8.8.8.8 in one terminal and ifdown eth0 in another. ping should still work: ifdown eth0 removes the default route related to eth0, so outgoing traffic switches to wlan1.
The post-down lines make sure that the related route tables get deleted from the routing policy database (ip rule show) when the interface goes down, in order to keep everything tidy.
The problem that is left is that when you pull the plug from eth0 the default route for eth0 is still there and outgoing traffic fails. We need something to monitor our interfaces and to execute ifdown eth0 if there's a problem with the interface (i.e. NIC failure or someone pulling the plug).
Last step: enter ifplugd. That's a daemon that watches interfaces and executes ifup/ifdown if you pull the plug or if there's a problem with the wifi connection. Configure it in /etc/default/ifplugd:
INTERFACES="eth0 wlan0 wlan1"
HOTPLUG_INTERFACES=""
ARGS="-q -f -u0 -d10 -w -I"
SUSPEND_ACTION="stop"
You can now pull the plug on eth0, outgoing traffic will switch to wlan1 and if you put the plug back in, outgoing traffic will switch back to eth0. Your server will stay online as long as any of the three interfaces work. For connecting to your server you can use the ip address of eth0 and if that fails, the ip address of wlan1 or wlan0.
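To see which interface locally generated traffic would use at any moment, you can ask the kernel directly (a quick sanity check on the metrics, not part of the original setup):

```shell
# Shows the route the kernel would pick for an outbound packet, i.e. which
# default route currently "wins" by metric; with the setup above this should
# name eth0 while it is up, then wlan1, then wlan0.
ip route get 8.8.8.8 2>/dev/null || echo "no route to 8.8.8.8"
```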
| Is it possible to have multiple default gateways for outbound connections? |
1,319,998,769,000 |
Has anyone used the gold linker before? To link a fairly large project, I had to use this as opposed to the GNU ld, which threw up a few errors and failed to link.
How is the gold linker able to link large projects where ld fails? Is there some kind of memory trickery somewhere?
|
The gold linker was designed as an ELF-specific linker, with the intention of producing a more maintainable and faster linker than BFD ld (the “traditional” GNU binutils linker). As a side-effect, it is indeed able to link very large programs using less memory than BFD ld, presumably because there are fewer layers of abstraction to deal with, and because the linker’s data structures map more directly to the ELF format.
I’m not sure there’s much documentation which specifically addresses the design differences between the two linkers, and their effect on memory use. There is a very interesting series of articles on linkers by Ian Lance Taylor, the author of the various GNU linkers, which explains many of the design decisions leading up to gold. He writes that
The linker I am now working on, called gold, will be my third. It is exclusively an ELF linker. Once again, the goal is speed, in this case being faster than my second linker. That linker has been significantly slowed down over the years by adding support for ELF and for shared libraries. This support was patched in rather than being designed in.
(The second linker is BFD ld.)
| What is the gold linker? |
1,319,998,769,000 |
I need to know if a process with a given PID has opened a port without using external commands.
I must then use the /proc filesystem. I can read the /proc/$PID/net/tcp file for example and get information about TCP ports opened by the process. However, on a multithreaded process, the /proc/$PID/task/$TID directory will also contains a net/tcp file. My question is :
do I need to go over all the threads' net/tcp files, or will ports opened by threads be written into the process's net/tcp file?
|
I can read the /proc/$PID/net/tcp file for example and get information about TCP ports opened by the process.
That file is not a list of tcp ports opened by the process. It is a list of all open tcp ports in the current network namespace, and for processes running in the same network namespace is identical to the contents of /proc/net/tcp.
To find ports opened by your process, you would need to get a list of socket descriptors from /proc/<pid>/fd, and then match those descriptors to the inode field of /proc/net/tcp.
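A minimal sketch of that matching in Python (the function name is mine; it handles IPv4 only — IPv6 sockets would additionally need /proc/net/tcp6 — and reading another user's fd directory requires root):

```python
import os
import re

def ports_for_pid(pid):
    """Local TCP (IPv4) ports held open by `pid`.

    Collects socket inodes from /proc/<pid>/fd and matches them
    against the inode column of /proc/net/tcp.
    """
    inodes = set()
    fd_dir = "/proc/%d/fd" % pid
    for fd in os.listdir(fd_dir):
        try:
            target = os.readlink(os.path.join(fd_dir, fd))
        except OSError:
            continue  # fd closed between listdir and readlink
        m = re.match(r"socket:\[(\d+)\]", target)
        if m:
            inodes.add(m.group(1))
    ports = set()
    with open("/proc/net/tcp") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local_address, inode = fields[1], fields[9]
            if inode in inodes:
                # local_address is hex "ADDR:PORT"
                ports.add(int(local_address.split(":")[1], 16))
    return ports
```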
| Read "/proc" to know if a process has opened a port |
1,319,998,769,000 |
I regularly ssh to a CentOS 5 box. Somehow the keys are mapped so that Control+D logs me out of my current shell. If I am sudo'ed to another user it puts me back to the previous user. If I am not sudo'ed it just disconnects me. How can I keep this from happening? I regularly use Control+D to exit the Python interpreter and sometimes I accidentally press it more than once.
|
You're looking for the IGNOREEOF environment variable if you use bash:
IGNOREEOF
Controls the action of an interactive shell on receipt of an EOF character as the sole input.
If set, the value is the number of consecutive EOF characters which must be typed as the
first characters on an input line before bash exits. If the variable exists but does not
have a numeric value, or has no value, the default value is 10. If it does not exist, EOF
signifies the end of input to the shell.
So export IGNOREEOF=42 and you'll have to press Ctrl+D forty-two times before it actually quits your shell.
POSIX set has an -o ignoreeof setting too. So consult your shell's documentation to see if your shell has this (it should), and to check its exact semantics.
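For instance, either of the following could go in a shell startup file (the value 3 is just an example):

```shell
# bash: require 3 extra Ctrl+D presses before an interactive shell exits
export IGNOREEOF=3

# POSIX equivalent: ignore EOF entirely (type "exit" to leave the shell)
set -o ignoreeof
```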
| How can I keep control+d from disconnecting my session? |
1,451,679,960,000 |
I have a computer with:
Linux superhost 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u3 x86_64 GNU/Linux
It runs Apache on port 80 on all interfaces, and it does not show up in netstat -planA inet; however, it unexpectedly can be found in netstat -planA inet6:
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp6 0 0 :::5672 :::* LISTEN 2402/beam.smp
tcp6 0 0 :::111 :::* LISTEN 1825/rpcbind
tcp6 0 0 :::9200 :::* LISTEN 2235/java
tcp6 0 0 :::80 :::* LISTEN 2533/apache2
tcp6 0 0 :::34611 :::* LISTEN 1856/rpc.statd
tcp6 0 0 :::9300 :::* LISTEN 2235/java
...
tcp6 0 0 10.0.176.93:80 10.0.76.98:53704 TIME_WAIT -
tcp6 0 0 10.0.176.93:80 10.0.76.98:53700 TIME_WAIT -
I can reach it by TCP4 just fine, as seen above. However, even these connections are listed under tcp6. Why?
|
By default, if you don't specify an address in Apache's Listen directive, it handles IPv6 using IPv4-mapped IPv6 addresses. You can take a look at Apache's IPv6 documentation.
The output of netstat doesn't mean Apache is not listening on an IPv4 address. It's an IPv4-mapped IPv6 address.
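If you would rather have Apache open an IPv4-only socket, you can give Listen an explicit address (adjust the port to your setup):

```apacheconf
# Bind an IPv4-only socket; netstat will then list it under "tcp" instead of "tcp6"
Listen 0.0.0.0:80
```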
| netstat — why are IPv4 daemons listening to ports listed only in -A inet6? |
1,451,679,960,000 |
I want a script to sleep unless a certain file is modifed/deleted (or a file created in a certain directory, or ...). Can this be achieved in some elegant way? The simplest thing that comes to my mind is a loop that sleeps for some time before checking the status again, but maybe there is a more elegant way?
|
On Linux, you can use the kernel's inotify feature. Tools for scripting it are provided by the inotify-tools package.
Example use from wiki:
#!/bin/sh
EVENT=$(inotifywait --format '%e' ~/file1) # blocking without looping
[ $? != 0 ] && exit
[ "$EVENT" = "MODIFY" ] && echo 'file modified!'
[ "$EVENT" = "DELETE_SELF" ] && echo 'file deleted!'
# etc...
| Can a bash script be hooked to a file? |
1,451,679,960,000 |
In the past, I learned that in Linux/UNIX file systems, directories are just files, which contain the filenames and inode numbers of the files inside the directory.
Is there a simple way to see the content of a directory? I mean the way the file names and inodes are stored/organized.
I'm not looking for ls, find or something similiar. I also don't want to see the content of the files inside a directory. I want to see the implementation of the directories. If every directory is just a text file with some content, maybe a simple way exists to see the content of this text file.
In the bash in Linux it is not possible to do a cat folder. The output is just Is a directory.
Update The question How does one inspect the directory structure information of a unix/linux file? addresses the same issue but it has no helpful solution like the one from mjturner.
|
The tool to display inode detail for a filesystem will be filesystem specific. For the ext2, ext3, ext4 filesystems (the most common Linux filesystems), you can use debugfs, for XFS xfs_db, for ZFS zdb. For btrfs some information is available using the btrfs command.
For example, to explore a directory on an ext4 filesystem (in this case / is /dev/sda1):
# ls src
Animation.js Map.js MarkerCluster.js ScriptsUtil.js
Directions.js MapTypeId.js markerclusterer.js TravelMode.js
library.js MapUtils.js Polygon.js UnitSystem.js
loadScripts.js Marker.js Polyline.js Waypoint.js
# ls -lid src
664488 drwxrwxrwx 2 vagrant vagrant 4096 Jul 15 13:24 src
# debugfs /dev/sda1
debugfs: imap <664488>
Inode 664488 is part of block group 81
located at block 2622042, offset 0x0700
debugfs: dump src src.out
debugfs: quit
# od -c src.out
0000000 250 # \n \0 \f \0 001 002 . \0 \0 \0 204 030 \n \0
0000020 \f \0 002 002 . . \0 \0 251 # \n \0 024 \0 \f 001
0000040 A n i m a t i o n . j s 252 # \n \0
0000060 030 \0 \r 001 D i r e c t i o n s . j
0000100 s \0 \0 \0 253 # \n \0 024 \0 \n 001 l i b r
0000120 a r y . j s \0 \0 254 # \n \0 030 \0 016 001
0000140 l o a d S c r i p t s . j s \0 \0
0000160 255 # \n \0 020 \0 006 001 M a p . j s \0 \0
0000200 256 # \n \0 024 \0 \f 001 M a p T y p e I
0000220 d . j s 257 # \n \0 024 \0 \v 001 M a p U
0000240 t i l s . j s \0 260 # \n \0 024 \0 \t 001
0000260 M a r k e r . j s \0 \0 \0 261 # \n \0
0000300 030 \0 020 001 M a r k e r C l u s t e
0000320 r . j s 262 # \n \0 034 \0 022 001 m a r k
0000340 e r c l u s t e r e r . j s \0 \0
0000360 263 # \n \0 024 \0 \n 001 P o l y g o n .
0000400 j s \0 \0 264 # \n \0 024 \0 \v 001 P o l y
0000420 l i n e . j s \0 265 # \n \0 030 \0 016 001
0000440 S c r i p t s U t i l . j s \0 \0
0000460 266 # \n \0 030 \0 \r 001 T r a v e l M o
0000500 d e . j s \0 \0 \0 267 # \n \0 030 \0 \r 001
0000520 U n i t S y s t e m . j s \0 \0 \0
0000540 270 # \n \0 240 016 \v 001 W a y p o i n t
0000560 . j s \0 305 031 \n \0 214 016 022 001 . U n i
0000600 t S y s t e m . j s . s w p \0 \0
0000620 312 031 \n \0 p 016 022 001 . U n i t S y s
0000640 t e m . j s . s w x \0 \0 \0 \0 \0 \0
0000660 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
In the above, we start by finding the inode of directory src (664488), then dump its contents into the file src.out and display that with od. As you can see, the names of all of the files in that directory (Animation.js, etc.) are visible in the dump.
This is just a start - see the debugfs manual page or type help within debugfs for more information.
If you're using ext4, you can find more information about the structure and layout of directory entries in the kernel documentation.
| Simple way to see the content of directories in Linux/UNIX file systems |
1,451,679,960,000 |
Somehow my Debian went to read only in root file system. I have no idea how this could have happened.
For example when I am in /root folder and type command nano and after that press Tab to list possible file in that folder I get the message:
root@debian:~# nano
-bash: cannot create temp file for here-document: Read-only file system
The same for the cd command when I type cd /home and press Tab to list paths I have this:
root@debian:~# cd /home
-bash: cannot create temp file for here-document: Read-only file system
I also have problems with software like apt and others. Can't even apt-get update. I have a lot of errors like this:
Err http://ftp.de.debian.org wheezy-updates/main Sources
406 Not Acceptable
W: Not using locking for read only lock file /var/lib/apt/lists/lock
W: Failed to fetch http://ftp.de.debian.org/debian/dists/wheezy/Release rename failed, Read-only file system (/var/lib/apt/lists/ftp.de.debian.org_debian_dists_wheezy_Release -> /var/lib/apt/lists/ftp.de.debian.org_debian_dists_wheezy_Release).
W: Failed to fetch http://security.debian.org/dists/wheezy/updates/main/source/Sources 404 Not Found
W: Failed to fetch http://security.debian.org/dists/wheezy/updates/main/binary-amd64/Packages 404 Not Found
W: Failed to fetch http://ftp.de.debian.org/debian/dists/wheezy-updates/main/source/Sources 406 Not Acceptable
E: Some index files failed to download. They have been ignored, or old ones used instead.
W: Not using locking for read only lock file /var/lib/dpkg/lock
I have a lot of problems in the system.
Is it possible to fix that? How can I check what happened? What should I look for in the logs?
I know it could be because of the line in /etc/fstab file:
/dev/mapper/debian-root / ext4 errors=remount-ro 0 1
but what is the problem? I can't find anything, or maybe I don't know where to look.
Edit:
I did search messages logs and found only this:
kernel: [ 5.709326] EXT4-fs (dm-0): re-mounted. Opts: (null)
kernel: [ 5.977131] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro
kernel: [ 7.174856] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
I guess it's correct, because I have the same entries on other debian machines.
I found something in dmesg (I cut that output a bit because was a lot standard ext4 things)
root@gs3-svn:/# dmesg |grep ext4
EXT4-fs error (device dm-0) in ext4_reserve_inode_write:4507: Journal has aborted
EXT4-fs error (device dm-0) in ext4_reserve_inode_write:4507: Journal has aborted
EXT4-fs error (device dm-0) in ext4_dirty_inode:4634: Journal has aborted
EXT4-fs error (device dm-0): ext4_discard_preallocations:3894: comm rsyslogd: Error loading buddy information for 1
EXT4-fs warning (device dm-0): ext4_end_bio:250: I/O error -5 writing to inode 133130 (offset 132726784 size 8192 starting block 159380)
EXT4-fs error (device dm-0): ext4_journal_start_sb:327: Detected aborted journal
5 errors and 1 warning. Any ideas? Is it safe to use mount -o remount,rw / ?
|
The default behaviour for most Linux file systems is to safeguard your data. When the kernel detects an error in the storage subsystem it will make the filesystem read-only to prevent (further) data corruption.
You can tune this somewhat with the mount option errors={continue|remount-ro|panic} which are documented in the system manual (man mount).
When your root file-system encounters such an error, most of the time the error won't be recorded in your log files, as they will now be read-only too. Fortunately, since it is a kernel action, the original error message is recorded in memory first, in the kernel ring buffer. Unless it has already been flushed from memory, you can display the contents of the ring buffer with the dmesg command.
Most real hard disks support SMART and you can use smartctl to try and diagnose the disk health.
Depending on the error messages, you could decide it is still safe to use the file system and return it to read-write condition with mount -o remount,rw /
In general though, disk errors are a precursor to complete disk failure. Now is the time to create a back-up of your data or to confirm the status of your existing back-ups.
| read only root filesystem |
1,451,679,960,000 |
For a long period I thought the default behavior of the sort program was using ASCII order. However, when I input the following lines into sort without any arguments:
#
@
I got:
@
#
But according to the ASCII table, # is 35 and @ is 64. Another example is:
A
a
And the output is:
a
A
Can anybody explain this? By the way, what is 'dictionary-order' when using sort -d?
|
Looks like you are using a non-POSIX locale.
Try:
export LC_ALL=C
and then sort.
info sort clearly says:
(1) If you use a non-POSIX locale (e.g., by setting `LC_ALL' to
`en_US'), then `sort' may produce output that is sorted differently
than you're accustomed to. In that case, set the `LC_ALL' environment
variable to `C'. Note that setting only `LC_COLLATE' has two problems.
First, it is ineffective if `LC_ALL' is also set. Second, it has
undefined behavior if `LC_CTYPE' (or `LANG', if `LC_CTYPE' is unset) is
set to an incompatible value. For example, you get undefined behavior
if `LC_CTYPE' is `ja_JP.PCK' but `LC_COLLATE' is `en_US.UTF-8'.
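The effect is easy to reproduce. Here is a small sketch; the output under a non-POSIX locale such as en_US.UTF-8 would differ (typically case-insensitive, punctuation-agnostic ordering):

```shell
# In the C locale, sort compares raw byte values, matching the ASCII table:
# '#' (35) < '@' (64) < 'A' (65) < 'a' (97).
sorted=$(printf '#\n@\na\nA\n' | LC_ALL=C sort)
printf '%s\n' "$sorted"
```

Running the same pipeline with LC_ALL=en_US.UTF-8 (if that locale is generated on your system) shows the dictionary-like ordering described in the question.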
| What's the default order of Linux sort? |
1,451,679,960,000 |
First background. I am developing a driver for Logitech game-panel devices. It's a keyboard with a screen on it. The driver is working nicely but by default the device is handled by HID. In order to prevent HID taking over the device before my driver, I can blacklist it in hid-core.c. This works but is not the best solution as I am working with several people and we all have to keep patching our HID module which is becoming a chore, especially as it often involves rebuilding initramfs and such.
I did some research on this problem and I found this mailing list post, which eventually took me to this article on LWN. This describes a mechanism for binding devices to specific drivers at runtime. This seems like exactly what I need.
So, I tried it. I was able to unbind the keyboard from HID. This worked and as expected I could no longer type on it. But when I tried to bind it to our driver I get "error: no such device" and the operation fails.
So my question is: How do I use kernel bind/unbind operations to replicate what happens when you blacklist a HID device in hid-core and supply your own driver? - that is - to replace the need to patch hid-core.c all the time?
The source of our driver is here: https://github.com/ali1234/lg4l
|
Ok, turns out the answer was staring me in the face.
Firstly, whether using our custom driver, or using the generic one that normally takes over the device, it's still all ultimately controlled by HID, and not USB.
Previously I tried to unbind it from HID, which is not the way to go. HID has sub-drivers; the one that takes over devices that have no specialized driver is called generic-usb. This is what I needed to unbind from, before binding to hid-g19. Also, I needed to use the HID address, which looks like "0003:046d:c229.0036", and not the USB address, which looks like "1-1.1:1.1".
So before rebinding I would see this on dmesg:
generic-usb 0003:046D:C229.0036: input,hiddev0,hidraw4: USB HID v1.11 Keypad [Logitech G19 Gaming Keyboard] on usb-0000:00:13.2-3.2/input1
Then I do:
echo -n "0003:046D:C229.0036" > /sys/bus/hid/drivers/generic-usb/unbind
echo -n "0003:046D:C229.0036" > /sys/bus/hid/drivers/hid-g19/bind
And then I see on dmesg:
hid-g19 0003:046D:C229.0036: input,hiddev0,hidraw4: USB HID v1.11 Keypad [Logitech G19 Gaming Keyboard] on usb-0000:00:13.2-3.2/input1
So like I said, staring me in the face, because the two key pieces of information are the first two things on the line when the device binds...
| How to use Linux kernel driver bind/unbind interface for USB-HID devices? |
1,451,679,960,000 |
I am not sure if it is the only possible way, but I read that in order to put a single pixel onto the screen at a location of your choice, one has to write something into a place called the framebuffer.
So I became curious whether it is possible to enter this place and write something into it in order to display a single pixel somewhere on the screen.
|
Yes: outside the X server, in a tty, try this command:
cat /dev/urandom >/dev/fb0
If colourful pixels fill the screen, then your setup is OK, and you can try playing with this small script:
#!/usr/bin/env bash
fbdev=/dev/fb0 ; width=1280 ; bpp=4
color="\x00\x00\xFF\x00" #red colored
function pixel()
{ xx=$1 ; yy=$2
printf "$color" | dd bs=$bpp seek=$(($yy * $width + $xx)) \
of=$fbdev &>/dev/null
}
x=0 ; y=0 ; clear
for i in {1..500}; do
pixel $((x++)) $((y++))
done
where the function 'pixel' should be the answer: it writes a pixel to the screen by changing byte values (blue-green-red-alpha) at the x-y offset of device /dev/fbX, which is the frame buffer for the video card.
Or try a one-liner pixel draw (yellow at x:y = 200:100, if the width is 1024):
printf "\x00\xFF\xFF\x00" | dd bs=4 seek=$((100 * 1024 + 200)) >/dev/fb0
UPDATE: this code works even inside the X server, if we just configure X to use the frame buffer, by specifying fb0 inside /usr/share/X11/xorg.conf.d/99-fbdev.conf
| Is it possible to access to the framebuffer in order to put a pixel on the screen from the command line? |
1,451,679,960,000 |
When using sudo on Linux, it asks for root password, but only the first time you run it. If you run another sudo command, it remember you already entered the password previously and doesn't ask for it:
thomas@ubuntu:~$ sudo id
[sudo] password for thomas: ******
uid=0(root) gid=0(root) groups=0(root)
thomas@ubuntu:~$ sudo id
uid=0(root) gid=0(root) groups=0(root)
How does sudo do it? Where is this information stored? My idea is that it remembers the terminal id (like pts/1), but where is this stored? The first sudo process is ended when it's done with the command, right?
I know sudo is a setuid program, so it has root's privileges all the time, but I still can't think of a good place to store an information that a user has already entered a password. Is there some daemon process involved?
|
Where is this information stored?
It's probably under /var/db/sudo or /var/run/sudo and you'll probably find directories of usernames with files under them ordered by tty number.
The actual privileges granted, including how long the session lasts before you have to enter your password again, depend on how sudoers is set up. There are settings to grant/restrict a lot of different things, but those aren't stored in these files, which only store timestamps. How long a session lasts, or when sudo needs to prompt for your password again, is determined by the delta between the current time and the session timestamp in this directory, and by how long sudo is set up to allow a session to last.
| How does sudo remember you already entered root's password? |
1,451,679,960,000 |
I've heard many times now that it is, and I'm mostly using ss now. But I sometimes get frustrated with differences between the two, and I would love some insight.
Also, I can't be the only one who thinks of Hitler when using ss. Not a great name.
|
Found an article on deprecation from 2011. It seems like the whole net-tools package was not maintained for a while and so it was deprecated. In Debian 9 it is not even installed by default. From the project page it seems like there were no updates at least since 2011.
But you can easily install netstat (and e.g. ifconfig) and keep using them. I would probably only use them for listing stuff though.
Installing on Debian 9:
apt-get install net-tools
PS: For more information you might want to see another Q&A about ifconfig deprecation (ifconfig is part of the same package): https://serverfault.com/questions/458628/should-i-quit-using-ifconfig
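For reference, the usual iproute2 replacements for the deprecated net-tools commands are sketched below (command names are the standard iproute2 ones, but the exact output columns differ from their net-tools counterparts):

```shell
# Modern iproute2 equivalents for common net-tools commands:
sockets=$(ss -tuna)            # netstat -tuna : TCP/UDP sockets, numeric
printf '%s\n' "$sockets" | head -n 3
ip -brief addr show            # ifconfig      : interface addresses
ip route show                  # route -n      : routing table
```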
| Why is netstat deprecated? [closed] |
1,451,679,960,000 |
If I do the following:
touch /tmp/test
and then perform
ls -la /tmp/
I could see the test file with 0 Bytes in the directory.
But how does the Operating System handle a concept of 0 Bytes. If I put it in layman terms:
0 Bytes is no memory at all, hence nothing is created.
Creation of a file, must or should at least require certain memory, right?
|
A file is (roughly) three separate things:
An "inode", a metadata structure that keeps track of who owns the file, permissions, and a list of blocks on disk that actually contain the data.
One or more directory entries (the file names) that point to that inode
The actual blocks of data themselves
When you create an empty file, you create only the inode and a directory entry pointing to that inode. Same for sparse files (dd if=/dev/null of=sparse_file bs=10M seek=1).
When you create hardlinks to an existing file, you just create additional directory entries that point to the same inode.
I have simplified things here, but you get the idea.
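A quick way to see this on a GNU system (a sketch assuming GNU stat; the format flags differ on BSD):

```shell
# An empty file consumes an inode and a directory entry, but no data blocks.
tmpfile=$(mktemp)                      # mktemp creates an empty file
size=$(stat -c %s "$tmpfile")          # apparent size in bytes
blocks=$(stat -c %b "$tmpfile")        # allocated 512-byte blocks
inode=$(stat -c %i "$tmpfile")         # the inode it does occupy
echo "size=$size blocks=$blocks inode=$inode"
rm -f "$tmpfile"
```

The inode number is non-zero even though size and blocks are both 0: the "memory" the file consumes is the inode and directory entry, not data blocks.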
| What is the concept of creating a file with zero bytes in Linux? |
1,451,679,960,000 |
Is it possible to change the background of the active (current) tmux tab?
I'm using tmux 1.9 on Ubuntu 15.04.
$ tmux -V
tmux 1.9
I tried to do:
set-option -g pane-active-border-fg red
But the result was not changed:
I expected the 3-bash* to have a red background.
|
You haven't set window active background color, you only set active panel border, try:
set-window-option -g window-status-current-bg red
| Set the active tmux tab color |
1,451,679,960,000 |
In the old days I just modified /etc/inittab. Now, with systemd, it seems to start tty[1-6] automatically, how should I disable tty[4-6]?
Looks like there's only one systemd service file, and it uses a %I to discern different tty sessions. I hope I don't need to remove that service and create each getty@ttyN.service manually.
|
There is no real need to disable "extra" TTYs as under systemd gettys are generated on demand: see man systemd-getty-generator for details. Note that, by default, this automatic spawning is done for the VTs up to VT6 only (to mimic traditional Linux systems).
As Lennart says in a blog post1:
In order to make things more efficient login prompts are now started on demand only. As you switch to the VTs the getty service is instantiated to getty@tty2.service, getty@tty5.service and so on. Since we don't have to unconditionally start the getty processes anymore this allows us to save a bit of resources, and makes start-up a bit faster.
If you do wish to configure a specific number of gettys, you can, just modify logind.conf with the appropriate entry, in this example 3:
NAutoVTs=3
1. In fact the entire series of posts—currently numbering 18— systemd for Administrators, is well worth reading.
| How to get fewer ttys with Systemd? |
1,451,679,960,000 |
On Linux (Debian, Ubuntu Mint...),
Is there any option command or something that I can use to transfer files to another user without having to do :
sudo mv /home/poney/folderfulloffiles /home/unicorn/
sudo chown -R unicorn:unicorn /home/unicorn/folderfulloffiles
|
Use rsync(1):
rsync \
--remove-source-files \
--chown=unicorn:unicorn \
/home/poney/folderfulloffiles /home/unicorn/
| Move files and change ownership at the sametime |
1,451,679,960,000 |
I'm trying to change the cpu frequency on my laptop (running Linux), and not having any success.
Here are some details:
# uname -a
Linux yoga 3.12.21-gentoo-r1 #4 SMP Thu Jul 10 17:32:31 HKT 2014 x86_64 Intel(R) Core(TM) i5-3317U CPU @ 1.70GHz GenuineIntel GNU/Linux
# cpufreq-info
cpufrequtils 008: cpufreq-info (C) Dominik Brodowski 2004-2009
Report errors and bugs to cpufreq@vger.kernel.org, please.
analyzing CPU 0:
driver: intel_pstate
CPUs which run at the same hardware frequency: 0
CPUs which need to have their frequency coordinated by software: 0
maximum transition latency: 0.97 ms.
hardware limits: 800 MHz - 2.60 GHz
available cpufreq governors: performance, powersave
current policy: frequency should be within 800 MHz and 2.60 GHz.
The governor "powersave" may decide which speed to use
within this range.
current CPU frequency is 2.42 GHz (asserted by call to hardware).
(similar information for cpus 1, 2 and 3)
# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
performance powersave
I initially had the userspace governor built into the kernel, but then I also tried building it as a module (with the same results); it was loaded while running the above commands (and I couldn't find any system messages when loading it):
# lsmod
Module Size Used by
cpufreq_userspace 1525 0
(some other modules)
And here are the commands I tried for changing the frequency:
# cpufreq-set -f 800MHz
Error setting new values. Common errors:
- Do you have proper administration rights? (super-user?)
- Is the governor you requested available and modprobed?
- Trying to set an invalid policy?
- Trying to set a specific frequency, but userspace governor is not available,
for example because of hardware which cannot be set to a specific frequency
or because the userspace governor isn't loaded?
# cpufreq-set -g userspace
Error setting new values. Common errors:
- Do you have proper administration rights? (super-user?)
- Is the governor you requested available and modprobed?
- Trying to set an invalid policy?
- Trying to set a specific frequency, but userspace governor is not available,
for example because of hardware which cannot be set to a specific frequency
or because the userspace governor isn't loaded?
Any ideas?
|
This is because your system is using the new driver called intel_pstate. There are only two governors available when using this driver: powersave and performance.
The userspace governor is only available with the older acpi-cpufreq driver (which will be automatically used if you disable intel_pstate at boot time; you then set the governor/frequency with cpupower):
disable the current driver: add intel_pstate=disable to your kernel boot line
boot, then load the userspace module: modprobe cpufreq_userspace
set the governor: cpupower frequency-set --governor userspace
set the frequency: cpupower --cpu all frequency-set --freq 800MHz
| Can't use "userspace" cpufreq governor and set cpu frequency |
1,451,679,960,000 |
A specific file on our production servers is being modified at apparently random times which do not appear to correlate with any log activity. We can't figure out what program is doing it, and there are many suspects. How can I find the culprit?
It is always the same file, at the same path, but on different servers and at different times. The boxes are managed by puppet, but the puppet logs show no activity at the time the file is modified.
What kernel hook, tool, or technique could help us find what process is modifying this file?
lsof is unsuitable for this, because the file is being opened, modified and closed very quickly. Any solution that relies upon polling (such as running lsof often) is no good.
OS: Debian testing
Kernels: Linux, 2.6.32 through 3.9, both 32 and 64-bit.
|
You can use auditd and add a rule for that file to be watched:
auditctl -w /path/to/that/file -p wa
Then watch for entries to be written to /var/log/audit/audit.log.
| Find which process is modifying a file [duplicate] |
1,451,679,960,000 |
I have an Ubuntu server running on EC2 (which I didn't install myself, just picked up an AMI). So far I'm using putty to work with it, but I am wondering how to work on it with GUI tools (I'm not familiar with Linux UI tools, but I want to learn). Silly me, I'm missing the convenience of Windows Explorer.
I currently have only Windows at home. How do I set up GUI tools to work with a remote server? Should I even do this, or should I stick to the command line? Do the answers change if I have a local linux machine to play with?
|
You can use X11 forwarding over SSH; make sure the option
X11Forwarding yes
is enabled in /etc/ssh/sshd_config on the remote server, and either enable X11 forwarding by hand with
ssh -X remoteserver
or add a line saying
ForwardX11 yes
to the relevant host entry in ~/.ssh/config
Of course, that requires a working X display at the local end, so if you're using Windows you're going to have to install something like XMing, then set up X11 forwarding in PuTTY as demonstrated in these references:
Using PuTTY and Xming to Connect to CSE
X11 Forwarding using Xming and PuTTY
Use Linux over Windows with Xming, here or here
ETA: Reading again and seeing your clarifications in the comments, SFTP might suit your needs even better, as it will let you 'mount' SFTP folders as if they're regular network drives. See here, here, here (for Windows XP/7/Vista), or here (for Windows 8).
| How do I work with GUI tools over a remote server? |
1,451,679,960,000 |
I have a process that expects an ssh tunnel connection to execute correctly and I have been using the following command:
ssh -L localhost:3306:127.0.0.1:3306 username@<mysql-machine-ip-address> -N &
I ran this successfully for 8 months. Recently our hosting provider had to make hardware changes to update our machine, and upgraded it with a new kernel for Ubuntu. It went from 4.8.3-x86_64-linode76 to 4.8.6-x86_64-linode78.
After a bunch of troubleshooting we've had to update the command to this:
ssh -L localhost:3306:127.0.0.1:3306 username@<mysql-machine-ip-address> -fN
When researching the ssh documentation, I found this description of the -f parameter:
Requests ssh to go to background just before command execution.
This is useful if ssh is going to ask for passwords or
passphrases, but the user wants it in the background. This
implies -n. The recommended way to start X11 programs at a
remote site is with something like ssh -f host xterm.
When researching the bash meaning of "&":
Place a process into the background (see multitasking in "Intermediate Use Of The UNIX Operating System").
Is there fundamentally any difference in these 2 commands?
|
Yes, there is. ssh & runs ssh in the background from the very beginning. ssh -f starts ssh in the foreground, allowing it to prompt for passwords etc., and only afterwards ssh puts itself in the background just before executing the requested command.
| SSH Tunnel in background |
1,451,679,960,000 |
I have always learned that the init process is the ancestor of all processes. Why does process 2 have a PPID of 0?
$ ps -ef | head -n 3
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 May14 ? 00:00:01 /sbin/init
root 2 0 0 May14 ? 00:00:00 [kthreadd]
|
First, “ancestor” isn't the same thing as “parent”. The ancestor can be the parent's parent's … parent's parent, and the kernel only keeps track of one level.
However, when a process dies, its children are adopted by init, so you will see a lot of processes whose parent is 1 on a typical system.
Modern Linux systems additionally have a few processes that execute kernel code, but are managed as user processes, as far as scheduling is concerned. (They don't obey the usual memory management rules since they're running kernel code.) These processes are all spawned by kthreadd (it's the init of kernel threads). You can recognize them by the fact that /proc/2/exe (normally a symbolic link to the process executable) can't be read. Also, ps lists them with a name between square brackets (which is possible for normal user processes, but unusual). Most processes whose parent process ID is 2 are kernel processes, but there are also a few kernel helper processes with PPID 2 (see below).
Processes 1 (init) and 2 (kthreadd) are created directly by the kernel at boot time, so they don't have a parent. The value 0 is used in their ppid field to indicate that. Think of 0 as meaning “the kernel itself” here.
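This is easy to verify directly from /proc without any special tools. In /proc/&lt;pid&gt;/stat, the field immediately after the parenthesised comm field is the process state, and the next one is the parent PID:

```shell
# For PID 1 the kernel records no parent, shown as PPID 0.
# /proc/1/stat looks like: "1 (comm) S 0 ..." - strip up to the
# closing paren, then the second remaining field is the PPID.
stat_line=$(cat /proc/1/stat)
rest=${stat_line##*) }
ppid_of_init=$(printf '%s\n' "$rest" | awk '{print $2}')
echo "PPID of PID 1: $ppid_of_init"
# kthreadd (PID 2) also reports PPID 0 on a full system; inside a
# container's PID namespace there may be no PID 2 at all.
cat /proc/2/stat 2>/dev/null || echo "no PID 2 in this PID namespace"
```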
Linux also has some facilities for the kernel to start user processes whose location is indicated via a sysctl parameter in certain circumstances. For example, the kernel can trigger module loading events (e.g. when new hardware is discovered, or when some network protocols are first used) by calling the program in the kernel.modprobe sysctl value. When a program dumps core, the kernel calls the program indicated by kernel.core_pattern if any. Those processes are user processes, but their parent is registered as kthreadd.
| init process: ancestor of all processes? |
1,451,679,960,000 |
I'm tuning the nofile value in /etc/security/limits.conf for my oracle user and I have a question about its behavior: does nofile limit the total number of files the user can have open for all of its processes or does it limit the total number of files the user can have open for each of its processes?
Specifically, for the following usage:
oracle hard nofile 65536
|
Most of the values¹ in limits.conf are limits that can be set with the ulimit shell command or the setrlimit system call. They are properties of a process. The limits apply independently for each process. In particular, each process can have up to nofile open files. There is no limit to the number of open files cumulated by the processes of a user.
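A quick demonstration that the limit is a per-process property (a sketch; the parent's initial value varies by system, and 256 is assumed to be below your current soft limit):

```shell
# Lowering RLIMIT_NOFILE in a subshell does not affect the parent:
parent_limit=$(ulimit -n)
child_limit=$( (ulimit -n 256; ulimit -n) )   # change applies in subshell only
after_limit=$(ulimit -n)                      # parent is unchanged
echo "parent=$parent_limit child=$child_limit after=$after_limit"
```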
The nproc limit is a bit of a special case, in that it does sum over all the processes of a user. Nonetheless, it still applies per-process: when a process calls fork to create a new process, the call is denied if the number of processes belonging to the process's euid would be larger than the process's RLIMIT_NPROC value.
The limits.conf man page explains that the limits apply to a session. This means that all the processes in a session will all have these same limits (unless changed by one of these processes). It doesn't mean that any sum is done over the processes in a session (that's not even something that the operating system tracks — there is a notion of session, but it's finer-grained than that, for example each X11 application tends to end up in its own session). The way it works is that the login process sets itself some limits, and they are inherited by all child processes.
¹ The exceptions are maxlogins, maxsyslogins and chroot, which are applied as part of the login process to deny or influence login.
| Are limits.conf values applied on a per-process basis? |
1,451,679,960,000 |
I've read in many places that Linux creates a kernel thread for each user thread in a Java VM. (I see the term "kernel thread" used in two different ways:
a thread created to do core OS work and
a thread the OS is aware of and schedules to perform user work.
I am talking about the latter type.)
Is a kernel thread the same as a kernel process, since Linux processes support shared memory spaces between parent and child, or is it truly a different entity?
|
There is absolutely no difference between a thread and a process on Linux. If you look at clone(2) you will see a set of flags that determine what is shared, and what is not shared, between the threads.
Classic processes are just threads that share nothing; you can share what components you want under Linux.
This is not the case on other OS implementations, where there are much more substantial differences.
| Are Linux kernel threads really kernel processes? |
1,451,679,960,000 |
I can ping google.com for several seconds and when I press Ctrl + C, a brief summary is displayed at the bottom:
$ ping google.com
PING google.com (74.125.131.113) 56(84) bytes of data.
64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=2 ttl=56 time=46.7 ms
64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=3 ttl=56 time=45.0 ms
64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=4 ttl=56 time=54.5 ms
^C
--- google.com ping statistics ---
4 packets transmitted, 3 received, 25% packet loss, time 3009ms
rtt min/avg/max/mdev = 44.965/48.719/54.524/4.163 ms
However, when I do the same redirecting output to log file with tee, the summary is not displayed:
$ ping google.com | tee log
PING google.com (74.125.131.113) 56(84) bytes of data.
64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=1 ttl=56 time=34.1 ms
64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=2 ttl=56 time=57.0 ms
64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=3 ttl=57 time=50.9 ms
^C
Can I get the summary as well when redirecting output with tee?
|
Turns out that there is an option in tee to ignore interrupt signals which are sent when CTRL+C is pressed. From man tee:
-i, --ignore-interrupts
ignore interrupt signals
When the whole pipeline is interrupted by SIGINT, the signal is sent to all processes in the pipeline. The problem is that tee usually receives SIGINT and exits before ping does, and ping is then killed by SIGPIPE when it tries to write its summary to the broken pipe. If SIGINT is ignored in tee, it will be delivered only to ping and the summary will be displayed:
$ ping google.com | tee --ignore-interrupts log
PING google.com (142.250.150.101) 56(84) bytes of data.
64 bytes from la-in-f101.1e100.net (142.250.150.101): icmp_seq=1 ttl=104 time=48.8 ms
64 bytes from la-in-f101.1e100.net (142.250.150.101): icmp_seq=2 ttl=104 time=51.0 ms
64 bytes from la-in-f101.1e100.net (142.250.150.101): icmp_seq=3 ttl=107 time=32.2 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 32.198/44.005/50.973/8.394 ms
So ping, upon receiving SIGINT, will eventually terminate, causing tee to see that the pipe writer has died, which eventually causes tee to terminate too (after having "digested" the input so far).
| Why does 'ping' not output a summary when redirecting output? |
1,451,679,960,000 |
I am using an embedded Arm with a Debian build. How does one list the compiled devices from the device tree? I want to see if a device is already supported.
For those reading this, the "Device Tree" is a specification/standard for adding devices to an (embedded) Linux kernel.
|
The device tree is exposed as a hierarchy of directories and files in /proc. You can cat the files, eg:
find /proc/device-tree/ -type f -exec head {} + | less
Beware, most file content ends with a null char, and some may contain other non-printing characters.
| How to list the kernel Device Tree [duplicate] |
1,451,679,960,000 |
I need to copy and over-write a large amount of files, I've used the following command:
# cp -Rf * ../
But then whenever a file with the same name exists on the destination folder I get this question:
cp: overwrite `../ibdata1'?
The Problem is that I have about 200 files which are going to be over-written and I don't think that pressing Y then Enter 200 times is the right way to do it.
So, what is the right way to that?
|
You do realise that RHEL and CentOS have tried to protect novice users by setting up aliases for the root user to prevent accidentally overwriting and deleting files?
alias cp='cp -i'
alias mv='mv -i'
alias rm='rm -i'
The -i switch is what requires confirmation when modifying or removing existing files. Because alias expansion happens before execution of the command, even the use of the --force (-f) switch will still require confirmation.
You can remove the alias permanently by editing the /root/.bashrc file, remove it for the duration of a session with unalias cp, or, for a single command, use one of:
use the full path /bin/cp
use quotes "cp" or 'cp' around the command
use the command keyword e.g. command cp
escape the command \cp
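A minimal sketch of the backslash escape. Note one caveat not mentioned above: non-interactive bash does not expand aliases unless expand_aliases is enabled, so the demo below defines the alias inside an explicit bash -c script (with the option set) to mimic an interactive root shell:

```shell
# Demonstrate that \cp bypasses the 'cp -i' alias: the destination is
# overwritten with no confirmation prompt.
result=$(bash -c '
  shopt -s expand_aliases          # scripts need this; interactive shells do not
  alias cp="cp -i"
  src=$(mktemp); dst=$(mktemp)
  printf "new\n" > "$src"
  printf "old\n" > "$dst"
  \cp "$src" "$dst" </dev/null     # backslash: the -i alias is not applied
  cat "$dst"
  rm -f "$src" "$dst"
')
echo "$result"
```

Had the alias taken effect, cp -i reading from /dev/null would have declined the overwrite and the destination would still contain "old".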
| Copy over existing files without confirmation? |
1,451,679,960,000 |
This is the process I want to kill:
sooorajjj@Treako ~/Desktop/MerkMod $ sudo netstat -tunap | grep :80
tcp6 0 0 :::80 :::* LISTEN 20570/httpd
|
There are several ways to find which running process is using a port.
Using fuser will give you the PID(s) of the multiple instances associated with the listening port.
sudo apt-get install psmisc
sudo fuser 80/tcp
80/tcp: 1858 1867 1868 1869 1871
After finding out, you can either stop or kill the process(es).
You can also find the PIDs and more details using lsof
sudo lsof -i tcp:80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 1858 root 6u IPv4 5043 0t0 TCP ruir.mxxx.com:http (LISTEN)
nginx 1867 www-data 6u IPv4 5043 0t0 TCP ruir.mxxx.com:http (LISTEN)
nginx 1868 www-data 6u IPv4 5043 0t0 TCP ruir.mxxx.com:http (LISTEN)
nginx 1869 www-data 6u IPv4 5043 0t0 TCP ruir.mxxx.com:http (LISTEN)
nginx 1871 www-data 6u IPv4 5043 0t0 TCP ruir.mxxx.com:http (LISTEN)
To limit to sockets that listen on port 80 (as opposed to clients that connect to port 80):
sudo lsof -i tcp:80 -s tcp:listen
To kill them automatically:
sudo lsof -t -i tcp:80 -s tcp:listen | sudo xargs kill
| Kill process running on port 80 |
1,451,679,960,000 |
This is related, but I believe not a duplicate, of "Why cannot I set the date of my GNU/Linux machines to the Epoch?".
I've discovered that one of my Linux kernels rejects attempts to set the clock to a time during the first six minutes past the epoch:
root@beaglebone:/# date 010100061970
date: cannot set date: Invalid argument
Thu 01 Jan 1970 12:06:00 AM UTC
root@beaglebone:/# date 010100071970
Thu 01 Jan 1970 12:07:00 AM UTC
But seven minutes past the epoch works fine.
Anybody have any idea why this might be? When I have a chance I'll check the kernel sources for clues.
(No, it's not a practical question, just curiosity, and therefore perhaps more suited to retrocomputing.se or something. My attempt to set the clock in this way was in error, and the kernel did me a favor by rejecting it. But, still, it's odd.)
I know from other evidence that the rejection is not happening in the date command, but in the kernel itself. The settimeofday syscall is returning -1, with errno set to EINVAL.
This was on a Debian 10 system running kernel 4.19.94-ti-r42.
I also tried it on a Debian 11 machine, kernel 5.10.179-1, and it behaved similarly, although the "forbidden region" was two whole months at the beginning of 1970 — I couldn't seem to set a date earlier than March 1.
Update: Not only has the question now been answered, but the weird little side mystery is also resolved, namely why different machines I tested it on had different behavior. The first machine I tested it on had just recently been rebooted, and it wouldn't let me set the time to a value less than seven minutes past the epoch. But the second machine had been up for a couple of months, and it wouldn't let me set the time to a date earlier than March — that is, not within the first two months past the epoch. But that makes sense, because one way of stating the constraint is that you can't have (time since 1970) being less than (time since boot).
|
This is caused by commit e1d7ba873555 ("time: Always make sure wall_to_monotonic isn't positive"). Per the commit message:
This patch fix the problem by prohibiting time from being set to a value which would cause a negative boot time. As a result one can't set the CLOCK_REALTIME time prior to (1970 + system uptime).
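One way to see that lower bound on a given machine is to combine the current uptime with the epoch. A sketch, assuming GNU date:

```shell
# /proc/uptime holds "seconds-since-boot seconds-idle"; take the first field.
uptime_s=$(cut -d ' ' -f 1 /proc/uptime)
# The earliest settable CLOCK_REALTIME is roughly epoch + uptime.
earliest=$(date -u -d "@${uptime_s%.*}")
echo "CLOCK_REALTIME cannot be set earlier than: $earliest"
```

On a freshly booted machine this prints a time a few minutes past the epoch, matching the seven-minute observation above.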
| Can't set time to wee hours of January 1, 1970 |
1,451,679,960,000 |
I tried removing LUKS encryption on my home directory using the following command:
cryptsetup luksRemoveKey /dev/mapper/luks-3fd5-235-26-2625-2456f-4353fgdgd
But it gives me an error saying:
Device /dev/mapper/luks-3fd5-235-26-2625-2456f-4353fgdgd is not a
valid LUKS device.
Puzzled, I tried the following:
cryptsetup status luks-3fd5-235-26-2625-2456f-4353fgdgd
And it says:
/dev/mapper/luks-3fd5-235-26-2625-2456f-4353fgdgd is active and is in use.
type: LUKS1
cipher: ...
It seems the encrypted device is active, but not valid. What could be wrong here?
|
ANSWER FROM 2013 - See other answers for happy times
Backup
Reformat
Restore
cryptsetup luksRemoveKey would only remove an encryption key if you had more than one. The encryption would still be there.
The Fedora Installation_Guide Section C.5.3 explains how luksRemoveKey works.
That it's "impossible" to remove the encryption while keeping the contents is just an educated guess. I base that on two things:
Because the LUKS container has a filesystem or LVM or whatever on top of it, just removing the encryption layer would require knowledge of the meaning of the data stored on top of it, which simply is not available. Also, a requirement would be that overwriting a part of the LUKS volume with its decrypted counterpart, would not break the rest of the LUKS content, and I'm not sure if that can be done.
Implementing it would solve a problem that is about as far away from the purpose of LUKS as you can get, and I find it very unlikely that someone would take the time to do that instead of something more "meaningful".
| How to remove LUKS encryption? |
1,451,679,960,000 |
The strings command behaves weirdly; apparently it doesn't stop writing to a file even if the drive runs out of space. Or perhaps I'm missing something?
I run the following:
# strings /dev/urandom > random.txt
This kept running and didn't stop even after filling the disk (a regular USB flash drive).
Then, to be quicker, I created a ramdisk and tried the same command again. It also didn't stop.
I understand that urandom isn't a regular file and that strings's output is redirected; however, in both cases above, the cat command reported an error when there was no more space:
# cat /dev/urandom > random.txt
cat: write error: No space left on device
Is this normal behavior of strings? If so, why?
Where is the data written after there's no more space left?
|
If GNU cat can't write out what it read, it will exit with an error:
/* Write this block out. */
{
/* The following is ok, since we know that 0 < n_read. */
size_t n = n_read;
if (full_write (STDOUT_FILENO, buf, n) != n)
die (EXIT_FAILURE, errno, _("write error"));
}
GNU strings, on the other hand, doesn't care whether it managed to write successfully:
while (1)
{
c = get_char (stream, &address, &magiccount, &magic);
if (c == EOF)
break;
if (! STRING_ISGRAPHIC (c))
{
unget_part_char (c, &address, &magiccount, &magic);
break;
}
putchar (c);
}
So all those writes fail, but strings continues merrily along, until it reaches end of input, which will be never.
$ strace -e write strings /dev/urandom > foo/bar
write(1, "7[\\Z\n]juKw\nl [1\nTc9g\n0&}x(x\n/y^7"..., 4096) = 4096
write(1, "\nXaki%\ndHB0\n?5:Q\n6bX-\np!E[\n'&=7\n"..., 4096) = 4096
write(1, "%M6s\n=4C.%\n&7)n\nQ_%J\ncT+\";\nK*<%\n"..., 4096) = 4096
write(1, "&d<\nj~g0\nm]=o\na=^0\n%s]2W\nM7C%\nUK"..., 4096) = -1 ENOSPC (No space left on device)
write(1, "~\nd3qQ\n^^u1#\na#5\\\n^=\t\"b\n*91_\n ]o"..., 4096) = -1 ENOSPC (No space left on device)
write(1, "L\n6QO1x\na,yE\nk>\",@Z\nyM.ur\n~z\tF\nr"..., 4096) = -1 ENOSPC (No space left on device)
write(1, "\n61]R\nyg9C\nfLVu\n<Ez:\n.tV-c\nw_'>e"..., 4096) = -1 ENOSPC (No space left on device)
write(1, "\nCj)a\nT]X:uA\n_KH\"B\nRfQ4G\n3re\t\n&s"..., 4096) = -1 ENOSPC (No space left on device)
write(1, "j\nk7@%\n9E?^N\nJ#8V\n*]i,\nXDxh?\nr_1"..., 4096) = -1 ENOSPC (No space left on device)
write(1, "ia\tI\nQ)Zw\nnV0J\nE3-W \n@0-N2v\nK{15"..., 4096) = -1 ENOSPC (No space left on device)
write(1, "\nZ~*g\n)FQn\nUY:G\ndRbN\nn..F\nvF{,\n+"..., 4096) = -1 ENOSPC (No space left on device)
...
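The cat side of this is easy to reproduce without filling a disk, using /dev/full, a Linux device that fails every write with ENOSPC:

```shell
printf 'hello\n' > f.txt
cat f.txt > /dev/full 2> err.txt    # every write fails as if the disk were full
echo "cat exit status: $?"          # non-zero: cat stopped at the first error
grep -i 'no space left' err.txt     # cat's diagnostic mentions the reason
```

Piping strings output to /dev/full instead shows the opposite behaviour: it keeps going.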
| Why won't the strings command stop? |
1,451,679,960,000 |
When I run chmod +w filename it doesn't give write permission to other, it just gives write permission to user and group.
After executing this command
chmod +w testfile.txt
running ls -l testfile.txt prints
-rw-rw-r-- 1 ravi ravi 20 Mar 10 18:09 testfile.txt
but in the case of +r and +x it works as expected.
I don't want to use chmod ugo+w filename.
|
Your specific situation
In your specific situation, we can guess that your current umask is 002 (this is a common default value) and this explains your surprise.
In that specific situation where umask value is 002 (all numbers octal).
+r means ugo+r because 002 & 444 is 000, which lets all bits be set
+x means ugo+x because 002 & 111 is 000, which lets all bits be set
but +w means ug+w because 002 & 222 is 002, which prevents the "o" bit from being set.
Other examples
With umask 022 +w would mean u+w.
With umask 007 +rwx would mean ug+rwx.
With umask 077 +rwx would mean u+rwx.
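A quick way to see the masking in action for the umask 002 case (sketch, assuming GNU coreutils for stat -c):

```shell
umask 002
touch demo.txt
chmod a-rwx demo.txt   # clear all bits so the effect of +w stands out
chmod +w demo.txt      # with umask 002 this means ug+w
stat -c '%a' demo.txt  # prints 220: write for user and group, nothing for other
```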
What would have matched your expectations
When you change umask to 000, by executing
umask 000
in your terminal, then
chmod +w file
will set permissions to ugo+w.
Side note
As suggested by ilkkachu, note that umask 000 doesn't mean that everybody can read and write all your files.
But umask 000 does mean that anyone with access to any user account on your machine (which may include programs running server services) can read and write every file you create while that mask is active and don't later change, provided the chain of containing directories up to the root also allows it.
| Why does chmod +w not give write permission to other(o) |
1,451,679,960,000 |
There are a lot of constants in the Kernel named with HORKAGE,
ATA_HORKAGE_ZERO_AFTER_TRIM
ATA_HORKAGE_NODMA
ATA_HORKAGE_ATAPI_MOD16_DMA
ATA_HORKAGE_NO_DMA_LOG
ATA_HORKAGE_NO_ID_DEV_LO
ATA_HORKAGE_NO_LOG_DIR
ATA_HORKAGE_WD_BROKEN_LPM
However, these are not really documented:
Force horkage according to libata.force and whine about it. For consistency with link selection, device number 15 selects the first device connected to the host link.
What does "horkage" mean?
|
It seems like the term Horkage was introduced with this patch by Alan Cox. The term "hork" means
(computing, slang) To foul up; to be occupied with difficulty, tangle, or unpleasantness; to be broken. I downloaded the program, but something is horked and it won't load.
You can also see this in The Jargon File's Glossary under "horked"
Broken. Confused. Trashed. Now common; seems to be post-1995. There is an entertaining web page of related definitions, few of which seem to be in live use but many of which would be in the recognition vocabulary of anyone familiar with the adjective.
The horkage list is a list of blacklisted functionality because hardware manufacturers failed to implement it properly ("horked" the implementation).
| What is "horkage"? |
1,451,679,960,000 |
I have a need to see some additional file properties for exe and dll files.
If I open Windows Explorer and add the additional columns to my view, I can see things like Company, Copyright, Product name and Product version when they exist for that file.
This data is available via Windows Explorer, so it stands to reason that the data exists somewhere in the file itself and that I should be able to extract it via the command line in Linux.
I've tried using strings, but with limited success: even in files where I know all the aforementioned fields exist, strings doesn't always show them.
I'm hoping that someone may have an alternative solution. Maybe something I haven't thought of yet, to see this information.
|
You can use ExifTool. Here is an example of its usage:
$ exiftool somefile.exe
ExifTool Version Number : 9.27
File Name : somefile.exe
Directory : .
File Size : 4.4 MB
File Modification Date/Time : 2013:08:09 12:43:10-04:00
File Access Date/Time : 2013:08:09 12:43:19-04:00
File Inode Change Date/Time : 2013:08:09 12:43:10-04:00
File Permissions : rw-------
File Type : Win32 EXE
MIME Type : application/octet-stream
Machine Type : Intel 386 or later, and compatibles
Time Stamp : 1992:06:19 18:22:17-04:00
PE Type : PE32
Linker Version : 2.25
Code Size : 37888
Initialized Data Size : 96256
Uninitialized Data Size : 0
Entry Point : 0x9c40
OS Version : 1.0
Image Version : 6.0
Subsystem Version : 4.0
Subsystem : Windows GUI
File Version Number : 3.3.0.0
Product Version Number : 3.3.0.0
File Flags Mask : 0x003f
File Flags : (none)
File OS : Win32
Object File Type : Executable application
File Subtype : 0
Language Code : Neutral
Character Set : Unicode
Comments : This installation was built with Inno Setup.
Company Name : Some company
File Description : Some company
File Version : 3.3
Legal Copyright : Copyright(c) 2009-2013 Some company
Product Name : Some company somefile
Product Version : 3.3
ExifTool supports a number of file types and meta information formats. From the exiftool(1) manpage:
Below is a list of file types and meta information formats currently
supported by ExifTool (r = read, w = write, c = create):
File Types
------------+-------------+-------------+-------------+------------
3FR r | EIP r | LA r | ORF r/w | RSRC r
3G2 r | EPS r/w | LNK r | OTF r | RTF r
3GP r | ERF r/w | M2TS r | PAC r | RW2 r/w
ACR r | EXE r | M4A/V r | PAGES r | RWL r/w
AFM r | EXIF r/w/c | MEF r/w | PBM r/w | RWZ r
AI r/w | EXR r | MIE r/w/c | PCD r | RM r
AIFF r | F4A/V r | MIFF r | PDF r/w | SO r
APE r | FFF r/w | MKA r | PEF r/w | SR2 r/w
ARW r/w | FLA r | MKS r | PFA r | SRF r
ASF r | FLAC r | MKV r | PFB r | SRW r/w
AVI r | FLV r | MNG r/w | PFM r | SVG r
BMP r | FPF r | MODD r | PGF r | SWF r
BTF r | FPX r | MOS r/w | PGM r/w | THM r/w
CHM r | GIF r/w | MOV r | PLIST r | TIFF r/w
COS r | GZ r | MP3 r | PICT r | TTC r
CR2 r/w | HDP r/w | MP4 r | PMP r | TTF r
CRW r/w | HDR r | MPC r | PNG r/w | VRD r/w/c
CS1 r/w | HTML r | MPG r | PPM r/w | VSD r
DCM r | ICC r/w/c | MPO r/w | PPT r | WAV r
DCP r/w | IDML r | MQV r | PPTX r | WDP r/w
DCR r | IIQ r/w | MRW r/w | PS r/w | WEBP r
DFONT r | IND r/w | MXF r | PSB r/w | WEBM r
DIVX r | INX r | NEF r/w | PSD r/w | WMA r
DJVU r | ITC r | NRW r/w | PSP r | WMV r
DLL r | J2C r | NUMBERS r | QTIF r | WV r
DNG r/w | JNG r/w | ODP r | RA r | X3F r/w
DOC r | JP2 r/w | ODS r | RAF r/w | XCF r
DOCX r | JPEG r/w | ODT r | RAM r | XLS r
DV r | K25 r | OFR r | RAR r | XLSX r
DVB r | KDC r | OGG r | RAW r/w | XMP r/w/c
DYLIB r | KEY r | OGV r | RIFF r | ZIP r
Meta Information
----------------------+----------------------+---------------------
EXIF r/w/c | CIFF r/w | Ricoh RMETA r
GPS r/w/c | AFCP r/w | Picture Info r
IPTC r/w/c | Kodak Meta r/w | Adobe APP14 r
XMP r/w/c | FotoStation r/w | MPF r
MakerNotes r/w/c | PhotoMechanic r/w | Stim r
Photoshop IRB r/w/c | JPEG 2000 r | APE r
ICC Profile r/w/c | DICOM r | Vorbis r
MIE r/w/c | Flash r | SPIFF r
JFIF r/w/c | FlashPix r | DjVu r
Ducky APP12 r/w/c | QuickTime r | M2TS r
PDF r/w/c | Matroska r | PE/COFF r
PNG r/w/c | GeoTIFF r | AVCHD r
Canon VRD r/w/c | PrintIM r | ZIP r
Nikon Capture r/w/c | ID3 r | (and more)
| viewing dll and exe file properties/attributes via the command line |
1,451,679,960,000 |
Say I am running some software, and then I run the package manager to upgrade it. I notice that Linux does not bring down the running process for the upgrade - it keeps running fine. How does Linux do this?
|
The reason is that Unix does not lock an executable file while it is being executed; or, where it does (as on Linux), the lock applies to the inode, not the file name. That means a process keeping the file open keeps accessing the same (old) data even after the file has been deleted (unlinked, actually) and replaced by a new one with the same name, which is essentially what a package update does.
That is one of the main differences between Unix and Windows. The latter cannot update a locked file, as it is missing a layer between file names and inodes; this makes it a major hassle to update or even install some packages, often requiring a full reboot.
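The same mechanism can be sketched at the shell level: an open file descriptor pins the old inode even after the name has been replaced.

```shell
echo "old version" > prog.bin    # stand-in for the installed binary
exec 3< prog.bin                 # a "running process" holding the file open
rm prog.bin                      # the package manager unlinks the old file...
echo "new version" > prog.bin    # ...and installs a new inode under the same name
cat <&3                          # the open descriptor still reads "old version"
```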
| Why does a software package run just fine even when it is being upgraded? |
1,451,679,960,000 |
On Linux, when you create a folder, it automatically creates two hard links to the corresponding inode.
One is the folder you asked to create; the other is the . special entry inside this folder.
Example:
$ mkdir folder
$ ls -li
total 0
124596048 drwxr-xr-x 2 fantattitude staff 68 18 oct 16:52 folder
$ ls -lai folder
total 0
124596048 drwxr-xr-x 2 fantattitude staff 68 18 oct 16:52 .
124593716 drwxr-xr-x 3 fantattitude staff 102 18 oct 16:52 ..
As you can see, both folder and .'s inside folder have the same inode number (shown with -i option).
Is there anyway to delete this special . hardlink?
It's only for experimentation and curiosity.
Also I guess the answer could apply to .. special file as well.
I tried to look into rm man but couldn't find any way to do it. When I try to remove . all I get is:
rm: "." and ".." may not be removed
I'm really curious about the whole way these things work so don't refrain from being very verbose on the subject.
EDIT: Maybe I wasn't clear with my post, but I want to understand the underlying mechanism which is responsible for . files and the reasons why they can't be deleted.
I know the POSIX standard disallows a folder with fewer than 2 hard links, but I don't really get why. I want to know if it would be possible to do it anyway.
|
It is technically possible to delete ., at least on EXT4 filesystems. If you create a filesystem image in test.img, mount it and create a test folder, then unmount it again, you can edit it using debugfs:
debugfs -w test.img
cd test
unlink .
debugfs doesn't complain and dutifully deletes the . directory entry in the filesystem. The test directory is still usable, with one surprise:
sudo mount test.img /mnt/temp
cd /mnt/temp/test
ls
shows only
..
so . really is gone. Yet cd ., ls ., pwd still behave as usual!
I'd previously done this test using rmdir ., but that deletes the directory's inode (huge thanks to BowlOfRed for pointing this out), which leaves test a dangling directory entry and is the real reason for the problems encountered. In this scenario, the test folder then becomes unusable; after mounting the image, running ls produces
ls: cannot access '/mnt/test': Structure needs cleaning
and the kernel log shows
EXT4-fs error (device loop2): ext4_lookup:1606: inode #2: comm ls: deleted inode referenced: 38913
Running e2fsck in this situation on the image deletes the test directory entirely (the directory inode is gone so there's nothing to restore).
All this shows that . exists as a specific entity in the EXT4 filesystem. I got the impression from the filesystem code in the kernel that it expects . and .. to exist, and warns if they don't (see namei.c), but with the unlink .-based test I didn't see that warning. e2fsck doesn't like the missing . directory entry, and offers to fix it:
$ /sbin/e2fsck -f test.img
e2fsck 1.43.3 (04-Sep-2016)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Missing '.' in directory inode 30721.
Fix<y>?
This re-creates the . directory entry.
| How to unlink (remove) the special hardlink "." created for a folder? |
1,451,679,960,000 |
I have an awk script, new.awk:
BEGIN { FS = OFS = "," }
NR == 1 {
for (i = 1; i <= NF; i++)
f[$i] = i
}
NR > 1 {
begSecs = mktime(gensub(/[":-]/, " ", "g", $(f["DateTime"])))
endSecs = begSecs + $(f["TotalDuration"])
$(f["CallEndTime"]) = strftime("%Y-%m-%d %H:%M:%S", endSecs)
}
{ print }
I am calling this in shell
awk -f new.awk sample.csv
... but I can see the changes in the terminal. How to make the change in-place in the file, like when using sed -i?
|
GNU awk (commonly found on Linux systems), since version 4.1.0, can include an "awk source library" with -i or --include on the command line (see How to safely use gawk's -i option or @include directive? along with Stéphane's comment below for security issues related to this). One of the source libraries that is distributed with GNU awk is one called inplace:
$ cat file
hello
there
$ awk -i inplace '/hello/ { print "oh,", $0 }' file
$ cat file
oh, hello
As you can see, this makes the output of the awk code replace the input file. The line saying there is not kept as the program does not output it.
With an awk script in a file, you would use it like
awk -i inplace -f script.awk datafile
If the awk variable INPLACE_SUFFIX is set to a string, then the library would make a backup of the original file with that as a filename suffix.
awk -i inplace -v INPLACE_SUFFIX=.bak -f script.awk datafile
If you have several input files, each file will be individually edited in place. But you can turn off in-place editing for a file (or a set of files) by using inplace=0 on the command line before that file:
awk -i inplace -f script.awk file1 file2 inplace=0 file3 inplace=1 file4
In the above command, file3 would not be edited in place.
For a more portable "in-place edit" of a single file, use
tmpfile=$(mktemp)
cp file "$tmpfile" &&
awk '...some program here...' "$tmpfile" >file
rm "$tmpfile"
This would copy the input file to a temporary location, then apply the awk code on the temporary file while redirecting to the original filename.
Doing the operations in this order (running awk on the temporary file, not on the original file) ensures that the file meta-data (permissions and ownership) of the original file is not modified.
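A concrete run of that portable pattern might look like this (hypothetical data file; POSIX awk):

```shell
printf '1,foo\n2,bar\n' > data.csv
tmpfile=$(mktemp)
cp data.csv "$tmpfile" &&
awk -F, -v OFS=, '{ $2 = toupper($2) } 1' "$tmpfile" > data.csv
rm "$tmpfile"
cat data.csv    # the second field is now upper-cased in place
```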
| How to change a file in-place using awk? (as with "sed -i") |
1,451,679,960,000 |
Could you please explain why a compiled binary file (in, for example, /usr/sbin) has write permission for the root user?
To my mind, since the file is compiled, writing to it directly has no use and may expose the file to some security issue.
A script (e.g. a bash file) may be writable because it is basically a text file, but why is it the same for a compiled file, where no write is actually necessary as far as I know?
Thank you in advance for your feedback.
|
It doesn't really matter if the files in /bin (or any other standard directory where executables are kept) are writable by root or not. On a Linux server I'm using, they are writable by root, but on my OpenBSD machine, they're not.
As long as they are not writable by the group or by "other"!
There is no security issue having, e.g.
-rwxr-xr-x 1 root root 126584 Feb 18 2016 /bin/ls
If someone wanted to overwrite it, they'd have to be root, and if they are root and overwrite it, then they are either
installing a new version, or
clumsy, or
an attacker with root permissions already.
Another thing to consider is that root can write to the file no matter if it's write protected or not, because... root.
Notice too that "a script" is as much an executable as a binary file. A script doesn't need to be writable "because it's a text file". If anything, it should probably just have the same permission as the other executables in the same directory.
Don't go changing the permissions on everything now! That can wreak all sorts of havoc and potentially confuse package managers who might verify that permissions are set properly. It may also make the system vulnerable if you accidentally change the permissions in the wrong way on a security-critical application.
Just assume that the permissions on the executables are set correctly, unless you find something that looks really odd, in which case you should probably contact the relevant package maintainer to verify rather than start changing stuff.
From the comments and on chat, there was a call for some history.
The history of the permissions on binaries on Linux is not anything I know anything about. It may be speculated that they simply inherited the permissions from the directory, or just from the default umask of Linux, but I really don't know.
What I do know is that OpenBSD installs the binaries in the base system1 with permission mode 555 by default (-r-xr-xr-x). This is specified in a Makefile fragment in /usr/share/mk/bsd.own.mk which sets BINMODE to 555 (unless it's set already). This is later used when installing the executables during make build in /usr/src.
I had a look at the annotated CVS log for this file, and found that this line in the file is unchanged since it was imported from NetBSD in 1995.
On NetBSD, the file was first put into CVS in 1993, with BINMODE set to 555.
The FreeBSD project seems to have used the exact same file as NetBSD since at least 1994, and with a later commit adds a hint in the commit message that the old files were from the 4.4BSD release of the Berkeley Software Distribution.
Beyond that, the CSRG at Berkeley kept the sources in SCCS, but their repository is available in Git form on GitHub2. The file that we're giving the forensic treatment here seems to have been committed by Keith Bostic (or someone in close proximity to him) in 1990.
So that's that story. If you want the why, then I suppose we'll have to ask Keith. I was kinda hoping to see a commit message to a change saying "this needs to be 555 because ...", but no.
1 BSD systems have a stricter division into "base system" and "3rd party packages" (ports/packages) than Linux. The base system is a coherent unit that provides a complete set of facilities for running the operating system, while the ports or packages are seen as "local software" and are installed under /usr/local.
2 A more comprehensive GitHub repository of Unix releases from the 70's onwards is available too.
| Why are executables in e.g. /usr/sbin writable by root? |
1,451,679,960,000 |
Consider the following situation:
At my home, I have a router (which is connected to internet), server (S) and my main machine (M). S is reachable from the internet (it has static IP), and it is up 24/7, while M is not.
Sometimes, I want to make some app (which listens on some port on M, for example 8888) accessible from outer internet.
For that, I wanted to set up some port on S (2222) to forward to M's port 8888, so that anybody accessing S:2222 would feel like he was accessing M:8888.
I tried to use ssh port forwarding, my best attempt was as follows:
ssh -L 2222:M:8888 -N M
But that only allows me to access 2222 port from server itself, not from other machines.
Is there some way to do it properly? Preferably, I'd like it to be a simple command, which I would be able to start and shut down with ^C when I don't need that forwarding anymore.
|
Yes, this is called GatewayPorts in SSH. An excerpt from ssh_config(5):
GatewayPorts
Specifies whether remote hosts are allowed to connect to local
forwarded ports. By default, ssh(1) binds local port forwardings
to the loopback address. This prevents other remote hosts from
connecting to forwarded ports. GatewayPorts can be used to spec‐
ify that ssh should bind local port forwardings to the wildcard
address, thus allowing remote hosts to connect to forwarded
ports. The argument must be “yes” or “no”. The default is “no”.
And you can use localhost instead of M in the forwarding, as you're forwarding to the same machine as you're SSH-ing to -- if I understand your question correctly.
So, the command will become this:
ssh -L 2222:localhost:8888 -N -o GatewayPorts=yes hostname-of-M
and will look like this in netstat -nltp:
tcp 0 0 0.0.0.0:2222 0.0.0.0:* LISTEN 5113/ssh
Now anyone accessing this machine at port 2222 TCP will actually talk to localhost:8888 as seen in machine M. Note that this is not the same as plain forwarding to port 8888 of M.
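If you use this forwarding regularly, the same setup can be kept in a client configuration entry instead of a long command line. A hypothetical host alias; a LocalForward with a * bind address is equivalent to GatewayPorts=yes:

```
# ~/.ssh/config (client side) -- hypothetical alias for machine M
Host m-forward
    HostName hostname-of-M
    LocalForward *:2222 localhost:8888
    ExitOnForwardFailure yes
```

Then ssh -N m-forward sets up the same gateway, and ^C tears it down.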
| How to forward a port from one machine to another? |
1,451,679,960,000 |
I found a good replacement IDE for Delphi called Lazarus. But I don't have a question for programmers.
Will a statically linked Linux binary work on all Linux distributions? I.e., does it not matter on which Linux distro I build it, and will it work on Debian / ArchLinux / Ubuntu / OpenSUSE / ... whatever?
As a result of my findings, does only 32-bit vs 64-bit really matter? I want to be sure before I publish.
|
This answer was first written for the more general question "will my binary run on all distros", but it addresses statically linked binaries in the second half.
For anything that is more complex than a statically linked hello world, the answer is probably no.
Without testing it on distribution X, assume the answer is no for X.
If you want to ship your software in binary form, restrict yourself to
a few popular distributions for the field of use of your software (desktop, server, embedded, ...)
the latest one or two versions of each
Otherwise you end up with hundreds of distributions of all sizes, versions and ages (ten-year-old distributions are still in use and supported).
Test for those. Just a few pointers on what can (and will) go wrong otherwise:
The package of a tool/library you need is named differently across distributions and even versions of the same distribution
The libraries you need are too new or too old (wrong version). Don't assume just because your program can link, it links with the right library.
The same library (file on disk) is differently named on different distributions, making linking impossible
32bit on 64bit: the 32bit environment might not be installed or some non-essential 32bit library is moved into an extra package apart from the 32on64 environment, so you have an extra dependency just for this case.
Shell: don't assume your version of Bash. Don't assume even Bash.
Tools: don't assume some non POSIX command line tool exists anywhere.
Tools: don't assume the tool recognizes an option just because the GNU version of your distro does.
Kernel interfaces: Don't assume the existence or structure of files in /proc just because they exist/have the structure on your machine
Java: are you really sure your program runs on IBM's JRE as shipped with SLES without testing it?
Bonus:
Instruction sets: binary compiled on your machine does not run on older hardware.
Is linking statically (or: bundling all the libraries you need with your software) a solution? Even if it works technically, the associated costs might be too high. So unfortunately, the answer is probably no here as well.
Security: you shift the responsibility to update the libraries from the user of your software to yourself.
Size and complexity: just for fun try to build a statically linked GUI program.
Interoperability: if your software is a "plugin" of any kind, you depend on the software which calls you.
Library design: if you link your program statically to GNU libc and use name services (getpwnam() etc.), you end up linked dynamically against libc's NSS (name service switch).
Library design: the library you link your program statically with uses data files or other resources (like timezones or locales).
For all the reasons mentioned above, testing is essential.
Get familiar with KVM or other virtualization techniques and have a VM of every Distribution you plan to support. Test your software on every VM.
Use minimal installations of those distributions.
Create a VM with a restricted instruction set (e.g. no SSE 4).
Statically linked or bundled only: check your binaries with ldd to see whether they are really statically linked / use only your bundled libraries.
Statically linked or bundled only: create an empty directory and copy your software into it. chroot into that directory and run your software.
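The ldd check in the list above is quick to do: a dynamically linked binary lists its shared-library dependencies, while a truly static one reports "not a dynamic executable". A sketch using a binary every system has:

```shell
ldd /bin/ls                # dynamic: lists libc.so and the other shared objects
ldd /bin/ls | grep libc    # virtually every dynamic binary pulls in the C library
```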
| Will my linux binary work on all distros? |
1,451,679,960,000 |
According the the Unix and Linux Administration Handbook and man, logrotate has options for daily, weekly, and monthly, but is there a way to add an hourly option?
This blog post mentions you can set size 1 and remove the time option (eg: daily) and then manually call logrotate with cron - I suppose something like
logrotate -f /etc/logrotate.d/my-hourly-file
but is there a more elegant solution for rotating logs hourly?
|
Depending on your OS. Some (all?) Linux distributions have a directory /etc/cron.hourly where you can put cron jobs to be executed every hour.
Others have a directory /etc/cron.d/. There you can put cron-jobs that are to be executed as any special user with the usual cron-settings of a crontab entry (and you have to specify the username).
If you use either of these instead of the standard log rotation script in /etc/cron.daily/, you should copy that script there and copy /dev/null over the original (truncating it); otherwise it will be reactivated by a logrotate package update.
For proper hourly rotation, also take care that the dateext directive is not set. If so, by default the first rotated file will get the extension of the current date like YYYYMMDD. Then, the second time logrotate would get active within the same day, it simply skips the rotation even if the size threshold has exceeded.
The reason is that the new name of the file to get rotated already exists, and logrotate does not append the content to the existing old file.
For example on RHEL and CentOS, the dateext directive is given by default in /etc/logrotate.conf. After removing or commenting that line, the rotated files will simply get a running number as extension until reaching the rotate value. In this way, it's possible to perform multiple rotations a day.
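Newer logrotate versions (3.8.5 and later) also understand an hourly directive, though you still need the hourly cron invocation described above for it to have any effect. A hypothetical configuration might look like:

```
# /etc/logrotate.d/myapp -- example paths; requires logrotate >= 3.8.5
# and an hourly cron job that runs logrotate
/var/log/myapp/*.log {
    hourly
    rotate 24
    missingok
    notifempty
    compress
}
```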
| How can I set up logrotate to rotate logs hourly? |
1,530,741,375,000 |
Trying to understand this piece of code:
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
I'm not sure what the -f means exactly.
|
The relevant man page to check for this is that of the shell itself, bash, because -f here is functionality the shell provides: it is an operator of the [ (test) built-in.
On my system (CentOS 7), the fine man page covers it. The grep may not give the same results on other distributions. Nevertheless, if you run man bash and then search for '-f' it should give the results you require.
$ man bash | grep -A1 '\-f file$'
-f file
True if file exists and is a regular file.
$
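A quick way to see the operator in action (the paths below are purely illustrative):

```shell
tmp=$(mktemp)                     # a regular file
[ -f "$tmp" ] && echo "regular file"
[ -f /no/such/path ] || echo "missing path"
[ -f / ] || echo "a directory is not a regular file"
rm -f "$tmp"
```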
| What does -f mean in an if statement in a bash script? |
1,530,741,375,000 |
Do I need to have to run perf userspace tool as system administrator (root), or can I run it (or at least some subcommands) as an ordinary user?
|
What you can do with perf without being root depends on the kernel.perf_event_paranoid sysctl setting.
kernel.perf_event_paranoid = 2: you can't take any measurements. The perf utility might still be useful to analyse existing records with perf report, perf script, perf timechart or perf trace.
kernel.perf_event_paranoid = 1: you can trace a command with perf stat or perf record, and get kernel profiling data.
kernel.perf_event_paranoid = 0: you can trace a command with perf stat or perf record, and get CPU event data.
kernel.perf_event_paranoid = -1: you get raw access to kernel tracepoints (specifically, you can mmap the file created by perf_event_open, I don't know what the implications are).
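The active setting is readable by any user straight from procfs; changing it needs root. A quick sketch:

```shell
# Read the current paranoia level: -1, 0, 1 or 2 as described above
cat /proc/sys/kernel/perf_event_paranoid 2>/dev/null || echo "perf events unavailable"
# To lower it temporarily (as root), e.g.:
#   sysctl kernel.perf_event_paranoid=1
```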
| Do I need root (admin) permissions to run userspace 'perf' tool? (perf events are enabled in Linux kernel) |
1,530,741,375,000 |
I would like to know how Message Queues are implemented in the Linux Kernel.
|
The Linux kernel (2.6) implements two message queues:
(rather 'message lists', as the implementation is done by using a linked list not strictly following the FIFO principle)
System V IPC messages
The message queue from System V.
A process can invoke msgsnd() to send a message. It needs to pass the IPC identifier of the receiving message queue, the size of the message and a message structure, including the message type and text.
On the other side, a process invokes msgrcv() to receive a message, passing the IPC identifier of the message queue, a buffer where the message should get stored, the size and a value t.
t selects the message returned from the queue: a positive value means the first message with its type equal to t is returned, a negative value returns the first message with the lowest type less than or equal to the absolute value of t, and zero returns the first message of the queue.
Those functions are defined in include/linux/msg.h and implemented in ipc/msg.c
There are limitations upon the size of a message (max), the total number of messages (mni) and the total size of all messages in the queue (mnb):
$ sysctl kernel.msg{max,mni,mnb}
kernel.msgmax = 8192
kernel.msgmni = 1655
kernel.msgmnb = 16384
The output above is from a Ubuntu 10.10 system, the defaults are defined in msg.h.
More incredibly old System V message queue stuff is explained here.
POSIX Message Queue
The POSIX standard defines a message queue mechanism based on System V IPC's message queue, extending it by some functionalities:
Simple file-based interface to the application
Support for message priorities
Support for asynchronous notification
Timeouts for blocking operations
See ipc/mqueue.c
Example
util-linux provides some programs for analyzing and modifying message queues and the POSIX specification gives some C examples:
Create a message queue with ipcmk; generally you would do this by calling C functions like ftok() and msgget():
$ ipcmk -Q
Let's see what happened, using ipcs or cat /proc/sysvipc/msg:
$ ipcs -q
------ Message Queues --------
key msqid owner perms used-bytes messages
0x33ec1686 65536 user 644 0 0
Now fill the queue with some messages:
$ cat <<EOF >msg_send.c
#include <string.h>
#include <sys/msg.h>
int main() {
int msqid = 65536;
struct message {
long type;
char text[20];
} msg;
msg.type = 1;
strcpy(msg.text, "This is message 1");
msgsnd(msqid, (void *) &msg, sizeof(msg.text), IPC_NOWAIT);
strcpy(msg.text, "This is message 2");
msgsnd(msqid, (void *) &msg, sizeof(msg.text), IPC_NOWAIT);
return 0;
}
EOF
Again, you generally do not hardcode the msqid in the code.
$ gcc -o msg_send msg_send.c
$ ./msg_send
$ ipcs -q
------ Message Queues --------
key msqid owner perms used-bytes messages
0x33ec1686 65536 user 644 40 2
And the other side, which will be receiving the messages:
$ cat <<EOF >msg_recv.c
#include <stdio.h>
#include <sys/msg.h>
int main() {
int msqid = 65536;
struct message {
long type;
char text[20];
} msg;
long msgtyp = 0;
msgrcv(msqid, (void *) &msg, sizeof(msg.text), msgtyp, MSG_NOERROR | IPC_NOWAIT);
printf("%s \n", msg.text);
return 0;
}
EOF
See what happens:
$ gcc -o msg_recv msg_recv.c
$ ./msg_recv
This is message 1
$ ./msg_recv
This is message 2
$ ipcs -q
------ Message Queues --------
key msqid owner perms used-bytes messages
0x33ec1686 65536 user 644 0 0
After two receives, the queue is empty again.
Remove it afterwards by specifying the key (-Q) or msqid (-q):
$ ipcrm -q 65536
| How is a message queue implemented in the Linux kernel? |
1,530,741,375,000 |
We all know that Linus Torvalds created Git because of issues with Bitkeeper. What is not known (at least to me) is how issues/tickets/bugs were tracked up until then. I searched but didn't find anything interesting. The only discussion I was able to find on the subject was this one, where Linus shared concerns about using Bugzilla.
Speculation: the easiest way for people to track bugs in the initial phase would have been to put tickets in a branch of their own, but I'm sure that pretty quickly that wouldn't have scaled, with the noise over-taking the good bugs.
I've seen and used Bugzilla, and unless you know the right 'keywords' you can at times be stumped. NOTE: I'm specifically interested in how they used to track issues in the early years (1991-1995).
I did look at two threads, "Kernel SCM saga", and "Trivia: When did git self-host?" but none of these made mention about bug-tracking of the kernel in the early days.
I searched around and wasn't able to find any FOSS bug-tracking software that existed in 1991-1992. Bugzilla, Request Tracker, and others came much later, so they appear to be out.
Key questions
How did then Linus, the subsystem-maintainers, and users report and track bugs in those days?
Did they use some bug-tracking software, make a branch of bugs and manually commit questions and discussions on each bug therein (which would be expensive and painful), or just use e-mail?
Much later, Bugzilla came along (first release 1998) and that seems to be the primary way to report bugs afterwards.
Looking forward to having a clearer picture of how things were done in the older days.
|
In the beginning, if you had something to contribute (a patch or a bug report), you mailed it to Linus. This evolved into mailing it to the list (which was [email protected] before kernel.org was created).
There was no version control. From time to time, Linus put a tarball on the FTP server. This was the equivalent of a "tag". The available tools at the beginning were RCS and CVS, and Linus hates those, so everybody just mailed patches. (There is an explanation from Linus about why he didn't want to use CVS.)
There were other pre-Bitkeeper proprietary version control systems, but the decentralized, volunteer-based development of Linux made it impossible to use them. A random person who just found a bug will never send a patch if it has to go through a proprietary version control system with licenses starting in the thousands of dollars.
Bitkeeper got around both of those problems: it wasn't centralized like CVS, and while it was not Free Software, kernel contributors were allowed to use it without paying. That made it good enough for a while.
Even with today's git-based development, the mailing lists are still where the action is. When you want to contribute something, you'll prepare it with git of course, but you'll have to discuss it on the relevant mailing list before it gets merged. Bugzilla is there to look "professional" and soak up half-baked bug reports from people who don't really want to get involved.
To see some of the old bug-reporting instructions, get the historical Linux repository. (This is a git repository containing all the versions from before git existed; mostly it contains one commit per release since it was reconstructed from the tarballs). Files of interest include README, MAINTAINERS, and REPORTING-BUGS.
One of the interesting things you can find there is this from the Linux-0.99.12 README:
- if you have problems that seem to be due to kernel bugs, please mail
them to me ([email protected]), and possibly to any other
relevant mailing-list or to the newsgroup. The mailing-lists are
useful especially for SCSI and NETworking problems, as I can't test
either of those personally anyway.
| How did the Linux Kernel project track bugs in the Early Days? |
1,530,741,375,000 |
Assume I have some issue that was fixed by a recent patch to the official Linux git repository. I have a workaround, but I'd like to undo it when a release happens that contains the fix. I know the exact git commit hash, e.g. f3a1ef9cee4812e2d08c855eb373f0d83433e34c.
What is the easiest way to answer the question: What kernel releases so far contain this patch? Bonus points if no local Linux git repository is needed.
(LWN discusses some ideas, but these do require a local repository.)
|
In GitHub kernel repository, you can check all tags/kernel versions.
Example for dc0827c128c0ee5a58b822b99d662b59f4b8e970 provided by Jim Paris:
If the three dots are clicked, the full list of tags/kernel versions can be seen.
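With a local clone you can also use git tag --contains, the approach the LWN ideas build on. Below is a self-contained sketch in a throwaway repository; against a real kernel clone you would simply pass the actual commit hash:

```shell
# Build a tiny repo: a "fix" commit tagged v1.0, then a later commit tagged v2.0
dir=$(mktemp -d) && cd "$dir" && git init -q .
git -c user.email=a@b.c -c user.name=t commit -q --allow-empty -m "the fix"
hash=$(git rev-parse HEAD)
git tag v1.0
git -c user.email=a@b.c -c user.name=t commit -q --allow-empty -m "later work"
git tag v2.0
# Every tag whose history contains the commit, i.e. every release with the fix
git tag --contains "$hash"    # prints v1.0 and v2.0
```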
| Given a git commit hash, how to find out which kernel release contains it? |
1,530,741,375,000 |
I don't understand iotop output: it shows ~1.5 MB/s of disk write (top right), but all programs have 0.00 B/s. Why?
The video was taken as I was deleting the content of a folder with a few millions of files using perl -e 'for(<*>){((stat)[9]<(unlink))}', on Kubuntu 14.04.3 LTS x64.
iotop was launched using sudo iotop.
|
The information shown by iotop isn't gathered in the same way for individual processes and for the system as a whole. The “actual” global figures are not the sum of the per-process figures (that's what “total” is).
All information is gathered from the proc filesystem.
For each process, iotop reads data from /proc/PID/io, specifically the rchar and wchar values. These are the number of bytes passed in read and write system calls (including variants such as readv, writev, recv, send, etc.).
The global “actual” values are read from /proc/vmstat, specifically the pgpgin and pgpgout values. These measure the data exchanged between the kernel and the hardware (more precisely, this is the data shuffled around by the block device layer in the kernel).
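Both sources can be inspected directly; a sketch of the raw counters iotop reads:

```shell
# Per-process syscall-level counters (bytes through read()/write() and friends)
grep -E '^(rchar|wchar):' /proc/self/io
# System-wide block-layer counters (pgpgin/pgpgout, in KiB)
grep -E '^(pgpgin|pgpgout) ' /proc/vmstat
```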
There are many reasons why the per-process data and the block device layer data differ. In particular:
Caching and buffering mean that I/O happening at one layer may not be happening at the same time, or the same number of times, at the other layer. For example, data read from the cache is accounted as a read from the process that accesses it, but there's no corresponding read from the hardware (that already happened earlier, possibly on behalf of another process).
The process-level data includes data exchanged on pipes, sockets, and other input/output that doesn't involve an underlying disk or other block device.
The process-level data only accounts for file contents, not metadata.
That last difference explains what you're seeing here. Removing files only affects metadata, not data, so the process isn't writing anything. It may be reading directory contents to list the files to delete, but that's small enough that it may scroll by unnoticed.
I don't think Linux offers any way to monitor file metadata updates. You can monitor per-filesystem I/O via entries under /sys/fs for some filesystems. I don't think you can account metadata I/O against specific processes, it would be very complicated to do in the general case since multiple processes could be causing the same metadata to be read or changed.
| iotop showing 1.5 MB/s of disk write, but all programs have 0.00 B/s |
1,530,741,375,000 |
Which permissions affect hard link creation? Does file ownership itself matter?
Suppose user alice wants to create a hard link to the file target.txt in a directory target-dir.
Which permissions does alice need on both target.txt and target-dir?
If target.txt is owned by user bill and target-dir is owned by user chad, does that change anything?
I've tried to simulate this situation by creating the following folder/file structure on an ext4 filesystem:
#> ls -lh . *
.:
drwxr-xr-x 2 bill bill 60 Oct 1 11:29 source-dir
drwxrwxrwx 2 chad chad 60 Oct 1 11:40 target-dir
source-dir:
-r--r--r-- 1 bill bill 0 Oct 1 11:29 target.txt
target-dir:
-rw-rw-r-- 1 alice alice 0 Oct 1 11:40 dummy
While alice can create a soft link to target.txt, she can't create a hard link:
#> ln source-dir/target.txt target-dir/
ln: failed to create hard link ‘target-dir/target.txt’ => ‘source-dir/target.txt’: Operation not permitted
If alice owns target.txt and no permissions are changed, the hard link succeeds. What am I missing here?
|
To create the hard link, alice will need write+execute permissions on target-dir in all cases. The permissions needed on target.txt will vary:
If fs.protected_hardlinks = 1 then alice needs either ownership of target.txt or at least read+write permissions on it.
If fs.protected_hardlinks = 0 then any set of permissions will do; Even 000 is okay.
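To see which of the two cases applies on a given system (sysctl fs.protected_hardlinks shows the same value):

```shell
# 1 = restricted (default on modern distros), 0 = unrestricted
cat /proc/sys/fs/protected_hardlinks
```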
This answer to a similar question had the missing piece of information to answer this question.
From this commit message (emphasis mine):
On systems that have user-writable directories on the same partition
as system files, a long-standing class of security issues is the
hardlink-based time-of-check-time-of-use race, most commonly seen in
world-writable directories like /tmp. The common method of exploitation
of this flaw is to cross privilege boundaries when following a given
hardlink (i.e. a root process follows a hardlink created by another
user). Additionally, an issue exists where users can "pin" a potentially
vulnerable setuid/setgid file so that an administrator will not actually
upgrade a system fully.
The solution is to permit hardlinks to only be created when the user is
already the existing file's owner, or if they already have read/write
access to the existing file.
| Hard link creation - Permissions? |
1,530,741,375,000 |
Possible Duplicate:
Find recursively all archive files of diverse archive formats and search them for file name patterns
I need to search for a file in all zip files in a directory.
Is there a tool like find that be able to search in ZIP files?
I tried this:
find /path/ -iname '*.zip' -print -exec unzip -l {} \; |grep -i '<filename>'
But this only prints the path of the file inside the zip file and not the zip file name itself!
Thanks
|
Try:
for f in *.zip; do echo "$f:"; unzip -l "$f" | grep '<filename>'; done
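To get the zip file names themselves (the part the question's find pipeline loses), one sketch is to test each archive and print only its name; "pattern" is a placeholder for the file name you're searching for, and the search starts from the current directory (replace . with a path as needed):

```shell
find . -iname '*.zip' -exec sh -c \
  'unzip -l "$1" | grep -qi "pattern" && printf "%s\n" "$1"' sh {} \;
```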
| Find a file in lots of zip files (like find command for directories) [duplicate] |
1,530,741,375,000 |
What could cause touch to fail with this error message?
touch: cannot touch `foo': No such file or directory
Note that an error due to incorrect permissions looks different:
touch: cannot touch `foo': Permission denied
|
Following sequence causes this error message:
$ mkdir foo
$ cd foo
In another terminal:
$ rm -r foo
In the previous terminal:
$ touch x
touch: cannot touch `x': No such file or directory
Of course, other events that also result in invalidating the current working directory (CWD) of a process that tries to create a file there also yield this error message.
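The same effect can be reproduced in a single shell by removing the shell's own working directory:

```shell
dir=$(mktemp -d)
cd "$dir"
rmdir "$dir"       # the CWD no longer exists
touch x || true    # fails with "No such file or directory"
```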
| touch: cannot touch `foo': No such file or directory |
1,530,741,375,000 |
I'm trying to detect what filesystems a kernel can support. Ideally in a little list of their names but I'll take anything you've got.
Note that I don't mean the current filesystems in use, just ones that the current kernel could, theoretically support directly (obviously, fuse could support infinite numbers more).
|
Can I list the filesystems a running kernel can support?
Well, the common answer, /proc/filesystems, is bluntly wrong: it reflects only those FSes that have already been brought into use, but there are usually way more that the kernel can support:
ls /lib/modules/$(uname -r)/kernel/fs
Another source is /proc/config.gz, which might be absent in your distro (and I always wonder why, in such cases), but a snapshot of the config used to build the kernel can typically be found in the boot directory along with the kernel and initrd images.
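Either config source can be grepped for filesystem options; a sketch (the /boot file name is distro-dependent, and "y" means built in while "m" means available as a module):

```shell
{ zcat /proc/config.gz 2>/dev/null || cat "/boot/config-$(uname -r)" 2>/dev/null; } \
  | grep -E '_FS=(y|m)$' | sort
```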
| Can I list the filesystems a running kernel can support? |
1,530,741,375,000 |
Until recently I thought the load average (as shown for example in top) was a moving average on the n last values of the number of process in state "runnable" or "running". And n would have been defined by the "length" of the moving average: since the algorithm to compute load average seems to trigger every 5 sec, n would have been 12 for the 1min load average, 12x5 for the 5 min load average and 12x15 for the 15 min load average.
But then I read this article: http://www.linuxjournal.com/article/9001. The article is quite old but the same algorithm is implemented today in the Linux kernel. The load average is not a moving average but an algorithm for which I don't know a name. Anyway I made a comparison between the Linux kernel algorithm and a moving average for an imaginary periodic load:
(figure: kernel load-average algorithm vs. a true moving average)
There is a huge difference.
Finally my questions are:
Why was this implementation chosen over a true moving average, which has an obvious meaning to anyone?
Why does everybody speak about a "1 min load average" when much more than the last minute is taken into account by the algorithm? (Mathematically, every measurement since boot; in practice, given round-off error, still a lot of measurements.)
|
This difference dates back to the original Berkeley Unix, and stems from the fact that the kernel can't actually keep a rolling average; it would need to retain a large number of past readings in order to do so, and especially in the old days there just wasn't memory to spare for it. The algorithm used instead has the advantage that all the kernel needs to keep is the result of the previous calculation.
Keep in mind the algorithm was a bit closer to the truth back when computer speeds and corresponding clock cycles were measured in tens of MHz instead of GHz; there's a lot more time for discrepancies to creep in these days.
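Concretely, the kernel's recurrence is an exponentially damped average: every 5 seconds, load = load * e^(-5/60) + n * (1 - e^(-5/60)) for the 1-minute figure (the kernel uses a fixed-point approximation of that factor). Here is a sketch of that recurrence on an imaginary load, which also shows why only the previous result needs to be kept:

```shell
awk 'BEGIN {
  e = exp(-5.0 / 60)            # damping factor for the "1 min" average
  load = 0
  for (t = 5; t <= 120; t += 5) {
    n = (t <= 60) ? 2 : 0       # 2 runnable tasks for a minute, then idle
    load = load * e + n * (1 - e)
    printf "t=%3ds  load=%.2f\n", t, load
  }
}'
```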
| Why isn't a straightforward 1/5/15 minute moving average used in Linux load calculation? |
1,530,741,375,000 |
OS: e.g. Ubuntu 10.04. How can I find out which filesystem types "mount -t TYPE" can handle? I mean, is there a command to list the supported filesystem types for use with mount?
UPDATE: is the following cmd always good?:
cat /proc/filesystems | awk '{print $NF}' | sed '/^$/d'
sysfs
rootfs
bdev
proc
cgroup
cpuset
tmpfs
devtmpfs
debugfs
securityfs
sockfs
pipefs
anon_inodefs
inotifyfs
devpts
ext3
ext2
ext4
ramfs
hugetlbfs
ecryptfs
fuse
fuseblk
fusectl
mqueue
binfmt_misc
iso9660
vfat
udf
reiserfs
xfs
jfs
msdos
ntfs
minix
hfs
hfsplus
qnx4
ufs
btrfs
|
This should work for Ubuntu as well as Debian; type the following:
cat /proc/filesystems
This will output what your current kernel supports.
Ah, now I understand your question better; type:
man mount
and scroll down to -t; there is a list of filesystems that mount itself supports, but this depends on what your kernel supports.
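As a compact variant of the pipeline from the question (my own simplification), awk alone can split out the names, and can also filter for the types that are mounted on a block device:

```shell
# All registered filesystem types, one per line
awk '{ print $NF }' /proc/filesystems
# Only those that need a block device (i.e. skip the "nodev" ones)
awk '$1 != "nodev" { print $NF }' /proc/filesystems
```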
| "mount -t TYPE /" - how to know that what could the "TYPE" be? |
1,530,741,375,000 |
I've been using Windows and Mac OS for the past 5 years and now I'm considering to use Linux on a daily basis. I've installed Ubuntu on a virtual machine and trying to understand how I can use Linux for my daily job (as a js programmer / web designer).
Sorry for the novice question, but I've noticed that sometimes when I install a program through make config & make install, it changes my system in ways that are not easily revertible. In Windows when you install a program, you can uninstall it, and hopefully if it plays by the book there will be no traces of the program left in the file system, registry, etc. In Mac OS you simply delete an app like a file.
But in Linux there is apt-get and then there is make. I didn't quite understand how I can keep my Linux installation clean and tidy. It feels like any new app installation may break my system. But then Linux has a reputation of being very robust, so there must be something I don't understand about how app installation and uninstallation affects the system. Can anyone shed some light into this?
Update: when installing an app, its files can spread anywhere really (package managers handle part of the issue), but there is a cool hack around that: use Docker for installing apps and keep them in their sandbox, especially if you're not going to use them too often. It is also possible to run GUI apps like Firefox entirely in a Docker "sandbox".
|
A new install will seldom break your system (unless you do weird stuff like mixing source and binary).
If you use precompiled binaries in Ubuntu then you can remove them and not have to worry about breaking your system, because a binary should list what it requires to run and your package manager will list what programs rely on that program for you to review.
When you use source, you need to be more careful so you don't remove something critical (like glib). There are no warnings or anything else when you uninstall from source. This means you can completely break your machine.
If you want to uninstall using apt-get then you'll use apt-get remove package as previously stated. Any programs that rely on that package will be uninstalled as well and you'll have a chance to review them.
If you want to uninstall something built from source, generally the process is make uninstall, run from the same source tree. There is no warning (as I said above).
make config will not alter your system, but make install will.
As a beginner, I recommend using apt-get or whatever your distro uses for binary packages. It keeps things nice and organized, and unless you really want to, it won't break your system.
Hopefully, that clears everything up.
| Uninstalling programs in Linux |
1,530,741,375,000 |
I am wondering if it is theoretically possible to build a Linux distro that can both support rpm and debian packages.
Are there any distros live out there that support both?
And if not is it even possible?
|
Bedrock Linux does this. Not saying I've done this, or that it is a good idea, but it is being done.
| Is it possible to build a Linux distro supporting both RPM and .deb packages? |
1,530,741,375,000 |
Are there any substitutes, alternatives or bash tricks for delaying commands without using sleep? For example, performing the below command without actually using sleep:
$ sleep 10 && echo "This is a test"
|
You have alternatives to sleep: at and cron. Unlike sleep, these need you to provide the time at which you want the command to run.
Make sure the atd service is running by executing service atd status.
Now let's say the date is 11:17 am UTC; if you need to execute a command at 11:25 UTC, the syntax is: echo "This is a test" | at 11:25.
Now keep in mind that atd by default will not log the completion of jobs; refer to the at documentation for details. It's better if your application has its own logging.
You can schedule jobs with cron; see man cron for its options, or crontab -e to add new jobs. /var/log/cron can be checked for info on the execution of jobs.
FYI, sleep suspends the current execution and resumes it after the interval passed as its argument.
EDIT:
As @Gaius mentioned, you can also give at a relative time in minutes. But let's say the time is 12:30:30 and you run the scheduler with now +1 minutes. Even though 1 minute, which translates to 60 seconds, was specified, at doesn't actually wait until 12:31:30 to execute the job; rather, it executes the job at 12:31:00. The time units can be minutes, hours, days, or weeks. For more, refer to man at.
e.g: echo "ls" | at now +1 minutes
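As an aside on the "bash tricks" part of the question (a well-known bash idiom, unrelated to at or cron): read -t can pause without running sleep, by reading with a timeout from a file descriptor that stays open but never delivers data:

```shell
# bash-specific: <(:) is a process substitution, <> opens it read-write so the
# descriptor stays open; read then blocks until the 10 s timeout expires
bash -c 'read -t 10 <> <(:) || :'
echo "This is a test"
```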
| Shell: is it possible to delay a command without using `sleep`? |
1,530,741,375,000 |
Can we generate a unique ID for each PC, something like uuidgen, but one that will never change unless there are hardware changes? I was thinking about merging the CPUID and MAC address and hashing them to generate a consistent ID, but I have no idea how to parse them using a bash script. What I do know is how to get the CPUID from
dmidecode -t 4 | grep ID
and
ifconfig | grep ether
then I need to combine those hex strings and hash them using sha1 or md5 to create a fixed-length hex string.
How can I parse that output?
|
How about these two:
$ sudo dmidecode -t 4 | grep ID | sed 's/.*ID://;s/ //g'
52060201FBFBEBBF
$ ifconfig | grep eth1 | awk '{print $NF}' | sed 's/://g'
0126c9da2c38
You can then combine and hash them with:
$ echo $(sudo dmidecode -t 4 | grep ID | sed 's/.*ID://;s/ //g') \
$(ifconfig | grep eth1 | awk '{print $NF}' | sed 's/://g') | sha256sum
59603d5e9957c23e7099c80bf137db19144cbb24efeeadfbd090f89a5f64041f -
To remove the trailing dash, add one more pipe:
$ echo $(sudo dmidecode -t 4 | grep ID | sed 's/.*ID://;s/ //g') \
$(ifconfig | grep eth1 | awk '{print $NF}' | sed 's/://g') | sha256sum |
awk '{print $1}'
59603d5e9957c23e7099c80bf137db19144cbb24efeeadfbd090f89a5f64041f
As @mikeserv points out in his answer, the interface name can change between boots. This means that what is eth0 today might be eth1 tomorrow, so if you grep for eth0 you might get a different MAC address on different boots. My system does not behave this way so I can't really test but possible solutions are:
Grep for HWaddr in the output of ifconfig but keep all of them, not just the one corresponding to a specific NIC. For example, on my system I have:
$ ifconfig | grep HWaddr
eth1 Link encap:Ethernet HWaddr 00:24:a9:bd:2c:28
wlan0 Link encap:Ethernet HWaddr c4:16:19:4f:ac:g5
By grabbing both MAC addresses and passing them through sha256sum, you should be able to get a unique and stable name, irrespective of which NIC is called what:
$ echo $(sudo dmidecode -t 4 | grep ID | sed 's/.*ID://;s/ //g') \
$(ifconfig | grep -oP 'HWaddr \K.*' | sed 's/://g') | sha256sum |
awk '{print $1}'
662f0036cba13c2ddcf11acebf087ebe1b5e4044603d534dab60d32813adc1a5
Note that the hash is different from the ones above because I am passing both MAC addresses returned by ifconfig to sha256sum.
Create a hash based on the UUIDs of your hard drive(s) instead:
$ blkid | grep -oP 'UUID="\K[^"]+' | sha256sum | awk '{print $1}'
162296a587c45fbf807bb7e43bda08f84c56651737243eb4a1a32ae974d6d7f4
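A further variant of option 1 (my own sketch, assuming Linux's /sys is mounted): read the MAC addresses from sysfs instead of parsing ifconfig output, which sidesteps interface-naming and output-format differences:

```shell
# Hash every interface's MAC address (lo contributes a constant 00:00:00:00:00:00)
cat /sys/class/net/*/address 2>/dev/null | sort | sha256sum | awk '{ print $1 }'
```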
| generate consistent machine unique ID |
1,530,741,375,000 |
I know that Linux is available and has been ported for many different platforms such as for X86, ARM, PowerPC etc.
However, in terms of porting, what is required exactly?
My understanding is that Linux is software written in C. Therefore when porting Linux originally from X86 to ARM or others for example, is it not just a matter of re-compiling the code with the compiler for the specific target architecture?
Putting device drivers for different peripherals aside, what else would need to be done when porting Linux to a new architecture? Does the compiler not take care of everything for us?
|
Even though most of the code in the Linux kernel is written in C, there are still many parts of that code that are very specific to the platform where it's running and need to account for that.
One particular example of this is virtual memory, which works in similar fashion on most architectures (hierarchy of page tables) but has specific details for each architecture (such as the number of levels in each architecture, and this has been increasing even on x86 with introduction of new larger chips.) The Linux kernel code introduces macros to handle traversing these hierarchies that can be elided by the compiler on architectures which have fewer levels of page tables (so that code is written in C, but takes details of the architecture into consideration.)
Many other areas are very specific to each architecture and need to be handled with arch-specific code. Most of these involve code in assembly language though. Examples are:
Context Switching: Context switching involves saving the value of all registers for the process being switched out and restoring the registers from the saved set of the process scheduled into the CPU. Even the number and set of registers is very specific to each architecture. This code is typically implemented in assembly, to allow full access to the registers and also to make sure it runs as fast as possible, since performance of context switching can be critical to the system.
System Calls: The mechanism by which userspace code can trigger a system call is usually specific to the architecture (and sometimes even to the specific CPU model, for instance Intel and AMD introduced different instructions for that, older CPUs might lack those instructions, so details for those will still be unique.)
Interrupt Handlers: Details of how to handle interrupts (hardware interrupts) are usually platform-specific and usually require some assembly-level glue to handle the specific calling conventions in use for the platform. Also, primitives for enabling/disabling interrupts are usually platform-specific and require assembly code as well.
Initialization: Details of how initialization should happen also usually include details that are specific to the platform and often require some assembly code to handle the entry point to the kernel. On platforms that have multiple CPUs (SMP), details on how to bring other CPUs online are usually platform-specific as well.
Locking Primitives: Implementation of locking primitives (such as spinlocks) usually involve platform-specific details as well, since some architectures provide (or prefer) different CPU instructions to efficiently implement those. Some will implement atomic operations, some will provide a cmpxchg that can atomically test/update (but fail if another writer got in first), others will include a "lock" modifier to CPU instructions. These will often involve writing assembly code as well.
There are probably other areas where platform- or architecture-specific code is needed in a kernel (or, specifically, in the Linux kernel.) Looking at the kernel source tree, there are architecture-specific subtrees under arch/ (including the per-architecture headers under arch/*/include/) where you can find more examples of this.
Some are actually surprising, for instance you'll see that the number of system calls available on each architecture is distinct and some system calls will exist in some architectures and not others. (Even on x86, the list of syscalls differs between a 32-bit and a 64-bit kernel.)
In short, there's plenty of cases a kernel needs to be aware that are specific to a platform. The Linux kernel tries to abstract most of those, so higher-level algorithms (such as how memory management and scheduling works) can be implemented in C and work the same (or mostly the same) on all architectures.
| Porting Linux to another platform requirements [closed] |
1,530,741,375,000 |
In the current version of Raspian, I know it is possible to change the password of the current logged in user from the command line like so:
sudo passwd
which will then prompt the user to enter a new password twice. This will produce output like so:
Changing password for pi.
(current) UNIX password:
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
I was wondering if there is a possible way to change a password programmatically, like from a shell script.
I'm trying to make a configuration script to deploy on my Raspberry Pis and I don't want to manually have to type in new passwords for them.
|
You're looking for the chpasswd command. You'd do something like this:
echo 'pi:newpassword' | chpasswd # change user pi password to newpassword
Note that it needs to be run as root, at least with the default PAM configuration. But presumably run as root isn't a problem for a system deployment script.
Also, you can do multiple users at once by feeding it multiple lines of input.
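A hedged refinement for deployment scripts (my own sketch; the -6 option needs OpenSSL 1.1.1 or newer): pre-hash the password and feed chpasswd -e, so the cleartext is never handed to chpasswd itself:

```shell
# Generate a SHA-512 crypt(3) hash of the new password
hash=$(openssl passwd -6 'newpassword')
# As root, feed the pre-hashed entry to chpasswd in "encrypted" mode:
#   echo "pi:$hash" | chpasswd -e
echo "pi:$hash"
```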
| Change Password Programmatically |
1,530,741,375,000 |
Good day!
I use 'ps' to see the command that starts a process. The issue is that the command is too long and 'ps' does not show it entirely.
Example: I use command 'ps -p 2755 | less' and have following output
PID TTY STAT TIME COMMAND
2755 ? Sl 305:05 /usr/java/jdk1.6.0_37/bin/java -Xms64m -Xmx512m -Dflume.monitoring.type=GANGLIA -Dflume.monitoring.hosts=prod.hostname.ru:8649 -cp /etc/flume-ng/conf/acrs-event:/usr/lib/flume-ng/lib/*:/etc/hadoop/conf:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/commons-io-2.1.jar:/usr/lib/hadoop/lib/commons-lang-2.5.jar:/usr/lib/hadoop/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/lib/commons-math-2.1.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/jersey-core-1.8.jar:/usr/lib/hadoop/lib/jersey-json-1.8.jar:/usr/lib/hadoop/lib/jersey-server-1.8.jar:/usr/lib/hadoop/lib/jets3t-0.6.1.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/jline-0.9.94.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/junit-4.8.2.jar:/usr/lib/hadoop/lib/kfs-0.3.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/nativ
e:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/stax-api-1.0.1.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5-cdh4.3.0.jar:/usr/lib/hadoop/.//bin:/usr/lib/hadoop/.//cloudera:/usr/lib/hadoop/.//etc:/usr/lib/hadoop/.//hadoop-annotations-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//hadoop-auth-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.3.0-tests.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//lib:/usr/lib/hadoop/.//libexec:/usr/lib/hadoop/.//sbin:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.3.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.1.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.8.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.8.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/jline-0.9.94.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/zookeeper-3.4.5-cdh4.3.0.jar:/usr/lib/hadoop-hdfs/.//bin:/usr/lib/hadoop-hdfs/.
//cloudera:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.
So, the command line is too long and the command stops mid-phrase. How can I see it whole?
|
On Linux, with the ps from procps(-ng):
ps -fwwp 2755
In Linux versions prior to 4.2, it's still limited though (by the kernel, via /proc/2755/cmdline, to 4k) and you can't get more except by asking the process to tell it to you or by using a debugger.
$ sh -c 'sleep 1000' $(seq 4000) &
[1] 31149
$ gdb -p $! /bin/sh
[...]
Attaching to program: /bin/dash, process 31149
[...]
(gdb) bt
#0 0x00007f40d11f40aa in wait4 () at ../sysdeps/unix/syscall-template.S:81
[...]
#7 0x00007f40d115c995 in __libc_start_main (main=0x4022c0, argc=4003, ubp_av=0x7fff5b9f5a88, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fff5b9f5a78)
at libc-start.c:260
#8 0x00000000004024a5 in ?? ()
#9 0x00007fff5b9f5a78 in ?? ()
#10 0x0000000000000000 in ?? ()
(gdb) frame 7
#7 0x00007f40d115c995 in __libc_start_main (main=0x4022c0, argc=4003, ubp_av=0x7fff5b9f5a88, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fff5b9f5a78)
at libc-start.c:260
(gdb) x/4003s *ubp_av
0x7fff5b9ff83e: "sh"
0x7fff5b9ff841: "-c"
0x7fff5b9ff844: "sleep 1000"
0x7fff5b9ff84f: "1"
0x7fff5b9ff851: "2"
[...]
0x7fff5ba04212: "3999"
0x7fff5ba04217: "4000"
To print the 4th arg with up to 5000 characters:
(gdb) set print elements 5000
(gdb) p ubp_av[3]
If you want something non-intrusive, you could try and get the information from /proc/2755/mem (note that if the kernel.yama.ptrace_scope is not set to 0, you'll need superuser permissions for that). This below works for me (prints all the arguments and environment variables), but there's not much guarantee I would think (the error and unexpected input handling is left as an exercise to the reader):
$ perl -e '$p=shift;open MAPS, "/proc/$p/maps";
($m)=grep /\[stack\]/, <MAPS>;
($a,$b)=map hex, $m =~ /[\da-f]+/g;
open MEM, "/proc/$p/mem" or die "open mem: $!";
seek MEM,$a,0; read MEM, $c,$b-$a;
print((split /\0{2,}/,$c)[-1])' "$!" | tr \\0 \\n | head
sh
-c
sleep 1000
1
2
3
4
5
6
7
(replace "$!" with the process id). The above uses the fact that Linux puts the strings pointed to by argv[], envp[] and the executed filename at the bottom of the stack of the process.
The above looks in that stack for the bottom-most string in between two sets of two or more consecutive NUL bytes. It doesn't work if any of the arguments or env strings is empty, because then you'll have a sequence of 2 NUL bytes in the middle of those argv or envp. Also, we don't know where the argv strings stop and where the envp ones start.
A work around for that would be to refine that heuristic by looking backwards for the actual content of argv[] (the pointers). This below works on i386 and amd64 architecture for ELF executables at least:
perl -le '$p=shift;open MAPS, "/proc/$p/maps";
($m)=grep /\[stack\]/, <MAPS>;
($a,$b)=map hex, $m =~ /[\da-f]+/g;
open MEM, "/proc/$p/mem" or die "open mem: $!";
seek MEM,$a,0; read MEM, $c,$b-$a;
$c =~ /.*\0\0\K[^\0].*\0[^\0]*$/s;
@a=unpack"L!*",substr$c,0,$-[0];
for ($i = $#a; $i >=0 && $a[$i] != $a+$-[0];$i--) {}
for ($i--; $i >= 0 && ($a[$i]>$a || $a[$i]==0); $i--) {}
$argc=$a[$i++];
print for unpack"(Z*)$argc",substr$c,$a[$i]-$a;' "$!"
Basically, it does the same as above, but once it has found the first string of argv[] (or at least one of the argv[] or envp[] strings if there are empties), it knows its address, so it looks backward in the top rest of the stack for a pointer with that same value. Then keeps looking backwards until it finds a number that can't be a pointer to those, and that is argc. Then the next integer is argv[0]. And knowing argv[0] and argc, it can display the list of arguments.
That doesn't work if the process has written to its argv[] possibly overriding some NUL delimiters or if argc is 0 (argc is generally at least 1 to include argv[0]) but should work in the general case at least for ELF executables.
In 4.2 and newer, /proc/<pid>/cmdline is no longer truncated, but ps itself has a maximum display width of 128K.
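On those newer kernels you can also bypass ps entirely and read /proc/<pid>/cmdline yourself; it stores argv[] as NUL-delimited strings, so a small tr is enough. A minimal sketch (using the current shell's own PID as a stand-in target):

```shell
# Print the full command line of a process, arguments separated by spaces.
# /proc/<pid>/cmdline holds argv[] joined with NUL bytes.
pid=$$                                  # demo target: this shell itself
tr '\0' ' ' < "/proc/$pid/cmdline"
echo                                    # trailing newline for readability
```

Unlike ps, this is not subject to any display-width limit, only to what the kernel exposes.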
| ps: full command is too long |
1,530,741,375,000 |
For example, I can do the following
touch a
or
touch ./a
Then when I do ls I can view both, so what exactly is the ./ for?
|
The dot-slash, ./, is a relative path to something in the current directory.
The dot is the current directory and the slash is a path delimiter.
When you give the command touch ./a you say "run the touch utility with the argument ./a", and touch will create (or update the timestamp for) the file a in the current directory.
There is no difference between touch a and touch ./a as both commands will act on the thing called a in the current directory.
In a similar way, touch ../a will act on the a in the directory above the current directory as .. refers to "one directory further up in the hierarchy".
. and .. are two special directory names that are present in every directory on Unix systems.
It's useful to be able to put ./ in front of a filename sometimes, as when you're trying to create or delete, or just work with, a file with a dash as the first character in its filename.
For example,
touch -a file
will not create a file called -a file, and neither would
touch '-a file'
But,
touch ./'-a file'
would.
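Another idiom worth knowing alongside ./ is the -- end-of-options marker, which most utilities accept. A small sketch in a throwaway temp directory shows both forms handling a dash-prefixed filename:

```shell
# Two equivalent ways to work with a file whose name starts with a dash.
dir=$(mktemp -d)                # scratch directory so nothing else is touched
cd "$dir"
touch ./'-a file'               # relative-path form: the dash is no longer first
ls ./'-a file'
rm ./'-a file'
touch -- '-a file'              # end-of-options form: -- stops flag parsing
ls -- '-a file'
rm -- '-a file'
cd / && rmdir "$dir"
```

Both work because in each case the argument no longer looks like an option to the utility's parser.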
| What does the ./ mean (dot slash) in linux? |
1,530,741,375,000 |
I want to ensure that my program can only be run by user xyz using root privilege. To do this, I set the setuid bit with:
chmod u+s program1.sh
ls -l program1.sh
-rwsr-x--- 1 root house 1299 May 15 23:54 program1.sh
Also, I added user xyz to the house group so that only xyz and root can run program1.sh.
In program1.sh there is
id -u
so that it can show me the effective ID.
Running program1.sh as root, it shows root. But running with the xyz account, it shows xyz. It seems that it didn't run with root privilege. I don't know what's wrong here.
|
When executing shell scripts that have the setuid bit (e.g., perms of rwsr-xr-x), the scripts run as the user that executes them, not as the user that owns them. This is contrary to how setuid is handled for binaries (e.g., /usr/bin/passwd), which run as the user that owns them, regardless of which user executes them.
Check this page: https://access.redhat.com/site/solutions/124693
This is a security measure taken by operating system. You should use your script with sudo instead.
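For the sudo route, a sudoers fragment restricting xyz to this one script might look like the sketch below (edit with visudo; the drop-in filename and script path are hypothetical):

```
# /etc/sudoers.d/program1  (hypothetical path; install with visudo -f)
xyz ALL=(root) NOPASSWD: /usr/local/bin/program1.sh
```

The user xyz would then run it as sudo /usr/local/bin/program1.sh; an absolute path in the rule prevents substituting a different script.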
If you really need to use setuid on your script you can create a binary wrapper that will do the work. Create a new file “program.c” and copy the following code:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
int main()
{
setuid(0);
system("./program.sh"); /* Dangerous: a relative path and system() let an attacker run arbitrary code as root (even by accident). Prefer an absolute path. */
return 0;
}
Compile and execute the code using the following commands:
$ gcc program.c -o program
$ sudo chown root:root program
$ sudo chmod 4755 program
$ ./program
This way it will work. The setuid bit works on compiled binaries, and such a binary can execute other files as root.
| Why does setuid not work? [duplicate] |
1,530,741,375,000 |
I'm trying to sort /etc/passwd numerically by user ID number (the third field) in ascending order and then send the output to s4.
What command would I use to do that? I've been at this for a while now.
|
Try the command below; it sorts /etc/passwd numerically on the UID (third) field:
sort -n -t ':' -k3 /etc/passwd
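To see the effect without depending on any particular machine's /etc/passwd, here is the same sort run over a few made-up passwd-style lines (the users and UIDs are invented):

```shell
# -t ':' sets the field separator, -k3 picks the UID field, -n sorts numerically.
printf '%s\n' \
  'carol:x:1002:1002::/home/carol:/bin/bash' \
  'root:x:0:0:root:/root:/bin/bash' \
  'alice:x:1000:1000::/home/alice:/bin/bash' |
sort -n -t ':' -k3
# Output:
# root:x:0:0:root:/root:/bin/bash
# alice:x:1000:1000::/home/alice:/bin/bash
# carol:x:1002:1002::/home/carol:/bin/bash
```

For the "send it to s4" part of the question, redirect the result to that file: sort -n -t ':' -k3 /etc/passwd > s4.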
| Sort with field separator [duplicate] |
1,530,741,375,000 |
Is updatedb necessary at all? I never use locate, and my servers tend to have dozens of millions of files, which usually makes updatedb run for a long time and consume I/O needed by MySQL and/or other software.
Can I just remove it from cron and expect everything to work? (by everything I mean usual software found on server: linux, cpanel, mysql, apache, php etc.).
|
Yes, you can disable it in cron or remove the package that provides updatedb. On a Red Hat system, you'd take the following steps to determine whether anything requires it prior to removal.
First find out where the program is located on disk.
$ type updatedb
updatedb is /usr/bin/updatedb
Next find out what package provides updatedb.
$ rpm -qf /usr/bin/updatedb
mlocate-0.26-3.fc19.x86_64
See if anything requires mlocate.
$ rpm -q --whatrequires mlocate
no package requires mlocate
Nothing requires it so you can remove the package.
$ yum remove mlocate
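If you'd rather keep the package but stop the nightly scan, disabling the cron job is enough: run-parts skips files in /etc/cron.daily/ that aren't executable. The path /etc/cron.daily/mlocate is common on RHEL/Fedora but may differ on your distro; the sketch below demonstrates the mechanism on a throwaway copy so nothing system-wide changes:

```shell
# Demonstrate the disable mechanism on a temp file instead of the real job.
job=$(mktemp)
printf '#!/bin/sh\nexec /usr/bin/updatedb\n' > "$job"
chmod +x "$job"
chmod -x "$job"                  # clearing the execute bit is all it takes
[ ! -x "$job" ] && echo "cron job disabled"
rm -f "$job"
# prints: cron job disabled
```

On the real system that would be: sudo chmod -x /etc/cron.daily/mlocate, and chmod +x restores it later if you change your mind.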
| Can I just disable updatedb? |
1,530,741,375,000 |
Is it possible to move a logical volume from one volume group to another in whole?
It is possible to create a (more or less) matching lv and copy the data over, but is there any way to do this with LVM tools alone?
If not, is there a theoretical reason or a technical limitation (extent sizes)?
|
The precise answer to this question is: "No, it is not possible to (logically) move a Logical Volume (LV) from one Volume Group (VG1) to another (VG2). The data must be physically copied."
Reason: Logical Volume data is physically stored on the block devices (disks, partitions) assigned to a specific Volume Group. Moving a Logical Volume from VG1, consisting of /dev/sda and /dev/sdb, to VG2, consisting of /dev/sdc, would require moving data from /dev/sda and/or /dev/sdb to /dev/sdc, which is a physical copy operation between at least two block devices (or partitions).
P.S.
If all the LV data was stored on the Physical Volume, which could be completely excluded from the VG1, then this Physical Volume could be assigned to VG2. But then it would be moving a Physical Volume from one Volume Group to another, not a move of a Logical Volume.
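For the general case, the copy workflow mentioned in the answer boils down to three commands. The sketch below is a dry run (it only prints the commands rather than executing them, since they are destructive and system-specific); the names vg1, vg2, mylv, and the 10G size are hypothetical:

```shell
# Dry-run sketch of moving an LV's data between VGs by copying.
run() { echo "+ $*"; }   # print instead of executing; drop this to run for real

run lvcreate -L 10G -n mylv vg2                              # matching LV in target VG
run dd if=/dev/vg1/mylv of=/dev/vg2/mylv bs=4M conv=fsync    # block-level copy
run lvremove vg1/mylv                                        # drop the original
# Output:
# + lvcreate -L 10G -n mylv vg2
# + dd if=/dev/vg1/mylv of=/dev/vg2/mylv bs=4M conv=fsync
# + lvremove vg1/mylv
```

The LV should be unmounted (or the filesystem frozen) during the copy. If the data happens to fit on a single PV, the pvmove / vgreduce / vgextend route described in the P.S. avoids the copy-to-a-new-LV step entirely.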
| Move a logical volume from one volume group to another |