date int64 1,220B 1,719B | question_description stringlengths 28 29.9k | accepted_answer stringlengths 12 26.4k | question_title stringlengths 14 159 |
|---|---|---|---|
1,332,892,117,000 |
I'm connected to a local area network with access to the Internet through a gateway. There is a DNS server in the local network which is capable of resolving hostnames of computers on the local network.
I would like to configure systemd-resolved and systemd-networkd so that lookup requests for local hostnames are directed (routed) exclusively to the local DNS server, and lookup requests for all other hostnames are directed exclusively to another, remote DNS server.
Let's assume I don't know where the configuration files are or whether I should add more files, and require their path(s) to be specified in the answer.
|
In the configuration file for the local network interface (a file matching the name pattern /etc/systemd/network/*.network) we have to either specify that we want to obtain the local DNS server address from the DHCP server, using the DHCP= option:
[Network]
DHCP=yes
or specify its address explicitly using the DNS= option:
[Network]
DNS=10.0.0.1
In addition we need to specify (in the same section) the local domains using the Domains= option:
Domains=domainA.example domainB.example ~example
We specify the local domains domainA.example and domainB.example to get the following behavior (from the systemd-resolved.service, systemd-resolved man page):
Lookups for a hostname ending in one of the per-interface domains are
exclusively routed to the matching interfaces.
This way hostX.domainA.example will be resolved exclusively by our local DNS server.
We specify with ~example that all domains ending in example are to be treated as route-only domains, to get the following behavior (from the description of this commit):
DNS servers which have route-only domains should only be used for the
specified domains.
This way hostY.on.the.internet will be resolved exclusively by our global, remote DNS server.
Note
Ideally, when using the DHCP protocol, local domain names should be obtained from the DHCP server instead of being specified explicitly in the configuration file of the network interface above. See the UseDomains= option. However, there are still outstanding issues with this feature – see the systemd-networkd DHCP search domains option issue.
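Putting the pieces together, a complete interface file might look like the following sketch (the file name, interface name and addresses are examples, not requirements):

```ini
# /etc/systemd/network/20-lan.network (example name)
[Match]
Name=enp3s0

[Network]
DNS=10.0.0.1
Domains=domainA.example domainB.example ~example
```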
We need to specify the remote DNS server as our global, system-wide DNS server. We can do this in the /etc/systemd/resolved.conf file:
[Resolve]
DNS=8.8.8.8 8.8.4.4 2001:4860:4860::8888 2001:4860:4860::8844
Don't forget to reload configuration and to restart services:
$ sudo systemctl daemon-reload
$ sudo systemctl restart systemd-networkd
$ sudo systemctl restart systemd-resolved
Caution!
The above guarantees apply only when names are being resolved by systemd-resolved – see the man page for nss-resolve, libnss_resolve.so.2 and the man page for systemd-resolved.service, systemd-resolved.
See also:
Description of routing lookup requests in systemd related man pages is unclear
How to troubleshoot DNS with systemd-resolved?
References:
Man page for systemd-resolved.service, systemd-resolved
Man page for resolved.conf, resolved.conf.d
Man page for systemd.network
| How to configure systemd-resolved and systemd-networkd to use local DNS server for resolving local domains and remote DNS server for remote domains? |
1,332,892,117,000 |
I'm going through this book, Advanced Linux Programming by Mark Mitchell, Jeffrey Oldham, and Alex Samuel. It's from 2001, so a bit old. But I find it quite good anyhow.
However, I got to a point when it diverges from what my Linux produces in the shell output. On page 92 (116 in the viewer), the chapter 4.5 GNU/Linux Thread Implementation begins with the paragraph containing this statement:
The implementation of POSIX threads on GNU/Linux differs from the
thread implementation on many other UNIX-like systems in an important
way: on GNU/Linux, threads are implemented as processes.
This seems like a key point and is later illustrated with C code. The output in the book is:
main thread pid is 14608
child thread pid is 14610
And in my Ubuntu 16.04 it is:
main thread pid is 3615
child thread pid is 3615
ps output supports this.
I guess something must have changed between 2001 and now.
The next subchapter on the next page, 4.5.1 Signal Handling, builds up on the previous statement:
The behavior of the interaction between signals and threads varies
from one UNIX-like system to another. In GNU/Linux, the behavior is
dictated by the fact that threads are implemented as processes.
And it looks like this will be even more important later on in the book. Could someone explain what's going on here?
I've seen this question, Are Linux kernel threads really kernel processes?, but it doesn't help much. I'm confused.
This is the C code:
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
void* thread_function (void* arg)
{
fprintf (stderr, "child thread pid is %d\n", (int) getpid ());
/* Spin forever. */
while (1);
return NULL;
}
int main ()
{
pthread_t thread;
fprintf (stderr, "main thread pid is %d\n", (int) getpid ());
pthread_create (&thread, NULL, &thread_function, NULL);
/* Spin forever. */
while (1);
return 0;
}
|
I think this part of the clone(2) man page may clear up the difference re. the PID:
CLONE_THREAD (since Linux 2.4.0-test8)
If CLONE_THREAD is set, the child is placed in the same thread
group as the calling process.
Thread groups were a feature added in Linux 2.4 to support the
POSIX threads notion of a set of threads that share a single
PID. Internally, this shared PID is the so-called thread
group identifier (TGID) for the thread group. Since Linux
2.4, calls to getpid(2) return the TGID of the caller.
The "threads are implemented as processes" phrase refers to the issue of threads having had separate PIDs in the past. Basically, Linux originally didn't have threads within a process, just separate processes (with separate PIDs) that might have had some shared resources, like virtual memory or file descriptors. CLONE_THREAD and the separation of the process ID and the thread ID make the Linux behaviour look more like that of other systems and more like the POSIX requirements in this sense. Though technically the OS still doesn't have separate implementations for threads and processes.
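The shared PID described above is visible directly in /proc: every task's status file reports both its own Pid and the thread-group ID. For a single-threaded process such as a shell the two coincide (a quick sketch, Linux-specific):

```shell
# A single-threaded process: its PID and its thread-group ID (TGID) coincide.
pid=$$
tgid=$(awk '/^Tgid:/ {print $2}' /proc/$pid/status)
echo "pid=$pid tgid=$tgid"
# In a multithreaded program every thread has its own entry under
# /proc/<pid>/task/, but all of them share this Tgid -- the value
# getpid() has returned since Linux 2.4.
```

For a running multithreaded program, listing /proc/&lt;pid&gt;/task/ shows the per-thread IDs that share that one Tgid.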
Signal handling was another problematic area with the old implementation, this is described in more detail in the paper @FooF refers to in their answer.
As noted in the comments, Linux 2.4 was also released in 2001, the same year as the book, so it's not surprising the news didn't get to that print.
| Are threads implemented as processes on Linux? |
1,332,892,117,000 |
I know that pkill has more filtering rules than killall. My question is, what is the difference between:
pkill [signal] name
and
killall [signal] name
I've read that killall is more thorough and kills all processes and subprocesses (recursively) that match the program name. Doesn't pkill do this too?
|
The pgrep and pkill utilities were introduced in Sun's Solaris 7 and, as g33klord noted, they take a pattern as argument which is matched against the names of running processes. While pgrep merely prints a list of matching processes, pkill sends the specified signal (or SIGTERM by default) to the processes. The common options and semantics between pgrep and pkill come in handy when you want to be careful and first review the list of matching processes with pgrep, then proceed to kill them with pkill. pgrep and pkill are provided by the procps package, which also provides other /proc file system utilities, such as ps, top, free, and uptime, among others.
The killall command is provided by the psmisc package, and differs from pkill in that, by default, it matches the argument name exactly (up to the first 15 characters) when determining the processes that signals will be sent to. The -e, --exact option can be specified to also require exact matches for names longer than 15 characters. This makes killall somewhat safer to use compared to pkill. If the specified argument contains slash (/) characters, the argument is interpreted as a file name and processes running that particular file will be selected as signal recipients. killall also supports regular expression matching of process names, via the -r, --regexp option.
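The difference in matching is easy to observe without sending any signals, because pgrep follows pkill's matching rules; a quick sketch (assumes a procps-style pgrep):

```shell
sleep 300 &          # a process whose name is exactly "sleep"
pid=$!

pgrep slee           # pattern match (pkill-style): finds sleep
pgrep -x slee        # exact match (killall-style): finds nothing
pgrep -x sleep       # exact match of the full name: finds it

kill "$pid"          # clean up
```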
There are other differences as well. The killall command for instance has options for matching processes by age (-o, --older-than and -y, --younger-than), while pkill can be told to only kill processes on a specific terminal (via the -t option). Clearly then, the two commands have specific niches.
Note that the killall command on systems descendant from Unix System V (notably Sun's Solaris, IBM's AIX and HP's HP-UX) kills all processes killable by a particular user, effectively shutting down the system if run by root.
The Linux psmisc utilities have been ported to BSD (and in extension Mac OS X), hence killall there follows the "kill processes by name" semantics.
| What's the difference between pkill and killall? |
1,332,892,117,000 |
I'm trying to find my current logged in group without wanting to use newgrp to switch.
|
I figured I can use the following.
id -g
To get all the groups I belong to
id -G
And to get the actual names, instead of the ids, just pass the flag -n.
id -Gn
This last command will yield the same result as executing
groups
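In a script it is often the group names you want to capture; a small sketch:

```shell
# primary group name of the current user
primary_group=$(id -gn)
echo "primary group: $primary_group"

# id -Gn and groups should agree on the full list
id -Gn
groups
```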
| Is there a whoami to find the current group I'm logged in as? |
1,332,892,117,000 |
We can use the following in order to test whether a port is open via telnet; in the following example we test port 6667:
[root@kafka03 ~]# telnet kafka02 6667
Trying 103.64.35.86...
Connected to kafka02.
Escape character is '^]'.
^CConnection closed by foreign host
Since on some machines we can't use telnet (for internal reasons), what are the alternatives to telnet for checking ports?
|
Netcat (nc) is one option.
nc -zv kafka02 6667
-z = sets nc to simply scan for listening daemons, without actually sending any data to them
-v = enables verbose mode
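If nc itself is unavailable, bash can attempt a TCP connection through its /dev/tcp pseudo-path (a bash feature, not a real device file); a sketch:

```shell
host=kafka02; port=6667    # the host/port from the question

if timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "port $port open on $host"
else
    echo "port $port closed or unreachable"
fi
```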
| What are the alternatives for checking open ports, besides telnet? |
1,332,892,117,000 |
What is the difference between procfs and sysfs? Why are they made as file systems? As I understand it, proc is just something to store the immediate info regarding the processes running in the system.
|
What is the difference between procfs
and sysfs?
proc is the old one; it is more or less without rules and structure. At some point it was decided that proc was a little too chaotic and a new way was needed.
Then sysfs was created, and the new stuff that was added was put into sysfs like device information.
So in some sense they do the same, but sysfs is a little bit more structured.
Why are they made as file systems?
The UNIX philosophy tells us that everything is a "file"; therefore they were created so that they behave as files.
As I understand it, proc is just
something to store the immediate info
regarding the processes running in the
system.
Those parts have always been there and they will probably never move into sysfs.
But there is more old stuff in proc that has not been moved.
| What is the difference between procfs and sysfs? |
1,332,892,117,000 |
What do I need to put in the [install] section, so that systemd runs /home/me/so.pl right before shutdown and also before /proc/self/net/dev gets destroyed?
[Unit]
Description=Log Traffic
[Service]
ExecStart=/home/me/so.pl
[Install]
?
|
The suggested solution is to run the service unit as a normal service – have a look at the [Install] section. Everything has to be thought of in reverse, dependencies too, because the shutdown order is the reverse of the startup order. That's why the script has to be placed in ExecStop=.
The following solution is working for me:
[Unit]
Description=...
[Service]
Type=oneshot
RemainAfterExit=true
ExecStop=<your script/program>
[Install]
WantedBy=multi-user.target
RemainAfterExit=true is needed when you don't have an ExecStart action.
After creating the file, make sure to run systemctl daemon-reload and systemctl enable --now yourservice.
I just got it from systemd IRC, credits are going to mezcalero.
| How to run a script with systemd right before shutdown? |
1,332,892,117,000 |
Currently, when using the ifconfig command, the following IP addresses are shown:
own IP, broadcast and mask.
Is there a way to show the related gateway IP address as well (on the same screen with all the others, not by using the 'route' command)?
|
You can with the ip command, and given that ifconfig is in the process of being deprecated by most distributions, ip is now the preferred tool. An example:
$ ip route show
212.13.197.0/28 dev eth0 proto kernel scope link src 212.13.197.13
default via 212.13.197.1 dev eth0
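If you want just the gateway address (e.g. for use in a script), filter the default route line; a sketch that assumes the output format shown above:

```shell
ip route show | awk '/^default/ {print $3}'
# with the example output above this prints: 212.13.197.1
```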
| show gateway IP address when performing ifconfig command |
1,332,892,117,000 |
I can connect to Linux machines from Windows using PuTTY/SSH. I want to do the other way round - connect to a Windows machine from Linux.
Is this possible?
|
It depends on how you want to connect. You can create shares on the Windows machine and use smb/cifs to connect to the share.
The syntax depends on whether or not you are in a domain.
# mount -t cifs //server/share /mnt/server --verbose -o user=UserName,dom=DOMAIN
You also have the ability to mount the IPC$ and administrative shares. You can look into Inter-Process Communication for what you can do via the IPC$ share.
There is always:
RDP
VNC
telnet
ssh
Linux on Windows
With the last 3 you need to install additional software.
Kpym (telnet / ssh server)
MobaSSH (ssh server)
Cygwin (run a Linux environment inside Windows)
DamnSmall Linux - inside Windows (like Cygwin, runs DSL inside Windows)
VNC can be run from a stand-alone binary or installed.
RealVNC
TightVNC
For RDP most Linux systems either already have rdesktop installed or it is available in the package manager. Using rdesktop you only have to enable RDP connections to your Windows system and then you will be able to use RDP for a full GUI Windows console.
| Can I connect to Windows machine from Linux shell? |
1,332,892,117,000 |
I've seen some people make a separate partition for /boot. What is the benefit of doing this? What problems might I encounter in the future by doing this?
Also, except for /home and /boot, which partitions can be separated? Is it recommended?
|
This is a holdover from "ye olde tymes" when machines had trouble addressing large hard drives. The idea behind the /boot partition was to make the partition always accessible to any machine that the drive was plugged into. If the machine could get to the start of the drive (lower cylinder numbers) then it could bootstrap the system; from there the linux kernel would be able to bypass the BIOS boot restriction and work around the problem. As modern machines have lifted that restriction, there is no longer a fixed need for /boot to be separate, unless you require additional processing of the other partitions, such as encryption or file systems that are not natively recognized by the bootloader.
Technically, you can get away with a single partition and be just fine, provided that you are not using really really old hardware (pre-1998 or so).
If you do decide to use a separate partition, just be sure to give it adequate room, say 200 MB of space. That will be more than enough for several kernel upgrades (which consume several megs each time). If /boot starts to fill up, remove older kernels that you don't use and adjust your bootloader to recognize this fact.
| Is it good to make a separate partition for /boot? |
1,440,012,027,000 |
I've worked in *nix environments for the last four years as an application developer (mostly in C).
Please suggest some books/blogs etc. for improving my *nix internals knowledge.
|
Here are some suggestions on how to understand the "spirit" of Unix, in addition to the fine recommendations that have been made in the previous posts:
"The Unix Programming Environment" by Kernighan and Pike: an old book, but it shows the essence of the Unix environment. It will also help you become an effective shell user.
"Unix for the Impatient" is a useful resource to learn to navigate the Unix environment. One of my favorites.
If you want to become a power user, there is nothing better than O'Reilly's "Unix Power Tools" which consists of the collective tips and tricks from Unix professionals.
Another book that I have not seen mentioned, and that is a fun, light, and educational read, is "Operating Systems: Design and Implementation", the book from Andy Tanenbaum that included the source code for a complete Unix operating system in 12k lines of code.
| Recommended reading to better understand Unix/Linux internals [closed] |
1,440,012,027,000 |
I need to write a bash script wherein I have to create a file which holds the details of IP Addresses of the hosts and their mapping with corresponding MAC Addresses.
Is there any possible way with which I can find out the MAC address of any (remote) host when IP address of the host is available?
|
If you just want to find out the MAC address of a given IP address you can use the command arp to look it up, once you've pinged the system once.
Example
$ ping skinner -c 1
PING skinner.bubba.net (192.168.1.3) 56(84) bytes of data.
64 bytes from skinner.bubba.net (192.168.1.3): icmp_seq=1 ttl=64 time=3.09 ms
--- skinner.bubba.net ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 3.097/3.097/3.097/0.000 ms
Now look up in the ARP table:
$ arp -a
skinner.bubba.net (192.168.1.3) at 00:19:d1:e8:4c:95 [ether] on wlp3s0
fing
If you want to sweep the entire LAN for MAC addresses you can use the command line tool fing to do so. It's typically not installed so you'll have to go download it and install it manually.
$ sudo fing 10.9.8.0/24
Using ip
If you find you don't have the arp or fing commands available, you could use iproute2's command ip neigh to see your system's ARP table instead:
$ ip neigh
192.168.1.61 dev eth0 lladdr b8:27:eb:87:74:11 REACHABLE
192.168.1.70 dev eth0 lladdr 30:b5:c2:3d:6c:37 STALE
192.168.1.95 dev eth0 lladdr f0:18:98:1d:26:e2 REACHABLE
192.168.1.2 dev eth0 lladdr 14:cc:20:d4:56:2a STALE
192.168.1.10 dev eth0 lladdr 00:22:15:91:c1:2d REACHABLE
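To produce the IP-to-MAC mapping file the question asks for, the neighbour table can be reduced with awk; in the format above the MAC follows the lladdr keyword (a sketch; entries without a resolved MAC are skipped):

```shell
# Write "IP MAC" pairs for all neighbours that have a known MAC
ip neigh | awk '$4 == "lladdr" {print $1, $5}' > ip-mac-map.txt
```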
References
Equivalent of iwlist to see who is around?
| Resolving MAC Address from IP Address in Linux |
1,440,012,027,000 |
I am running a docker server on Arch Linux (kernel 4.3.3-2) with several containers. Since my last reboot, both the docker server and random programs within the containers crash with a message about not being able to create a thread, or (less often) to fork. The specific error message is different depending on the program, but most of them seem to mention the specific error Resource temporarily unavailable. See at the end of this post for some example error messages.
Now there are plenty of people who have had this error message, and plenty of responses to them. What’s really frustrating is that everyone seems to be speculating how the issue could be resolved, but no one seems to point out how to identify which of the many possible causes for the problem is present.
I have collected these 5 possible causes for the error and how to verify that they are not present on my system:
There is a system-wide limit on the number of threads configured in /proc/sys/kernel/threads-max (source). In my case this is set to 60613.
Every thread takes some space on the stack. The stack size limit is configured using ulimit -s (source). The limit for my shell used to be 8192, but I have increased it by putting * soft stack 32768 into /etc/security/limits.conf, so ulimit -s now returns 32768. I have also increased it for the docker process by putting LimitSTACK=33554432 into /etc/systemd/system/docker.service (source), and I verified that the limit applies by looking into /proc/<pid of docker>/limits and by running ulimit -s inside a docker container.
Every thread takes some memory. A virtual memory limit is configured using ulimit -v. On my system it is set to unlimited, and 80% of my 3 GB of memory are free.
There is a limit on the number of processes using ulimit -u. Threads count as processes in this case (source). On my system, the limit is set to 30306, and for the docker daemon and inside docker containers, the limit is 1048576. The number of currently running threads can be found out by running ls -1d /proc/*/task/* | wc -l or by running ps -elfT | wc -l (source). On my system they are between 700 and 800.
There is a limit on the number of open files, which according to some sources is also relevant when creating threads. The limit is configured using ulimit -n. On my system and inside docker, the limit is set to 1048576. The number of open files can be found out using lsof | wc -l (source), on my system it is about 30000.
It looks like before the last reboot I was running kernel 4.2.5-1, now I’m running 4.3.3-2. Downgrading to 4.2.5-1 fixes all the problems. Other posts mentioning the problem are this and this. I have opened a bug report for Arch Linux.
What has changed in the kernel that could be causing this?
Here are some example error messages:
Crash dump was written to: erl_crash.dump
Failed to create aux thread
Jan 07 14:37:25 edeltraud docker[30625]: runtime/cgo: pthread_create failed: Resource temporarily unavailable
dpkg: unrecoverable fatal error, aborting:
fork failed: Resource temporarily unavailable
E: Sub-process /usr/bin/dpkg returned an error code (2)
test -z "/usr/include" || /usr/sbin/mkdir -p "/tmp/lib32-popt/pkg/lib32-popt/usr/include"
/bin/sh: fork: retry: Resource temporarily unavailable
/usr/bin/install -c -m 644 popt.h '/tmp/lib32-popt/pkg/lib32-popt/usr/include'
test -z "/usr/share/man/man3" || /usr/sbin/mkdir -p "/tmp/lib32-popt/pkg/lib32-popt/usr/share/man/man3"
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: Resource temporarily unavailable
/bin/sh: fork: Resource temporarily unavailable
make[3]: *** [install-man3] Error 254
Jan 07 11:04:39 edeltraud docker[780]: time="2016-01-07T11:04:39.986684617+01:00" level=error msg="Error running container: [8] System error: fork/exec /proc/self/exe: resource temporarily unavailable"
[Wed Jan 06 23:20:33.701287 2016] [mpm_event:alert] [pid 217:tid 140325422335744] (11)Resource temporarily unavailable: apr_thread_create: unable to create worker thread
|
The problem is caused by the TasksMax systemd attribute. It was introduced in systemd 228 and makes use of the cgroups pid subsystem, which was introduced in the linux kernel 4.3. A task limit of 512 is thus enabled in systemd if kernel 4.3 or newer is running. The feature is announced here and was introduced in this pull request and the default values were set by this pull request. After upgrading my kernel to 4.3, systemctl status docker displays a Tasks line:
# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/etc/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: active (running) since Fri 2016-01-15 19:58:00 CET; 1min 52s ago
Docs: https://docs.docker.com
Main PID: 2770 (docker)
Tasks: 502 (limit: 512)
CGroup: /system.slice/docker.service
Setting TasksMax=infinity in the [Service] section of docker.service fixes the problem. docker.service is usually in /usr/lib/systemd/system, but it can also be put/copied in /etc/systemd/system to avoid it being overridden by the package manager.
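Rather than copying the whole unit file, the override can also be kept in a systemd drop-in, which survives package upgrades (the file name override.conf is a convention, not a requirement):

```ini
# /etc/systemd/system/docker.service.d/override.conf
[Service]
TasksMax=infinity
```

Run systemctl daemon-reload followed by systemctl restart docker afterwards.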
A pull request is increasing TasksMax for the docker example systemd files, and an Arch Linux bug report is trying to achieve the same for the package. There is some additional discussion going on on the Arch Linux Forum and in an Arch Linux bug report regarding lxc.
DefaultTasksMax can be used in the [Manager] section in /etc/systemd/system.conf (or /etc/systemd/user.conf for user-run services) to control the default value for TasksMax.
Systemd also applies a limit for programs run from a login-shell. These default to 4096 per user (will be increased to 12288) and are configured as UserTasksMax in the [Login] section of /etc/systemd/logind.conf.
| Creating threads fails with “Resource temporarily unavailable” with 4.3 kernel |
1,440,012,027,000 |
I am defining common bash files which I want to use across different distributions. I need a way to check whether the system is using systemd or sysvinit (/etc/init.d/). I need this so I can run the appropriate command to start a service. What would be a safe way to check for this? I currently check for the existence of the systemctl command, but is that really an option? There might be cases where the systemctl command is available, but that wouldn't necessarily mean that systemd is actually used.
Here is an excerpt from my current bash script:
#!/bin/sh
if command -v systemctl >/dev/null
then
systemctl start service
else
/etc/init.d/service start
fi
|
Both systemd and sysvinit's init run as PID 1
pidof /sbin/init && echo "sysvinit" || echo "other"
Check for systemd
pidof systemd && echo "systemd" || echo "other"
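A check that doesn't depend on which commands happen to be installed is to test for the directory /run/systemd/system – systemd creates it at boot, and systemd's own sd_booted() function performs exactly this check:

```shell
if [ -d /run/systemd/system ]; then
    echo "systemd"      # use: systemctl start <service>
else
    echo "other"        # use: /etc/init.d/<service> start
fi
```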
| Convenient way to check if system is using systemd or sysvinit in BASH? [duplicate] |
1,440,012,027,000 |
Am doing some work on a remote CentOS 5.6 machine and my network keeps dropping.
Is there a way that I can recover my hung sessions after I reconnect?
EDIT: am doing some updating and installing with yum and am worried this might be a problem if processes keep hanging in the middle of whatever they're doing.
|
There is no way, but to prevent this I like using tmux. I start tmux, start the operation and go on my way. If I return and find the connection has been broken, all I have to do is reconnect and type tmux attach.
Here's an example.
$ tmux
$ make <something big>
......
Connection fails for some reason
Reconnect
$ tmux ls
0: 1 windows (created Tue Aug 23 12:39:52 2011) [103x30]
$ tmux attach -t 0
Back in the tmux session
| How to recover a shell after a disconnection |
1,440,012,027,000 |
I recently learned that (at least on Fedora and Red Hat Enterprise Linux), executable programs that are compiled as Position Independent Executables (PIE) receive stronger address space randomization (ASLR) protection.
So: How do I test whether a particular executable was compiled as a Position Independent Executable, on Linux?
|
You can use the perl script contained in the hardening-check package, available in Fedora and Debian (as hardening-includes). Read this Debian wiki page for details on what compile flags are checked. It's Debian specific, but the theory applies to Red Hat as well.
Example:
$ hardening-check $(which sshd)
/usr/sbin/sshd:
Position Independent Executable: yes
Stack protected: yes
Fortify Source functions: yes (some protected functions found)
Read-only relocations: yes
Immediate binding: yes
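If hardening-check isn't available, the ELF header already tells you: a PIE is linked as a shared object (type DYN) while a non-PIE executable is type EXEC. A sketch using readelf (note that shared libraries are also DYN, so this test only makes sense on executables):

```shell
readelf -h /usr/sbin/sshd | grep 'Type:'
# Type: DYN  (shared object / position-independent) -> PIE
# Type: EXEC (executable file)                      -> not PIE
```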
| How to test whether a Linux binary was compiled as position independent code? |
1,440,012,027,000 |
I created a folder on the command line as the root user. Now I want to edit it and its contents in GUI mode. How do I change the permissions on it to allow me to do this?
|
If I understand you correctly, fire up a terminal, navigate to one level above that directory, change to root and issue the command:
chown -R user:group directory/
This changes the ownership of directory/ (and everything else within it) to the user user and the group group. Many systems add a group named after each user automatically, so you may want:
chown -R user:user directory/
After this, you can edit the tree under directory/ and even change the permissions of directory/ and any file/directory under it, from the GUI.
If you truly want any user to have full permissions on all files under directory/ (which may be OK if this is your personal computer, but is definitely not recommended for multi-user environments), you can issue this:
chmod -R a+rwX directory/
as root.
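The capital X in a+rwX is what makes the recursive form safe on mixed trees: it adds execute permission to directories (and to files that already had an execute bit) but leaves regular files non-executable. A small demonstration:

```shell
mkdir -p demo/sub
touch demo/file
chmod -R a+rwX demo
stat -c '%a %n' demo demo/sub demo/file
# 777 demo
# 777 demo/sub
# 666 demo/file
```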
| How to change permissions from root user to all users? |
1,440,012,027,000 |
In unix/linux, any number of consecutive forward slashes in a path is generally equivalent to a single forward slash. e.g.
$ cd /home/shum
$ pwd
/home/shum
$ cd /home//shum
$ pwd
/home/shum
$ cd /home///shum
$ pwd
/home/shum
Yet for some reason two forward slashes at the beginning of an absolute path are treated specially. e.g.
$ cd ////home
$ pwd
/home
$ cd ///
$ pwd
/
$ cd //
$ pwd
//
$ cd home//shum
$ pwd
//home/shum
Any other run of consecutive forward slashes anywhere else in a path gets collapsed to a single slash, but two at the beginning will remain, even if you then navigate around the filesystem relative to it.
Why is this? Is there any difference between /... and //... ?
|
For the most part, repeated slashes in a path are equivalent to a single slash. This behavior is mandated by POSIX and most applications follow suit. The exception is that "a pathname that begins with two successive slashes may be interpreted in an implementation-defined manner" (but ///foo is equivalent to /foo).
Most unices don't do anything special with two initial slashes. Linux, in particular, doesn't. Cygwin does: //hostname/path accesses a network drive (SMB).
What you're seeing is not, in fact, Linux doing anything special with //: it's bash's current directory tracking. Compare:
$ bash -c 'cd //; pwd'
//
$ bash -c 'cd //; /bin/pwd'
/
Bash is taking the precaution that the OS might be treating // specially, and keeping it. Dash does the same. Ksh and zsh don't when they're running on Linux; I guess (I haven't checked) they have a compile-time setting.
| unix, difference between path starting with '/' and '//' [duplicate] |
1,440,012,027,000 |
On my fedora VM, when running with my user account I have /usr/local/bin in my path:
[justin@justin-fedora12 ~]$ env | grep PATH
PATH=/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/justin/bin
And likewise when running su:
[justin@justin-fedora12 ~]$ su -
Password:
[root@justin-fedora12 justin]# env | grep PATH
PATH=/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/justin/bin
However, when running via sudo, this directory is not in the path:
[root@justin-fedora12 justin]# exit
[justin@justin-fedora12 ~]$ sudo bash
[root@justin-fedora12 ~]# env | grep PATH
PATH=/usr/kerberos/sbin:/usr/kerberos/bin:/sbin:/bin:/usr/sbin:/usr/bin
Why would the path be different when running via sudo?
|
Q: "Why would the path be different when running via sudo?"
Perhaps the best explanation is found in the Command environment section of man 5 sudoers:
Since environment variables can influence program behavior, sudoers provides a means to restrict which variables from the user's environment are inherited by the command to be run. There are two distinct ways sudoers can deal with environment variables.
This section goes on to explain how to modify the restrictions on environment variables that are passed - or blocked - by sudo. These default restrictions will vary by OS and distro. Consequently, your first stop should be sudo -V to learn the status of your environment variables. Run this command from the root prompt; for example:
$ sudo -i
# sudo -V
... a long list ...
This output shows each and every environment variable for your system, grouped by its status: removed, preserved, check.
If your PATH environment variable is in the removed list, you may change that with this line:
Defaults env_keep += "path PATH"
You may add this line to your sudoers file configuration by editing with:
$ sudo visudo
Alternatively - and a bit cleaner to my way of thinking - you can add it as a "code snippet" to /etc/sudoers.d - if it exists. If it doesn't exist, you may be able to create it by adding the line #includedir /etc/sudoers.d to the tail of your sudoers file.
This is basically the same process as editing sudoers; the "code snippets" may be created and edited as follows:
$ sudo visudo -f /etc/sudoers.d/10_mypathsnippet
You can verify the change is set by running sudo -V again (from root prompt!); in this case your PATH variable should now be in the preserved list.
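One caveat worth knowing: many distributions also ship a secure_path line in sudoers, which replaces the caller's PATH outright for every sudo command; if it is present, keeping PATH via env_keep has no effect unless secure_path is removed or adjusted. A hypothetical snippet for /etc/sudoers.d/10_mypathsnippet might look like:

```
# Keep the invoking user's PATH (only effective if secure_path is not set)
Defaults env_keep += "PATH"

# Or instead pin an explicit path for every command run through sudo:
# Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
```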
| Why are PATH variables different when running via sudo and su? |
1,440,012,027,000 |
I want know how I can run a command for a specified time say, one minute and if it doesn't complete execution then I should be able to stop it.
|
Use timeout:
NAME
timeout - run a command with a time limit
SYNOPSIS
timeout [OPTION] DURATION COMMAND [ARG]...
timeout [OPTION]
(Just in case, if you don't have this command or if you need to be compatible with very very old shells and have several other utterly specific requirements… have a look at this question ;-))
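A quick illustration — timeout exits with status 124 when it has to kill the command:

```shell
# Give `sleep 100` a 2-second budget; timeout kills it after 2 seconds
timeout 2 sleep 100
echo "exit status: $?"    # → exit status: 124
```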
| Run a command for a specified time and then abort if time exceeds |
1,440,012,027,000 |
I am running the following command on my ubuntu server
root@slot13:~# lxc-stop --name pavan --logfile=test1.txt --logpriority=trace
It seems to hang indefinitely. Whenever this happened on AIX, I simply used to get the PID of the offending process and say
$ procstack <pid_of_stuck_process>
and it used to show the whole callstack of the process. Is there any equivalent of procstack in linux/ubuntu?
|
My first step would be to run strace on the process, ideally like this:
strace -s 99 -ffp 12345
if your process ID is 12345. This will show you all syscalls the program is doing. How to strace a process tells you more.
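Another cheap first look (assuming procfs is available) is the wchan entry, which names the kernel function a sleeping process is currently blocked in — some kernels hide it and just report 0:

```shell
# Block a process in the kernel, then ask where it is sleeping
sleep 30 &
pid=$!
sleep 1                       # give it a moment to enter its sleep
cat /proc/$pid/wchan; echo    # wchan has no trailing newline
kill $pid
```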
If you insist on getting a stacktrace, google tells me the equivalent is pstack. But as I do not have it installed I use gdb:
tweedleburg:~ # sleep 3600 &
[2] 2621
tweedleburg:~ # gdb
(gdb) attach 2621
(gdb) bt
#0 0x00007feda374e6b0 in __nanosleep_nocancel () from /lib64/libc.so.6
#1 0x0000000000403ee7 in ?? ()
#2 0x0000000000403d70 in ?? ()
#3 0x000000000040185d in ?? ()
#4 0x00007feda36b8b05 in __libc_start_main () from /lib64/libc.so.6
#5 0x0000000000401969 in ?? ()
(gdb)
| How to know where a program is stuck in linux? |
1,440,012,027,000 |
Is there a diagram that shows how the various performance tools such as ip, netstat, perf, top, ps, etc. interact with the various subsystems within the Linux kernel?
|
I came across this diagram which shows exactly this.
In the above you can see where tools such as strace, netstat, etc. interact with the Linux kernel's subsystems. I like this diagram because it succinctly shows where each tool latches on to the Linux kernel, which can be extremely helpful when you're first learning about all the tools and their applications.
Source: Linux PerfTools
References
Linux Performance
| Diagram of Linux kernel vs. performance tools? |
1,440,012,027,000 |
I have a bash script with the following:
#!/bin/bash -e
egrep "^username" /etc/passwd >/dev/null
if[ $? -eq 0 ]
then
echo "doesn't exist"
fi
This script will not run without the -e. What does the -e do for this script? Also, what does the $? do in this context?
|
Your post actually contains 2 questions.
The -e flag instructs the script to exit on error. More flags
If there is an error it will exit right away.
The $? is the exit status of the last command. In Linux an exit status of 0 means that the command was successful. Any other status would mean an error occurred.
To apply these answers to your script:
egrep "^username" /etc/passwd >/dev/null
would look for the username in the /etc/passwd file.
If it finds it then the exit status $? will be equal to 0.
If it doesn't find it the exit status will be something else (not 0). Here, you will want to execute the echo "doesn't exist" part of the code.
Unfortunately there is an error in your script, and you would execute that code if the user exists - change the line to
if [ $? -ne 0 ]
to get the logic right.
However if the user doesn't exist, egrep will return an error code, and due to the -e option the shell will immediately exit after that line, so you would never reach that part of the code.
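The difference is easy to demonstrate from the command line:

```shell
# With -e the shell aborts at the first failing command
bash -ec 'false; echo "never reached"'   # prints nothing
echo "with -e, exit status: $?"          # → with -e, exit status: 1

# Without -e the script carries on past the failure
bash -c 'false; echo "still running"'    # → still running
echo "without -e, exit status: $?"       # → without -e, exit status: 0
```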
| What does the -e do in a bash shebang? |
1,440,012,027,000 |
For a given process in /proc/<pid>/smaps, for a given mapping entry what are:
Shared_Clean
Shared_Dirty
Private_Clean
Private_Dirty
Is Shared_Clean + Shared_Dirty the amount of memory that is shared with other processes? So it is like shared RSS?
Similarly is Private_Clean + Private_Dirty the amount of memory that is available for only one process? So it is like private RSS?
Is the PSS value = PrivateRSS + (SharedRSS / number of processes sharing it)?
Some more questions after reading this link: LWN
Now lets talk about the process as a whole, whose smaps entry we are looking at.
I noticed that if I do Shared_Clean + Shared_Dirty + Private_Clean + Private_Dirty for every smaps entry for the process I get the RSS of the process as
reported by ps, which is pretty cool. For e.g.
ps -p $$ -o pid,rss
will give me approximately the same value for rss as the sum of every Shared_Clean, Shared_Dirty, Private_Clean, Private_Dirty entry in /proc/$$/smaps.
But what about PSS for the entire process? So, from the example above how do I get the PSS for $$ ? Can I just add the PSS entry for every smaps mapping and arrive at PSS for $$ ?
And what about USS for the entire process? Again taking the example above I am guessing that I can arrive at the USS for $$ by summing up only the Private_* entries for every smaps entry for $$..right?
Notes:
PSS= Proportional set size.
USS= Unique set size.
|
Clean pages are pages that have not been modified since they were mapped (typically, text sections from shared libraries are only read from disk (when necessary), never modified, so they'll be in shared, clean pages).
Dirty pages are pages that are not clean (i.e. have been modified).
Private pages are available only to that process, shared pages are mapped by other processes*.
RSS is the total number of pages, shared or not, currently mapped into the process. So Shared_Clean + Shared_Dirty would be the shared part of the RSS (i.e. the part of RSS that is also mapped into other processes), and Private_Clean + Private_Dirty the private part of RSS (i.e. only mapped in this process).
PSS (proportional share size) is as you describe. Private pages are summed up as is, and each shared mapping's size is divided by the number of processes that share it.
So if a process had 100k private pages, 500k pages shared with one other process, and 500k shared with four other processes, the PSS would be:
100k + (500k / 2) + (500k / 5) = 450k
Further readings:
ELC: How much memory are applications really using?
Documentation/filesystems/proc.txt in the kernel source
man proc(5)
Linux Memory Management Overview
Memory Management at TLDP.org
LinuxMM
Regarding process-wide sums:
RSS can be (approximately+) obtained by summing the Rss: entries in smaps (you don't need to add up the Shared/Private Clean/Dirty entries yourself).
awk '/Rss:/{ sum += $2 } END { print sum }' /proc/$$/smaps
You can sum up Pss: values the same way, to get process-global PSS.
USS isn't reported in smaps, but indeed, it is the sum of private mappings, so you can obtain it the same way too
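Hedging on the field names staying stable across kernel versions, both per-process sums can be computed straight from smaps:

```shell
# Process-wide PSS in kB (replace $$ with the PID of interest)
awk '/^Pss:/ { sum += $2 } END { print sum " kB PSS" }' /proc/$$/smaps

# Process-wide USS in kB: sum of the private (clean + dirty) pages
awk '/^Private_(Clean|Dirty):/ { sum += $2 } END { print sum " kB USS" }' /proc/$$/smaps
```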
*Note that a "share-able" page is counted as a private mapping until it is actually shared. i.e. if there is only one process currently using libfoo, that library's text section will appear in the process's private mappings. It will be accounted in the shared mappings (and removed from the private ones) only if/when another process starts using that library.
+The values don't add up exactly for all processes. Not exactly sure why... sorry.
| Getting information about a process' memory usage from /proc/pid/smaps |
1,440,012,027,000 |
My computer says:
$ uptime
10:20:35 up 1:46, 3 users, load average: 0,03, 0,10, 0,13
And if I check last I see:
reboot system boot 3.19.0-51-generi Tue Apr 12 08:34 - 10:20 (01:45)
And then I check:
$ ls -l /var/log/boot.log
-rw-r--r-- 1 root root 4734 Apr 12 08:34 boot.log
Then I see in /var/log/syslog the first line of today being:
Apr 12 08:34:39 PC... rsyslogd: [origin software="rsyslogd" swVersion="7.4.4" x-pid="820" x-info="http://www.rsyslog.com"] start
So all seems to converge in 8:34 being the time when my machine has booted.
However, I wonder: what is the exact time uptime uses? Is uptime a process that launches and checks some file or is it something on the hardware?
I'm running Ubuntu 14.04.
|
On my system it gets the uptime from /proc/uptime:
$ strace -eopen uptime
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib/libproc-3.2.8.so", O_RDONLY|O_CLOEXEC) = 3
open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
open("/proc/version", O_RDONLY) = 3
open("/sys/devices/system/cpu/online", O_RDONLY|O_CLOEXEC) = 3
open("/etc/localtime", O_RDONLY|O_CLOEXEC) = 3
open("/proc/uptime", O_RDONLY) = 3
open("/var/run/utmp", O_RDONLY|O_CLOEXEC) = 4
open("/proc/loadavg", O_RDONLY) = 4
10:52:38 up 3 days, 23:38, 4 users, load average: 0.00, 0.02, 0.05
From the proc manpage:
/proc/uptime
This file contains two numbers: the uptime of the system
(seconds), and the amount of time spent in idle process
(seconds).
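The first field is the authoritative number, and you can read and format it yourself:

```shell
# First field: seconds since boot; second: cumulative idle time of all CPUs
cat /proc/uptime

# A rough home-made uptime, formatted from the same source
awk '{ printf "up %d days, %02d:%02d\n", $1/86400, ($1%86400)/3600, ($1%3600)/60 }' /proc/uptime
```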
The proc filesystem contains a set of pseudo files. Those are not real files, they just look like files, but they contain values that are provided directly by the kernel. Every time you read a file, such as /proc/uptime, its contents are regenerated on the fly. The proc filesystem is an interface to the kernel.
In the linux kernel source code of the file fs/proc/uptime.c at line 49, you see a function call:
proc_create("uptime", 0, NULL, &uptime_proc_fops);
This creates a proc filesystem entry called uptime (the procfs is usually mounted under /proc), and associates a function to it, which defines valid file operations on that pseudo file and the functions associated to them. In case of uptime it's just read() and open() operations. However, if you trace the functions back you will end up here, where the uptime is calculated.
Internally, there is a timer interrupt which periodically updates the system's uptime (among other values). The interval at which the timer interrupt ticks is defined by the preprocessor macro HZ, whose exact value is set in the kernel config file and applied at compilation time.
The tick count since boot, divided by the frequency HZ (ticks per second), yields the uptime in seconds since the last boot.
To address your question: When does “uptime” start counting from?
Since the uptime is a kernel internal value, which ticks up every cycle, it starts counting when the kernel has initialized. That is, when the first cycle has ended. Even before anything is mounted, directly after the bootloader gives control to the kernel image.
| On Linux, when does "uptime" start counting from? |
1,440,012,027,000 |
Hi I have read Here that lsof is not an accurate way of getting the number of File Descriptors that are currently open. He recommended to use this command instead
cat /proc/sys/fs/file-nr
While this command displays the number of FD's, how do you display the list of open file descriptors that the command above just counted?
|
There are two reasons lsof | wc -l doesn't count file descriptors. One is that it lists things that aren't open files, such as loaded dynamically linked libraries and current working directories; you need to filter them out. Another is that lsof takes some time to run, so can miss files that are opened or closed while it's running; therefore the number of listed open files is approximate. Looking at /proc/sys/fs/file-nr gives you an exact value at a particular point in time.
cat /proc/sys/fs/file-nr is only useful when you need the exact figure, mainly to check for resource exhaustion. If you want to list the open files, you need to call lsof, or use some equivalent method such as trawling /proc/*/fd manually.
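For example, trawling the current shell's own descriptors (substitute any PID you are allowed to read):

```shell
# Count the open file descriptors of this shell...
ls /proc/$$/fd | wc -l

# ...and list them along with the files/sockets/pipes they point to
ls -l /proc/$$/fd
```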
| How to display open file descriptors but not using lsof command |
1,440,012,027,000 |
Is it possible to setup a Linux system so that it provides more than 65,535 ports? The intent would be to have more than 65k daemons listening on a given system.
Clearly there are ports being used so this is not possible for those reasons, so think of this as a theoretical exercise in trying to understand where TCP would be restrictive in doing something like this.
|
Looking at the RFC for TCP, RFC 793 - Transmission Control Protocol, the answer would seem to be no, because the TCP header allots only 16 bits to each of the source and destination port fields.
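That 16-bit field caps the port number at:

```shell
# Highest value representable in the 16-bit port field
echo $(( (1 << 16) - 1 ))    # → 65535
```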
Does IPv6 improve things?
No. Even though IPv6 gives us a much larger IP address space, 128 bits vs. IPv4's 32 bits, it makes no attempt to widen the TCP header's 16-bit limit for port numbers. Interestingly, in the RFC for IPv6, Internet Protocol, Version 6 (IPv6) Specification, only the IP address fields needed to be expanded.
When TCP runs over IPv6, the method used to compute the checksum is changed, as per RFC 2460:
Any transport or other upper-layer protocol that includes the addresses from the IP header in its checksum computation must be modified for use over IPv6, to include the 128-bit IPv6 addresses instead of 32-bit IPv4 addresses.
So how can you get more ports?
One approach would be to stack additional IP addresses using more interfaces. If your system has multiple NICs this is easier, but even with just a single NIC, one can make use of virtual interfaces (aka. aliases) to allocate more IPs if needed.
NOTE: Using aliases has been supplanted by iproute2, which you can use to stack IP addresses on a single interface (e.g. eth0) instead.
Example
$ sudo ip link set eth0 up
$ sudo ip addr add 192.0.2.1/24 dev eth0
$ sudo ip addr add 192.0.2.2/24 dev eth0
$ ip addr show dev eth0
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc
pfifo_fast state DOWN qlen 1000
link/ether 00:d0:b7:2d:ce:cf brd ff:ff:ff:ff:ff:ff
inet 192.0.2.1/24 brd 192.0.2.255 scope global eth1
inet 192.0.2.2/24 scope global secondary eth1
Source: iproute2: Life after ifconfig
References
OpenWrt Wiki » Documentation » Networking » Linux Network Interfaces
Some useful command with iproute2
Linux Advanced Routing & Traffic Control HOWTO
Multiple default routes / public gateway IPs under Linux
iproute2 cheat sheet - Daniil Baturin's website
| Can TCP provide more than 65535 ports? |
1,440,012,027,000 |
I have a fedora guest OS in VMware. I want to expand /boot partition, so I add another virtual disk to this VM, and try to clone the disk.
After dd if=/dev/sda1 of=/dev/sdb1, blkid reports that /dev/sda1 and /dev/sdb1 have the same UUID/GUID.
It's weird that there are 2 identical UUIDs in the universe; how do I change one of them to another UUID value?
Update 2017-01-25
Subject changed, UUID here means filesystem UUID, not partition UUID.
Since it's a filesystem UUID, filesystem-specific utils are needed to change the UUID, or use a hex editor to modify raw data on disk (DANGEROUS, not recommended unless you know what you are doing).
|
To generate a random new UUID, one can use:
$ uuidgen
To actually change the UUID is file system dependent.
Assuming ext-family filesystem
# tune2fs -U <output of uuidgen> /dev/sdb1
Or if you're confident uuidgen is going to work:
# tune2fs -U $(uuidgen) /dev/sdb1
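If uuidgen isn't installed, the kernel itself will hand out a fresh random UUID through procfs:

```shell
# Each read returns a newly generated random UUID
cat /proc/sys/kernel/random/uuid
```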
Assuming btrfs filesystem
# btrfstune -U $(uuidgen) /dev/sdb1
The UUID is stored in the superblock, so a byte-for-byte copy of the filesystem will have the same UUID.
| How to change filesystem UUID (2 same UUID)? |
1,440,012,027,000 |
I have a need to find all of the writable storage devices attached to a given machine, whether or not they are mounted.
The dopey way to do this would be to try every entry in /dev that corresponds to a writable device (hd* and sd*).
Is there a better solution, or should I stick with this one?
|
If one is interested only in block storage devices, one can use lsblk from widely-available util-linux package:
$ lsblk -o KNAME,TYPE,SIZE,MODEL
KNAME TYPE SIZE MODEL
sda disk 149.1G TOSHIBA MK1637GS
sda1 part 23.3G
sda2 part 28G
sda3 part 93.6G
sda4 part 4.3G
sr0 rom 1024M CD/DVDW TS-L632M
It lends itself well to scripting with many other columns available.
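Since the question asks specifically about writable devices, the RO column (0 = writable) and RM (removable) are handy too — a sketch, assuming a reasonably recent util-linux:

```shell
# -d lists whole devices only; RO is 0 when the device is writable
lsblk -d -o NAME,TYPE,RO,RM,SIZE

# Script-friendly: just the names of writable disks
lsblk -dn -o NAME,RO,TYPE | awk '$2 == 0 && $3 == "disk" { print $1 }'
```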
| Finding all storage devices attached to a Linux machine |
1,440,012,027,000 |
I wanted to have a go at creating my very own Linux Distribution. Could you suggest some nice and easy-to-follow tutorials (preferably text based and not videos).
I have heard something about Arch Linux but I don't know how to go from there. What do I need?
|
Part of the answer depends on what you mean by your own distro. If you mean a version of Linux custom-built to your own purposes, for you to use on your own machines or even in your own office, there are a couple of pretty cool tools that allow you to customize existing distributions that are known working.
http://www.centos.org/docs/5/html/Installation_Guide-en-US/ch-kickstart2.html covers kickstart installations of CentOS (also applies to Scientific, Fedora and RedHat). There's also http://susestudio.com/ which allows you to make a customized installation disk of SuSe Linux, meaning you can get the packages you want installed right off the bat. The advantage to this method, more so with the kickstart, is that you can choose individual packages and leave out whatever fluff you don't want to bother with, but also get the advantages of knowing that updated packages will be available to you and work without a significant amount of testing and overhead on your part.
If you're just looking to make it look the way you want to look, custom splash screens, logos, etc, there are a ton of guides available for making these kinds of changes.
Now, if you really just want to get nuts and bolts and really do up your own thing, then the suggestion by @vfbsilva to look at LFS is irreplaceable. You really do learn how things get put together and what the requirements are to make Linux ... well, Linux. However, doing this a couple of times was just enough for me personally to realize I didn't want to have to deal with rebuilding every package that had a security update released on a weekly basis. :)
| How to easily build your own Linux Distro? |
1,440,012,027,000 |
I'm trying to run a minecraft server on my unRAID server.
The server will run in the shell, and then sit there waiting for input. To stop it, I need to type 'stop' and press enter, and then it'll save the world and gracefully exit, and I'm back in the shell. That all works if I run it via telnetting into the NAS box, but I want to run it directly on the box.
this is what I previously had as a first attempt:
#define USER_SCRIPT_LABEL Start Minecraft server
#define USER_SCRIPT_DESCR Start minecraft server. needs sde2 mounted first
cd /mnt/disk/sde2/MCunraid
screen -d -m -S minecraft /usr/lib/java/bin/java -Xincgc -Xmx1024M -jar CraftBukkit.jar
MCunraid is the folder where I have the Craftbukkit.jar and all the world files etc. If I type that screen line in directly, the screen does setup detached and the server launches. If I execute that line from within the script it doesn't seem to set up a screen
for stopping the server, I need to 'type' in STOP and then press enter. My approach was
screen -S minecraft -X stuff "stop $(echo -ne '\r')"
to send to screen 'minecraft' the text s-t-o-p and a carriage return. But that doesn't work, even if I type it directly onto the command line. But if I 'screen -r' I can get to the screen with the server running, then type 'stop' and it shuts down properly.
The server runs well if I telnet in and do it manually, just need to run it without being connected from my remote computer.
|
I can solve at least part of the problem: why the stop part isn't working. Experimentally, when you start a Screen session in detached mode (screen -d -m), no window is selected, so input later sent with screen -X stuff is just lost. You need to explicitly specify that you want to send the keystrokes to window 0 (-p 0). This is a good idea anyway, in case you happen to create other windows in that Screen session for whatever reason.
screen -S minecraft -p 0 -X stuff "stop^M"
(Screen translates ^M to Control-M, which is the character sent by the Enter key.)
The problem with starting the session from a script is likely related to unMENU.
| sending text input to a detached screen |
1,440,012,027,000 |
This question is motivated by my shock when I discovered that Mac OS X kernel uses 750MB of RAM.
I have been using Linux for 20 years, and I always "knew" that the kernel RAM usage is dwarfed by X (is it true? has it ever been true?).
So, after some googling, I tried slabtop which told me:
Active / Total Size (% used) : 68112.73K / 72009.73K (94.6%)
Does this mean that my kernel is using ~72MB of RAM now?
(Given that top reports Xorg's RSS as 17M, the kernel now dwarfs X, not the other way around).
What is the "normal" kernel RAM usage (range) for a laptop?
Why does MacOS use an order of magnitude more RAM than Linux?
PS. No answer here addressed the last question, so please see related questions:
Is it a problem if kernel_task is routinely above 130MB on mid 2007 white MacBook?
kernel_task using way too much memory
What is included under kernel_task in Activity Monitor?
|
Kernel is a bit of a misnomer. The Linux kernel comprises several processes/threads plus the modules (lsmod), so to get a complete picture you'd need to look at the whole ball and not just a single component.
Incidentally mine shows slabtop:
Active / Total Size (% used) : 173428.30K / 204497.61K (84.8%)
The man page for slabtop also had this to say:
The slabtop statistic header is tracking how many bytes of slabs are being used and is not a measure of physical memory. The 'Slab' field in the /proc/meminfo file is tracking information about used slab physical memory.
Dropping caches
Dropping my caches as @derobert suggested in the comments under your question does the following for me:
$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
$
Active / Total Size (% used) : 61858.78K / 90524.77K (68.3%)
Sending a 3 does the following: free pagecache, dentries and inodes. I discuss this more in this U&L Q&A titled: "Are there any ways or tools to dump the memory cache and buffer?". So 110MB of my space was being used by just maintaining the info regarding pagecache, dentries and inodes.
Additional Information
If you're interested I found this blog post that discusses slabtop in a bit more details. It's titled: Linux command of the day: slabtop.
The Slab Cache is discussed in more detail here on Wikipedia, titled: Slab allocation.
So how much RAM is my Kernel using?
This picture is a bit foggier to me, but here are the things that I "think" we know.
Slab
We can get a snapshot of the Slab usage using this technique. Essentially we can pull this information out of /proc/meminfo.
$ grep Slab /proc/meminfo
Slab: 100728 kB
Modules
Also we can get a size value for kernel modules (unclear whether this is their size on disk or in RAM) by pulling these values from /proc/modules:
$ awk '{print $1 " " $2 }' /proc/modules | head -5
cpufreq_powersave 1154
tcp_lp 2111
aesni_intel 12131
cryptd 7111
aes_x86_64 7758
Slabinfo
Much of the details about the SLAB are accessible in this proc structure, /proc/slabinfo:
$ less /proc/slabinfo | head -5
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
nf_conntrack_ffff8801f2b30000 0 0 320 25 2 : tunables 0 0 0 : slabdata 0 0 0
fuse_request 100 125 632 25 4 : tunables 0 0 0 : slabdata 5 5 0
fuse_inode 21 21 768 21 4 : tunables 0 0 0 : slabdata 1 1 0
Dmesg
When your system boots there is a line that reports memory usage of the Linux kernel just after it's loaded.
$ dmesg |grep Memory:
[ 0.000000] Memory: 7970012k/9371648k available (4557k kernel code, 1192276k absent, 209360k reserved, 7251k data, 948k init)
References
Where is the memory going? Memory usage in the 2.6 kernel
| How much RAM does the kernel use? |
1,440,012,027,000 |
Linux uses a virtual memory system where all of the addresses are virtual addresses and not physical addresses. These virtual addresses are converted into physical addresses by the processor.
To make this translation easier, virtual and physical memory are divided into pages. Each of these pages is given a unique number; the page frame number.
Some page sizes can be 2 KB, 4 KB, etc. But how is this page size number determined? Is it influenced by the size of the architecture? For example, a 32-bit bus will have 4 GB address space.
|
You can find out a system's default page size by querying its configuration via the getconf command:
$ getconf PAGE_SIZE
4096
or
$ getconf PAGESIZE
4096
NOTE: The above units are typically in bytes, so the 4096 equates to 4096 bytes or 4kB.
This is hardwired in the Linux kernel's source here:
Example
$ more /usr/src/kernels/3.13.9-100.fc19.x86_64/include/asm-generic/page.h
...
...
/* PAGE_SHIFT determines the page size */
#define PAGE_SHIFT 12
#ifdef __ASSEMBLY__
#define PAGE_SIZE (1 << PAGE_SHIFT)
#else
#define PAGE_SIZE (1UL << PAGE_SHIFT)
#endif
#define PAGE_MASK (~(PAGE_SIZE-1))
How does shifting give you 4096?
When you shift bits, each position shifted performs a binary multiplication by 2. So in effect, shifting left as in (1 << PAGE_SHIFT) performs the multiplication 2^12 = 4096.
$ echo "2^12" | bc
4096
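Shell arithmetic can perform the same left shift the kernel macro does:

```shell
# Mirror of the kernel's (1 << PAGE_SHIFT) with PAGE_SHIFT = 12
echo $(( 1 << 12 ))    # → 4096
```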
| how is page size determined in virtual address space? |
1,440,012,027,000 |
How can I ask ps to display only user processes and not kernel threads?
See this question to see what I mean...
|
This should do (under Linux):
ps --ppid 2 -p 2 --deselect
kthreadd (PID 2) has PPID 0 (on Linux 2.6+) but ps does not allow filtering for PPID 0; thus this work-around.
See also this equivalent answer.
| Can ps display only non kernel processes on Linux? |
1,440,012,027,000 |
I have a Debian (Buster) laptop with 8 GB RAM and 16GB swap. I'm running a very long running task. This means my laptop has been left on for the past six days while it churns through.
While doing this I periodically need to use my laptop as a laptop. This shouldn't be a problem; the long running task is I/O bound, working through stuff on a USB hard disk and doesn't take much RAM (<200 MB) or CPU (<4%).
The problem is when I come back to my laptop after a few hours, it will be very sluggish and can take 30 minutes to come back to normal. This is so bad that crash-monitors flag their respective applications as having frozen (especially browser windows) and things start incorrectly crashing out.
Looking on the system monitor, of the 2.5 GB used around half gets shifted into swap. I've confirmed this is the problem by removing the swap space (swapoff /dev/sda8). If I leave it without swap space it comes back to life almost instantly even after 24 hours. With swap, it's practically a brick for the first five minutes having been left for only six hours. I've confirmed that memory usage never exceeds 3 GB even while I'm away.
I have tried reducing the swappiness (see also: Wikipedia) to values of 10 and 0, but the problem still persists. It seems that after a day of inactivity the kernel believes the entire GUI is no longer needed and wipes it from RAM (swaps it to disk). The long running task is reading through a vast file tree and reading every file. So it might be the kernel is confused into thinking that caching would help. But on a single sweep of a 2 TB USB HD with ~1 billion file names, an extra GB RAM isn't going to help performance much. This is a cheap laptop with a sluggish hard drive. It simply can't load data back into RAM fast enough.
How can I tell Linux to only use swap space in an emergency? I don't want to run without swap. If something unexpected happens, and the OS suddenly needs an extra few GBs then I don't want tasks to get killed and would prefer start using swap. But at the moment, if I leave swap enabled, my laptop just can't be used when I need it.
The precise definition of an "emergency" might be a matter for debate. But to clarify what I mean: An emergency would be where the system is left without any other option than to swap or kill processes.
What is an emergency? - Do you really have to ask?... I hope you never find yourself in a burning building!
It's not possible for me to define everything that might constitute an emergency in this question. But for example, an emergency might be when the kernel is so pushed for memory that it has start killing processes with the OOM Killer. An emergency is NOT when the kernel thinks it can improve performance by using swap.
Final Edit: I've accepted an answer which does precisely what I've asked for at the operating system level. Future readers should also take note of the answers offering application level solutions.
|
Having such a huge swap nowadays is often a bad idea. By the time the OS has swapped just a few GB of memory out, your system has already slowed to a crawl (which is what you saw).
It's better to use zram with a small backup swap partition. Many OSes like ChromeOS, Android and various Linux distros (Lubuntu, Fedora) have enabled zram by default for years, especially for systems with less RAM. It's much faster than swap on an HDD, and you can clearly feel the difference in system responsiveness. Less so on an SSD, but according to the benchmark results here it still seems faster even with the default lzo algorithm. You can change to lz4 for even better performance at a slightly lower compression ratio; its decoding speed is nearly 5 times faster than lzo based on the official benchmark
In fact Windows 10 and macOS also use similar pagefile compression techniques by default
There's also zswap although I've never used it. Probably worth a try and compare which one is better for your usecases
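Before configuring either, it's worth probing what the running kernel actually provides — a quick check, assuming sysfs is mounted:

```shell
# zswap, when built in, exposes its knobs under /sys/module/zswap
if [ -d /sys/module/zswap ]; then
    echo "zswap present, enabled=$(cat /sys/module/zswap/parameters/enabled 2>/dev/null)"
else
    echo "zswap not built into this kernel"
fi

# zram availability (as an already-created device or a loadable module)
if [ -d /sys/class/block/zram0 ] || modinfo zram >/dev/null 2>&1; then
    echo "zram available"
else
    echo "zram not found"
fi
```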
After that another suggestion is to reduce the priority of those IO-bound processes and possibly leave a terminal running on higher priority so that you can run commands on it right away even when the system is on a high load
Further reading
Arch Linux - Improving performance - Zram or zswap
Enable ZSwap to increase performance
Enable zRAM for improved memory handling and less swapping
Running out of RAM in Ubuntu? Enable ZRAM
Difference between ZRAM and ZSWAP
zram vs zswap vs zcache Ultimate guide: when to use which one
Linux, SSD and swap
https://wiki.debian.org/ZRam
https://www.kernel.org/doc/Documentation/blockdev/zram.txt
https://wiki.gentoo.org/wiki/Zram
| How do I use swap space for emergencies only? |
1,440,012,027,000 |
According to FHS-3.0, /tmp is for temporary files and /run is for run-time variable data. Data in /run must be deleted at next boot, which is not required for /tmp, but still programs must not assume that the data in /tmp will be available at the next program start. All this seems quite similar to me.
So, what is the difference between the two? By which criterion should a program decide whether to put temporary data into /tmp or into /run?
According to the FHS:
Programs may have a subdirectory of /run; this is encouraged for
programs that use more than one run-time file.
This indicates that the distinction between "system programs" and "ordinary programs" is not a criterion, neither is the lifetime of the program (like, long-running vs. short-running process).
Although the following rationale is not given in the FHS, /run was introduced to overcome the problem that /var was mounted too late such that dirty tricks were needed to make /var/run available early enough. However, now with /run being introduced, and given its description in the FHS, there does not seem to be a clear reason to have both /run and /tmp.
|
The directories /tmp and /usr/tmp (later /var/tmp) used to be the dumping ground for everything and everybody. The only protection mechanism for files in these directories is the sticky bit which restricts deletion or renaming of files there to their owners. As marcelm pointed out in a comment, there's in principle nothing that prevents someone to create files with names that are used by services (such as nginx.pid or sshd.pid). (In practice, the startup scripts could remove such bogus files first, though.)
/run was established for non-persistent runtime data of long lived services such as locks, sockets, pid files and the like. Since it is not writable for the public, it shields service runtime data from the mess in /tmp and jobs that clean up there. Indeed: Two distributions that I run (no pun intended) have permissions 755 on /run, while /tmp and /var/tmp (and /dev/shm for that matter) have permissions 1777.
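This permission difference is easy to verify on a typical Linux system (the exact modes may vary slightly by distribution):

```shell
# The world-writable temp directories carry mode 1777 (sticky bit set),
# while the service runtime directory is writable only by root.
stat -c '%a %n' /tmp    # usually 1777
stat -c '%a %n' /run    # usually 755
```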
| What's the difference between /tmp and /run? |
1,440,012,027,000 |
Is there any difference between the /run directory and the /var/run directory? It seems the latter is a link to the former. If the contents are one and the same, what is the need for two directories?
|
From the Wikipedia page on the Filesystem Hierarchy Standard:
Modern Linux distributions include a /run directory as a temporary filesystem (tmpfs) which stores volatile runtime data, following the FHS version 3.0. According to the FHS version 2.3, this data should be stored in /var/run but this was a problem in some cases because this directory isn't always available at early boot. As a result, these programs have had to resort to trickery, such as using /dev/.udev, /dev/.mdadm, /dev/.systemd or /dev/.mount directories, even though the device directory isn't intended for such data. Among other advantages, this makes the system easier to use normally with the root filesystem mounted read-only.
So if you have already made a temporary filesystem for /run, linking /var/run to it would be the next logical step (as opposed to keeping the files on disk or creating a separate tmpfs).
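On a distribution that has adopted the FHS 3.0 layout you can confirm the link directly (older systems may still have a real /var/run directory):

```shell
# On FHS 3.0 systems /var/run is just a symlink into the tmpfs at /run.
ls -ld /var/run        # e.g. lrwxrwxrwx ... /var/run -> /run
readlink -f /var/run   # resolves to /run on systems using this layout
```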
| Difference between /run and /var/run |
1,440,012,027,000 |
I'm basically trying to figure out how one would go about making a GUI from absolute scratch with nothing but the linux kernel and programming in C.
I am not looking to create a GUI desktop environment from scratch, but I would like to create some desktop applications, and in my search for knowledge, all the information I have been able to find is on GUI APIs and toolkits. I would like to know, at the very least for my understanding of the fundamentals of how a Linux GUI is made, how one would go about making a GUI environment or a GUI application without using any APIs or toolkits.
I am wondering if for example:
existing APIs and toolkits work via system calls to the kernel (and the kernel is responsible at the lowest level for constructing a GUI image in pixels or something)
these toolkits perform syscalls which simply pass information to screen drivers (is there a standard format for sending this information that all screen drivers abide by, or do GUI APIs need to be able to output this information in multiple formats depending on the specific screen/driver?) and also, if this is roughly true, does the raw Linux kernel usually just send information to the screen in the form of 8-bit characters?
I just really want to understand what happens between the Linux kernel and what I see on my screen (control/information flow through both software and hardware if you know, what format the information takes, etc). I would so greatly appreciate a detailed explanation. I understand this might be a doozy to explain in sufficient detail, but I think such an explanation would be a great resource for others who are curious and learning. For context, I'm a 3rd year comp sci student who recently started programming in C for my Systems Programming course and I have an intermediate (or so I would describe it) understanding of Linux and programming. Again, thank you to anyone who helps me out!
|
How it works (Gnu/Linux + X11)
Overview
It looks something like this (not drawn to scale):
┌───────────────────────────────────────────────┐
│ User │
│ ┌─────────────────────────────────────────┤
│ │ Application │
│ │ ┌──────────┬─────┬─────┬─────┤
│ │ │ ... │ SDL │ GTK │ QT │
│ │ ├──────────┴─────┴─────┴─────┤
│ │ │ xLib │
│ │ ├────────────────────────────┤
├─────┴───┬────────┴──┐ X11 │
│ Gnu │ Libraries │ Server │
│ Tools │ │ │
├─────────┘ │ │
├─────────────────────┤ │
│ Linux (kernel) │ │
├─────────────────────┴─────────────────────────┤
│ Hardware │
└───────────────────────────────────────────────┘
We see from the diagram that X11 talks mostly with the hardware. However, it initially needs to talk via the kernel to get access to this hardware.
I am a bit hazy on the details (and I think they have changed since I last looked into it). There is a device /dev/mem that gives access to the whole of memory (I think physical memory). As most of the graphics hardware is memory mapped, this file (see everything is a file) can be used to access it. X11 would open the file (the kernel uses file permissions to decide whether it may do this), then X11 uses mmap to map the file into its virtual address space, so the device's registers and framebuffer can be accessed like ordinary memory. After mmap, the kernel is not involved.
X11 needs to know about the various graphics hardware, as it accesses it directly, via memory.
(This may have changed; specifically, the security model may no longer give access to ALL of the memory.)
Linux
At the bottom is Linux (the kernel): a small part of the system. It provides access to hardware, and implements security.
Gnu
Then GNU (libraries; bash; tools: ls, etc.; C compiler, etc.). Most of the operating system.
X11 server (e.g. x.org)
Then X11 (Or Wayland, or ...), the base GUI subsystem. This runs in user-land (outside of the kernel): it is just another process, with some privileges.
The kernel does not get involved, except to give access to the hardware. And providing inter-process communication, so that other processes can talk with the X11 server.
X11 library
A simple abstraction to allow you to write code for X11.
GUI libraries
Libraries such as Qt, GTK, and SDL are next — they make it easier to use X11, and work on other systems such as Wayland, Microsoft Windows, or macOS.
Applications
Applications sit on top of the libraries.
Some low-level entry points, for programming
xlib
Using xlib is a good way to learn about X11. However, do some reading about X11 first.
SDL
SDL will give you low-level access, with raw pixel buffers for you to draw to directly.
Going lower
If you want to go lower, then I am not sure what good current options are, but here are some ideas.
Get an old Amiga, or simulator. And some good documentation. e.g. https://archive.org/details/Amiga_System_Programmers_Guide_1988_Abacus/mode/2up (I had 2 books, this one and similar).
Look at what can be done on a raspberry pi. I have not looked into this.
Links
X11
https://en.wikipedia.org/wiki/X_Window_System
Modern ways
Writing this got my interest, so I had a look at what the modern fast way to do it is. Here are some links:
https://blogs.igalia.com/itoral/2014/07/29/a-brief-introduction-to-the-linux-graphics-stack/
| How does a linux GUI work at the lowest level? |
1,440,012,027,000 |
I'm unable to return to the GUI with Ctrl+Alt+F7 (or any of the 12 function keys). I have some unsaved work and I don't want to lose them. Are there any other key combinations that will allow me to switch back?
Here is what I did:
I pressed Ctrl+Alt+F1 and it showed a text-based login screen as usual
Then I pressed Ctrl+Alt+F7 and it showed a screen full of text (I can't remember what they were)
Then I pressed Ctrl+Alt+F8 and it showed log messages that resembles /var/log/messages. Some entries are from automount, some from sendmail, and none are errors.
Pressing any of the Ctrl+Alt+Fn combinations now has no effect. The cap-lock and num-lock LED no longer respond to their corresponding keys. I can use the mouse to highlight the text on the screen, but nothing else.
Any idea what happened?
I can still login to the system via SSH. GUI applications that I was using (e.g. opera) are still running and consuming tiny amounts of CPU as usual, as reported by top. Is it possible to switch back to the GUI via the command line? If possible, I don't want to restart X, because doing so will kill all the GUI applications.
System info:
Red Hat Enterprise Linux Client release 5.7
Linux 2.6.18-238.12.1.el5 SMP x86_64
gnome-desktop: 2.16.0-1.fc6
xorg-x11-server-Xorg: 1.1.1-48.76.el5_7.5
Thanks to Shawn I was able to get back using chvt 9.
Further experiments show that if I go to the 8th virtual terminal (either by Ctrl+Alt+F8 or chvt 8), I will not be able to switch to any other terminals using Ctrl+Alt+Fx keys. Not sure if this is a bug.
|
chvt allows you to change your virtual terminal.
From man chvt:
The command chvt N makes /dev/ttyN the foreground terminal. (The
corresponding screen is created if it did not exist yet. To get rid of
unused VTs, use deallocvt(1).) The key combination (Ctrl-)LeftAlt-FN
(with N in the range 1-12) usually has a similar effect.
| Command line to return to the GUI after Ctrl-Alt-F1? |
1,440,012,027,000 |
How do you find the line number in Bash where an error occurred?
Example
I create the following simple script with line numbers to explain what we need. The script copies files with:
cp $file1 $file2
cp $file3 $file4
When one of the cp commands fails, the function exits with exit 1. We want the function to also print the error with the line number (for example, 8 or 12).
Is this possible?
Sample script
1 #!/bin/bash
2
3
4 function in_case_fail {
5 [[ $1 -ne 0 ]] && echo "fail on $2" && exit 1
6 }
7
8 cp $file1 $file2
9 in_case_fail $? "cp $file1 $file2"
10
11
12 cp $file3 $file4
13 in_case_fail $? "cp $file3 $file4"
14
|
Rather than use your function, I'd use this method instead:
$ cat yael.bash
#!/bin/bash
set -eE -o functrace
file1=f1
file2=f2
file3=f3
file4=f4
failure() {
local lineno=$1
local msg=$2
echo "Failed at $lineno: $msg"
}
trap 'failure ${LINENO} "$BASH_COMMAND"' ERR
cp -- "$file1" "$file2"
cp -- "$file3" "$file4"
This works by trapping on ERR and then calling the failure() function with the current line number + bash command that was executed.
Example
Here I've not taken any care to create the files, f1, f2, f3, or f4. When I run the above script:
$ ./yael.bash
cp: cannot stat ‘f1’: No such file or directory
Failed at 17: cp -- "$file1" "$file2"
It fails, reporting the line number plus command that was executed.
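The mechanism can be exercised in isolation with a throwaway script; the file names in it are deliberately nonexistent so the cp on line 7 fails:

```shell
# Write a 7-line demo script that traps ERR, then run it so the failing
# cp on line 7 is reported together with the command text.
tmpdir=$(mktemp -d)
cat > "$tmpdir/demo.bash" <<'EOF'
#!/bin/bash
set -eE -o functrace
failure() {
  echo "Failed at $1: $2"
}
trap 'failure ${LINENO} "$BASH_COMMAND"' ERR
cp -- no-such-file also-missing
EOF
bash "$tmpdir/demo.bash" 2>/dev/null || true   # prints: Failed at 7: cp -- no-such-file also-missing
rm -rf "$tmpdir"
```

Because ${LINENO} and $BASH_COMMAND are expanded only when the trap fires, they report the line and text of whatever command actually failed.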
| How do I find the line number in Bash when an error occured? |
1,440,012,027,000 |
I know that with ps I can see the list or tree of the current processes running in the system. But what I want to achieve is to "follow" the new processes that are created when using the computer.
As an analogy: just as tail -f follows new content appended to a file or any input, I want to keep a live list of the processes that are being created.
Is this even possible?
|
If kprobes are enabled in the kernel you can use execsnoop from perf-tools:
In first terminal:
% while true; do uptime; sleep 1; done
In another terminal:
% git clone https://github.com/brendangregg/perf-tools.git
% cd perf-tools
% sudo ./execsnoop
Tracing exec()s. Ctrl-C to end.
Instrumenting sys_execve
PID PPID ARGS
83939 83937 cat -v trace_pipe
83938 83934 gawk -v o=1 -v opt_name=0 -v name= -v opt_duration=0 [...]
83940 76640 uptime
83941 76640 sleep 1
83942 76640 uptime
83943 76640 sleep 1
83944 76640 uptime
83945 76640 sleep 1
^C
Ending tracing...
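If kprobes are not enabled (so execsnoop cannot attach), a crude fallback is to poll the PID list under /proc and print whatever appeared between snapshots. Anything shorter-lived than the polling interval is missed, which is exactly the gap the kernel-based tracer closes. A sketch of one polling step (wrap it in while true to follow continuously):

```shell
# Each numeric entry in /proc is a live PID; diff two sorted snapshots
# to find processes created in between.
snapshot() { ls /proc | grep -E '^[0-9]+$' | sort; }
prev=$(snapshot)
sleep 1
curr=$(snapshot)
comm -13 <(printf '%s\n' "$prev") <(printf '%s\n' "$curr")   # newly created PIDs
```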
| How to track newly created processes in Linux? |
1,440,012,027,000 |
The following report is thrown in my messages log:
kernel: Out of memory: Kill process 9163 (mysqld) score 511 or sacrifice child
kernel: Killed process 9163, UID 27, (mysqld) total-vm:2457368kB, anon-rss:816780kB, file-rss:4kB
Doesn't matter if this problem is for httpd, mysqld or postfix but I am curious how can I continue debugging the problem.
How can I get more info about why PID 9163 was killed? I am not sure whether Linux keeps a history of terminated PIDs somewhere.
If this occurred in your messages log file, how would you troubleshoot this issue step by step?
# free -m
total used free shared buffers cached
Mem: 1655 934 721 0 10 52
-/+ buffers/cache: 871 784
Swap: 109 6 103
|
The kernel will have logged a bunch of stuff before this happened, but most of it will probably not be in /var/log/messages, depending on how your (r)syslogd is configured. Try:
grep oom /var/log/*
grep total_vm /var/log/*
The former should show up a bunch of times and the latter in only one or two places. That is the file you want to look at.
Find the original "Out of memory" line in one of the files that also contains total_vm. Thirty second to a minute (could be more, could be less) before that line you'll find something like:
kernel: foobar invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
You should also find a table somewhere between that line and the "Out of memory" line with headers like this:
[ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
This may not tell you much more than you already know, but the fields are:
pid The process ID.
uid User ID.
tgid Thread group ID.
total_vm Virtual memory use (in 4 kB pages)
rss Resident memory use (in 4 kB pages)
nr_ptes Page table entries
swapents Swap entries
oom_score_adj Usually 0; a lower number indicates the process will be less likely to die when the OOM killer is invoked.
You can mostly ignore nr_ptes and swapents although I believe these are factors in determining who gets killed. This is not necessarily the process using the most memory, but it very likely is. For more about the selection process, see here. Basically, the process that ends up with the highest oom score is killed -- that's the "score" reported on the "Out of memory" line; unfortunately the other scores aren't reported but that table provides some clues in terms of factors.
Again, this probably won't do much more than illuminate the obvious: the system ran out of memory and mysqld was chosen to die because killing it would release the most resources. This does not necessarily mean mysqld is doing anything wrong. You can look at the table to see if anything else went way out of line at the time, but there may not be any clear culprit: the system can run out of memory simply because you misjudged or misconfigured the running processes.
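The same ranking inputs can be inspected live, before the next incident: every process exposes its current badness score and adjustment under /proc on any modern Linux.

```shell
# oom_score is the badness value the OOM killer compares; the highest
# scorer is killed.  oom_score_adj biases it (-1000 exempts a process).
cat /proc/self/oom_score
cat /proc/self/oom_score_adj
# List the top scorers system-wide:
for d in /proc/[0-9]*; do
  printf '%s %s\n' "$(cat "$d/oom_score" 2>/dev/null)" "${d#/proc/}"
done 2>/dev/null | sort -rn | head
```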
| Debug out-of-memory with /var/log/messages |
1,440,012,027,000 |
My machine has an SSD, where I installed the system and an HDD, which I use as a storage for large and/or infrequently used files. Both are encrypted, but I chose to use the same passphrase for them. SSD is mounted at / and HDD at /usr/hdd (individual users each have a directory on it and can symlink as they like from home directory).
When the system is booted, it immediately asks for passphrase for the SSD, and just a couple seconds later for the one for HDD (it is auto-mounted). Given that both passphrases are the same, is there a way to configure the system to ask just once?
|
Debian based distributions:
Debian and Ubuntu ship a password-caching script, decrypt_keyctl, with the cryptsetup package.
The decrypt_keyctl script provides the same password to multiple encrypted LUKS targets, saving you from typing it multiple times. It can be enabled in crypttab with the keyscript=decrypt_keyctl option. The same password is used for targets which have the same identifier in the keyfile field. On boot, the password for each identifier is asked once.
An example crypttab:
<target> <source> <keyfile> <options>
part1_crypt /dev/disk/... crypt_disks luks,keyscript=decrypt_keyctl
part2_crypt /dev/disk/... crypt_disks luks,keyscript=decrypt_keyctl
The decrypt_keyctl script depends on the keyutils package (which is only suggested, and therefore not necessarily installed).
After you've updated your cryptab, you will also have to update initramfs to apply the changes. Use update-initramfs -u.
Full readme for decrypt_keyctl is located in /usr/share/doc/cryptsetup/README.keyctl
Distributions which do not provide decrypt_keyctl script:
If decrypt_keyctl isn't provided by your distribution, the devices can be unlocked using a key file stored on the encrypted root file system. This works when the root file system can be unlocked and mounted before any of the other encrypted devices.
LUKS supports multiple key slots. This allows you to alternatively unlock the device using password if the key file is unavailable/lost.
Generate the key with random data and set its permissions to owner readable only to avoid leaking it. Note that the key file needs to be on the root partition which is unlocked first.
dd if=/dev/urandom of=<path to key file> bs=1024 count=1
chmod u=rw,g=,o= <path to key file>
Add the key to your LUKS device
cryptsetup luksAddKey <path to encrypted device> <path to key file>
Configure crypttab to use the key file. First line should be the root device, since devices are unlocked in same order as listed in crypttab. Use absolute paths for key files.
<target> <source> <keyfile> <options>
root_crypt /dev/disk/... none luks
part1_crypt /dev/disk/... <path to key file> luks
| Using a single passphrase to unlock multiple encrypted disks at boot |
1,440,012,027,000 |
Being new to Linux administration, I'm a little confused about the following commands:
useradd
usermod
groupadd
groupmod
I've just finished reading the user administration book in the Linux/Unix Administrator's handbook, but some things are still a little hazy.
Basically useradd seems straight forward enough:
useradd -c "David Hilbert" -d /home/math/hilbert -g faculty -G famous -m -s /bin/sh hilbert
I can add "David Hilbert" with username hilbert , setting his default directory, shell, and groups. And I think that -g is his primary/default group and -G are his other groups.
So these are my next questions:
Would this command still work if the groups faculty and famous did not exist? Would it just create them?
If not, what command do I use to create new groups?
If I remove the user hilbert and there are no other users in those groups, will they still exist? Should I remove them?
After I run the useradd command above, how do I remove David from the famous group, and reassign his primary group to hilbert which does not yet exist?
|
The usermod command will allow you to change a user's primary group, supplementary group or a number of other attributes. The -g switch controls the primary group.
For your other questions...
If you specify a group, groupname, that does not exist during the useradd stage, you will receive an error - useradd: unknown group groupname
The groupadd command creates new groups.
The group will remain if you remove all users contained within. You don't necessarily have to remove the empty group.
Create the hilbert group via groupadd hilbert. Then move David's primary group using usermod -g hilbert hilbert. (Please note that the first hilbert is the group name and the second hilbert is the username. This is important in cases, where you are moving a user to a group with a different name)
You may be complicating things a bit here, though. In many Linux distributions, a simple useradd hilbert will create the user hilbert with a group of the same name as the primary. Supplementary groups can then be specified together using the -G switch.
| How can I change a user's default group in Linux? |
1,440,012,027,000 |
I basically need to do this:
DUMMY=dummy
sudo su - ec2-user -c 'echo $DUMMY'
This doesn't work. How can I pass the env variable $DUMMY to su? -p doesn't work with -l.
|
You can do it without calling a login shell:
sudo DUMMY=dummy su ec2-user -c 'echo "$DUMMY"'
or:
sudo DUMMY=dummy su -p - ec2-user -c 'echo "$DUMMY"'
The -p option of su command preserve environment variables.
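The reason the sudo DUMMY=dummy form works is the general shell rule that assignments placed before a command are exported into just that command's environment; the same mechanism can be seen without sudo at all:

```shell
# The assignment prefix exports DUMMY only into the child process,
# without touching the current shell.
DUMMY=dummy sh -c 'echo "$DUMMY"'        # prints: dummy
echo "still unset here: '${DUMMY-}'"     # prints: still unset here: ''
```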
| how to pass environment variable to sudo su |
1,440,012,027,000 |
I need to delete all folders inside a folder using a daily script. The folder for that day needs to be left.
Folder 'myfolder' has 3 subfolders: 'test1', 'test2' and 'test3'.
I need to delete all except 'test2'.
I am trying to match exact name here:
find /home/myfolder -type d ! -name 'test2' | xargs rm -rf
OR
find /home/myfolder -type d ! -name 'test2' -delete
These commands always try to delete the main folder 'myfolder' as well!
Is there a way to avoid this ?
|
This will delete all folders inside ./myfolder except that ./myfolder/test2 and all its contents will be preserved:
find ./myfolder -mindepth 1 ! -regex '^./myfolder/test2\(/.*\)?' -delete
How it works
find starts a find command.
./myfolder tells find to start with the directory ./myfolder and its contents.
-mindepth 1 tells find not to match ./myfolder itself, just the files and directories under it.
! -regex '^./myfolder/test2\(/.*\)?' tells find to exclude (!) any file or directory matching the regular expression ^./myfolder/test2\(/.*\)?. ^ matches the start of the path name. The expression \(/.*\)? matches either (a) a slash followed by anything or (b) nothing at all.
-delete tells find to delete the matching (that is, non-excluded) files.
Example
Consider a directory structure that looks like;
$ find ./myfolder
./myfolder
./myfolder/test1
./myfolder/test1/dir1
./myfolder/test1/dir1/test2
./myfolder/test1/dir1/test2/file4
./myfolder/test1/file1
./myfolder/test3
./myfolder/test3/file3
./myfolder/test2
./myfolder/test2/file2
./myfolder/test2/dir2
We can run the find command (without -delete) to see what it matches:
$ find ./myfolder -mindepth 1 ! -regex '^./myfolder/test2\(/.*\)?'
./myfolder/test1
./myfolder/test1/dir1
./myfolder/test1/dir1/test2
./myfolder/test1/dir1/test2/file4
./myfolder/test1/file1
./myfolder/test3
./myfolder/test3/file3
After running the command again with -delete, we can verify that it worked by looking at the files which remain:
$ find ./myfolder
./myfolder
./myfolder/test2
./myfolder/test2/file2
./myfolder/test2/dir2
| Delete all folders inside a folder except one with specific name |
1,440,012,027,000 |
According to the btrfs Readonly snapshots patch it's possible to "set a snapshot readonly/writable on the fly." So I should be able to turn my readonly snapshot (created with btrfs snapshot -r) writable, somehow.
But neither the btrfs subvolume manpage nor any other part of that manpage seems to give a way to do that.
|
The btrfs manpage fails to document the property subcommand, which I found by grep'ing the source. It's also in btrfs --help.
To set a snapshot to read-write, you do something like this:
btrfs property set -ts /path/to/snapshot ro false
Change that to true to set it to read-only.
You can also use list to see the available properties:
btrfs property list -ts /path/to/snapshot
ro Set/get read-only flag of subvolume.
-t specifies the type of object to work on, s means subvolume. Other options are f (filesystem), i (inode), and d (device). If you don't specify, it'll show all applicable ones (for list) or try to guess for get/set.
Edit: in the newest btrfs tools, there is a btrfs-property manpage documenting that subcommand, although it's not mentioned in the main manpage at all. It's also available as the btrfs-property page on the wiki.
(Note: This requires a new-enough btrfs-tools. Apparently on Debian Wheezy, you'll have to install the updated one from wheezy-backports; thanks artfulrobot).
| How to make a btrfs snapshot writable? |
1,440,012,027,000 |
According to Wikipedia
http://en.wikipedia.org/wiki/Unix_directory_structure
http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
extra disks should not be mounted in /mnt/ according to the first link, but the second link seems to say otherwise.
Both say, though, that they shouldn't be in /media.
Question
So where should extra permanent disks be mounted?
|
According to the FHS link that you gave
/mnt/ Temporarily mounted filesystems.
So I assume that you must mean permanently mounted non-root, non-system (meaning not /var/log or similar) filesystems.
I've always put them in /mnt/fsidentifier and then symlinked to them where needed.
So for instance, I had /mnt/website at one point, /mnt/appdata, /mnt/whatever, then symlink that. I never mounted anything directly to /mnt
If you wanted a "clean" solution, you could write a script to take the UUID of the filesystem, create a mount point for it (under /mnt or wherever you wanted), then mount the filesystem to the mountpoint.
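A minimal sketch of such a script follows; the UUID here is a made-up placeholder, and the mount command is only echoed (not executed) so the naming logic can be tried without root. On a real system you would discover the UUID with blkid or lsblk -o NAME,UUID.

```shell
# Create <base>/<uuid> as the mount point and print the mount command
# that would attach the filesystem there.
mount_by_uuid() {
  uuid=$1
  base=${2:-/mnt}
  mountpoint="$base/$uuid"
  mkdir -p "$mountpoint"
  # Swap `echo` for the real command (run as root) once you trust it:
  echo mount "UUID=$uuid" "$mountpoint"
}
mount_by_uuid 1234-ABCD "$(mktemp -d)"
```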
| Where in "/" should extra permanent disks be mounted? |
1,429,900,113,000 |
Is there any tool available to sync files between two or more Linux servers immediately after a file is written to disk? The rsync command does not suit me here, because if I run rsync from cron, the minimum interval I can set is 1 minute, but I need it on a real-time basis.
|
Haven't used it myself but read about it recently. There is a daemon called lsyncd, which I presume does exactly what you need.
Read more about it HERE
| Real-time file synchronization |
1,429,900,113,000 |
My question originates from my problem in getting ffmpeg started.
I have installed ffmpeg and it is displayed as installed:
whereis ffmpeg
ffmpeg: /usr/bin/ffmpeg /usr/bin/X11/ffmpeg /usr/share/ffmpeg /usr/share/man/man1/ffmpeg.1.gz
Later, I figured out, that some programs depend on libraries that do not come with the installation itself, so I checked with ldd command what is missing:
# ldd /usr/bin/ffmpeg
linux-vdso.so.1 => (0x00007fff71fe9000)
libavfilter.so.0 => not found
libpostproc.so.51 => not found
libswscale.so.0 => not found
libavdevice.so.52 => not found
libavformat.so.52 => not found
libavcodec.so.52 => not found
libavutil.so.49 => not found
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f5f20bdf000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f5f209c0000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f5f205fb000)
/lib64/ld-linux-x86-64.so.2 (0x00007f5f20f09000)
As it turns out, my ffmpeg is missing 7 libraries it needs to work. I first thought that each of those libraries had to be installed, but then I figured out that some or all might be installed, just with their location unknown to ffmpeg.
I read that /etc/ld.so.conf and /etc/ld.so.cache contain the paths to the libraries, but I was confused because there was only one line in
/etc/ld.so.conf
cat /etc/ld.so.conf
include /etc/ld.so.conf.d/*.conf
but a very long /etc/ld.so.cache.
I am now at a point where I feel lost about how to investigate further.
It might be a helpful next step to figure out how I can determine whether a given library is indeed installed, even if its location is unknown to ffmpeg.
---------Output---of----apt-cache-policy-----request---------
apt-cache policy
Package files:
100 /var/lib/dpkg/status
release a=now
500 http://archive.canonical.com/ubuntu/ trusty/partner Translation-en
500 http://archive.canonical.com/ubuntu/ trusty/partner i386 Packages
release v=14.04,o=Canonical,a=trusty,n=trusty,l=Partner archive,c=partner
origin archive.canonical.com
500 http://archive.canonical.com/ubuntu/ trusty/partner amd64 Packages
release v=14.04,o=Canonical,a=trusty,n=trusty,l=Partner archive,c=partner
origin archive.canonical.com
500 http://security.ubuntu.com/ubuntu/ trusty-security/universe Translation-en
500 http://security.ubuntu.com/ubuntu/ trusty-security/restricted Translation-en
500 http://security.ubuntu.com/ubuntu/ trusty-security/multiverse Translation-en
500 http://security.ubuntu.com/ubuntu/ trusty-security/main Translation-en
500 http://security.ubuntu.com/ubuntu/ trusty-security/multiverse i386 Packages
release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=multiverse
origin security.ubuntu.com
500 http://security.ubuntu.com/ubuntu/ trusty-security/universe i386 Packages
release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=universe
origin security.ubuntu.com
500 http://security.ubuntu.com/ubuntu/ trusty-security/restricted i386 Packages
release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=restricted
origin security.ubuntu.com
500 http://security.ubuntu.com/ubuntu/ trusty-security/main i386 Packages
release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=main
origin security.ubuntu.com
500 http://security.ubuntu.com/ubuntu/ trusty-security/multiverse amd64 Packages
release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=multiverse
origin security.ubuntu.com
500 http://security.ubuntu.com/ubuntu/ trusty-security/universe amd64 Packages
release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=universe
origin security.ubuntu.com
500 http://security.ubuntu.com/ubuntu/ trusty-security/restricted amd64 Packages
release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=restricted
origin security.ubuntu.com
500 http://security.ubuntu.com/ubuntu/ trusty-security/main amd64 Packages
release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=main
origin security.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty-updates/universe Translation-en
500 http://archive.ubuntu.com/ubuntu/ trusty-updates/restricted Translation-en
500 http://archive.ubuntu.com/ubuntu/ trusty-updates/multiverse Translation-en
500 http://archive.ubuntu.com/ubuntu/ trusty-updates/main Translation-en
500 http://archive.ubuntu.com/ubuntu/ trusty-updates/multiverse i386 Packages
release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=multiverse
origin archive.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty-updates/universe i386 Packages
release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=universe
origin archive.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty-updates/restricted i386 Packages
release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=restricted
origin archive.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty-updates/main i386 Packages
release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=main
origin archive.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty-updates/multiverse amd64 Packages
release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=multiverse
origin archive.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty-updates/universe amd64 Packages
release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=universe
origin archive.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty-updates/restricted amd64 Packages
release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=restricted
origin archive.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 Packages
release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=main
origin archive.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty/universe Translation-en
500 http://archive.ubuntu.com/ubuntu/ trusty/restricted Translation-en
500 http://archive.ubuntu.com/ubuntu/ trusty/multiverse Translation-en
500 http://archive.ubuntu.com/ubuntu/ trusty/main Translation-en
500 http://archive.ubuntu.com/ubuntu/ trusty/multiverse i386 Packages
release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=multiverse
origin archive.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty/universe i386 Packages
release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=universe
origin archive.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty/restricted i386 Packages
release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=restricted
origin archive.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty/main i386 Packages
release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=main
origin archive.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty/multiverse amd64 Packages
release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=multiverse
origin archive.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty/universe amd64 Packages
release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=universe
origin archive.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty/restricted amd64 Packages
release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=restricted
origin archive.ubuntu.com
500 http://archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=main
origin archive.ubuntu.com
700 http://extra.linuxmint.com/ rebecca/main i386 Packages
release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main
origin extra.linuxmint.com
700 http://extra.linuxmint.com/ rebecca/main amd64 Packages
release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main
origin extra.linuxmint.com
700 http://packages.linuxmint.com/ rebecca/import i386 Packages
release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=import
origin packages.linuxmint.com
700 http://packages.linuxmint.com/ rebecca/upstream i386 Packages
release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=upstream
origin packages.linuxmint.com
700 http://packages.linuxmint.com/ rebecca/main i386 Packages
release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main
origin packages.linuxmint.com
700 http://packages.linuxmint.com/ rebecca/import amd64 Packages
release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=import
origin packages.linuxmint.com
700 http://packages.linuxmint.com/ rebecca/upstream amd64 Packages
release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=upstream
origin packages.linuxmint.com
700 http://packages.linuxmint.com/ rebecca/main amd64 Packages
release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main
origin packages.linuxmint.com
Pinned packages:
|
Look in /usr/lib and /usr/lib64 for those libraries. If you find one of the ones ffmpeg is missing, symlink it so it exists in the other directory.
You can also run a find for 'libm.so.6' and see where that file is. There is a good chance ffmpeg is looking in the same directory for the missing ones. Symlink them over there once you find them.
If they don't exist on your server, install the package that includes them. If they are included in ffmpeg package but you don't see them, try reinstalling ffmpeg.
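A quick way to see which shared libraries a binary cannot resolve is ldd, which marks unresolvable ones as "not found". A minimal sketch (the /bin/sh target is just a stand-in binary; point it at your ffmpeg):

```shell
# Print any shared libraries the given binary fails to resolve
missing_libs() {
    ldd "$1" 2>/dev/null | awk '/not found/ { print $1 }'
}

# A healthy binary prints nothing; a broken one lists the missing .so names
missing_libs /bin/sh
```

Each name it prints is a candidate for the find-and-symlink treatment above.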
| How to check if a shared library is installed? |
1,429,900,113,000 |
Given a directory of font files (TTF and OTF) I'd like to inspect each font and determine what style (regular, italic, bold, bold-italic) it is. Is there a command line tool for unix flavored operating systems that can do this? Or does anyone know how to extract the metadata from a TTF or OTF font file?
|
I think you're looking for otfinfo. There doesn't seem to be an option to get at the Subfamily directly, but you could do:
otfinfo --info *.ttf | grep Subfamily
Note that a number of the fonts I looked at use "Oblique" instead of "Italic".
| Is there a unix command line tool that can analyze font files? |
1,429,900,113,000 |
I would like to know which are the standard commands available in every Linux system.
For example if you get a debian/ubuntu/redhat/suse/arch/slackware etc, you will always find there commands like:
cd, mkdir, ls, echo, grep, sed, awk, ping etc.
I know that some of the mentioned commands are shell-builtin but others are not but they are still always there (based on my knowledge and experience so far).
On the other hand commands like gawk, parted, traceroute and other quite famous commands are not installed by default in different Linux distributions.
I made different web searches but I haven't found a straight forward answer to this.
The purpose is that I would like to create a shell script and it should make some sanity checks if the commands used in the script are available in the system. If not, it should prompt the user to install the needed binaries.
|
Unfortunately there is no guarantee of anything being available.
However, most systems will have GNU coreutils. That alone provides about 105 commands. You can probably rely on those unless it's an embedded system, which might use BusyBox instead.
You can probably also rely on bash, cron, GNU findutils, GNU grep, gzip, iproute2, iputils, man-db, module-init-tools, net-tools, passwd (passwd or shadow), procps, tar, and util-linux.
Note that some programs might have some differences between distributions. For example /usr/bin/awk might be gawk or mawk. /bin/sh might be dash or bash in POSIX mode. On some older systems, /usr/bin/host does not have the same syntax as the BIND version, so it might be better to use dig.
If you're looking for some standards, the Linux Standard Base defines some commonly found programs, but not all distributions claim to conform to the standard, and some only do so if you install an optional LSB compatibility package. As an example of this, some systems I've seen don't come with lsb_release in a default install.
As well as this, the list of commands standardized by POSIX could be helpful.
Another approach to your problem is to package your script using each distribution's packaging tools (e.g. RPM for Red Hat, DEB for Debian, etc.) and declare a dependency on any other programs or packages you need. It's a bit of work, but it means users will see a friendlier error message, telling them not just what's missing, but what packages they need to install.
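For the sanity check described in the question, a minimal sketch using the POSIX `command -v` builtin (the command list here is just an example):

```shell
# Fail with a helpful message unless every listed command is installed
require_cmds() {
    for cmd in "$@"; do
        command -v "$cmd" >/dev/null 2>&1 || {
            echo "error: required command '$cmd' not found; please install it" >&2
            return 1
        }
    done
}

require_cmds sed awk tar && echo "all required commands found"
```

`command -v` works for builtins, functions, and external programs alike, which is why it is preferable to `which` for this purpose.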
More info:
RPM - Adding Dependency Information to a Package (historical)
RPM - Dependencies
Debian - Declaring Relationships Between Packages
PKGBUILD - Dependencies
| Which are the standard commands available in every Linux based distribution? |
1,429,900,113,000 |
I have been reading RedHat iptables documentation but can't figure out what does the following line do:
... -j REJECT --reject-with icmp-host-prohibited
|
The REJECT target rejects the packet. If you do not specify which ICMP message to reject with, the server by default will send back ICMP port unreachable (type 3, code 3).
--reject-with modifies this behaviour to send a specific ICMP message back to the source host. You can find information about --reject-with and the available rejection messages in man iptables:
REJECT
This is used to send back an error packet in response to the matched packet: otherwise it is equivalent to DROP so it is a terminating TARGET, ending rule traversal. This target is only valid in the INPUT, FORWARD and OUTPUT chains, and user-defined chains which are only called from those chains. The following option controls the nature of the error packet returned:
--reject-with type
The type given can be:
icmp-net-unreachable
icmp-host-unreachable
icmp-port-unreachable
icmp-proto-unreachable
icmp-net-prohibited
icmp-host-prohibited or
icmp-admin-prohibited (*)
which return the appropriate ICMP error message (port-unreachable is the default). The option tcp-reset can be used on rules which only match the TCP protocol: this causes a TCP RST packet to be sent back. This is mainly useful for blocking ident (113/tcp) probes which frequently occur when sending mail to broken mail hosts (which won't accept your mail otherwise).
(*) Using icmp-admin-prohibited with kernels that do not support it will result in a plain DROP instead of REJECT
| What -A INPUT -j REJECT --reject-with icmp-host-prohibited Iptables line does exactly? |
1,429,900,113,000 |
Can't install Java8
apt-get install openjdk-8-jre-headless
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
openjdk-8-jre-headless : Depends: ca-certificates-java but it is not going to be installed
E: Unable to correct problems, you have held broken packages
I've searched Google and I've added repos and other suggestions, but nothing has allowed me to install Java 8 yet.
ideas?
lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.7 (jessie)
Release: 8
Codename: jessie
|
That looks like jessie. With backports enabled, you can install OpenJDK 8 from jessie-backports:
apt install -t jessie-backports openjdk-8-jre-headless ca-certificates-java
| openjdk-8-jre-headless : Depends: ca-certificates-java but it is not going to be installed |
1,429,900,113,000 |
On Red Hat 5/6, when I do mount it says type nfs. I would like to know how to determine the version if it isn't listed in the mount options or fstab. Please don't say to remount it with the version option; I want to know how to determine the currently mounted NFS version. I am guessing it will default based on NFS server/client settings, but how do I determine what it is currently? I am pretty sure it's NFS v3 because nfs4_setfacl does not seem to be supported.
|
Here are 2 ways to do it:
mount
Using mount's -v switch:
$ mount -v | grep /home/sam
mulder:/export/raid1/home/sam on /home/sam type nfs (rw,intr,tcp,nfsvers=3,rsize=16384,wsize=16384,addr=192.168.1.1)
nfsstat
Using nfsstat -m:
$ nfsstat -m | grep -A 1 /home/sam
/home/sam from mulder:/export/raid1/home/sam
Flags: rw,vers=3,rsize=16384,wsize=16384,hard,intr,proto=tcp,timeo=600,retrans=2,sec=sys,addr=mulder
| How to determine if NFS mount is mounted as v3 or v4? |
1,429,900,113,000 |
As part of the program I wrote, I constantly read and write data from files. I noticed that as part of doing so, I am inadvertently creating swap .swp files.
What do you think is going on? What would cause swap files to appear if you had to reproduce the problem?
|
The .swp file is not a swap file in the OS sense. It is a state file. It keeps your changes since the last save (except the last 200 characters), buffers that you have saved, unsaved macros and the undo structure.
You can read more in VIM's help: vim +help\ swap-file. If there is a crash (power failure, OS crash, etc.), then you can recover your changes using this swap-file. After saving the changes from the swap file to the original file, you will need to exit vim and remove the swap file yourself.
| What causes swap files to be created? |
1,429,900,113,000 |
How can I check if my CPU supports the AES-NI instruction set under Linux/UNIX.
|
Look in /proc/cpuinfo. If you have the aes flag then your CPU has AES support.
You can use this command:
grep aes /proc/cpuinfo
If it prints a line such as
flags : a bunch of flags aes another bunch of flags
then your CPU has AES support.
| How to check that AES-NI is supported by my CPU? |
1,429,900,113,000 |
Quite often in the course of troubleshooting and tuning things I find myself thinking about the following Linux kernel settings:
net.core.netdev_max_backlog
net.ipv4.tcp_max_syn_backlog
net.core.somaxconn
Other than fs.file-max, net.ipv4.ip_local_port_range, net.core.rmem_max, net.core.wmem_max, net.ipv4.tcp_rmem, and net.ipv4.tcp_wmem, they seems to be the important knobs to mess with when you are tuning a box for high levels of concurrency.
My question: how can I check to see how many items are in each of those queues? Usually people just set them super high, but I would like to log those queue sizes to help predict future failures and catch issues before they manifest in a user-noticeable way.
|
I too have wondered this and was motivated by your question!
I've collected how close I could come to each of the queues you listed with some information related to each. I welcome comments/feedback, any improvement to monitoring makes things easier to manage!
net.core.somaxconn
net.ipv4.tcp_max_syn_backlog
net.core.netdev_max_backlog
$ netstat -an | grep -c SYN_RECV
Will show the current global count of connections in the queue, you can break this up per port and put this in exec statements in snmpd.conf if you wanted to poll it from a monitoring application.
From:
netstat -s
These will show you how often you are seeing requests from the queue:
146533724 packets directly received from backlog
TCPBacklogDrop: 1029
3805 packets collapsed in receive queue due to low socket buffer
fs.file-max
From:
http://linux.die.net/man/5/proc
$ cat /proc/sys/fs/file-nr
2720 0 197774
This (read-only) file gives the number of files presently opened. It
contains three numbers: The number of allocated file handles, the
number of free file handles and the maximum number of file handles.
net.ipv4.ip_local_port_range
If you can build an exclusion list of services (netstat -an | grep LISTEN) then you can deduce how many connections are being used for ephemeral activity:
netstat -an | egrep -v "MYIP.(PORTS|IN|LISTEN)" | wc -l
Should also monitor (from SNMP):
TCP-MIB::tcpCurrEstab.0
It may also be interesting to collect stats about all the states seen in this tree(established/time_wait/fin_wait/etc):
TCP-MIB::tcpConnState.*
net.core.rmem_max
net.core.wmem_max
You'd have to dtrace/strace your system for setsockopt requests. I don't think stats for these requests are tracked otherwise. This isn't really a value that changes from my understanding. The application you've deployed will probably ask for a standard amount. I think you could 'profile' your application with strace and configure this value accordingly. (discuss?)
net.ipv4.tcp_rmem
net.ipv4.tcp_wmem
To track how close you are to the limit you would have to look at the average and max from the tx_queue and rx_queue fields from (on a regular basis):
# cat /proc/net/tcp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 00000000:0FB1 00000000:0000 0A 00000000:00000000 00:00000000 00000000 500 0 262030037 1 ffff810759630d80 3000 0 0 2 -1
1: 00000000:A133 00000000:0000 0A 00000000:00000000 00:00000000 00000000 500 0 262029925 1 ffff81076d1958c0 3000 0 0 2 -1
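Those tx_queue/rx_queue values are hexadecimal, so a little conversion helps when logging them. A hedged sketch in plain POSIX awk (avoiding gawk's strtonum so it also runs under mawk); field 5 is the tx_queue:rx_queue pair shown above:

```shell
# Report the largest tx_queue/rx_queue values (hex "tx:rx" in field 5)
# seen in /proc/net/tcp-formatted input on stdin
max_queues() {
    awk '
    function hex(s,    i, n) {
        n = 0
        for (i = 1; i <= length(s); i++)
            n = n * 16 + index("0123456789ABCDEF", toupper(substr(s, i, 1))) - 1
        return n
    }
    NR > 1 {
        split($5, q, ":")
        tx = hex(q[1]); rx = hex(q[2])
        if (tx > max_tx) max_tx = tx
        if (rx > max_rx) max_rx = rx
    }
    END { printf "max tx_queue=%d max rx_queue=%d\n", max_tx, max_rx }'
}

if [ -r /proc/net/tcp ]; then
    max_queues < /proc/net/tcp
fi
```

Run it from cron and compare the maxima against your configured tcp_rmem/tcp_wmem limits.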
To track errors related to this:
# netstat -s
40 packets pruned from receive queue because of socket buffer overrun
Should also be monitoring the global 'buffer' pool (via SNMP):
HOST-RESOURCES-MIB::hrStorageDescr.1 = STRING: Memory Buffers
HOST-RESOURCES-MIB::hrStorageSize.1 = INTEGER: 74172456
HOST-RESOURCES-MIB::hrStorageUsed.1 = INTEGER: 51629704
| how to check rx ring, max_backlog, and max_syn_backlog size |
1,429,900,113,000 |
I have seen that recent GNU/Linux are using ConsoleKit and PolicyKit. What are they for? How do they work?
The best answer should explain what kind of problem each one tries to solve, and how they manage to solve it.
I am a long-time GNU/Linux user, from a time when such things did not exist. I have been using Slackware and recently Gentoo. I am an advanced user/admin/developer, so the answer can (and should!) be as detailed and as accurate as possible. I want to understand how these things work, so I can use them (as a user or as a developer) the best possible way.
|
ConsoleKit (documentation) was a service which tracked user sessions (i.e. where a user is logged in). It allowed switching users without logging out (many users can be logged in on the same hardware at the same time, with one user active). It was also used to check whether a session is "local", i.e. whether a user has direct access to the hardware (which may be considered more secure than remote access).
Currently ConsoleKit is largely replaced by logind, which is part of systemd, although there is a standalone version, elogind.
polkit (née PolicyKit) (documentation) allows fine-tuned capabilities in a desktop environment. Traditionally, only a privileged user (root) was allowed to configure the network. While that is a reasonable assumption in a server environment, on a laptop it would be too limiting not to be allowed to connect to a hotspot, for example. However, you may still not want to give full privileges to this person (like installing programs), or you may want to limit options for some people (for example, on your children's laptops only 'trusted' networks with parental filters can be used). As far as I remember, it works like this:
The program sends a message about the action to a daemon via D-Bus.
The daemon uses the polkit libraries/configuration (in fact the polkit daemon) to determine whether the user is allowed to perform the action. It may happen that certain conditions must be fulfilled (like entering a password or having local hardware access).
The daemon acts accordingly (returns an authorization error or performs the action).
| What are ConsoleKit and PolicyKit? How do they work? |
1,429,900,113,000 |
As per my knowledge, to determine the current shell we use echo $0 in the shell. Rather I want my script to check in which shell it is running. So, I tried to print $0 in the script and it returns the name of the script as it should. So, my question is how can I find which shell is my script running in during runtime?
|
On linux you can use /proc/PID/exe.
Example:
# readlink /proc/$$/exe
/bin/zsh
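/proc/PID/exe is Linux-specific. A more portable sketch asks ps for the command name of the current process via the POSIX -o comm= format (note it reports the process name, not the full path, and a symlinked shell reports the name it was invoked as):

```shell
# Print the name of the shell (or interpreter) running the current script
ps -p $$ -o comm=
```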
| determine shell in script during runtime |
1,429,900,113,000 |
I already asked a question about how to list all namespaces in Linux, but there weren't any correct and exact answers, so I want to find a method that can determine the namespace of the PID of some process or group of processes. How can this be done in Linux?
|
I'll try and answer both this and your earlier question as they are related.
The doors to namespaces are files in /proc/*/ns/* and /proc/*/task/*/ns/*.
A namespace is created by a process unsharing its namespace. A namespace can then be made permanent by bind-mounting the ns file to some other place.
That's what ip netns does for instance for net namespaces. It unshares its net namespace and bind-mounts /proc/self/ns/net to /run/netns/netns-name.
In a /proc mounted in the root pid namespace, you can list all the namespaces that have a process in them by doing:
# readlink /proc/*/task/*/ns/* | sort -u
ipc:[4026531839]
mnt:[4026531840]
mnt:[4026531856]
mnt:[4026532469]
net:[4026531956]
net:[4026532375]
pid:[4026531836]
pid:[4026532373]
uts:[4026531838]
The number in square brackets is the inode number.
To get that for a given process:
# ls -Li /proc/1/ns/pid
4026531836 /proc/1/ns/pid
Now, there may be permanent namespaces that don't have any process in them. Finding them out can be a lot trickier AFAICT.
First, you have to bear in mind that there can be several mount namespaces.
# awk '$9 == "proc" {print FILENAME,$0}' /proc/*/task/*/mountinfo | sort -k2 -u
/proc/1070/task/1070/mountinfo 15 19 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw
/proc/19877/task/19877/mountinfo 50 49 0:3 / /run/netns/a rw,nosuid,nodev,noexec,relatime shared:2 - proc proc rw
/proc/19877/task/19877/mountinfo 57 40 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw
/proc/1070/task/1070/mountinfo 66 39 0:3 / /run/netns/a rw,nosuid,nodev,noexec,relatime shared:2 - proc proc rw
/proc/19877/task/19877/mountinfo 68 67 0:3 / /mnt/1/a rw,nosuid,nodev,noexec,relatime unbindable - proc proc rw
Those /mnt/1/a, /run/netns/a may be namespace files.
We can get an inode number:
# nsenter --mount=/proc/19877/task/19877/ns/mnt -- ls -Li /mnt/1/a
4026532471 /mnt/1/a
But that doesn't tell us much other than it's not in the list computed above.
We can try and enter it as any of the different types:
# nsenter --mount=/proc/19877/task/19877/ns/mnt -- nsenter --pid=/mnt/1/a true
nsenter: reassociate to namespace 'ns/pid' failed: Invalid argument
# nsenter --mount=/proc/19877/task/19877/ns/mnt -- nsenter --mount=/mnt/1/a true
nsenter: reassociate to namespace 'ns/mnt' failed: Invalid argument
# nsenter --mount=/proc/19877/task/19877/ns/mnt -- nsenter --net=/mnt/1/a true
#
OK, that was a net namespace file.
So it would seem we have a method to list the namespaces: list the ns directories of all the tasks, then find all the proc mountpoints in all the /proc/*/task/*/mountinfo files and figure out their type by trying to enter them.
| How to find out namespace of a particular process? |
1,429,900,113,000 |
How can I determine or set the size limit of /etc/hosts? How many lines can it have?
|
Problematic effects include slow hostname resolution (unless the OS somehow converts the linear list into a faster-to-search structure?) and the potential for surprising interactions with shell tab completion well before any meaningful file size is reached.
For example! If one places 500,000 host entries in /etc/hosts
# perl -E 'for (1..500000) { say "127.0.0.10 $_.science" }' >> /etc/hosts
for science, the default hostname tab completion in ZSH takes about 25 seconds on my system to return a completion prompt (granted, this is on a laptop from 2009 with a 5400 RPM disk, but still).
| What is the /etc/hosts size limit? |
1,429,900,113,000 |
Is there a way to tell ping to show its usual termination statistics without stopping the execution?
For instance, I'd like to quickly view:
--- 8.8.8.8 ping statistics ---
2410 packets transmitted, 2274 received, +27 errors, 5% packet loss, time 2412839ms
rtt min/avg/max/mdev = 26.103/48.917/639.493/52.093 ms, pipe 3
without having to stop the program, thus losing the accumulated data.
|
From the ping manpage (emphasis mine):
When the specified number of packets have been sent (and received) or if the program is terminated with a SIGINT, a brief summary is displayed. Shorter current statistics can be obtained without termination of process with signal SIGQUIT.
So this will work if you're fine with your stats being slightly less verbose:
# the second part is only for showing you the PID
ping 8.8.8.8 & jobs ; fg
<... in another terminal ...>
kill -SIGQUIT $PID
Short statistics look like this:
19/19 packets, 0% loss, min/avg/ewma/max = 0.068/0.073/0.074/0.088 ms
| Check ping statistics without stopping |
1,429,900,113,000 |
netstat -s prints out a lot of very detailed protocol statistics like number of TCP reset messages received or number of ICMP "echo request" messages sent or number of packets dropped because of a missing route.
Given that netstat is considered deprecated on Linux nowadays, is there an alternative?
Statistics provided by ss -s are superficial compared to the ones provided by netstat.
|
netstat has indeed been deprecated by many distributions, though it's really much of the "net-tools" package (including ifconfig, route and arp) that has been deprecated in favour of the "iproute2" package. iproute2 has evolved along with the latest Linux networking features, and the traditional utilities have not.
The iproute2 equivalent that you want is the little known nstat, this provides the netstat -s counters, albeit in a slightly different form:
raw counter names from /proc are used, each prefixed with its class ("Udp", "Tcp", "TcpExt" etc)
netstat's long (and possibly localised) descriptions are not available
zero-value counters omitted by default
using consistent columnar output with the name and value in the first and second columns
third column shows the average over a configurable time window if you have started a background nstat (-d daemon mode), or 0.0 if not
e.g. nstat prints "UdpInDatagrams NNN" not "Udp: InDatagrams", and not the verbose netstat version of "Udp: NNN packets received".
nstat also assumes you want incremental rather than absolute numbers, so the closest equivalent to netstat -s is /sbin/nstat -asz where the options are -a use absolute counters, -s don't keep history file, -z don't omit zero-value counters.
ss takes over the "socket" parts of netstat, but not its complete function as you have found out. (ss is actually better than netstat in many cases, two specific ones are the ability to use filter expressions and the optional capability to use the tcp_diag and inet_diag Linux kernel modules to access kernel socket data more directly than via /proc.)
Should you need to confirm the mapping for descriptive names, the net-tools source is the definitive reference: http://sourcecodebrowser.com/net-tools/1.60/statistics_8c_source.html
Doug Vitale provides a useful guide for finding the iproute2 equivalents of the older commands (it is unmaintained and slightly incomplete, it omits any reference to nstat which has been part of the iproute2 package since at least 2004 kernel 2.6.x time).
net-tools lives on however, and you should be able to find a package for your distribution (or compile it yourself).
| alternative to "netstat -s" |
1,429,900,113,000 |
My xorg session is on tty1 and if I want to issue a command from tty (because I cannot do it from xorg session for some reasons), I press Ctrl+Alt+F2, for example, and type a command. But I cannot start graphical applications from any tty except first since there is no xorg session in it. Then I am curious how can I switch to tty1 where xorg session is running and back to the session?
|
how can I switch to tty1 where xorg session is running and back to the session?
Because X is running on tty1, but not on tty2. A tty is a "virtual terminal", meaning it is supposed to represent an actual physical screen and keyboard, etc. The terminals are all on simultaneously, but since you only have enough hardware to interface with one at a time, that's what you get.
You can in fact run multiple X sessions on different ttys and switch between them. You need a valid ~/.xinitrc or ~/.Xclients first. If you don't have one, for illustration:
echo -e '#!/bin/sh\nmwm' > ~/.xinitrc
chmod u+x ~/.xinitrc
Check first that mwm exists by trying it from the command line. As long as it doesn't say "command not found" you're good. Now from tty2 try startx.
If there isn't a display manager doing something totalitarian, you should get a plain black window with a big X mouse cursor. Left clicking should give a crude looking menu from which you can now "Quit"; but before that, Ctrl+Alt+F1 will take you to the other X session on tty1 (and Ctrl+Alt+F2 gets you back, etc.).
| How to switch between tty and xorg session |
1,429,900,113,000 |
Is it possible to export a block device such as a DVD or CDROM and make it so that it's mountable on another computer as a block device?
NOTE: I'm not interested in doing this using NFS or Samba, I actually want the optical drive to show up as a optical drive on a remote computer.
|
I think you might be able to accomplish what you want using network block devices (NBD). Looking at the wikipedia page on the subject there is mention of a tool called nbd. It's comprised of a client and server component.
Example
In this scenario I'm setting up a CDROM on my Fedora 19 laptop (server) and I'm sharing it out to an Ubuntu 12.10 system (client).
installing
$ apt-cache search ^nbd-
nbd-client - Network Block Device protocol - client
nbd-server - Network Block Device protocol - server
$ sudo apt-get install nbd-server nbd-client
sharing a CD
Now back on the server (Fedora 19) I do a similar thing using its package manager, YUM. Once complete, I pop a CD in and run this command to share it out as a block device:
$ sudo nbd-server 2000 /dev/sr0
** (process:29516): WARNING **: Specifying an export on the command line is deprecated.
** (process:29516): WARNING **: Please use a configuration file instead.
$
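As the warning says, passing the export on the command line is deprecated; current nbd-server versions prefer a config file. A hedged sketch of an equivalent export — the section name is arbitrary, the path may vary by distribution, and the option names are as I recall them from nbd-server(5), so check your manpage:

```ini
; /etc/nbd-server/config (an assumed path; adjust for your distribution)
[generic]
    ; global options (user, group, ...) go here

[cdrom]
    exportname = /dev/sr0
    port = 2000
    readonly = true
```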
A quick check to see if it's running:
$ ps -eaf | grep nbd
root 29517 1 0 12:02 ? 00:00:00 nbd-server 2000 /dev/sr0
root 29519 29071 0 12:02 pts/6 00:00:00 grep --color=auto nbd
Mounting the CD
Now back on the Ubuntu client we need to connect to the nbd-server using nbd-client like so. NOTE: the name of the nbd-server is greeneggs in this example.
$ sudo nbd-client greeneggs 2000 /dev/nbd0
Negotiation: ..size = 643MB
bs=1024, sz=674983936 bytes
(On some systems - e.g. Fedora - one has to modprobe nbd first.)
We can confirm that there's now a block device on the Ubuntu system using lsblk:
$ sudo lsblk -l
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
sda1 8:1 0 243M 0 part /boot
sda2 8:2 0 1K 0 part
sda5 8:5 0 465.5G 0 part
ubuntu-root (dm-0) 252:0 0 461.7G 0 lvm /
ubuntu-swap_1 (dm-1) 252:1 0 3.8G 0 lvm [SWAP]
sr0 11:0 1 654.8M 0 rom
nbd0 43:0 0 643M 1 disk
nbd0p1 43:1 0 643M 1 part
And now we mount it:
$ sudo mount /dev/nbd0p1 /mnt/
mount: block device /dev/nbd0p1 is write-protected, mounting read-only
$
did it work?
The suspense is killing me, and we have liftoff:
$ sudo ls /mnt/
EFI GPL isolinux LiveOS
There's the contents of a LiveCD of CentOS that I mounted in the Fedora 19 laptop and was able to mount it as a block device of the network on Ubuntu.
| How can I mount a block device from one computer to another via the network as a block device? |
1,429,900,113,000 |
I want to delete one or more specific line numbers from a file. How would I do this using sed?
|
To delete lines 2, 12-17 and line 57 from file data.txt using sed you could do something like this:
sed -e '2d;12,17d;57d' data.txt
to create a backup of the original file (with a .bak extension) use -i.bak with the command.
sed -i.bak -e '2d;12,17d;57d' data.txt
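A quick illustration on a throwaway stream, with addresses analogous to the 2d;12,17d;57d above:

```shell
# Delete line 2 and lines 3-4 from a five-line stream
printf 'a\nb\nc\nd\ne\n' | sed '2d;3,4d'   # prints "a" and "e"
```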
| Delete specific line number(s) from a text file using sed? |
1,429,900,113,000 |
My server program received a SIGTERM and stopped (with exit code 0). I am surprised by this, as I am pretty sure that there was plenty of memory for it. Under what conditions does linux (busybox) send a SIGTERM to a process?
|
I'll post this as an answer so that there's some kind of resolution if this turns out to be the issue.
An exit status of 0 means a normal exit from a successful program. An exiting program can choose any integer between 0 and 255 as its exit status. Conventionally, programs use small values. Values 126 and above are used by the shell to report special conditions, so it's best to avoid them.
At the C API level, programs report a 16-bit status¹ that encodes both the program's exit status and the signal that killed it, if any.
In the shell, a command's exit status (saved in $?) conflates the actual exit status of the program and the signal value: if a program is killed by a signal, $? is set to a value greater than 128 (with most shells, this value is 128 plus the signal number; ATT ksh uses 256 + signal number and yash uses 384 + signal number, which avoids the ambiguity, but the other shells haven't followed suit).
In particular, if $? is 0, your program exited normally.
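The 128 + signal-number convention is easy to demonstrate in bash and most other shells (SIGTERM is signal 15, so a process killed by it reports status 143):

```shell
# Kill a background sleep with SIGTERM and inspect the reported status
sleep 10 &
pid=$!
kill -TERM "$pid"
wait "$pid" && status=$? || status=$?
echo "$status"    # prints 143 (128 + 15) in bash
```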
Note that this includes the case of a process that receives SIGTERM, but has a signal handler for it, and eventually exits normally (perhaps as an indirect consequence of the SIGTERM signal, perhaps not).
To answer the question in your title, SIGTERM is never sent automatically by the system. There are a few signals that are sent automatically like SIGHUP when a terminal goes away, SIGSEGV/SIGBUS/SIGILL when a process does things it shouldn't be doing, SIGPIPE when it writes to a broken pipe/socket, etc. And there are a few signals that are sent due to a key press in a terminal, mainly SIGINT for Ctrl+C, SIGQUIT for Ctrl+\ and SIGTSTP for Ctrl+Z, but SIGTERM is not one of those. If a process receives SIGTERM, some other process sent that signal.
¹ roughly speaking
| When does the system send a SIGTERM to a process? |
1,429,900,113,000 |
I'm trying to tail a log file on multiple remote machines and forward the output to my local workstation. I want connections to close when pressing Ctrl-C.
At the moment I have the following function that almost works as intended.
function dogfight_tail() {
logfile=/var/log/server.log
pids=""
for box in 02 03; do
ssh server-$box tail -f $logfile | grep $1 &
pids="$pids $!"
done
trap 'kill -9 $pids' SIGINT
trap wait
}
The connections close and I receive the output from tail. BUT, there is some kind of buffering going on because the output comes in batches.
And here's the fun part…
I can see the same buffering behaviour when executing the following and append "test" to the file /var/log/server.log on the remote machines 4-5 times…
ssh server-01 "tail -f /var/log/server.log | grep test"
…and found two ways of disabling it…
Add -t flag to ssh.
ssh -t server-01 "tail -f /var/log/server.log | grep test"
Remove quotation from the remote command.
ssh server-01 tail -f /var/log/server.log | grep test
However, neither of these approaches work for the function that execute on multiple machines mentioned above.
I have tried dsh, which have the same buffering behaviour when executing.
dsh -m server-01,server-02 -c "tail -f /var/log/server.log | grep test"
Same here, if I remove the quotation, the buffering goes away and everything works fine.
dsh -m server-01,server-02 -c tail -f /var/log/server.log | grep test
Also tried parallel-ssh which works exactly the same as dsh. Can somebody explain what's going on here?
How do I fix this problem? Would be ideal to go with straight ssh if possible.
P.S. I do not want to use multitail or similar since I want to be able to execute arbitrary commands.
|
What you see is the effect of the standard stdout buffering in grep provided by glibc. The best solution is to disable it by using --line-buffered (GNU grep; I'm not sure which other implementations might support it or something similar).
As for why this only happens in some cases:
ssh server "tail -f /var/log/server.log | grep test"
runs the whole command in the quotes on the server - thus grep waits to fill its buffer.
ssh server tail -f /var/log/server.log | grep test
runs grep on your local machine on the output of tail, sent through the ssh channel.
The key part here is that grep adjusts its behaviour depending on whether its stdout is a terminal or not. When you run ssh -t, the remote command runs with a controlling terminal, so the remote grep's stdout is a tty and it line-buffers just like your local one.
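The buffering effect is easy to reproduce locally (a sketch; assumes GNU grep):

```shell
# With --line-buffered, each matching line is flushed as soon as it is
# produced, even though stdout here is a pipe rather than a terminal:
printf 'test 1\nother\ntest 2\n' | grep --line-buffered test | cat
```

In the multi-host function, the flag goes on the remote grep inside the quotes, e.g. ssh server-$box "tail -f $logfile | grep --line-buffered $1" &.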
| Tail log file on multiple machines over ssh |
1,429,900,113,000 |
I just spun up an Ubuntu 11.10 box and then ran apt-get install apache2 php5 to install apache2 and PHP 5 on the box. Now it is functioning as a "web server" and it loads the "It Works!" page. Now I'm trying to tighten up security and I have the following questions about linux web servers:
Who should apache be running as?
What group(s) should this user be in?
What package(s) can make PHP (and Apache?) run as the owner of the files? (like on shared web hosts) Should I use these packages? Are they easy / feasible to maintain on a small system?
What should the default permissions be for files and folders being served out to the web with apache running as www-data? For apache/php running as the user?
I have done the following things in examination of the default setup:
File Structure
When I cd / and do a ls -al listing of the contents, I see /var:
drwxr-xr-x 13 root root 4096 2012-02-04 20:47 var/
If I cd into var and do ls -al I see:
drwxr-xr-x 2 root root 4096 2012-02-04 20:47 www/
Finally, inside /var/www I see:
drwxr-xr-x 2 root root 4096 2012-02-04 20:47 ./
drwxr-xr-x 13 root root 4096 2012-02-04 20:47 ../
-rw-r--r-- 1 root root 177 2012-02-04 20:47 index.html
My key takeaway is that so far all of these files belong to root:root, files have permissions of 644, and directories have permissions of 755.
Apache's Permissions
If I create a file as root in /var/www/test.php with the contents:
<?php echo shell_exec('whoami');
and load that file into a browser it tells me www-data, which is the same as in the /etc/apache2/envvars file:
export APACHE_RUN_USER=www-data
export APACHE_RUN_GROUP=www-data
If I do ps aux | grep -i apache I see the following:
root 1916 1.2 104664 7488 Ss 20:47 /usr/sbin/apache2 -k start
www-data 1920 0.8 105144 5436 S 20:47 /usr/sbin/apache2 -k start
www-data 1921 1.0 105144 6312 S 20:47 /usr/sbin/apache2 -k start
www-data 1922 0.7 104688 4624 S 20:47 /usr/sbin/apache2 -k start
www-data 1923 0.7 104688 4624 S 20:47 /usr/sbin/apache2 -k start
www-data 1924 0.7 104688 4624 S 20:47 /usr/sbin/apache2 -k start
www-data 1925 0.7 104688 4624 S 20:47 /usr/sbin/apache2 -k start
So who is apache running as? It looks like perhaps the first process is as root, maybe from the /etc/init.d/apache script when the system started, and the other ones as www-data spawned from the first. Is that correct?
Next, if I type in groups www-data then I see www-data : www-data - so it looks to only be in the www-data group. I'm guessing this is standard practice as well.
Shared Hosting and Security
So if I understand things correctly, if apache is running as www-data and I want apache to be able to read a directory, the x bit needs to be set for the world (other) group (o+x), and that also needs to be set on all parent directories all the way up the chain (www, var). And if I want apache to be able to read from a file, then the o+r bit needs to be set.
Unfortunately I believe this introduces a security hole for multiple applications and/or multiple users on the same linux box: All web files need to be world-readable, and so they are also accessible by other applications and other users on the system. If one application installed on the system had a security vulnerability that allowed raw, unvalidated user input, which was then executed by PHP, a remote attacker could then browse all the other files on the web system which were world readable. Likewise, if the box had multiple users, and a user knew the path of another user's web files, s/he could then read the file contents (and see sensitive things like database connection strings, etc).
I've heard of two packages, suphp and phpsuexec that deal with allowing users' files to be served out "as them" on a shared system. One of the niceties of this is that it allows web applications (like Wordpress) to create and modify files - very helpful for adding themes, plugins, and upgrading software. Of course it is probably more secure to do these things manually, but can a compromise be made perhaps with one of the packages mentioned above? Or by possibly using chown to make the wordpress directory group belong to www-data and set the sticky bit on the group (g+s)?
I have only used these as the end user of a web hosting company, and so I don't know the ins-and-outs of them, and if they are even reasonable to install on a small system, or if there are some other security measures I should use instead, but I thought I would mention them here as they seem like one possible way to address some of my concerns.
Back to the Questions
Who should apache be running as?
What group(s) should this user be in?
What package(s) can make PHP (and Apache?) run as the owner of the files? (like on shared web hosts) Should I use these packages? Are they easy / feasible to maintain on a small system?
What should the default permissions be for files and folders being served out to the web with apache running as www-data? For apache/php running as the user?
|
not root
not root
SuEXEC
Depends. 644 for files and 755 for folders are a safeish default.
Don't change ownership of anything to www-data unless you want php to be able to edit the contents of that file/folder
Irrespective of anything else you do: folders need read and execute permissions for the user to find files; files need read permissions for the user to read them. If you get any permissions errors when changing things - you've managed to remove these fundamentally required permissions.
If you are not writing any files via your php application, you can leave files owned by you:you. In this case the world permissions (the last digit: 4 or 5) are the ones that apply.
If you leave the files owned by you:you with file permissions of 644, only you can edit the website files - www-data is not you, so it cannot edit them.
If you want to restrict access to apache + you and block out all other access, chown -R you:www-data *. With file permissions of 640 and folder permissions of 750 you can edit and www-data can read, because apache then matches the group permissions (the middle digit: 4 or 5).
Restrict to a minimum the paths you allow apache/php to write to - if there's a tmp dir the application needs to write to - allow it to write to that folder only - and for any writable locations if at all possible make sure it's outside the document root or take steps to ensure this writable path is not web-accessible.
Note that "you" should not be root. Allowing direct ssh access as root is an indicator of other security lapses (such as not disallowing password login), but that's a whole bunch of questions unto itself.
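Putting the scheme above together, a sketch on a scratch directory (the path and the uploads directory are assumptions; against a real document root you would first run the commented chown as root):

```shell
site=/tmp/mysite-demo
mkdir -p "$site/uploads"
touch "$site/index.php"

# In production, first: chown -R you:www-data "$site"
find "$site" -type f -exec chmod 640 {} +   # you edit, group (www-data) reads
find "$site" -type d -exec chmod 750 {} +   # you manage, group may traverse
chmod 770 "$site/uploads"                   # the one path the web server may write to
ls -ld "$site" "$site/index.php" "$site/uploads"
```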
| What user should apache and PHP be running as? What permissions should /var/www files have? |
1,429,900,113,000 |
I have debian squeeze amd64. My current shell is bash. If I write the following in my terminal, it works:
$ uname -a
Linux core 2.6.32-5-amd64 #1 SMP Fri May 10 08:43:19 UTC 2013 x86_64 GNU/Linux
$ echo $SHELL
/bin/bash
$ echo $(realpath test.sh)
/home/ffortier/test.sh
My test.sh file looks like this:
#!/bin/bash
echo $(realpath "$1")
If I try to execute the following, I get an error
$ ./test.sh test.sh
./test.sh: line 2: realpath: command not found
How can I use the realpath command within a bash file?
Additional Information
$ type -a realpath
realpath is a function
realpath ()
{
f=$@;
if [ -d "$f" ]; then
base="";
dir="$f";
else
base="/$(basename "$f")";
dir=$(dirname "$f");
fi;
dir=$(cd "$dir" && /bin/pwd);
echo "$dir$base"
}
|
Is realpath an actual command or a script? I would check to see where it is coming from.
$ type -a realpath
I'm not familiar with this tool, and so it's likely not part of your normal distribution, perhaps it's installed in a non-standard location which isn't present on Bash's $PATH but is within your login environment's $PATH.
In any event, the above type command will show you where the command is coming from, at which point you can alter the method you're calling it in your script like so:
echo $(/path/to/realpath test.sh)
Or amend your script's $PATH so that it also includes this non-standard location.
Functions in the shell
Much of your environment does not get set up when you invoke a shell script. If you think about it, this makes a lot of sense, since you generally don't want scripts to carry all the additional baggage that a user's environment may have.
You can either determine which source file provides this function and source it in your script, or simply instruct Bash to load your login environment:
#!/bin/bash -l
echo $(realpath "$1")
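For the function case, a sourcing sketch (the file names and the readlink -f stand-in definition are made up for the demo; in practice you would source whatever file really defines your realpath):

```shell
# A file that defines the function (stand-in for e.g. ~/.bashrc):
cat > /tmp/myfuncs.sh <<'EOF'
realpath() { readlink -f -- "$1"; }
EOF

# The script sources it before first use:
cat > /tmp/use-realpath.sh <<'EOF'
#!/bin/bash
. /tmp/myfuncs.sh
realpath "$1"
EOF
chmod +x /tmp/use-realpath.sh
/tmp/use-realpath.sh /tmp    # now resolves instead of "command not found"
```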
| realpath command not found |
1,429,900,113,000 |
Possible Duplicate:
What does a kernel source tree contain? Is this related to Linux kernel headers?
I know that if I want to compile my own Linux kernel I need the Linux kernel headers, but what exactly are they good for?
I found out that under /usr/src/ there seem to be dozens of C header files. But what is their purpose, aren't they included in the kernel sources directly?
|
The header files define an interface: they specify how the functions in the source file are defined.
They are used so that a compiler can check if the usage of a function is correct as the function signature (return value and parameters) is present in the header file.
For this task the actual implementation of the function is not necessary.
You could do the same with the complete kernel sources but you will install a lot of unnecessary files.
Example: if I want to use the function
int foo(double param);
in a program I do not need to know how the implementation of foo is, I just need to know that it accepts a single param (double) and returns an integer.
| What exactly are Linux kernel headers? [duplicate] |
1,429,900,113,000 |
While logged in, I can do the following:
mkdir foo
touch foo/bar
chmod 400 foo/bar
chmod 500 foo
Then I can open vim (not as root), edit bar, force a write with w!, and the file is modified.
How can I make the operating system disallow any file modification?
UPDATE Mar 02 2017
chmod 500 foo is a red herring: the write permission on a directory has nothing to do with the ability to modify a file's contents--only the ability to create and delete files.
chmod 400 foo/bar does in fact prevent the file's contents from being changed. But, it does not prevent a file's permissions from being changed--a file's owner can always change his file's permissions (assuming they can access the file i.e. execute permission on all ancestor directories). In fact, strace(1) reveals that this is what vim (7.4.576 Debian Jessie) is doing--vim calls chmod(2) to temporarily add the write permission for the file's owner, modifies the file, and then calls chmod(2) again to remove the write permission. That is why using chattr +i works--only root can call chattr -i. Theoretically, vim (or any program) could do the same thing with chattr as it does with chmod on an immutable file if run as root.
|
You can set the "immutable" attribute with most filesystems in Linux.
chattr +i foo/bar
To remove the immutable attribute, you use - instead of +:
chattr -i foo/bar
To see the current attributes for a file, you can use lsattr:
lsattr foo/bar
The chattr(1) manpage provides a description of all the available attributes. Here is the description for i:
A file with the `i' attribute cannot be modified: it cannot be deleted
or renamed, no link can be created to this file and no data can be
written to the file. Only the superuser or a process possessing the
CAP_LINUX_IMMUTABLE capability can set or clear this attribute.
| How do I make a file NOT modifiable? |
1,429,900,113,000 |
I have a general question, which might be a result of misunderstanding of how processes are handled in Linux.
For my purposes I am going to define a 'script' as a snippet of bash code saved to a text file with execute permissions enabled for the current user.
I have a series of scripts that call each other in tandem. For simplicity's sake I'll call them scripts A, B, and C. Script A carries out a series of statements and then pauses, it then executes script B, then it pauses, then it executes script C. In other words, the series of steps is something like this:
Run Script A:
Series of statements
Pause
Run Script B
Pause
Run Script C
I know from experience that if I run script A until the first pause, then make edits in script B, those edits are reflected in the execution of the code when I allow it to resume. Likewise if I make edits to script C while script A is still paused, then allow it to continue after saving changes, those changes are reflected in the execution of the code.
Here is the real question then, is there any way to edit Script A while it is still running? Or is editing impossible once its execution begins?
|
In Unix, most editors work by creating a new temporary file containing the edited contents. When the edited file is saved, the original file is deleted and the temporary file renamed to the original name. (There are, of course, various safeguards to prevent dataloss.) This is, for example, the style used by sed or perl when invoked with the -i ("in-place") flag, which is not really "in-place" at all. It should have been called "new place with old name".
This works well because unix assures (at least for local filesystems) that an opened file continues to exist until it is closed, even if it is "deleted" and a new file with the same name is created. (It's not coincidental that the unix system call to "delete" a file is actually called "unlink".) So, generally speaking, if a shell interpreter has some source file open, and you "edit" the file in the manner described above, the shell won't even see the changes since it still has the original file open.
[Note: as with all standards-based comments, the above is subject to multiple interpretations and there are various corner-cases, such as NFS. Pedants are welcome to fill the comments with exceptions.]
It is, of course, possible to modify files directly; it's just not very convenient for editing purposes, because while you can overwrite data in a file, you cannot delete or insert without shifting all following data, which would imply quite a lot of rewriting. Furthermore, while you were doing that shifting, the contents of the file would be unpredictable and processes which had the file open would suffer. In order to get away with this (as with database systems, for example), you need a sophisticated set of modification protocols and distributed locks; stuff which is well beyond the scope of a typical file editing utility.
So, if you want to edit a file while its being processed by a shell, you have two options:
You can append to the file. This should always work.
You can overwrite the file with new contents of exactly the same length. This may or may not work, depending on whether the shell has already read that part of the file or not. Since most file I/O involves read buffers, and since all the shells I know read an entire compound command before executing it, it is pretty unlikely that you can get away with this. It certainly wouldn't be reliable.
I don't know of any wording in the Posix standard which actually requires the possibility of appending to a script file while the file is being executed, so it might not work with every Posix compliant shell, much less with the current offering of almost- and sometimes-posix-compliant shells. So YMMV. But as far as I know, it does work reliably with bash.
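The append case is easy to demonstrate (a sketch; bash-specific, per the caveat above). The script appends a command to its own end before bash has read past the original EOF, and bash then executes it; note running the demo twice keeps appending, so it is a one-shot test.

```shell
cat > /tmp/selfmod.sh <<'EOF'
#!/bin/bash
echo first
echo 'echo appended' >> "$0"
EOF
chmod +x /tmp/selfmod.sh
/tmp/selfmod.sh
```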
As evidence, here's a "loop-free" implementation of the infamous 99 bottles of beer program in bash, which uses dd to overwrite and append (the overwriting is presumably safe because it substitutes the currently executing line, which is always the last line of the file, with a comment of exactly the same length; I did that so that the end result can be executed without the self-modifying behaviour.)
#!/bin/bash
if [[ $1 == reset ]]; then
printf "%s\n%-16s#\n" '####' 'next ${1:-99}' |
dd if=/dev/stdin of=$0 seek=$(grep -bom1 ^#### $0 | cut -f1 -d:) bs=1 2>/dev/null
exit
fi
step() {
s=s
one=one
case $beer in
2) beer=1; unset s;;
1) beer="No more"; one=it;;
"No more") beer=99; return 1;;
*) ((--beer));;
esac
}
next() {
step ${beer:=$(($1+1))}
refrain |
dd if=/dev/stdin of=$0 seek=$(grep -bom1 ^next\ $0 | cut -f1 -d:) bs=1 conv=notrunc 2>/dev/null
}
refrain() {
printf "%-17s\n" "# $beer bottles"
echo echo ${beer:-No more} bottle$s of beer on the wall, ${beer:-No more} bottle$s of beer.
if step; then
echo echo Take $one down, pass it around, $beer bottle$s of beer on the wall.
echo echo
echo next abcdefghijkl
else
echo echo Go to the store, buy some more, $beer bottle$s of beer on the wall.
fi
}
####
next ${1:-99} #
| What happens if you edit a script during execution? |
1,429,900,113,000 |
Recently we had a rather unpleasant situation with our customer - Raspberry Pi based "kiosk" used to display remote sensing data (nothing more fancy than a kiosk mode browser displaying a self-updating webpage from the data-collection server) failed to boot due to filesystem corruption. Ext4, Manual fsck required, the system will be a part of tomorrow's important presentation, service required immediately. Of course we can't require the customer to shut down the system nicely when switching it off for the night; the system must simply withstand such mistreatment.
I'd like to avoid such situations in the future, and I'd like to move the OS to a filesystem that would prevent this. There's a bunch of filesystems intended for MTD devices, where getting them to run on SD card (a standard block device) requires some serious hoop-jumping. There are also some other filesystems (journalling etc) that boast good resistance against corruption. I still need to see some reasonable comparison of their pros and cons.
Which filesystem available in Linux would provide the best resistance against corruption on unexpected power failures, without requiring jumping through impossible hoops (like yaffs2) in order to install to SD?
Wear-balancing is a plus, but not a requirement - SD cards usually have their own mechanisms, if less than perfect, though the system should be "gentle for flash" (systems like NTFS can murder an SD card within a month).
|
The best resistance against corruption on a single SD card would be offered by BTRFS in RAID1 mode with automatic scrub run every predefined period of time.
The benefits:
retaining ability to RW to the filesystem
modern, fully featured filesystem with very useful options for an RPi, like transparent compression and snapshots
designed with flash memory in mind (among other things)
Here is how to do it:
I run my RaspberryPi on ArchARM linux and my card is in the SD reader, so modify those instructions accordingly for other distros and /dev interfaces.
Here is an example partition layout:
/dev/mmcblk0p1: fat32 boot partition
/dev/mmcblk0p2: to be used as btrfs partition
/dev/mmcblk0p3: to be used as btrfs partition (mirrored with the above)
/dev/mmcblk0p4 (optional): swap
To get btrfs into RAID1, you create the filesystem like so:
mkfs.btrfs -m raid1 -d raid1 /dev/mmcblk0p2 /dev/mmcblk0p3
Then you rsync -aAXv to it your previously backed up system.
To get it to boot from BTRFS in raid1, you need to modify initramfs. Therefore, you need to do the following while you still have your system running on your old filesystem.
Raspberry does not normally use mkinitcpio so you must install it. Then, you need to add “btrfs” to MODULES array in mkinitcpio.conf and recreate initramfs with
mkinitcpio -g /boot/initrd -k YOUR_KERNEL_VERSION
To know what to type instead of YOUR_KERNEL_VERSION, run
ls /lib/modules
If you update the kernel, you MUST recreate initramfs BEFORE you reboot.
Then, you need to modify RPi’s boot files.
In cmdline.txt, you need to have
root=/dev/mmcblk0p2 initrd=0x01f00000 rootfstype=btrfs
and in config.txt, you need to add
initramfs initrd 0x01f00000
Once you’ve done all that and successfully booted into your btrfs RAID1 system, the only thing left is to set up periodic scrub (every 3-7 days) either with systemd timer (preferred), or cron (dcron) like so:
btrfs scrub start /
It will run on your filesystem comparing checksums of all the files and fixing them (replacing with the correct copy) if it finds any corruption.
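A systemd timer pair for this might look like the following sketch (the unit names and the 3-day interval are my assumptions, not part of the setup above); enable it with systemctl enable --now btrfs-scrub.timer:

```
# /etc/systemd/system/btrfs-scrub.service
[Unit]
Description=Scrub the btrfs root filesystem

[Service]
Type=oneshot
ExecStart=/usr/bin/btrfs scrub start -B /

# /etc/systemd/system/btrfs-scrub.timer
[Unit]
Description=Run btrfs scrub periodically

[Timer]
OnBootSec=15min
OnUnitActiveSec=3d

[Install]
WantedBy=timers.target
```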
The combination of BTRFS RAID1, single medium and Raspberry Pi make this pretty arcane stuff. It took some time and work to put all the pieces together, but here it is.
| Corruption-proof SD card filesystem for embedded Linux? |
1,429,900,113,000 |
Going through the linux 2.6.36 source code at lxr.linux.no, I could not find the ioctl() method in file_operations. Instead I found two new calls: unlocked_ioctl() and compat_ioctl().
What is the difference between ioctl(), unlocked_ioctl(), and compat_ioctl()?
|
Meta-answer: All the raw stuff happening to the Linux kernel goes through lkml (the Linux kernel mailing list). For explicative summaries, read or search lwn (Linux weekly news).
Answer: From The new way of ioctl() by Jonathan Corbet:
ioctl() is one of the remaining parts of the kernel which runs under the Big Kernel Lock (BKL). In the past, the usage of the BKL has made it possible for long-running ioctl() methods to create long latencies for unrelated processes.
Follows an explanation of the patch that introduced unlocked_ioctl and compat_ioctl into 2.6.11. The removal of the ioctl field happened a lot later, in 2.6.36.
Explanation: When ioctl was executed, it took the Big Kernel Lock (BKL), so nothing else could execute at the same time. This is very bad on a multiprocessor machine, so there was a big effort to get rid of the BKL. First, unlocked_ioctl was introduced. It lets each driver writer choose what lock to use instead. This can be difficult, so there was a period of transition during which old drivers still worked (using ioctl) but new drivers could use the improved interface (unlocked_ioctl). Eventually all drivers were converted and ioctl could be removed.
compat_ioctl is actually unrelated, even though it was added at the same time. Its purpose is to allow 32-bit userland programs to make ioctl calls on a 64-bit kernel. The meaning of the last argument to ioctl depends on the driver, so there is no way to do a driver-independent conversion.
| What is the difference between ioctl(), unlocked_ioctl() and compat_ioctl()? |
1,429,900,113,000 |
I've been seeing this in a lot of docker-entrypoint.sh scripts recently, and can't find an explanation online. My first thoughts are that it is something to do with signaling but that's a pretty wild guess.
|
The "$@" bit will expand to the list of positional parameters (usually the command line arguments), individually quoted to avoid word splitting and filename generation ("globbing").
The exec will replace the current process with the process resulting from executing its argument.
In short, exec "$@" will run the command given by the command line parameters in such a way that the current process is replaced by it (if the exec is able to execute the command at all).
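A small demonstration (a sketch; the entrypoint path is made up). Because of exec, the final command ends up with the same PID the entrypoint script had:

```shell
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
echo "entrypoint pid: $$"
exec "$@"                # replace this shell with the given command
EOF
chmod +x /tmp/entrypoint.sh
/tmp/entrypoint.sh sh -c 'echo "command pid: $$"'
```

Both lines print the same PID; in a container this is what lets the handed-off command run as PID 1 and receive signals directly instead of having a shell in between.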
| What does `exec "$@"` do? |
1,429,900,113,000 |
What is the difference between the device representation in /dev and the one in /sys/class?
Is one preferred over the other? Is there something one offers and the other doesn't?
|
The files in /dev are actual device files which UDEV creates at run time. The directory /sys/class is exported by the kernel at run time, exposing the hierarchy of the hardware through sysfs.
From the libudev and Sysfs Tutorial
excerpt
On Unix and Unix-like systems, hardware devices are accessed through special files (also called device files or nodes) located in the /dev directory. These files are read from and written to just like normal files, but instead of writing and reading data on a disk, they communicate directly with a kernel driver which then communicates with the hardware. There are many online resources describing /dev files in more detail. Traditionally, these special files were created at install time by the distribution, using the mknod command. In recent years, Linux systems began using udev to manage these /dev files at runtime. For example, udev will create nodes when devices are detected and delete them when devices are removed (including hotplug devices at runtime). This way, the /dev directory contains (for the most part) only entries for devices which actually exist on the system at the current time, as opposed to devices which could exist.
another excerpt
The directories in Sysfs contain the hierarchy of devices, as they are attached to the computer. For example, on my computer, the hidraw0 device is located under:
/sys/devices/pci0000:00/0000:00:12.2/usb1/1-5/1-5.4/1-5.4:1.0/0003:04D8:003F.0001/hidraw/hidraw0
Based on the path, the device is attached to (roughly, starting from the end) configuration 1 (:1.0) of the device attached to port number 4 of device 1-5, connected to USB controller 1 (usb1), connected to the PCI bus. While interesting, this directory path doesn't do us very much good, since it's dependent on how the hardware is physically connected to the computer.
Fortunately, Sysfs also provides a large number of symlinks, for easy access to devices without having to know which PCI and USB ports they are connected to. In /sys/class there is a directory for each different class of device.
Usage?
In general you use rules in /etc/udev/rules.d to augment your system. Rules can be constructed to run scripts when various hardware is present.
Once a system is up you can write scripts to work against either /dev or /sys, and it really comes down to personal preferences, but I would usually try and work against /sys and make use of tools such as udevadm to query UDEV for locations of various system resources.
$ udevadm info -a -p $(udevadm info -q path -n /dev/sda) | head -15
Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.
looking at device '/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda':
KERNEL=="sda"
SUBSYSTEM=="block"
DRIVER==""
ATTR{ro}=="0"
ATTR{size}=="976773168"
ATTR{stat}==" 6951659 2950164 183733008 41904530 16928577 18806302 597365181 580435555 0 138442293 622621324"
ATTR{range}=="16"
...
| Difference between /dev and /sys/class? |
1,429,900,113,000 |
I'd like to know if there is a way that I could cat a file like php.ini and remove all lines starting with ;
For example, if the file contained this:
; - Show all errors, except for notices
;
;error_reporting = E_ALL & ~E_NOTICE
;
; - Show only errors
;
;error_reporting = E_COMPILE_ERROR|E_ERROR|E_CORE_ERROR
;
; - Show all errors except for notices
;
error_reporting = E_ALL & ~E_NOTICE
and I ran the correct command cat | {remove comments command}, then I would end up with:
error_reporting = E_ALL & ~E_NOTICE
Note - I assumed that cat would be the best way to do this but I'm actually fine with the answer using another utility like awk, sed, egrep, etc.
|
You can use:
sed -e '/^;/d' php.ini
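For comparison, a couple of equivalent sketches (the sample file is made up; the second variant also drops comment lines that are indented before the ;):

```shell
printf '%s\n' '; comment' 'error_reporting = E_ALL & ~E_NOTICE' '  ; indented comment' > /tmp/demo.ini

grep -v '^;' /tmp/demo.ini                 # same effect as the sed command above
sed -e '/^[[:space:]]*;/d' /tmp/demo.ini   # also removes indented ; lines
```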
| How can I "cat" a file and remove commented lines? |
1,429,900,113,000 |
I want to try cgroup v2 but am not sure if it is installed on my linux machine
>> uname -r
4.14.66-041466-generic
Since cgroup v2 is available in 4.12.0-rc5, I assume it should be available in the kernel version I am using.
https://www.infradead.org/~mchehab/kernel_docs/unsorted/cgroup-v2.html
However, it does not seem like my system has cgroup v2 as the memory interface files mentioned in its documentation are not available on my system.
https://www.kernel.org/doc/Documentation/cgroup-v2.txt
It seems like I still have cgroup v1.
/sys/fs/cgroup/memory# ls
cgroup.clone_children memory.kmem.failcnt memory.kmem.tcp.usage_in_bytes memory.memsw.usage_in_bytes memory.swappiness
cgroup.event_control memory.kmem.limit_in_bytes memory.kmem.usage_in_bytes memory.move_charge_at_immigrate memory.usage_in_bytes
cgroup.procs memory.kmem.max_usage_in_bytes memory.limit_in_bytes memory.numa_stat memory.use_hierarchy
cgroup.sane_behavior memory.kmem.slabinfo memory.max_usage_in_bytes memory.oom_control notify_on_release
docker memory.kmem.tcp.failcnt memory.memsw.failcnt memory.pressure_level release_agent
memory.failcnt memory.kmem.tcp.limit_in_bytes memory.memsw.limit_in_bytes memory.soft_limit_in_bytes tasks
memory.force_empty memory.kmem.tcp.max_usage_in_bytes memory.memsw.max_usage_in_bytes memory.stat
Follow-up questions
Thanks Brian for the help. Please let me know if I should be creating a new question but I think it might be helpful to other if I just ask my questions here.
1) I am unable to add cgroup controllers, following the command in the doc
>> echo "+cpu +memory -io" > cgroup.subtree_control
However, I got "echo: write error: Invalid argument". Am I missing a prerequisite to this step?
2) I ran a docker container but the docker daemon log complained about not able to find "/sys/fs/cgroup/cpuset/docker/cpuset.cpus". It seems like docker is still expecting cgroupv1. What is the best way to enable cgroupv2 support on my docker daemon?
docker -v
Docker version 17.09.1-ce, build aedabb7
|
The easiest way is to attempt to mount the pseudo-filesystem. If you can mount it to a location, then you can attempt to manage processes with the interface:
mount -t cgroup2 none $MOUNT_POINT
I see that you cited the documentation above. One of the points you may be missing is that the paths still need to be created. There's no reason you must manage cgroup resources at any particular location. It's just convention.
For example, you could totally present procfs at /usr/monkeys... as long as the directory /usr/monkeys exists:
$ sudo mkdir /usr/monkeys
$ sudo mount -t proc none /usr/monkeys
$ ls -l /usr/monkeys
...
...
-r--r--r--. 1 root root 0 Sep 25 19:00 uptime
-r--r--r--. 1 root root 0 Sep 25 23:17 version
-r--------. 1 root root 0 Sep 25 23:17 vmallocinfo
-r--r--r--. 1 root root 0 Sep 25 18:57 vmstat
-r--r--r--. 1 root root 0 Sep 25 23:17 zoneinfo
$ sudo umount /usr/monkeys
In the same way I can do this with the cgroup v2 pseudo-filesystem:
$ sudo mount -t cgroup2 none /usr/monkeys
$ ls -l /usr/monkeys
total 0
-r--r--r--. 1 root root 0 Sep 23 16:58 cgroup.controllers
-rw-r--r--. 1 root root 0 Sep 23 16:58 cgroup.max.depth
-rw-r--r--. 1 root root 0 Sep 23 16:58 cgroup.max.descendants
-rw-r--r--. 1 root root 0 Sep 23 16:58 cgroup.procs
-r--r--r--. 1 root root 0 Sep 23 16:58 cgroup.stat
-rw-r--r--. 1 root root 0 Sep 23 16:58 cgroup.subtree_control
-rw-r--r--. 1 root root 0 Sep 23 16:58 cgroup.threads
drwxr-xr-x. 2 root root 0 Sep 23 16:58 init.scope
drwxr-xr-x. 2 root root 0 Sep 23 16:58 machine.slice
drwxr-xr-x. 59 root root 0 Sep 23 16:58 system.slice
drwxr-xr-x. 4 root root 0 Sep 23 16:58 user.slice
$ sudo umount /usr/monkeys
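Another quick check that doesn't require mounting anything (an extra suggestion, not part of the steps above): a kernel built with cgroup v2 support lists the filesystem type in /proc/filesystems.

```shell
grep -w cgroup2 /proc/filesystems || echo "cgroup2 not supported by this kernel"
```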
| How do I check cgroup v2 is installed on my machine? |
1,429,900,113,000 |
A solution that does not require additional tools would be preferred.
|
Almost like nsg's answer: use a lock directory. Directory creation is atomic under linux and unix and *BSD and a lot of other OSes.
LOCKDIR=/tmp/myscript.lock   # example path - pick one that all instances share
if mkdir -- "$LOCKDIR"
then
# Do important, exclusive stuff
if rmdir -- "$LOCKDIR"
then
echo "Victory is mine"
else
echo "Could not remove lock dir" >&2
fi
else
# Handle error condition
...
fi
You can put the PID of the locking sh into a file in the lock directory for debugging purposes, but don't fall into the trap of thinking you can check that PID to see if the locking process still executes. Lots of race conditions lie down that path.
| How to make sure only one instance of a bash script runs? |
1,429,900,113,000 |
Occasionally I have a thought that I want to write into a file while I am at the terminal. I would want these notes all in the same file, just listed one after the other. I would also like a date / time tag on each one.
Is it possible to do this without having to open the file each time? Can I just enter it into the terminal and have it appended to the file each time with a command or script?
I am using GNU BASH.
|
Write yourself a shell script called "n". Put this in it:
#!/bin/sh
notefile=/home/me/notefile
date >> $notefile
emacs $notefile -f end-of-buffer
I recommend this instead of cat >> notefile because:
One day you'll be in such a hurry that you'll fumblefinger the >> and type > instead and blow away your file.
Emacs starts in five one-hundredths of a second on my Mac Mini. It takes a tenth of a second to start on a ten year old Celeron-based system I have sitting around. If you can't wait that long to start typing, then you're already a machine and don't need to take notes. :)
If you insist on avoiding a text editor, use a shell function:
n () { date >> /home/me/notefile; cat >> /home/me/notefile; }
which should work in all shells claiming Bourne shell compatibility.
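A small variant of that function (names here are illustrative) lets you pass the note as command-line arguments, falling back to reading stdin as before:

```shell
# Appends a timestamp plus either the arguments or stdin to the note file.
n () {
    notefile=${NOTEFILE:-$HOME/notefile}
    date >> "$notefile"
    if [ $# -gt 0 ]; then
        printf '%s\n' "$*" >> "$notefile"
    else
        cat >> "$notefile"
    fi
}
```

With this, `n remember to buy milk` records a dated note without opening anything.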
| What's the quickest way to add text to a file from the command line? |
1,429,900,113,000 |
I expected to see a number of symbols in the libc.so.6 file, including printf. I used the nm tool to find them; however, it says there are no symbols in libc.so.6.
|
It's probably got its regular symbols stripped and what's left is its dynamic symbols, which you can get with nm -D.
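For example (the libc path varies by distribution and architecture, hence the lookup below):

```shell
# Locate libc, then dump its dynamic symbol table. Plain `nm` reports
# "no symbols" because the regular symbol table has been stripped.
libc=$(ldconfig -p 2>/dev/null | awk '/libc\.so\.6 /{print $NF; exit}')
[ -n "$libc" ] || libc=$(gcc -print-file-name=libc.so.6 2>/dev/null)
nm -D "$libc" | grep -w printf
```

The `-D` (or `--dynamic`) flag tells nm to read the dynamic symbol table, which is what the runtime linker uses and which survives stripping.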
| Why nm shows no symbols for /lib/i386-linux-gnu/libc.so.6? |
1,429,900,113,000 |
I need to check with a script, whether eth0 is configured. If so, the script will do nothing. Otherwise it will start wlan0. (I don't want both eth0 and wlan0 to be up at the same time).
What would be the easiest way to check, whether eth0 is already up?
I am using Debian Wheezy
CLARIFICATION:
I would like to check not only that the cable in eth0 is plugged in, but rather that the interface is configured (i.e. it has either static IP set, or it has received a DHCP IP). If cable is plugged in, but eth0 is not configured correctly, I want to start wlan0
|
You can do it many ways. Here an example:
$ cat /sys/class/net/eth0/operstate
up
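Note that operstate only tells you the link is up. Since the clarification asks whether the interface actually received an address, you can combine the link check with an address check; a sketch (interface name eth0 taken from the question, actions are placeholders):

```shell
if [ "$(cat /sys/class/net/eth0/operstate 2>/dev/null)" = up ] \
   && ip -4 addr show dev eth0 2>/dev/null | grep -q 'inet '; then
    echo "eth0 is up and has an IPv4 address"
else
    echo "eth0 not configured; bring up wlan0 here"
fi
```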
| check if interface eth0 is up (configured) |
1,429,900,113,000 |
How do I get read and write IOPS separately in Linux, using command line or in a programmatic way? I have installed sysstat package.
Please tell me how do I calculate these separately using sysstat package commands.
Or, is it possible to calculate them using file system?
ex: /proc or /sys or /dev
|
iostat is part of the sysstat package, which is able to show overall iops if desired, or show them separated by reads/writes.
Run iostat with the -d flag to only show the device information page, and -x for detailed information (separate read/write stats). You can specify the device you want information for by simply adding it afterwards on the command line.
Try running iostat -dx and looking at the summary to get a feel for the output. You can also use iostat -dx 1 to show a continuously refreshing output, which is useful for troubleshooting or live monitoring.
Using awk, field 4 will give you reads/second, while field 5 will give you writes/second.
Reads/second only:
iostat -dx <your disk name> | grep <your disk name> | awk '{ print $4; }'
Writes/sec only:
iostat -dx <your disk name> | grep <your disk name> | awk '{ print $5; }'
Reads/sec and writes/sec separated with a slash:
iostat -dx <your disk name> | grep <your disk name> | awk '{ print $4"/"$5; }'
Overall IOPS (what most people talk about):
iostat -d <your disk name> | grep <your disk name> | awk '{ print $2; }'
For example, running the last command with my main drive, /dev/sda, looks like this:
dan@daneel ~ $ iostat -dx sda | grep sda | awk '{ print $4"/"$5; }'
15.59/2.70
Note that you do not need to be root to run this either, making it useful for non-privileged users.
TL;DR: If you're just interested in sda, the following command will give you overall IOPS for sda:
iostat -d sda | grep sda | awk '{ print $2; }'
If you want to add up the IOPS across all devices, you can use awk again:
iostat -d | tail -n +4 | head -n -1 | awk '{s+=$2} END {print s}'
This produces output like so:
dan@daneel ~ $ iostat -d | tail -n +4 | head -n -1 | awk '{s+=$2} END {print s}'
18.88
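As for doing it straight from /proc, as the question suggests: the counters iostat itself reads live in /proc/diskstats, where field 4 is completed reads and field 8 is completed writes. These are cumulative since boot, so sample twice (say, one second apart) and subtract to get IOPS:

```shell
# Print cumulative read/write completions for the first few block devices.
awk '{print $3, "reads:", $4, "writes:", $8}' /proc/diskstats | head -n 3
```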
| How to get total read and total write IOPS in Linux? |
1,354,072,861,000 |
I can't figure out how to properly bring up the wi-fi card on my laptop. When I turn it on and issue
$ sudo iwconfig wlan0 txpower auto
$ sudo iwlist wlan0 scan
wlan0 Interface doesn't support scanning : Network is down
it reports that the network is down. Trying to bring it up fails too:
$ sudo ifup wlan0
wlan0 no private ioctls.
Failed to bring up wlan0.
Apparently I'm missing some basic low-level iw... command.
When I issue dhclient on the interface:
$ sudo dhclient -v wlan0
Internet Systems Consortium DHCP Client 4.2.2
Copyright 2004-2011 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
^C$
and interrupt it, it brings the device up somehow and then scanning etc. works. I'd like to avoid this obviously superfluous step.
|
sudo ip link set wlan0 up or sudo ifconfig wlan0 up.
Answer from Apr 13'17:
To elaborate on the answer by Martin:
ifup and ifdown commands are part of ifupdown package, which now is considered a legacy frontend for network configuration, compared to newer ones, such as network manager.
Upon ifup ifupdown reads configuration settings from /etc/network/interfaces; it runs pre-up, post-up and post-down scripts from /etc/network, which include starting /etc/wpasupplicant/ifupdown.sh that processes additional wpa-* configuration options for wpa wifi, in /etc/network/interfaces (see zcat /usr/share/doc/wpasupplicant/README.Debian.gz for documentation). For WEP wireless-tools package plays similar role to wpa-supplicant. iwconfig is from wireless-tools, too.
ifconfig, at the same time, is a lower-level tool, which is used by ifupdown and allows for more flexibility. For instance, a wifi adapter has six operating modes, and IIRC ifupdown covers only managed mode (plus roaming mode, which formally isn't a mode). With iwconfig and ifconfig you can enable e.g. monitor mode of your wireless card, while with ifupdown you won't be able to do that directly.
ip command is a newer tool that works on top of netlink sockets, a new way to configure the kernel network stack from userspace (tools like ifconfig are built on top of ioctl system calls).
| How to bring up a wi-fi interface from a command line? |
1,354,072,861,000 |
I have ubuntu server on digitalocean and I want to give someone a folder for their domain on my server, my problem is, I don't want that user to see my folders or files or to be able to move out their folder.
How can I restrict this user in their folder and not allow to him to move out and see other files/directories ?
|
I solved my problem by this way:
Create a new group
$ sudo addgroup exchangefiles
Create the chroot directory
$ sudo mkdir /var/www/GroupFolder/
$ sudo chmod g+rx /var/www/GroupFolder/
Create the group-writable directory
$ sudo mkdir -p /var/www/GroupFolder/files/
$ sudo chmod g+rwx /var/www/GroupFolder/files/
Give them both to the new group
$ sudo chgrp -R exchangefiles /var/www/GroupFolder/
after that I went to /etc/ssh/sshd_config and added to the end of the file:
Match Group exchangefiles
# Force the connection to use SFTP and chroot to the required directory.
ForceCommand internal-sftp
ChrootDirectory /var/www/GroupFolder/
# Disable tunneling, authentication agent, TCP and X11 forwarding.
PermitTunnel no
AllowAgentForwarding no
AllowTcpForwarding no
X11Forwarding no
Now I'm going to add new user with obama name to my group:
$ sudo adduser --ingroup exchangefiles obama
Now everything is done, so we need to restart the ssh service:
$ sudo service ssh restart
Note: the user now can't do anything outside the files directory; all of their files must live inside /var/www/GroupFolder/files/.
| How to restrict a user to one folder and not allow them to move out his folder |
1,354,072,861,000 |
I'm aware its best to create temporary files with mktemp, but what about named pipes?
I prefer things to be as POSIX compliant as possible, but Linux only is acceptable. Avoiding Bashisms is my only hard criteria, as I write in dash.
|
tmppipe=$(mktemp -u)
mkfifo -m 600 "$tmppipe"
Unlike regular file creation, which is prone to being hijacked by an existing file or a symbolic link, the creation of a named pipe through mkfifo or the underlying function either creates a new file in the specified place or fails. Something like : >foo is unsafe because if the attacker can predict the output of mktemp then the attacker can create the target file for himself. But mkfifo foo would fail in such a scenario.
If you need full POSIX portability, mkfifo -m 600 /tmp/myfifo is safe against hijacking but prone to a denial of service; without access to a strong random file name generator, you would need to manage retry attempts.
If you don't care for the subtle security problems around temporary files, you can follow a simple rule: create a private directory, and keep everything in there.
tmpdir=
cleanup () {
trap - EXIT
if [ -n "$tmpdir" ] ; then rm -rf "$tmpdir"; fi
if [ -n "$1" ]; then trap - $1; kill -$1 $$; fi
}
tmpdir=$(mktemp -d)
trap 'cleanup' EXIT
trap 'cleanup HUP' HUP
trap 'cleanup TERM' TERM
trap 'cleanup INT' INT
mkfifo "$tmpdir/pipe"
| Shell Script mktemp, what's the best method to create temporary named pipe? |
1,354,072,861,000 |
I have used of hdparm -n and smartctl -A but it always seem to be a "per drive" technique as a drive may answer for only one of these tools.
So, is there a standard way to get the drive temperature on Linux (HDD or SSD)? If not, what (other) tools can I use to get this information?
|
I like hddtemp, which provides a pretty standard way of getting the temperature for supported devices. It requires SMART support though.
Example Usage: sudo hddtemp /dev/sd[abcdefghi]
Example Response:
/dev/sda: WDC WD6401AALS-00J7B0: 31°C
/dev/sdb: WDC WD7501AALS-00J7B0: 30°C
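If you'd rather avoid extra tools: recent kernels (5.6 and later) include the drivetemp hwmon driver, which exposes SATA drive temperatures under /sys/class/hwmon like any other sensor. A rough sketch (what it prints depends entirely on which sensors the kernel sees; the fallback message is just for illustration):

```shell
found=0
for h in /sys/class/hwmon/hwmon*; do
    [ -e "$h/temp1_input" ] || continue
    # temp1_input is in millidegrees Celsius
    printf '%s: %d C\n' "$(cat "$h/name")" "$(( $(cat "$h/temp1_input") / 1000 ))"
    found=1
done
[ "$found" -eq 1 ] || echo "no hwmon temperature sensors found"
```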
| How to check drive temperature on Linux? |
1,354,072,861,000 |
I'm trying to remove some characters from file(UTF-8). I'm using tr for this purpose:
tr -cs '[[:alpha:][:space:]]' ' ' <testdata.dat
The file contains some foreign characters (like "Латвийская" or "àé"). tr doesn't seem to understand them: it treats them as non-alpha and removes them too.
I've tried changing some of my locale settings:
LC_CTYPE=C LC_COLLATE=C tr -cs '[[:alpha:][:space:]]' ' ' <testdata.dat
LC_CTYPE=ru_RU.UTF-8 LC_COLLATE=C tr -cs '[[:alpha:][:space:]]' ' ' <testdata.dat
LC_CTYPE=ru_RU.UTF-8 LC_COLLATE=ru_RU.UTF-8 tr -cs '[[:alpha:][:space:]]' ' ' <testdata.dat
Unfortunately, none of these worked.
How can I make tr understand Unicode?
|
That's a known (1, 2, 3, 4, 5, 6) limitation of the GNU implementation of tr.
It's not as much that it doesn't support foreign, non-English or non-ASCII characters, but that it doesn't support multi-byte characters.
Those Cyrillic characters would be treated OK, if written in the iso8859-5 (single-byte per character) character set (and your locale was using that charset), but your problem is that you're using UTF-8 where non-ASCII characters are encoded in 2 or more bytes.
GNU's got a plan (see also) to fix that and work is under way but not there yet.
Neither FreeBSD's nor Solaris's tr has this problem.
In the meantime, for most use cases of tr, you can use GNU sed or GNU awk, which do support multi-byte characters.
For instance, your:
tr -cs '[[:alpha:][:space:]]' ' '
could be written:
gsed -E 's/( |[^[:space:][:alpha:]])+/ /g'
or:
gawk -v RS='( |[^[:space:][:alpha:]])+' '{printf "%s", sep $0; sep=" "}'
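As a quick sanity check of the sed replacement (GNU sed, invoked as plain sed on Linux; an ASCII sample is used here so it behaves the same in any locale, and in a UTF-8 locale the same command also preserves multi-byte letters like the Cyrillic above):

```shell
# Runs of spaces and non-alpha characters collapse to a single space:
printf 'hello,,,world 42\n' | sed -E 's/( |[^[:space:][:alpha:]])+/ /g'
```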
To convert between lower and upper case (tr '[:upper:]' '[:lower:]'):
gsed 's/[[:upper:]]/\l&/g'
(that l is a lowercase L, not the 1 digit).
or:
gawk '{print tolower($0)}'
For portability, perl is another alternative:
perl -Mopen=locale -pe 's/([^[:space:][:alpha:]]| )+/ /g'
perl -Mopen=locale -pe '$_=lc$_'
If you know the data can be represented in a single-byte character set, then you can process it in that charset:
(export LC_ALL=ru_RU.iso88595
iconv -f utf-8 |
tr -cs '[:alpha:][:space:]' ' ' |
iconv -t utf-8) < Russian-file.utf8
| How to make tr aware of non-ascii(unicode) characters? |
1,354,072,861,000 |
I've recently begun supporting Linux installed on devices with built-in nvme ssds. I noticed the device files had an extra number, beyond a number identifying the drive number and the partition number. IDE/SATA/SCSI drives normally only have a drive letter and partition number.
For example: /dev/nvme0n1p2
I got to wondering what the n1 part was, and after a bit of searching, it looks like that identifies an nvme 'namespace'. The definitions for it were kind of vague: "An NVMe namespace is a quantity of non-volatile memory (NVM) that can be formatted into logical blocks."
So, does this act like a partition that is defined at the hardware controller level, and not in an MBR or GPT partition table? Can a namespace span multiple physical nvme ssd's? E.g. can you create a namespace that pools together storage from multiple ssd's into a single logical namespace, similar to RAID 0?
What would you do with an NVME namespace that you can't already achieve using partition tables or LVM or a filesystem that can manage multiple volumes (like ZFS, Btrfs, etc)?
Also, why does it seem like the namespace numbering starts at 1 instead of 0? Is that just something to do with how NVME tracks the namespace numbers at a low level (e.g. partitions also start at 1, not 0, because that is how the standard for partition numbers was set, so the Linux kernel just uses whatever the partition number that is stored on disk is - I guess nvme works the same way?)
|
In NVM Express and related standards, controllers give access to storage divided into one or more namespaces. Namespaces can be created and deleted via the controller, as long as there is room for them (or the underlying storage supports thin provisioning), and multiple controllers can provide access to a shared namespace. How the underlying storage is organised isn’t specified by the standard, as far as I can tell.
However, typical NVMe SSDs can't be combined this way: each provides its own storage and controller attached to a PCI Express port, and the access point is the controller, above namespaces, so a namespace can't span multiple drives (although, as noted, multiple controllers can provide access to a shared namespace). It's better to think of namespaces as something akin to SCSI LUNs as used in enterprise storage (SANs etc.).
Namespace numbering starts at 1 because that’s how per-controller namespace identifiers work. Namespaces also have longer, globally-unique identifiers.
Namespaces can be manipulated using the nvme command, which provides support for low-level NVMe features including:
formatting, which performs a low-level format and allows various features to be used (secure erase, LBA format selection...);
attaching and detaching, which allows controllers to be attached to or detached from a namespace (if they support it and the namespace allows it).
Attaching and detaching isn’t something you’ll come across in laptop or desktop NVMe drives. You’d use it with NVMe storage bays such as those sold by Dell EMC, which replace the iSCSI SANs of the past.
See the NVM Express standards for details (they’re relatively easy to read), and this NVM Express tutorial presentation for a good introduction.
| What are nvme namespaces? How do they work? |
1,354,072,861,000 |
I know there are many differences between OSX and Linux, but what makes them so totally different, that makes them fundamentally incompatible?
|
The whole ABI is different, not just the binary format (Mach-O versus ELF) as sepp2k mentioned.
For example, while both Linux and Darwin/XNU (the kernel of OS X) use sc on PowerPC and int 0x80/sysenter/syscall on x86 for syscall entry, there's not much more in common from there on.
Darwin directs negative syscall numbers at the Mach microkernel and positive syscall numbers at the BSD monolithic kernel — see xnu/osfmk/mach/syscall_sw.h and xnu/bsd/kern/syscalls.master. Linux's syscall numbers vary by architecture — see linux/arch/powerpc/include/asm/unistd.h, linux/arch/x86/include/asm/unistd_32.h, and linux/arch/x86/include/asm/unistd_64.h — but are all nonnegative. So obviously syscall numbers, syscall arguments, and even which syscalls exist are different.
The standard C runtime libraries are different too; Darwin mostly inherits FreeBSD's libc, while Linux typically uses glibc (but there are alternatives, like eglibc and dietlibc and uclibc and Bionic).
Not to mention that the whole graphics stack is different; ignoring the whole Cocoa Objective-C libraries, GUI programs on OS X talk to WindowServer over Mach ports, while on Linux, GUI programs usually talk to the X server over UNIX domain sockets using the X11 protocol. Of course there are exceptions; you can run X on Darwin, and you can bypass X on Linux, but OS X applications definitely do not talk X.
Like Wine, if somebody put the work into
implementing a binary loader for Mach-O
trapping every XNU syscall and converting it to appropriate Linux syscalls
writing replacements for OS X libraries like CoreFoundation as needed
writing replacements for OS X services like WindowServer as needed
then running an OS X program "natively" on Linux could be possible. Years ago, Kyle Moffet did some work on the first item, creating a prototype binfmt_mach-o for Linux, but it was never completed, and I know of no other similar projects.
(In theory this is quite possible, and similar efforts have been done many times; in addition to Wine, Linux itself has support for running binaries from other UNIXes like HP-UX and Tru64, and the Glendix project aims to bring Plan 9 compatibility to Linux.)
Somebody has put in the effort to implement a Mach-O binary loader and API translator for Linux!
shinh/maloader - GitHub takes the Wine-like approach of loading the binary and trapping/translating all the library calls in userspace. It completely ignores syscalls and all graphical-related libraries, but is enough to get many console programs working.
Darling builds upon maloader, adding libraries and other supporting runtime bits.
| What makes OSX programs not runnable on Linux? |
1,354,072,861,000 |
In Linux, in /proc/PID/fd/X, the links for file descriptors that are pipes or sockets have a number, like:
l-wx------ 1 user user 64 Mar 24 00:05 1 -> pipe:[6839]
l-wx------ 1 user user 64 Mar 24 00:05 2 -> pipe:[6839]
lrwx------ 1 user user 64 Mar 24 00:05 3 -> socket:[3142925]
lrwx------ 1 user user 64 Mar 24 00:05 4 -> socket:[3142926]
lr-x------ 1 user user 64 Mar 24 00:05 5 -> pipe:[3142927]
l-wx------ 1 user user 64 Mar 24 00:05 6 -> pipe:[3142927]
lrwx------ 1 user user 64 Mar 24 00:05 7 -> socket:[3142930]
lrwx------ 1 user user 64 Mar 24 00:05 8 -> socket:[3142932]
lr-x------ 1 user user 64 Mar 24 00:05 9 -> pipe:[9837788]
Like on the first line: 6839. What is that number representing?
|
That's the inode number for the pipe or socket in question.
A pipe is a unidirectional channel, with a write end and a read end. In your example, it looks like FD 5 and FD 6 are talking to each other, since the inode numbers are the same. (Maybe not, though. See below.)
More common than seeing a program talking to itself over a pipe is a pair of separate programs talking to each other, typically because you set up a pipe between them with a shell:
shell-1$ ls -lR / | less
Then in another terminal window:
shell-2$ ...find the ls and less PIDs with ps; say 4242 and 4243 for this example...
shell-2$ ls -l /proc/4242/fd | grep pipe
l-wx------ 1 user user 64 Mar 24 12:18 1 -> pipe:[222536390]
shell-2$ ls -l /proc/4243/fd | grep pipe
l-wx------ 1 user user 64 Mar 24 12:18 0 -> pipe:[222536390]
This says that PID 4242's standard output (FD 1, by convention) is connected to a pipe with inode number 222536390, and that PID 4243's standard input (FD 0) is connected to the same pipe.
All of which is a long way of saying that ls's output is being sent to less's input.
Getting back to your example, FD 1 and FD 2 are almost certainly not talking to each other. Most likely this is the result of tying stdout (FD 1) and stderr (FD 2) together, so they both go to the same destination. You can do that with a Bourne shell like this:
$ some-program 2>&1 | some-other-program
So, if you poked around in /proc/$PID_OF_SOME_OTHER_PROGRAM/fd, you'd find a third FD attached to a pipe with the same inode number as is attached to FDs 1 and 2 for the some-program instance. This may also be what's happening with FDs 5 and 6 in your example, but I have no ready theory how these two FDs got tied together. You'd have to know what the program is doing internally to figure that out.
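You can watch one of these links appear for a pipe you create yourself. In a | b, b's file descriptor 0 is the read end of the pipe, so reading the /proc symlink from inside the pipeline shows the inode:

```shell
# readlink's own stdin is the pipe here, so the symlink reveals its inode.
echo hi | readlink /proc/self/fd/0
# prints something like: pipe:[3142927]
```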
| /proc/PID/fd/X link number |
1,354,072,861,000 |
I have the following file:
---------- 1 Steve Steve 341 2017-12-21 01:51 myFile.txt
I switched the user to root in the terminal, and I have noticed the following behaviors:
I can read this file and write to it.
I can't execute this file.
If I set the x bit in the user permissions (---x------) or the group permissions (------x---) or the others permissions (---------x) of the file, then I would be able to execute this file.
Can anyone explain to me or point me to a tutorial that explains all of the rules that apply when the root user is dealing with files and directories?
|
Privileged access to files and directories is actually determined by capabilities, not just by being root or not. In practice, root usually has all possible capabilities, but there are situations where all/many of them could be dropped, or some given to other users (their processes).
In brief, you already described how the access control checks work for a privileged process. Here's how the different capabilities actually affect it:
The main capability here is CAP_DAC_OVERRIDE, a process that has it can "bypass file read, write, and execute permission checks". That includes reading and writing to any files, as well as reading, writing and accessing directories.
It doesn't actually apply to executing files that are not marked as executable. The comment in generic_permission (fs/namei.c), before the access checks for files, says that
Read/write DACs are always overridable. Executable DACs are overridable when there is at least one exec bit set.
And the code checks that there's at least one x bit set if you're trying to execute the file. I suspect that's only a convenience feature, to prevent accidentally running random data files and getting errors or odd results.
Anyway, if you can override permissions, you could just make an executable copy and run that. (In theory it might make a difference for setuid files if a process were capable of overriding file permissions (CAP_DAC_OVERRIDE) but didn't have other related capabilities (CAP_FSETID/CAP_FOWNER/CAP_SETUID). But having CAP_DAC_OVERRIDE allows editing /etc/shadow and the like, so it's approximately equal to having full root access anyway.)
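The "at least one exec bit" rule is easy to observe from a shell, root or not (using a throwaway file from mktemp):

```shell
f=$(mktemp)
printf '#!/bin/sh\necho ran\n' > "$f"
chmod 644 "$f"              # rw-r--r-- : no x bit anywhere
"$f" 2>/dev/null || echo "mode 644: exec denied"
chmod 755 "$f"              # one x bit is enough
"$f"                        # prints: ran
rm -f "$f"
```

Even root gets EACCES on the mode-644 attempt, because execute DACs are only overridable when some exec bit is set.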
There's also the CAP_DAC_READ_SEARCH capability that allows to read any files and access any directories, but not to execute or write to them; and CAP_FOWNER that allows a process to do stuff that's usually reserved only for the file owner, like changing the permission bits and file group.
Overriding the sticky bit on directories is mentioned only under CAP_FOWNER, so it seems that CAP_DAC_OVERRIDE would not be enough to ignore that. (It would give you write permission, but usually in sticky directories you have that anyway, and +t limits it.)
(I think special devices count as "files" here. At least generic_permission() only has a type check for directories, but I didn't check outside of that.)
Of course, there are still situations where even capabilities will not help you modify files:
some files in /proc and /sys, since they're not really actual files
SELinux and other security modules that might limit root
chattr immutable +i and append only +a flags on ext2/ext3/ext4, both of which stop even root, and prevent also file renames etc.
network filesystems, where the server can do its own access control, e.g. root_squash in NFS maps root to nobody
FUSE, which I assume could do anything
read-only mounts
read-only devices
| How do file permissions work for the "root" user? |
1,354,072,861,000 |
I was using a Makefile from the book "Advanced Linux Programming (2001)" [code]. It was strange for me to see that GNU make does compile the code correctly, without even specifying a compiler in the Makefile. It's like baking without any recipe!
This is a minimal version of the code:
test.c
int main(){}
Makefile
all: test
and make really works! This is the command it executes:
cc test.c -o test
I couldn't find anything useful in the documentation. How this is possible?
P.S. One additional note: Even the language is not specified; Because test.c is available, GNU make uses cc. If there exists test.cpp or test.cc (when there is no test.c), it uses g++ (and not c++).
|
Make does this using its built-in rules. These tell it in particular how to compile C code and how to link single-object programs.
You actually don't even need a Makefile:
make test
would work without one.
To see the hidden rules that make all of this possible, use the -p option with no Makefile:
make -p -f /dev/null
The -r option disables these built-in rules.
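Because those built-in rules are parameterised by variables such as CC and CFLAGS, you can steer the invisible recipe from the command line without writing a Makefile at all. A throwaway demonstration (demo.c is just a scratch file name):

```shell
dir=$(mktemp -d) && cd "$dir"
printf 'int main(void){ return 0; }\n' > demo.c
make CFLAGS=-O2 demo      # implicit rule expands to roughly: cc -O2 demo.c -o demo
./demo && echo "built and ran"
cd / && rm -rf "$dir"
```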
As pointed out by alephzero, Make has had built-in rules for a very long time (if not always); Stuart Feldman's first version in Unix V7 defines them in files.c, and his 1979 paper mentions them. They're also part of the POSIX specification. (This doesn't mean that all implementations of Make support them — the old Borland Make for DOS doesn't, at least up to version 3.0.)
| How does this Makefile makes C program without even specifying a compiler? |
1,354,072,861,000 |
I have few directores inside a folder like below -
teckapp@machineA:/opt/keeper$ ls -ltrh
total 8.0K
drwxr-xr-x 10 teckapp cloudmgr 4.0K Feb 9 10:22 keeper-3.4.6
drwxr-xr-x 3 teckapp cloudmgr 4.0K Feb 12 01:44 data
I have some other folder as well in some other machines for which I need to change the permission to the above one like this drwxr-xr-x.
Meaning, how can I change any folder's permissions to drwxr-xr-x? I know I need to use the chmod command for this, but what value should I use?
|
To apply those permissions to a directory:
chmod 755 directory_name
To apply to all directories inside the current directory:
chmod 755 */
If you want to modify all directories and subdirectories, you'll need to combine find with chmod:
find . -type d -exec chmod 755 {} +
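For reference, drwxr-xr-x is 755 because each triplet is read as a binary number (rwx = 7, r-x = 5); the symbolic form avoids the arithmetic entirely. A quick check on a scratch directory:

```shell
d=$(mktemp -d)
chmod u=rwx,go=rx "$d"    # same effect as chmod 755
ls -ld "$d"               # first column shows drwxr-xr-x
rm -rf "$d"
```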
| How to set the permission drwxr-xr-x to other folders? |
1,354,072,861,000 |
I was just wondering why the Linux NFS server is implemented in the kernel as opposed to a userspace application?
I know a userspace NFS daemon exists, but it's not the standard method for providing NFS server services.
I would think that running the NFS server as a userspace application would be the preferred approach, as it can provide added security by having a daemon run in userspace instead of the kernel. It would also fit with the common Linux principle of doing one thing and doing it well (and that daemons shouldn't be a job for the kernel).
In fact the only benefit I can think of running in the kernel would a performance boost from context switching (and that is a debatable reason).
So is there any documented reason why it is implemented the way it is? I tried googling around but couldn't find anything.
There seems to be a lot of confusion, please note I am not asking about mounting filesystems, I am asking about providing the server side of a network filesystem. There is a very distinct difference. Mounting a filesystem locally requires support for the filesystem in the kernel, providing it does not (eg samba or unfs3).
|
unfs3 is dead as far as I know; Ganesha is the most active userspace NFS server project right now, though it is not completely mature.
Although it serves different protocols, Samba is an example of a successful
file server that operates in userspace.
I haven't seen a recent performance comparison.
Some other issues:
Ordinary applications look files up by pathname, but nfsd needs to be able to
look them up by filehandle. This is tricky and requires support from the
filesystem (and not all filesystems can support it). In the past it was not
possible to do this from userspace, but more recent kernels have added
name_to_handle_at(2) and open_by_handle_at(2) system calls.
I seem to recall blocking file-locking calls being a problem; I'm not sure
how userspace servers handle them these days. (Do you tie up a server thread
waiting on the lock, or do you poll?)
Newer file system semantics (change attributes, delegations, share locks)
may be implemented
more easily in kernel first (in theory--they mostly haven't been yet).
You don't want to have to check permissions, quotas, etc., by hand--instead
you want to change your uid and rely on the common kernel vfs code to do
that. And Linux has a system call (setfsuid(2)) that should do that. For
reasons I forget, I think that's proved more complicated to use in servers
than it should be.
In general, a kernel server's strengths are closer integration with the vfs and the exported filesystem. We can make up for that by providing more kernel interfaces (such as the filehandle system calls), but that's not easy. On the other hand, some of the filesystems people want to export these days (like gluster) actually live mainly in userspace. Those can be exported by the kernel nfsd using FUSE--but again extensions to the FUSE interfaces may be required for newer features, and there may be performance issues.
Short version: good question!
| Why is Linux NFS server implemented in the kernel as opposed to userspace? |
1,354,072,861,000 |
I read once that one advantage of a microkernel architecture is that you can stop/start essential services like networking and filesystems, without needing to restart the whole system. But considering that Linux kernel nowadays (was it always the case?) offers the option to use modules to achieve the same effect, what are the (remaining) advantages of a microkernel?
|
Microkernels require less code to be run in the innermost, most trusted mode than monolithic kernels. This has many aspects, such as:
Microkernels allow non-fundamental features (such as drivers for hardware that is not connected or not in use) to be loaded and unloaded at will. This is mostly achievable on Linux, through modules.
Microkernels are more robust: if a non-kernel component crashes, it won't take the whole system with it. A buggy filesystem or device driver can crash a Linux system. Linux doesn't have any way to mitigate these problems other than coding practices and testing.
Microkernels have a smaller trusted computing base. So even a malicious device driver or filesystem cannot take control of the whole system (for example a driver of dubious origin for your latest USB gadget wouldn't be able to read your hard disk).
A consequence of the previous point is that ordinary users can load their own components that would be kernel components in a monolithic kernel.
Unix GUIs are provided via X window, which is userland code (except for (part of) the video device driver). Many modern unices allow ordinary users to load filesystem drivers through FUSE. Some of the Linux network packet filtering can be done in userland. However, device drivers, schedulers, memory managers, and most networking protocols are still kernel-only.
A classic (if dated) read about Linux and microkernels is the Tanenbaum–Torvalds debate. Twenty years later, one could say that Linux is very very slowly moving towards a microkernel structure (loadable modules appeared early on, FUSE is more recent), but there is still a long way to go.
Another thing that has changed is the increased relevance of virtualization on desktop and high-end embedded computers: for some purposes, the relevant distinction is not between the kernel and userland but between the hypervisor and the guest OSes.
| How does Linux kernel compare to microkernel architectures? |
1,354,072,861,000 |
I am trying to change my username, as per advice here however after running the following command:
CurrentName@HostName ~ $ sudo usermod -l TheNameIWantToChange -d /home/TheNameIWantToChange -m CurrentName
Terminal responds with:
CurrentName@HostName ~ $ usermod: user CurrentName is currently used by process 2491
And the username stays the same. Does anybody know how I could fix this and change my username after all?
|
To quote man usermod :
CAVEATS
You must make certain that the named user is not executing any processes when this command is being executed if the user's numerical user ID, the user's name, or the user's home directory is being changed. usermod checks this on Linux, but only check if the user is logged in according to utmp on other architectures.
So, you need to make sure the user you're renaming is not logged in.
Also, I note you're not running this as root. Either run it as root, or run with sudo usermod.
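A hedged sketch of the whole procedure (the account names olduser/newuser are placeholders, and the commented commands must run as root):

```shell
# 1. See which processes still run as a given user (here, the current one):
ps -o pid=,comm= -u "$(id -un)" | head -n 5
# 2. Log the user out everywhere, then from a root shell (e.g. a text console
#    or a different admin account) rename the account:
# pkill -u olduser                                # stop any leftover processes
# usermod -l newuser -d /home/newuser -m olduser  # rename user and move home
# groupmod -n newuser olduser                     # usually rename the group too
```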
| When trying to change username, terminal tells me user is currently used by process |
1,354,072,861,000 |
On my Arch Linux system (Linux Kernel 3.14.2) bind mounts do not respect the read only option
# mkdir test
# mount --bind -o ro test/ /mnt
# touch /mnt/foo
creates the file /mnt/foo. The relevant entry in /proc/mounts is
/dev/sda2 /mnt ext4 rw,noatime,data=ordered 0 0
The mount options do not match my requested options, but do match both the read/write behaviour of the bind mount and the options used to originally mount /dev/sda2 on /
/dev/sda2 / ext4 rw,noatime,data=ordered 0 0
If, however, I remount the mount then it respects the read only option
# mount --bind -o remount,ro test/ /mnt
# touch /mnt/bar
touch: cannot touch ‘/mnt/bar’: Read-only file system
and the relevant entry in /proc/mounts
/dev/sda2 /mnt ext4 ro,relatime,data=ordered 0 0
looks like what I might expect (although in truth I would expect to see the full path of the test directory). The entry in /proc/mounts for the original mount of /dev/sda2 on / is also unchanged and remains read/write:
/dev/sda2 / ext4 rw,noatime,data=ordered 0 0
This behaviour and the work around have been known since at least 2008 and are documented in the man page of mount
Note that the filesystem mount options will remain the same as those on the original mount point, and cannot be changed by passing the -o option along with --bind/--rbind. The mount options can be changed by a separate remount command
Not all distributions behave the same. Arch seems to silently ignore the read-only option, while Debian generates a warning when the bind mount does not get mounted read-only:
mount: warning: /mnt seems to be mounted read-write.
There are reports that this behaviour was "fixed" in Debian Lenny and Squeeze, although the fix does not appear to be universal, and it no longer works in Debian Wheezy. What is the difficulty associated with making a bind mount respect the read-only option on the initial mount?
|
Bind mount is just... well... a bind mount. I.e. it's not a new mount. It just "links"/"exposes"/"considers" a subdirectory as a new mount point. As such it cannot alter the mount parameters. That's why you're getting complaints:
# mount /mnt/1/lala /mnt/2 -o bind,ro
mount: warning: /mnt/2 seems to be mounted read-write.
But as you said a normal bind mount works:
# mount /mnt/1/lala /mnt/2 -o bind
And then a ro remount also works:
# mount /mnt/1/lala /mnt/2 -o bind,remount,ro
However what happens is that you're changing the whole mount and not just this bind mount. If you take a look at /proc/mounts you'll see that both bind mount and the original mount change to read-only:
/dev/loop0 /mnt/1 ext2 ro,relatime,errors=continue,user_xattr,acl 0 0
/dev/loop0 /mnt/2 ext2 ro,relatime,errors=continue,user_xattr,acl 0 0
So what you're doing is like changing the initial mount to a read-only mount and then doing a bind mount which will of course be read-only.
UPDATE 2016-07-20:
The following are true for 4.5 kernels, but not true for 4.3 kernels (This is wrong. See update #2 below):
The kernel has two flags that control read-only:
The MS_READONLY: Indicating whether the mount is read-only
The MNT_READONLY: Indicating whether the "user" wants it read-only
On a 4.5 kernel, doing a mount -o bind,ro will actually do the trick. For example, this:
# mkdir /tmp/test
# mkdir /tmp/test/a /tmp/test/b
# mount -t tmpfs none /tmp/test/a
# mkdir /tmp/test/a/d
# mount -o bind,ro /tmp/test/a/d /tmp/test/b
will create a read-only bind mount of /tmp/test/a/d to /tmp/test/b, which will be visible in /proc/mounts as:
none /tmp/test/a tmpfs rw,relatime 0 0
none /tmp/test/b tmpfs ro,relatime 0 0
A more detailed view is visible in /proc/self/mountinfo, which takes into consideration the user view (namespace). The relevant lines will be these:
363 74 0:49 / /tmp/test/a rw,relatime shared:273 - tmpfs none rw
368 74 0:49 /d /tmp/test/b ro,relatime shared:273 - tmpfs none rw
Where on the second line, you can see that it says both ro (MNT_READONLY) and rw (!MS_READONLY).
The end result is this:
# echo a > /tmp/test/a/d/f
# echo a > /tmp/test/b/f
-su: /tmp/test/b/f: Read-only file system
UPDATE 2016-07-20 #2:
A bit more digging into this shows that the behavior in fact depends on the version of libmount which is part of util-linux. Support for this was added with this commit and was released with version 2.27:
commit 9ac77b8a78452eab0612523d27fee52159f5016a
Author: Karel Zak
Date: Mon Aug 17 11:54:26 2015 +0200
libmount: add support for "bind,ro"
Now it's necessary to use two mount(8) calls to create a read-only
mount:
mount /foo /bar -o bind
mount /bar -o remount,ro,bind
This patch allows to specify "bind,ro" and the remount is done
automatically by libmount by additional mount(2) syscall. It's not
atomic of course.
Signed-off-by: Karel Zak
which also provides the workaround. The behavior can be seen using strace on an older and a newer mount:
Old:
mount("/tmp/test/a/d", "/tmp/test/b", 0x222e240, MS_MGC_VAL|MS_RDONLY|MS_BIND, NULL) = 0 <0.000681>
New:
mount("/tmp/test/a/d", "/tmp/test/b", 0x1a8ee90, MS_MGC_VAL|MS_RDONLY|MS_BIND, NULL) = 0 <0.011492>
mount("none", "/tmp/test/b", NULL, MS_RDONLY|MS_REMOUNT|MS_BIND, NULL) = 0 <0.006281>
Conclusion:
To achieve the desired result one needs to run two commands (as @Thomas already said):
mount SRC DST -o bind
mount DST -o remount,ro,bind
Newer versions of mount (util-linux >=2.27) do this automatically when one runs
mount SRC DST -o bind,ro
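To check which behaviour your own system has, you can inspect the util-linux version and then verify the mount flags actually applied (a sketch; /mnt is a placeholder target):

```shell
# util-linux >= 2.27 handles "-o bind,ro" with an automatic remount
mount --version
# After creating a bind mount, confirm the read-only flag actually applied:
# findmnt -o TARGET,OPTIONS /mnt
```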
| Why doesn't mount respect the read only option for bind mounts? |