1,360,312,318,000
I wanted to find out how many cores my system has, so I searched for the same question on Google. I found some commands, such as lscpu. When I tried this command, it gave me the following result:

$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 23
Stepping:              10
CPU MHz:               1998.000
BogoMIPS:              5302.48
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              2048K
NUMA node0 CPU(s):     0-3

In particular, this output shows:

CPU(s):                4
Core(s) per socket:    4
CPU family:            6

Which of those indicates the number of cores on a Linux system? Is there another command that tells the number of cores, or is my assumption completely wrong?
You have to look at sockets and cores per socket. In this case you have 1 physical CPU (socket) which has 4 cores (cores per socket).
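A sketch of how those lscpu figures combine, plus the usual commands for the logical CPU count (field names assume the English lscpu output shown above):

```shell
# Physical cores = Socket(s) x Core(s) per socket, parsed from lscpu.
sockets=$(lscpu | awk -F: '/^Socket\(s\)/ {gsub(/ /, "", $2); print $2}')
per_socket=$(lscpu | awk -F: '/^Core\(s\) per socket/ {gsub(/ /, "", $2); print $2}')
echo "Physical cores: $((sockets * per_socket))"

# Logical CPUs (what the kernel schedules on; counts hyper-threads too):
nproc
grep -c '^processor' /proc/cpuinfo
```

On the system in the question, all of these agree (4), because there is one thread per core.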
How to know number of cores of a system in Linux?
I need to find my external IP address from a shell script. At the moment I use this function:

myip () {
  lwp-request -o text checkip.dyndns.org | awk '{ print $NF }'
}

But it relies on perl-libwww, perl-html-format, and perl-html-tree being installed. What other ways can I get my external IP?
I'd recommend getting it directly from a DNS server. Most of the other answers involve going over HTTP to a remote server. Some of them require parsing of the output, or rely on the User-Agent header to make the server respond in plain text. Those change quite frequently (go down, change their name, put up ads, might change output format, etc.).

The DNS response protocol is standardised (the format will stay compatible), and historically, DNS services (Akamai, Google Public DNS, OpenDNS, ...) tend to survive much longer and are more stable, more scalable, and generally better looked-after than whatever new hip whatismyip dot-com HTTP service is hot today. This method is also inherently faster (be it only by a few milliseconds!).

Using dig with an OpenDNS resolver:

$ dig @resolver4.opendns.com myip.opendns.com +short

Perhaps alias it in your bashrc so it's easy to remember:

# https://unix.stackexchange.com/a/81699/37512
alias wanip='dig @resolver4.opendns.com myip.opendns.com +short'
alias wanip4='dig @resolver4.opendns.com myip.opendns.com +short -4'
alias wanip6='dig @resolver1.ipv6-sandbox.opendns.com AAAA myip.opendns.com +short -6'

It responds with a plain IP address:

$ wanip # wanip4, or wanip6
80.100.192.168 # or, 2606:4700:4700::1111

Syntax (abbreviated from https://ss64.com/bash/dig.html):

usage:  dig [@global-dnsserver] [q-type] <hostname> <d-opt> [q-opt]

q-type   one of (A, ANY, AAAA, TXT, MX, ...). Default: A.
d-opt    ...
         +[no]short        (Display nothing except short form of answer)
         ...
q-opt    one of:
         -4                (use IPv4 query transport only)
         -6                (use IPv6 query transport only)
         ...

The ANY query type returns either an AAAA or an A record. To prefer an IPv4 or IPv6 connection specifically, use the -4 or -6 options accordingly. To require the response to be an IPv4 address, replace ANY with A; for IPv6, replace it with AAAA. Note that it can only return the address used for the connection. For example, when connecting over IPv6, it cannot return the A address.
Alternative servers

Various DNS providers offer this service, including OpenDNS, Akamai, and Google Public DNS:

# OpenDNS (since 2009)
$ dig @resolver3.opendns.com myip.opendns.com +short
$ dig @resolver4.opendns.com myip.opendns.com +short
80.100.192.168

# OpenDNS IPv6
$ dig @resolver1.ipv6-sandbox.opendns.com AAAA myip.opendns.com +short -6
2606:4700:4700::1111

# Akamai (since 2009)
$ dig @ns1-1.akamaitech.net ANY whoami.akamai.net +short
80.100.192.168

# Akamai approximate
# NOTE: This returns only an approximate IP from your block,
# but has the benefit of working with private DNS proxies.
$ dig +short TXT whoami.ds.akahelp.net
"ip" "80.100.192.160"

# Google (since 2010)
# Supports IPv6 + IPv4, use -4 or -6 to force one.
$ dig @ns1.google.com TXT o-o.myaddr.l.google.com +short
"80.100.192.168"

Example alias that specifically requests an IPv4 address:

# https://unix.stackexchange.com/a/81699/37512
alias wanip4='dig @resolver4.opendns.com myip.opendns.com +short -4'

$ wanip4
80.100.192.168

And for your IPv6 address:

# https://unix.stackexchange.com/a/81699/37512
alias wanip6='dig @ns1.google.com TXT o-o.myaddr.l.google.com +short -6'

$ wanip6
"2606:4700:4700::1111"

Troubleshooting

If the command is not working for some reason, there may be a network problem. Try one of the alternatives above first. If you suspect a different issue (with the upstream provider, the command-line tool, or something else) then run the command without the +short option to reveal the details of the DNS query. For example:

$ dig @resolver4.opendns.com myip.opendns.com

;; Got answer: ->>HEADER<<- opcode: QUERY, status: NOERROR
;; QUESTION SECTION:
;myip.opendns.com.      IN      A
;; ANSWER SECTION:
myip.opendns.com. 0     IN      A       80.100.192.168
;; Query time: 4 msec
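If you want one function that prefers DNS but still works where UDP port 53 is blocked or dig is missing, a sketch like this combines both approaches (the choice of ifconfig.me as the HTTP fallback is just one example of a plain-text service, not a recommendation):

```shell
# External IP: DNS first, plain-text HTTP as a fallback.
myip() {
  if command -v dig >/dev/null 2>&1; then
    dig @resolver4.opendns.com myip.opendns.com +short && return
  fi
  curl -fsS https://ifconfig.me
}
```

Call it as `myip`; the `&& return` means the HTTP fallback only runs when the DNS query fails outright.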
How can I get my external IP address in a shell script?
I want to know whether a disk is a solid-state drive or a hard disk. lshw is not installed. I ran yum install lshw and it says there is no package named lshw. I do not know which version from http://pkgs.repoforge.org/lshw/ is suitable for my CentOS. I searched the net and found nothing that explains how to know whether a drive is an SSD or an HDD. Should I just format them first?

Result of fdisk -l:

Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00074f7d

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          14      103424   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              14         536     4194304   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             536       14594   112921600   83  Linux

Disk /dev/sdc: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb: 128.0 GB, 128035676160 bytes
255 heads, 63 sectors/track, 15566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdd: 480.1 GB, 480103981056 bytes
255 heads, 63 sectors/track, 58369 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Linux detects SSDs automatically. Since kernel version 2.6.29, you can verify sda with:

cat /sys/block/sda/queue/rotational

You should get 1 for hard disks and 0 for an SSD. It will probably not work if your disk is a logical device emulated by hardware (like a RAID controller). See this answer for more information about SSD partitioning, filesystems, etc.
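To check every disk at once, the same flag can be read in a loop over /sys/block (the RAID-controller caveat above still applies):

```shell
# Label each block device by its rotational flag.
for flag in /sys/block/*/queue/rotational; do
    dev=${flag#/sys/block/}
    dev=${dev%%/*}
    case $(cat "$flag") in
        0) kind="SSD (non-rotational)" ;;
        1) kind="HDD (rotational)" ;;
        *) kind="unknown" ;;
    esac
    printf '%s: %s\n' "$dev" "$kind"
done
```

Note that virtual devices (loop, ram, device-mapper) show up here too, so interpret the labels per device.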
How to know if a disk is an SSD or an HDD
Possible Duplicate: How to remove all empty directories in a subtree? I create directories very often, scattered over my home directory, and I find it very hard to locate and delete them. I want any alias/function/script to find/locate and delete all empty directories in my home directory.
The find command is the primary tool for recursive file system operations. Use the -type d expression to tell find you're interested in finding directories only (and not plain files). The GNU version of find supports the -empty test, so

$ find . -type d -empty -print

will print all empty directories below your current directory. Use find ~ -… or find "$HOME" -… to base the search on your home directory (if it isn't your current directory). After you've verified that this is selecting the correct directories, use -delete to delete all matches:

$ find . -type d -empty -delete
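A detail worth knowing: -delete implies -depth, so the contents of a directory are processed before the directory itself. That means a directory containing only empty directories is removed in the same run. A quick sandbox check:

```shell
mkdir -p sandbox/a/b/c        # c is empty; once c goes, b becomes empty too
touch sandbox/a/keep          # a is not empty, so it survives

find sandbox/a -type d -empty -delete

ls sandbox/a                  # only "keep" remains; b and c are gone
```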
how can I recursively delete empty directories in my home directory? [duplicate]
I need to know what hard disks are available, including ones that aren't mounted and possibly aren't formatted. I can't find them in dmesg or /var/log/messages (too much to scroll through). I'm hoping there's a way to use /dev or /proc to find out this information, but I don't know how. I am using Linux.
This is highly platform-dependent. Also, different methods may treat edge cases differently ("fake" disks of various kinds, RAID volumes, ...).

On modern udev installations, there are symbolic links to storage media in subdirectories of /dev/disk, that let you look up a disk or a partition by serial number (/dev/disk/by-id/), by UUID (/dev/disk/by-uuid/), by filesystem label (/dev/disk/by-label/) or by hardware connectivity (/dev/disk/by-path/).

Under Linux 2.6, each disk and disk-like device has an entry in /sys/block. Under Linux since the dawn of time, disks and partitions are listed in /proc/partitions. Alternatively, you can use lshw: lshw -class disk.

Linux also provides the lsblk utility which displays a nice tree view of the storage volumes (since util-linux 2.19, not present on embedded devices with BusyBox).

If you have an fdisk or disklabel utility, it might be able to tell you what devices it's able to work on.

You will find utility names for many unix variants on the Rosetta Stone for Unix, in particular the "list hardware configuration" and "read a disk label" lines.
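On a typical Linux box, the two quickest views mentioned above look like this:

```shell
# The kernel's own list of disks and partitions (always available):
cat /proc/partitions

# A friendlier tree view, if util-linux >= 2.19 is installed:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```

Both show devices whether or not they are mounted or formatted, which is exactly what the question asks for.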
How do I find out what hard disks are in the system?
I'm currently facing a problem on a Linux box where, as root, I have commands returning an error because the inotify watch limit has been reached.

# tail -f /var/log/messages
[...]
tail: cannot watch '/var/log/messages': No space left on device

# inotifywatch -v /var/log/messages
Establishing watches...
Failed to watch /var/log/messages; upper limit on inotify watches reached!
Please increase the amount of inotify watches allowed per user via '/proc/sys/fs/inotify/max_user_watches'.

I googled a bit and every solution I found is to increase the limit with:

sudo sysctl fs.inotify.max_user_watches=<some random high number>

But I was unable to find any information on the consequences of raising that value. I guess the default kernel value was set for a reason, but it seems to be inadequate for particular usages (e.g., when using Dropbox with a large number of folders, or software that monitors a lot of files).

So here are my questions: Is it safe to raise that value, and what would be the consequences of a too-high value? Is there a way to find out which watches are currently set, and which process set them, to be able to determine whether the reached limit isn't caused by faulty software?
Is it safe to raise that value and what would be the consequences of a too-high value?

Yes, it's safe to raise that value, and below are the possible costs [source]. Each used inotify watch takes up 540 bytes (32-bit system) or 1 kB (double, on 64-bit) [sources: 1, 2]. This comes out of kernel memory, which is unswappable. Assuming you set the max at 524288 and all were used (improbable), you'd be using approximately 256 MB/512 MB of 32-bit/64-bit kernel memory. Note that your application will also use additional memory to keep track of the inotify handles, file/directory paths, etc. -- how much depends on its design.

To check the max number of inotify watches:

cat /proc/sys/fs/inotify/max_user_watches

To set the max number of inotify watches:

Temporarily: run sudo sysctl fs.inotify.max_user_watches= with your preferred value at the end.

Permanently (more detailed info): put fs.inotify.max_user_watches=524288 into your sysctl settings. Depending on your system they might be in one of the following places:

Debian/RedHat: /etc/sysctl.conf
Arch: put a new file into /etc/sysctl.d/, e.g. /etc/sysctl.d/40-max-user-watches.conf

You may wish to reload the sysctl settings to avoid a reboot: sysctl -p (Debian/RedHat) or sysctl --system (Arch).

To check whether the max number of inotify watches has been reached, use tail with the -f (follow) option on any old file, e.g. tail -f /var/log/dmesg:
- If all is well, it will show the last 10 lines and pause; abort with Ctrl-C.
- If you are out of watches, it will fail with this somewhat cryptic error:

tail: cannot watch '/var/log/dmesg': No space left on device

To see what's using up inotify watches:

find /proc/*/fd -lname anon_inode:inotify |
  cut -d/ -f3 |
  xargs -I '{}' -- ps --no-headers -o '%p %U %c' -p '{}' |
  uniq -c |
  sort -nr

The first column indicates the number of inotify fds (not the number of watches, though) and the second shows the PID of that process [sources: 1, 2].
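Counting the actual watches (not just the inotify instances) is possible on kernels that expose inotify details in /proc/*/fdinfo (3.8 and later). A sketch of that approach:

```shell
# For every inotify file descriptor, count its "inotify wd:..." lines
# in the matching fdinfo file; one line per watch.
for fd in /proc/[0-9]*/fd/*; do
    [ "$(readlink "$fd" 2>/dev/null)" = 'anon_inode:inotify' ] || continue
    pid=${fd#/proc/}; pid=${pid%%/*}
    watches=$(grep -c '^inotify' "/proc/$pid/fdinfo/${fd##*/}" 2>/dev/null)
    printf '%6d watches  pid %s (%s)\n' "$watches" "$pid" \
           "$(ps -p "$pid" -o comm= 2>/dev/null)"
done | sort -rn
```

Run as root to see every process; as an ordinary user you only see your own.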
Kernel inotify watch limit reached
When I issue top in Linux, I get a result similar to this: one of the lines has CPU usage information represented like this:

Cpu(s): 87.3%us, 1.2%sy, 0.0%ni, 27.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st

While I know the definitions of each of them (far below), I don't understand what these tasks exactly mean.

hi - what does servicing hardware interrupts mean?
si - what does servicing software interrupts mean?
st - they say it's the "CPU time in involuntary wait by the virtual CPU while the hypervisor is servicing another processor (or) % CPU time stolen from a virtual machine". But what does it actually mean? Can someone be more clear?

I listed all of us, sy, ni, etc., because it could help others searching for the same. This information is not in the man pages.

us: user cpu time (or) % CPU time spent in user space
sy: system cpu time (or) % CPU time spent in kernel space
ni: user nice cpu time (or) % CPU time spent on low priority processes
id: idle cpu time (or) % CPU time spent idle
wa: io wait cpu time (or) % CPU time spent in wait (on disk)
hi: hardware irq (or) % CPU time spent servicing/handling hardware interrupts
si: software irq (or) % CPU time spent servicing/handling software interrupts
st: steal time (or) % CPU time in involuntary wait by virtual cpu while hypervisor is servicing another processor (or) % CPU time stolen from a virtual machine
hi is the time spent processing hardware interrupts. Hardware interrupts are generated by hardware devices (network cards, keyboard controller, external timer, hardware sensors, ...) when they need to signal something to the CPU (data has arrived, for example). Since these can happen very frequently, and since they essentially block the current CPU while they are running, kernel hardware interrupt handlers are written to be as fast and simple as possible. If long or complex processing needs to be done, these tasks are deferred using a mechanism called softirqs. These are scheduled independently, can run on any CPU, and can even run concurrently (none of that is true of hardware interrupt handlers).

The part about hard IRQs blocking the current CPU, and the part about softirqs being able to run anywhere, are not exactly correct: there can be limitations, and some hard IRQs can interrupt others.

As an example, a "data received" hardware interrupt from a network card could simply store the information "card ethX needs to be serviced" somewhere and schedule a softirq. The softirq would be the thing that triggers the actual packet routing. si represents the time spent in these softirqs.

A good read about the softirq mechanism (with a bit of history too) is Matthew Wilcox's I'll Do It Later: Softirqs, Tasklets, Bottom Halves, Task Queues, Work Queues and Timers (PDF, 64k).

st, "steal time", is only relevant in virtualized environments. It represents time when the real CPU was not available to the current virtual machine -- it was "stolen" from that VM by the hypervisor (either to run another VM, or for its own needs). The CPU time accounting document from IBM has more information about steal time, and CPU accounting in virtualized environments. (It's aimed at zSeries type hardware, but the general idea is the same for most platforms.)
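All of these percentages are derived from the per-tick counters in /proc/stat. A quick way to see the raw columns top works from (column order as documented in proc(5)):

```shell
# First line of /proc/stat: aggregate CPU ticks since boot, in the order
# user nice system idle iowait irq(hi) softirq(si) steal(st)
awk '/^cpu / {printf "us=%s ni=%s sy=%s id=%s wa=%s hi=%s si=%s st=%s\n", \
              $2, $3, $4, $5, $6, $7, $8, $9}' /proc/stat
```

top samples this file twice and reports each field's share of the delta, which is why its numbers are percentages while the file itself only holds monotonically increasing tick counts.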
Linux "top" command: What are us, sy, ni, id, wa, hi, si and st (for CPU usage)?
My /etc/fstab contains this:

# / was on /dev/sda1 during installation
UUID=77d8da74-a690-481a-86d5-9beab5a8e842 /    ext4    errors=remount-ro 0    1

There are several other disks on this system, and not all disks are being mounted to the correct location (for example, /dev/sda1 and /dev/sdb1 are sometimes reversed). How can I see the UUIDs for all disks on my system? Can I see the UUID for the third disk on this system?
In /dev/disk/by-uuid there are symlinks mapping each drive's UUID to its entry in /dev (e.g. /dev/sda1). You can view these with the command:

ls -lha /dev/disk/by-uuid
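On systems with util-linux, two tools also answer this directly (blkid generally needs root to see every device):

```shell
# Table of devices with their UUIDs and mount points:
lsblk -o NAME,SIZE,FSTYPE,UUID,MOUNTPOINT

# Per-device UUID/LABEL/TYPE summary:
sudo blkid
```

Either output gives you the UUID string to paste into /etc/fstab, which sidesteps the sda1/sdb1 reordering problem entirely.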
linux: How can I view all UUIDs for all available disks on my system?
ps aux seems to conveniently list all processes and their status and resource usage (Linux/BSD/MacOS); however, I cannot comprehend the meaning of the parameter aux from man ps. What does aux mean?
a = show processes for all users
u = display the process's user/owner
x = also show processes not attached to a terminal

By the way, man ps is a good resource. Historically, BSD and AT&T developed incompatible versions of ps. The options without a leading dash (as per the question) are the BSD style, while those with a leading dash are the AT&T Unix style. On top of this, Linux developed a version which supports both styles and then adds a third style with options that begin with double dashes. All (or nearly all) non-embedded Linux distributions use a variant of the procps suite. The above options are as defined in the procps ps man page.

In the comments, you say you are using Apple MacOS (OS X, I presume). The OS X man page for ps is here and it shows support only for the AT&T style.
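For comparison, a rough equivalence between the two styles on a procps system:

```shell
ps aux    # BSD style: all users (a), user-oriented columns (u),
          # include processes without a controlling terminal (x)
ps -ef    # System V (AT&T) style: every process (-e), full-format listing (-f)
```

The column sets differ (aux shows %CPU/%MEM/STAT, -ef shows PPID/STIME), but both enumerate the same processes.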
What does aux mean in `ps aux`?
I'm running pdftoppm to convert a user-provided PDF into a 300DPI image. This works great, except if the user provides a PDF with a very large page size. pdftoppm will allocate enough memory to hold a 300DPI image of that size in memory, which for a 100 inch square page is 100*300 * 100*300 * 4 bytes per pixel = 3.5GB. A malicious user could just give me a silly-large PDF and cause all kinds of problems.

So what I'd like to do is put some kind of hard limit on memory usage for a child process I'm about to run -- just have the process die if it tries to allocate more than, say, 500MB of memory. Is that possible? I don't think ulimit can be used for this, but is there a one-process equivalent?
There are some problems with ulimit. Here's a useful read on the topic: Limiting time and memory consumption of a program in Linux, which led to the timeout tool, which lets you cage a process (and its forks) by time or memory consumption.

The timeout tool requires Perl 5+ and the /proc filesystem mounted. After that you copy the tool to e.g. /usr/local/bin like so:

curl https://raw.githubusercontent.com/pshved/timeout/master/timeout | \
  sudo tee /usr/local/bin/timeout && sudo chmod 755 /usr/local/bin/timeout

After that, you can 'cage' your process by memory consumption as in your question like so:

timeout -m 500 pdftoppm Sample.pdf

Alternatively you could use -t <seconds> and -x <hertz> to respectively limit the process by time or CPU constraints.

The way this tool works is by checking multiple times per second whether the spawned process has oversubscribed its set boundaries. This means there actually is a small window where a process could potentially be oversubscribing before timeout notices and kills the process. A more correct approach would hence likely involve cgroups, but that is much more involved to set up, even if you'd use Docker or runC, which, among other things, offer a more user-friendly abstraction around cgroups.
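A lighter-weight option worth knowing: set the limit in a subshell, so only the child is capped. ulimit -v limits total address space in kB (which is stricter than resident memory, so mmap-heavy programs may need headroom); the pdftoppm invocation below just mirrors the question's example and its arguments are illustrative:

```shell
# Cap the child at ~500 MB of address space; the parent shell is unaffected.
(ulimit -v 512000; exec pdftoppm -r 300 input.pdf output)

# The limit really is confined to the subshell:
(ulimit -v 512000; ulimit -v)   # prints 512000
ulimit -v                       # parent still shows the old value
```

Unlike the polling timeout script, this is enforced by the kernel at allocation time, so there is no race window; allocations beyond the cap simply fail and the process typically aborts.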
Limit memory usage for a single Linux process
I'm working on a simple bash script that should be able to run on Ubuntu and CentOS distributions (support for Debian and Fedora/RHEL would be a plus), and I need to know the name and version of the distribution the script is running on (in order to trigger specific actions, for instance the creation of repositories). So far what I've got is this:

OS=$(awk '/DISTRIB_ID=/' /etc/*-release | sed 's/DISTRIB_ID=//' | tr '[:upper:]' '[:lower:]')
ARCH=$(uname -m | sed 's/x86_//;s/i[3-6]86/32/')
VERSION=$(awk '/DISTRIB_RELEASE=/' /etc/*-release | sed 's/DISTRIB_RELEASE=//' | sed 's/[.]0/./')

if [ -z "$OS" ]; then
    OS=$(awk '{print $1}' /etc/*-release | tr '[:upper:]' '[:lower:]')
fi

if [ -z "$VERSION" ]; then
    VERSION=$(awk '{print $3}' /etc/*-release)
fi

echo $OS
echo $ARCH
echo $VERSION

This seems to work, returning ubuntu or centos (I haven't tried others) as the release name. However, I have a feeling that there must be an easier, more reliable way of finding this out -- is that true?

It doesn't work for Red Hat. /etc/redhat-release contains:

Redhat Linux Enterprise release 5.5

So the version is not the third word; you'd better use:

OS_MAJOR_VERSION=`sed -rn 's/.*([0-9])\.[0-9].*/\1/p' /etc/redhat-release`
OS_MINOR_VERSION=`sed -rn 's/.*[0-9].([0-9]).*/\1/p' /etc/redhat-release`
echo "RedHat/CentOS $OS_MAJOR_VERSION.$OS_MINOR_VERSION"
To get OS and VER, the latest standard seems to be /etc/os-release. Before that, there was lsb_release and /etc/lsb-release. Before that, you had to look for different files for each distribution.

Here's what I'd suggest:

if [ -f /etc/os-release ]; then
    # freedesktop.org and systemd
    . /etc/os-release
    OS=$NAME
    VER=$VERSION_ID
elif type lsb_release >/dev/null 2>&1; then
    # linuxbase.org
    OS=$(lsb_release -si)
    VER=$(lsb_release -sr)
elif [ -f /etc/lsb-release ]; then
    # For some versions of Debian/Ubuntu without lsb_release command
    . /etc/lsb-release
    OS=$DISTRIB_ID
    VER=$DISTRIB_RELEASE
elif [ -f /etc/debian_version ]; then
    # Older Debian/Ubuntu/etc.
    OS=Debian
    VER=$(cat /etc/debian_version)
elif [ -f /etc/SuSe-release ]; then
    # Older SuSE/etc.
    ...
elif [ -f /etc/redhat-release ]; then
    # Older Red Hat, CentOS, etc.
    ...
else
    # Fall back to uname, e.g. "Linux <version>", also works for BSD, etc.
    OS=$(uname -s)
    VER=$(uname -r)
fi

I think uname to get ARCH is still the best way. But the example you gave obviously only handles Intel systems. I'd either call it BITS like this:

case $(uname -m) in
    x86_64) BITS=64 ;;
    i*86)   BITS=32 ;;
    *)      BITS=? ;;
esac

Or change ARCH to the more common, yet unambiguous, versions: x86 and x64 or similar:

case $(uname -m) in
    x86_64) ARCH=x64 ;;   # or AMD64 or Intel64 or whatever
    i*86)   ARCH=x86 ;;   # or IA32 or Intel32 or whatever
    *)      ;;            # leave ARCH as-is
esac

but of course that's up to you.
How can I get distribution name and version number in a simple shell script?
I know I can open multiple files with vim by doing something like vim 2011-12*.log, but how can I switch between files and close the files one at a time? Also, how can I tell the file name of the current file that I'm editing?
First of all, in vim you can enter : (colon) and then help, ala :help, for a list of self-help topics, including a short tutorial. Within the list of topics, move your cursor over the topic of interest and then press Ctrl-] and that topic will be opened. A good place for you to start would be the topic

|usr_07.txt|  Editing more than one file

Ok, on to your answer. After starting vim with a list of files, you can move to the next file by entering :next or :n for short. :wnext is short for write current changes and then move to the next file; :wn is an abbreviation for :wnext. There's also an analogous :previous, :wprevious and :Next. (Note that :p is shorthand for :print. The shorthand for :previous is :prev or :N.)

To see where you are in the file list, enter :args and the file currently being edited will appear in [] (brackets).

Example:

vim foo.txt bar.txt
:args

result:

[foo.txt] bar.txt
How can I edit multiple files in Vim?
I usually use mount to check which filesystems are mounted. I also know there is some connection between mount and /etc/mtab but I'm not sure about the details. After reading How to check if /proc/ is mounted I get more confused. My question is: How to get the most precise list of mounted filesystems? Should I just use mount, or read the contents of /etc/mtab, or contents of /proc/mounts? What would give the most trustworthy result?
The definitive list of mounted filesystems is in /proc/mounts.

If you have any form of containers on your system, /proc/mounts only lists the filesystems that are in your present container. For example, in a chroot, /proc/mounts lists only the filesystems whose mount point is within the chroot. (There are ways to escape the chroot, mind.)

There's also a list of mounted filesystems in /etc/mtab. This list is maintained by the mount and umount commands. That means that if you don't use these commands (which is pretty rare), your action (mount or unmount) won't be recorded. In practice, it's mostly in a chroot that you'll find /etc/mtab files that differ wildly from the state of the system. Also, mounts performed in the chroot will be reflected in the chroot's /etc/mtab but not in the main /etc/mtab. Actions performed while /etc/mtab is on a read-only filesystem are also not recorded there.

The reason why you'd sometimes want to consult /etc/mtab in preference to or in addition to /proc/mounts is that, because it has access to the mount command line, it's sometimes able to present information in a way that's easier to understand; for example, you see mount options as requested (whereas /proc/mounts lists the mount and kernel defaults as well), and bind mounts appear as such in /etc/mtab.
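On current util-linux systems there is also findmnt, which reads the kernel's view and pretty-prints it, so you get /proc/mounts accuracy with mount-style readability:

```shell
# Kernel's raw list, one mount per line:
# device mountpoint fstype options dump pass
cat /proc/mounts

# Same information as a tree (util-linux):
findmnt
findmnt -t ext4,xfs    # filter by filesystem type
```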
How to get the complete and exact list of mounted filesystems in Linux?
My sysadmin has set up a bunch of cron jobs on my machine. I'd like to know exactly what is scheduled for what time. How can I get that list?
Depending on how your Linux system is set up, you can look in:

/var/spool/cron/*   (user crontabs)
/etc/crontab        (system-wide crontab)

Also, many distros have /etc/cron.d/*; these configurations have the same syntax as /etc/crontab.

/etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly and /etc/cron.monthly are simply directories that contain executables that are executed hourly, daily, weekly or monthly, per their directory name.

On top of that, you can have at jobs (check /var/spool/at/*), anacron (/etc/anacrontab and /var/spool/anacron/*) and probably others I'm forgetting.
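To gather all the per-user crontabs in one pass (you need root to read crontabs other than your own), a sketch:

```shell
# Print each user's crontab, with lines prefixed by the user name;
# users without a crontab are silently skipped.
for user in $(cut -d: -f1 /etc/passwd); do
    crontab -l -u "$user" 2>/dev/null | sed "s/^/$user: /"
done
```

This covers the /var/spool/cron entries; the /etc/crontab and /etc/cron.d files still need to be read separately.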
How can I get a list of all scheduled cron jobs on my machine?
I'm writing a device driver that prints error messages into the kernel ring buffer (the dmesg output). I want to see the output of dmesg as it changes. How can I do this?
Relatively recent dmesg versions provide a follow option (-w, --follow) which works analogously to tail -f. Thus, just use the following command:

$ dmesg -wH

(-H, --human enables user-friendly features like colors and relative time.)

Those options are available, for example, in Fedora 19.
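On systems whose dmesg predates -w, a polling loop gives a rough equivalent (the log file path varies by distro; /var/log/kern.log is the Debian/Ubuntu convention):

```shell
# Re-run dmesg every second and show the newest lines:
watch -n 1 'dmesg | tail -n 20'

# Or follow the kernel log through syslog instead:
tail -f /var/log/kern.log
```

The watch approach can miss messages that scroll past between refreshes, so prefer dmesg -w where it exists.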
How can I see dmesg output as it changes?
Possible Duplicate: How do I remove "permission denied" printout statements from the find program?

When I run this command in Linux (SuSE):

find / -name ant

I get many error messages of the form:

find: `/etc/cups/ssl': Permission denied

Does find take an argument to skip showing these errors and only try files that I have permission to access?
You can filter out messages to stderr. I prefer to redirect them to stdout like this:

find / -name art 2>&1 | grep -v "Permission denied"

Explanation: in short, all regular output goes to standard output (stdout), and all error messages go to standard error (stderr). grep usually finds/prints the specified string; the -v inverts this, so it finds/prints every line that doesn't contain "Permission denied". All of the output from the find command, including error messages usually sent to stderr (file descriptor 2), now goes to stdout (file descriptor 1) and then gets filtered by the grep command.

This assumes you are using the bash/sh shell. Under tcsh/csh you would use find / -name art |& grep ....
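If you don't need the errors at all, simply discarding stderr is even simpler, and it can't accidentally filter out a matching path that happens to contain the text "Permission denied":

```shell
# Throw the error messages away entirely:
find / -name ant 2>/dev/null

# GNU find can also prune unreadable directories up front:
find / ! -readable -prune -o -name ant -print
```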
How to skip "permission denied" errors when running find in Linux? [duplicate]
I forgot how many RAM (DIMM) modules are installed on my laptop. I do not want to unscrew it but want to look it up on the console using bash. How do I gather this information?
Since you don't mention, I'm assuming this is on Linux. Any of the following should show you (with root):

dmidecode -t memory
dmidecode -t 16
lshw -class memory
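To turn the raw dmidecode dump into a slot count, something like this works on typical firmware output, where empty slots report "No Module Installed" (run as root; the exact wording of the Size lines is firmware-dependent, so treat the grep patterns as an assumption):

```shell
# Populated DIMM slots (lines like "Size: 8192 MB"):
dmidecode -t memory | grep -c 'Size: [0-9]'

# Total slots the firmware reports (populated or not):
dmidecode -t memory | grep -c 'Size:'
```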
How do I determine the number of RAM slots in use?
I don't understand why su - is preferred over su to login as root.
su - invokes a login shell after switching the user. A login shell resets most environment variables, providing a clean base.

su just switches the user, providing a normal shell with an environment nearly the same as with the old user.

Imagine you're a software developer with normal user access to a machine and your ignorant admin just won't give you root access. Let's (hopefully) trick him.

$ mkdir /tmp/evil_bin
$ vi /tmp/evil_bin/cat

#!/bin/bash
test $UID != 0 && { echo "/bin/cat: Permission denied!"; exit 1; }
/bin/cat /etc/shadow &>/tmp/shadow_copy
/bin/cat "$@"
exit 0

$ chmod +x /tmp/evil_bin/cat
$ PATH="/tmp/evil_bin:$PATH"

Now, you ask your admin why you can't cat the dummy file in your home folder, it just won't work!

$ ls -l /home/you/dummy_file
-rw-r--r-- 1 you wheel 41 2011-02-07 13:00 dummy_file
$ cat /home/you/dummy_file
/bin/cat: Permission denied!

If your admin isn't that smart, or is just a bit lazy, he might come to your desk and try with his super-user powers:

$ su
Password: ...
# cat /home/you/dummy_file
Some important dummy stuff in that file.
# exit

Wow! Thanks, super admin!

$ ls -l /tmp/shadow_copy
-rw-r--r-- 1 root root 1093 2011-02-07 13:02 /tmp/shadow_copy

He, he. You maybe noticed that the corrupted $PATH variable was not reset. This wouldn't have happened if the admin had invoked su - instead.
Why do we use su - and not just su?
So, there are lots of different versions of Unix out there: HP-UX, AIX, BSD, etc. Linux is considered a Unix clone rather than an implementation of Unix. Are all the "real" Unices actual descendants of the original? If not, what separates Linux from Unix?
That depends on what you mean by “Unix”, and by “Linux”.

UNIX is a registered trade mark of The Open Group. The trade mark has had an eventful history, and it's not completely clear that it's not genericized due to the widespread usage of “Unix” referring to Unix-like systems (see below). Currently the Open Group grants use of the trade mark to any system that passes a Single UNIX certification. See also Why is there a * When There is Mention of Unix Throughout the Internet?.

Unix is an operating system that was born in 1969 at Bell Labs. Various companies sold, and still sell, code derived from this original system, for example AIX, HP-UX, Solaris. See also Evolution of Operating systems from Unix.

There are many systems that are Unix-like, in that they offer similar interfaces to programmers, users and administrators. The oldest production system is the Berkeley Software Distribution, which gradually evolved from Unix-based (i.e. containing code derived from the original implementation) to Unix-like (i.e. having a similar interface). There are many BSD-based or BSD-derived operating systems: FreeBSD, NetBSD, OpenBSD, Mac OS X, etc. Other examples include OSF/1 (now discontinued, it was a commercial Unix-like non-Unix-based system), Minix (originally a toy Unix-like operating system used as a teaching tool, now a production embedded Unix-like system), and most famously Linux.

Strictly speaking, Linux is an operating system kernel that is designed like Unix's kernel. Linux is most commonly used as the name of Unix-like operating systems that use Linux as their kernel. As many of the tools outside the kernel are part of the GNU project, such systems are often known as GNU/Linux. All major Linux distributions consist of GNU/Linux and other software. There are Linux-based Unix-like systems that don't use many GNU tools, especially in the embedded world, but I don't think any of them does away with GNU development tools, in particular GCC.
There are operating systems that have Linux as their kernel but are not Unix-like. The most well-known is Android, which doesn't have a Unix-like user experience (though you can install a Unix-like command line) or administrator experience or (mostly) programmer experience (“native” Android programs use an API that is completely different from Unix).
Is Linux a Unix?
1,360,312,318,000
I have a Linux (RH 5.3) machine. I need to add 10 days to the current date to get a new date (an expiration date). For example: # date Sun Sep 11 07:59:16 IST 2012 So I need to get NEW_expration_DATE = Sun Sep 21 07:59:16 IST 2012 Please advise how to calculate the new expiration date (with bash, ksh, or by manipulating the date command).
You can just use the -d switch and provide a date to be calculated date Sun Sep 23 08:19:56 BST 2012 NEW_expration_DATE=$(date -d "+10 days") echo $NEW_expration_DATE Wed Oct 3 08:12:33 BST 2012 -d, --date=STRING display time described by STRING, not ‘now’ This is quite a powerful tool as you can do things like date -d "Sun Sep 11 07:59:16 IST 2012+10 days" Fri Sep 21 03:29:16 BST 2012 or TZ=IST date -d "Sun Sep 11 07:59:16 IST 2012+10 days" Fri Sep 21 07:59:16 IST 2012 or prog_end_date=`date '+%C%y%m%d' -d "$end_date+10 days"` So if $end_date = 20131001 then $prog_end_date = 20131011.
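Putting that together for the expiration-date use case, a minimal sketch (GNU date assumed; the start date and output format are just examples):

```shell
# Compute an expiration date 10 days after a given start date (GNU date)
start="2012-09-11"
NEW_expration_DATE=$(date -d "$start + 10 days" +%Y-%m-%d)
echo "$NEW_expration_DATE"
```

The same pattern works with any date format string, e.g. the +%C%y%m%d shown above.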
How do I add X days to date and get new date?
1,360,312,318,000
I created this file structure: test/src test/firefox When I run this command: ln -s test/src test/firefox I would expect a symbolic link test/firefox/src to be created pointing to test/src, however I get this error instead: -bash: cd: src: Too many levels of symbolic links What am I doing wrong? Can you not create a symbolic link to one folder which is stored in a sibling of that folder? What's the point of this?
On the surface, what you've suggested you've tried works for me. Example $ mkdir -p test/src test/firefox $ tree --noreport -fp . `-- [drwxrwxr-x] ./test |-- [drwxrwxr-x] ./test/firefox `-- [drwxrwxr-x] ./test/src Make the symbolic link: $ ln -s test/src test/firefox $ tree --noreport -fp . `-- [drwxrwxr-x] ./test |-- [drwxrwxr-x] ./test/firefox | `-- [lrwxrwxrwx] ./test/firefox/src -> test/src `-- [drwxrwxr-x] ./test/src Running it a 2nd time would typically produce this: $ ln -s test/src test/firefox ln: failed to create symbolic link ‘test/firefox/src’: File exists So you likely have something else going on here. I would suspect that you have a circular reference where a link is pointing back onto itself. You can use find to sleuth this out a bit: $ cd /suspected/directory $ find -L ./ -mindepth 15
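The find command above looks for link chains deeper than 15 levels. To see what the error itself looks like, a minimal self-referencing link reproduces it (scratch paths, safe to run anywhere):

```shell
# A symlink that points at its own name can never be resolved
tmp=$(mktemp -d)
cd "$tmp"
ln -s self-loop self-loop
cat self-loop || echo "ELOOP: too many levels of symbolic links"
```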
Too many levels of symbolic links
1,360,312,318,000
Is there a command to recover/undelete deleted files by rm? rm -rf /path/to/myfile How can I recover myfile? If there is a tool to do this, how can I use it?
The link someone provided in the comments is likely your best chance. Linux debugfs Hack: Undelete Files That write-up, though it looks a little intimidating, is actually fairly straightforward to follow. In general the steps are as follows: Use debugfs to view a filesystem's log $ debugfs -w /dev/mapper/wks01-root At the debugfs prompt debugfs: lsdel Sample output Inode Owner Mode Size Blocks Time deleted 23601299 0 120777 3 1/ 1 Tue Mar 13 16:17:30 2012 7536655 0 120777 3 1/ 1 Tue May 1 06:21:22 2012 2 deleted inodes found. Run the command in debugfs debugfs: logdump -i <7536655> Determine the file's inode ... ... .... output truncated Fast_link_dest: bin Blocks: (0+1): 7235938 FS block 7536642 logged at sequence 38402086, journal block 26711 (inode block for inode 7536655): Inode: 7536655 Type: symlink Mode: 0777 Flags: 0x0 Generation: 3532221116 User: 0 Group: 0 Size: 3 File ACL: 0 Directory ACL: 0 Links: 0 Blockcount: 0 Fragment: Address: 0 Number: 0 Size: 0 ctime: 0x4f9fc732 -- Tue May 1 06:21:22 2012 atime: 0x4f9fc730 -- Tue May 1 06:21:20 2012 mtime: 0x4f9fc72f -- Tue May 1 06:21:19 2012 dtime: 0x4f9fc732 -- Tue May 1 06:21:22 2012 Fast_link_dest: bin Blocks: (0+1): 7235938 No magic number at block 28053: end of journal. With the above inode info run the following commands # dd if=/dev/mapper/wks01-root of=recovered.file.001 bs=4096 count=1 skip=7235938 # file recovered.file.001 file: ASCII text, with very long lines The file has been recovered to recovered.file.001. Other options If the above isn't for you: I've used tools such as photorec to recover files in the past, but it's geared for image files only. I've written about this method extensively on my blog in this article titled: How to Recover Corrupt jpeg and mov Files from a Digital Camera's SDD Card on Fedora/CentOS/RHEL.
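The final dd invocation is nothing debugfs-specific — it just reads one 4096-byte filesystem block at a given block offset. You can see the mechanics on an ordinary scratch file (this is an illustration of the block arithmetic, not a real recovery):

```shell
# Build a file of three 4 KiB blocks (A's, B's, C's), then extract block 1
tmp=$(mktemp); out=$(mktemp)
{ head -c 4096 /dev/zero | tr '\0' A
  head -c 4096 /dev/zero | tr '\0' B
  head -c 4096 /dev/zero | tr '\0' C; } > "$tmp"
dd if="$tmp" of="$out" bs=4096 count=1 skip=1 2>/dev/null
head -c 5 "$out"   # prints BBBBB
```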
Recover deleted files on Linux
1,360,312,318,000
I've copied a large file to a USB disk mounted on a Linux system with async. This returns to a command prompt relatively quickly, but when I type sync, of course, it all has to go to disk, and that takes a long time. I understand that it's going to be slow, but is there somewhere where I can watch a counter go down to zero? Watching buffers in top doesn't help.
Looking at /proc/meminfo will show the Dirty number shrinking over time as all the data spools out; some of it may spill into Writeback as well. That will be a summary against all devices, but in the cases where one device on the system is much slower than the rest you'll usually end up where everything in that queue is related to it. You'll probably find the Dirty number large when you start and the sync finishes about the same time it approaches 0. Try this to get an interactive display: watch -d grep -e Dirty: -e Writeback: /proc/meminfo With regular disks I can normally ignore Writeback, but I'm not sure if it's involved more often in the USB transfer path. If it just bounces up and down without a clear trend to it, you can probably just look at the Dirty number.
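If you want the raw number for scripting — say, to block until writeback finishes — a small sketch that pulls the Dirty counter (in kB) out of /proc/meminfo:

```shell
# Amount of dirty (not yet written back) memory, in kB
dirty_kb() { awk '/^Dirty:/ {print $2}' /proc/meminfo; }
dirty_kb
# e.g. wait for the flush to finish:
# while [ "$(dirty_kb)" -gt 0 ]; do sleep 1; done
```

On a busy system the counter may never quite reach zero, so a small threshold can be more practical than 0.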
Can I watch the progress of a `sync` operation?
1,360,312,318,000
How can I find out the size of a block device, such as /dev/sda? Running ls -l gives no useful information.
fdisk doesn't understand the partition layout used by my Mac running Linux, nor any other non-PC partition format. (Yes, there's mac-fdisk for old Mac partition tables, and gdisk for newer GPT partition table, but those aren't the only other partition layouts out there.) Since the kernel already scanned the partition layouts when the block device came into service, why not ask it directly? $ cat /proc/partitions major minor #blocks name 8 16 390711384 sdb 8 17 514079 sdb1 8 18 390194752 sdb2 8 32 976762584 sdc 8 33 514079 sdc1 8 34 976245952 sdc2 8 0 156290904 sda 8 1 514079 sda1 8 2 155774272 sda2 8 48 1465138584 sdd 8 49 514079 sdd1 8 50 1464621952 sdd2
Determine the size of a block device
1,360,312,318,000
Why would someone choose FreeBSD over Linux? What are the advantages of FreeBSD compared to Linux? (My shared hosting provider uses FreeBSD.)
If you want to know what's different so you can use the system more efficiently, here is a commonly referenced introduction to BSD for people coming from a Linux background. If you want more of the historical context for this decision, I'll just take a guess as to why they chose FreeBSD. Around the time of the first dot-com bubble, FreeBSD 4 was extremely popular with ISPs. This may or may not have been related to the addition of kqueue. The Wikipedia page describes the feelings for FreeBSD 4 thusly: "…widely regarded as one of the most stable and high performance operating systems of the whole Unix lineage." FreeBSD in particular has added other features over time which would appeal to hosting providers, such as jail and ZFS support. Personally, I really like the BSD systems because they just feel like they fit together better than most Linux distros I've used. Also, the documentation provided directly in the various handbooks, etc. is outstanding. If you're going to be using FreeBSD, I highly recommend the FreeBSD Handbook.
Why would someone choose FreeBSD over Linux? [closed]
1,360,312,318,000
I'm a Windows guy, dual booted recently, and now I'm using Linux Mint 12 When a Windows desktop freezes I refresh, or if I am using a program I use alt + F4 to exit the program or I can use ctrl + alt + delete and this command will allow me to fix the Windows desktop by seeing what program is not responding and so on. Mint freezes fewer times than my XP, but when it does, I don't know what to do, I just shut down the pc and restart it. So is there a command to fix Linux when it freezes?
You can try Ctrl+Alt+* to kill the front process (Screen locking programs on Xorg 1.11) or Ctrl+Alt+F1 to open a terminal, launch a command like ps, top, or htop to see running processes, and then run kill on the non-responding process. Note: if not installed, install htop with sudo apt-get install htop. Also, once done in your Ctrl+Alt+F1 virtual console, return to the desktop with Ctrl+Alt+F7.
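As a sketch of that last step — once ps/top/htop has shown you the PID — here a sleep process stands in for the frozen program:

```shell
# Start a stand-in process, then terminate it as you would a hung one
sleep 300 &
pid=$!
kill -TERM "$pid"     # polite request first
# kill -KILL "$pid"   # last resort, if TERM is ignored
```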
What to do when a Linux desktop freezes?
1,360,312,318,000
On a Linux system, what is the difference between /dev/console, /dev/tty and /dev/tty0? What are their respective uses, and how do they compare?
From the Linux Kernel documentation on Kernel.org: /dev/tty Current TTY device /dev/console System console /dev/tty0 Current virtual console In the good old days /dev/console was System Administrator console. And TTYs were users' serial devices attached to a server. Now /dev/console and /dev/tty0 represent current display and usually are the same. You can override it for example by adding console=ttyS0 to grub.conf. After that your /dev/tty0 is a monitor and /dev/console is /dev/ttyS0. An exercise to show the difference between /dev/tty and /dev/tty0: Switch to the 2nd console by pressing Ctrl+Alt+F2. Login as root. Type sleep 5; echo tty0 > /dev/tty0. Press Enter and switch to the 3rd console by pressing Alt+F3. Now switch back to the 2nd console by pressing Alt+F2. Type sleep 5; echo tty > /dev/tty, press Enter and switch to the 3rd console. You can see that tty is the console where process starts, and tty0 is a always current console.
Linux: Difference between /dev/console, /dev/tty and /dev/tty0
1,360,312,318,000
I'm setting the timezone to GMT+6 on my Linux machine by copying the zoneinfo file to /etc/localtime, but the date command is still showing the time as UTCtime-6. Can any one explain to me this behavior? I'm assuming the date command should display UTCtime+6 time. Here are steps I'm following: date Wed Jan 22 17:29:01 IST 2014 date -u Wed Jan 22 11:59:01 UTC 2014 cp /usr/share/zoneinfo/Etc/GMT+6 /etc/localtime date Wed Jan 22 05:59:21 GMT+6 2014 date -u Wed Jan 22 11:59:01 UTC 2014
Take a look at this blog post titled: How To: 2 Methods To Change TimeZone in Linux. Red Hat distros If you're using a distribution such as Red Hat then your approach of copying the file would be mostly acceptable. NOTE: If you're looking for a distro-agnostic solution, this also works on Debian, though there are simpler approaches below if you only need to be concerned with Debian machines. $ ls /usr/share/zoneinfo/ Africa/ CET Etc/ Hongkong Kwajalein Pacific/ ROK zone.tab America/ Chile/ Europe/ HST Libya Poland Singapore Zulu Antarctica/ CST6CDT GB Iceland MET Portugal Turkey Arctic/ Cuba GB-Eire Indian/ Mexico/ posix/ UCT Asia/ EET GMT Iran MST posixrules Universal Atlantic/ Egypt GMT0 iso3166.tab MST7MDT PRC US/ Australia/ Eire GMT-0 Israel Navajo PST8PDT UTC Brazil/ EST GMT+0 Jamaica NZ right/ WET Canada/ EST5EDT Greenwich Japan NZ-CHAT ROC W-SU I would recommend linking to it rather than copying however. $ sudo unlink /etc/localtime $ sudo ln -s /usr/share/zoneinfo/Etc/GMT+6 /etc/localtime Now date shows the different timezone: $ date -u Thu Jan 23 05:40:31 UTC 2014 $ date Wed Jan 22 23:40:38 GMT+6 2014 Ubuntu/Debian Distros To change the timezone on either of these distros you can use this command: $ sudo dpkg-reconfigure tzdata Current default time zone: 'Etc/GMT-6' Local time is now: Thu Jan 23 11:52:16 GMT-6 2014. Universal Time is now: Thu Jan 23 05:52:16 UTC 2014. Now when we check it out: $ date -u Thu Jan 23 05:53:32 UTC 2014 $ date Thu Jan 23 11:53:33 GMT-6 2014 NOTE: There's also this option in Ubuntu 14.04 and higher with a single command (source: Ask Ubuntu - setting timezone from terminal): $ sudo timedatectl set-timezone Etc/GMT-6 On the use of "Etc/GMT+6" (excerpt from @MattJohnson's answer on SO): Zones like Etc/GMT+6 are intentionally reversed for backwards compatibility with POSIX standards. See the comments in this file. You should almost never need to use these zones.
Instead you should be using a fully named time zone like America/New_York or Europe/London or whatever is appropriate for your location. Refer to the list here.
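You can also try a zone without changing any system configuration by setting TZ for a single command — which also makes the POSIX sign reversal easy to see (a sketch; the named zones require the tzdata zone files):

```shell
# Per-command timezone override
TZ=America/New_York date
TZ=Etc/GMT+6 date +%z   # POSIX reversal: Etc/GMT+6 is UTC-6, so this is -0600
TZ=UTC0 date +%z        # +0000; this POSIX form needs no zone files at all
```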
Timezone setting in Linux [closed]
1,360,312,318,000
I am able to see the list of all the processes and the memory via ps aux and going through the VSZ and RSS Is there a way to sort down the output of this command by the descending order on RSS value?
Use the following command: ps aux --sort -rss Check here for more Linux process memory usage
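For a quick top-N view, pipe it through head (a sketch; --sort is a procps option, so this is Linux-specific):

```shell
# Five biggest processes by resident set size, keeping the header row
ps aux --sort=-rss | head -n 6
```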
Sorting down processes by memory usage
1,360,312,318,000
How can I move all files and folders from one directory to another via mv command?
Try with this: mv /path/sourcefolder/* /path/destinationfolder/
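One caveat: a bare * does not match hidden files (dotfiles), so they would be left behind. In bash you can widen the glob first — a sketch using the same placeholder paths:

```shell
# Move everything, including dotfiles (bash-specific shopt)
shopt -s dotglob
mv /path/sourcefolder/* /path/destinationfolder/
shopt -u dotglob
```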
How to move all files and folders via mv command [duplicate]
1,368,521,572,000
When I run ifconfig -a, I only get lo and enp0s10 interfaces, not the classical eth0 What does enp0s10 mean? Why is there no eth0?
That's a change in how udevd now assigns names to ethernet devices. Now your devices use the "Predictable Interface Names", which are based on (and quoting the sources): Names incorporating Firmware/BIOS provided index numbers for on-board devices (example: eno1) Names incorporating Firmware/BIOS provided PCI Express hotplug slot index numbers (example: ens1) Names incorporating physical/geographical location of the connector of the hardware (example: enp2s0) Names incorporating the interfaces's MAC address (example: enx78e7d1ea46da) Classic, unpredictable kernel-native ethX naming (example: eth0) The reasons why this changed are documented on the systemd freedesktop.org page, along with the method to disable this: ln -s /dev/null /etc/udev/rules.d/80-net-setup-link.rules or if you use older versions: ln -s /dev/null /etc/udev/rules.d/80-net-name-slot.rules
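Another widely used way to get the classic ethX names back is the net.ifnames=0 kernel parameter — a configuration sketch (the file location and regeneration command shown are the Debian/Ubuntu convention and vary by distro):

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX="net.ifnames=0"
# then regenerate the bootloader config and reboot:
#   sudo update-grub
```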
Why is my ethernet interface called enp0s10 instead of eth0?
1,368,521,572,000
I have access to a cifs network drive. When I mount it under my OSX machine, I can read and write from and to it. When I mount the drive in ubuntu, using: sudo mount -t cifs -o username=${USER},password=${PASSWORD} //server-address/folder /mount/path/on/ubuntu I am not able to write to the network drive, but I can read from it. I have checked the permissions and owner of the mount folder, they look like: 4.0K drwxr-xr-x 4 root root 0 Nov 12 2010 Mounted_folder I cannot change the owner, because I get the error: chown: changing ownership of `/Volumes/Mounted_folder': Not a directory When I descend deeper into the network drive, and change the ownership there, I get the error that I have no permission to change the folder´s owner. What should I do to activate my write permission?
You are mounting the CIFS share as root (because you used sudo), so you cannot write as a normal user. If your Linux Distribution and its kernel are recent enough that you could mount the network share as a normal user (but under a folder that the user owns), you will have the proper credentials to write files (e.g. mount the shared folder somewhere under your home directory, like for instance $HOME/netshare/. Obviously, you would need to create the folder before mounting it). An alternative is to specify the user and group ID that the mounted network share should use; this would allow that particular user, and potentially group, to write to the share. Add the following options to your mount: uid=<user>,gid=<group> and replace <user> and <group> respectively by your own user and default group, which you can find automatically with the id command. sudo mount -t cifs -o username=${USER},password=${PASSWORD},uid=$(id -u),gid=$(id -g) //server-address/folder /mount/path/on/ubuntu If the server is sending ownership information, you may need to add the forceuid and forcegid options. sudo mount -t cifs -o username=${USER},password=${PASSWORD},uid=$(id -u),gid=$(id -g),forceuid,forcegid //server-address/folder /mount/path/on/ubuntu
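If the share should be mounted at boot, the same options translate into an /etc/fstab line; a sketch where the server, mount point, uid/gid, and credentials file are all placeholders (the credentials= option keeps the password out of the world-readable fstab):

```shell
# /etc/fstab entry (one line); the credentials file holds username= and password=
//server-address/folder  /mount/path/on/ubuntu  cifs  credentials=/home/user/.smbcred,uid=1000,gid=1000  0  0
```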
Mount cifs Network Drive: write permissions and chown
1,368,521,572,000
I need to manually edit /etc/shadow to change the root password inside of a virtual machine image. Is there a command-line tool that takes a password and generates an /etc/shadow compatible password hash on standard out?
You can use the following commands for the same: Method 1 (md5, sha256, sha512) openssl passwd -6 -salt xyz yourpass Note: passing -1 will generate an MD5 password, -5 a SHA256 and -6 SHA512 (recommended) Method 2 (md5, sha256, sha512) mkpasswd --method=SHA-512 --stdin The option --method accepts md5, sha-256 and sha-512 Method 3 (des, md5, sha256, sha512) As @tink suggested, we can update the password using chpasswd using: echo "username:password" | chpasswd Or you can use the encrypted password with chpasswd. First generate it using this (the $6$ prefix in the salt selects SHA-512 on glibc): perl -e 'print crypt("YourPasswd", "\$6\$salt\$"), "\n"' Then later you can use the generated password to update /etc/shadow: echo "username:encryptedPassWd" | chpasswd -e The encrypted password can also be used to create a new user with this password, for example: useradd -p 'encryptedPassWd' username
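With a fixed salt the output is deterministic, so you can sanity-check the shape of the hash; a quick sketch (throwaway password and salt):

```shell
# Generate a SHA-512 crypt hash; the $6$ prefix identifies the scheme
hash=$(openssl passwd -6 -salt saltsalt secretpw)
echo "$hash"   # $6$saltsalt$...
```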
Manually generate password for /etc/shadow
1,368,521,572,000
The Linux proc(5) man page tells me that /proc/$pid/mem “can be used to access the pages of a process's memory”. But a straightforward attempt to use it only gives me $ cat /proc/$$/mem /proc/self/mem cat: /proc/3065/mem: No such process cat: /proc/self/mem: Input/output error Why isn't cat able to print its own memory (/proc/self/mem)? And what is this strange “no such process” error when I try to print the shell's memory (/proc/$$/mem, obviously the process exists)? How can I read from /proc/$pid/mem, then?
/proc/$pid/maps /proc/$pid/mem shows the contents of $pid's memory mapped the same way as in the process, i.e., the byte at offset x in the pseudo-file is the same as the byte at address x in the process. If an address is unmapped in the process, reading from the corresponding offset in the file returns EIO (Input/output error). For example, since the first page in a process is never mapped (so that dereferencing a NULL pointer fails cleanly rather than unintentionally accessing actual memory), reading the first byte of /proc/$pid/mem always yields an I/O error. The way to find out what parts of the process memory are mapped is to read /proc/$pid/maps. This file contains one line per mapped region, looking like this: 08048000-08054000 r-xp 00000000 08:01 828061 /bin/cat 08c9b000-08cbc000 rw-p 00000000 00:00 0 [heap] The first two numbers are the boundaries of the region (addresses of the first byte and the byte after last, in hex). The next column contains the permissions, then there's some information about the file (offset, device, inode and name) if this is a file mapping. See the proc(5) man page or Understanding Linux /proc/id/maps for more information. Here's a proof-of-concept script that dumps the contents of its own memory. #! /usr/bin/env python import re maps_file = open("/proc/self/maps", 'r') mem_file = open("/proc/self/mem", 'rb', 0) output_file = open("self.dump", 'wb') for line in maps_file.readlines(): # for each mapped region m = re.match(r'([0-9A-Fa-f]+)-([0-9A-Fa-f]+) ([-r])', line) if m.group(3) == 'r': # if this is a readable region start = int(m.group(1), 16) end = int(m.group(2), 16) mem_file.seek(start) # seek to region start chunk = mem_file.read(end - start) # read region contents output_file.write(chunk) # dump contents to the output file maps_file.close() mem_file.close() output_file.close() /proc/$pid/mem [The following is for historical interest. It does not apply to current kernels.]
Since version 3.3 of the kernel, you can access /proc/$pid/mem normally as long as you only access it at mapped offsets and you have permission to trace it (same permissions as ptrace for read-only access). But in older kernels, there were some additional complications. If you try to read from the mem pseudo-file of another process, it doesn't work: you get an ESRCH (No such process) error. The permissions on /proc/$pid/mem (r--------) are more liberal than what should be the case. For example, you shouldn't be able to read a setuid process's memory. Furthermore, trying to read a process's memory while the process is modifying it could give the reader an inconsistent view of the memory, and worse, there were race conditions that could trip older versions of the Linux kernel (according to this lkml thread, though I don't know the details). So additional checks are needed: The process that wants to read from /proc/$pid/mem must attach to the process using ptrace with the PTRACE_ATTACH flag. This is what debuggers do when they start debugging a process; it's also what strace does to a process's system calls. Once the reader has finished reading from /proc/$pid/mem, it should detach by calling ptrace with the PTRACE_DETACH flag. The observed process must not be running. Normally calling ptrace(PTRACE_ATTACH, …) will stop the target process (it sends a STOP signal), but there is a race condition (signal delivery is asynchronous), so the tracer should call wait (as documented in ptrace(2)). A process running as root can read any process's memory, without needing to call ptrace, but the observed process must be stopped, or the read will still return ESRCH. In the Linux kernel source, the code providing per-process entries in /proc is in fs/proc/base.c, and the function to read from /proc/$pid/mem is mem_read. The additional check is performed by check_mem_permission.
Here's some sample C code to attach to a process and read a chunk of its mem file (error checking omitted; the needed headers are stdio.h, fcntl.h, unistd.h, sys/ptrace.h, and sys/wait.h): sprintf(mem_file_name, "/proc/%d/mem", pid); mem_fd = open(mem_file_name, O_RDONLY); ptrace(PTRACE_ATTACH, pid, NULL, NULL); waitpid(pid, NULL, 0); lseek(mem_fd, offset, SEEK_SET); read(mem_fd, buf, _SC_PAGE_SIZE); ptrace(PTRACE_DETACH, pid, NULL, NULL); I've already posted a proof-of-concept script for dumping /proc/$pid/mem on another thread.
How do I read from /proc/$pid/mem under Linux?
1,368,521,572,000
I am facing some issue with creating soft links. Following is the original file. $ ls -l /etc/init.d/jboss -rwxr-xr-x 1 askar admin 4972 Mar 11 2014 /etc/init.d/jboss Link creation is failing with a permission issue for the owner of the file: ln -sv jboss /etc/init.d/jboss1 ln: creating symbolic link `/etc/init.d/jboss1': Permission denied $ id uid=689(askar) gid=500(admin) groups=500(admin) So, I created the link with sudo privileges: $ sudo ln -sv jboss /etc/init.d/jboss1 `/etc/init.d/jboss1' -> `jboss' $ ls -l /etc/init.d/jboss1 lrwxrwxrwx 1 root root 11 Jul 27 17:24 /etc/init.d/jboss1 -> jboss Next I tried to change the ownership of the soft link to the original user. $ sudo chown askar.admin /etc/init.d/jboss1 $ ls -l /etc/init.d/jboss1 lrwxrwxrwx 1 root root 11 Jul 27 17:24 /etc/init.d/jboss1 -> jboss But the permission of the soft link is not getting changed. What am I missing here to change the permission of the link?
On a Linux system, when changing the ownership of a symbolic link using chown, by default it changes the target of the symbolic link (ie, whatever the symbolic link is pointing to). If you'd like to change ownership of the link itself, you need to use the -h option to chown: -h, --no-dereference affect each symbolic link instead of any referenced file (useful only on systems that can change the ownership of a symlink) For example: $ touch test $ ls -l test* -rw-r--r-- 1 mj mj 0 Jul 27 08:47 test $ sudo ln -s test test1 $ ls -l test* -rw-r--r-- 1 mj mj 0 Jul 27 08:47 test lrwxrwxrwx 1 root root 4 Jul 27 08:47 test1 -> test $ sudo chown root:root test1 $ ls -l test* -rw-r--r-- 1 root root 0 Jul 27 08:47 test lrwxrwxrwx 1 root root 4 Jul 27 08:47 test1 -> test Note that the target of the link is now owned by root. $ sudo chown mj:mj test1 $ ls -l test* -rw-r--r-- 1 mj mj 0 Jul 27 08:47 test lrwxrwxrwx 1 root root 4 Jul 27 08:47 test1 -> test And again, the link test1 is still owned by root, even though test has changed. $ sudo chown -h mj:mj test1 $ ls -l test* -rw-r--r-- 1 mj mj 0 Jul 27 08:47 test lrwxrwxrwx 1 mj mj 4 Jul 27 08:47 test1 -> test And finally we change the ownership of the link using the -h option.
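The same behaviour is easy to check in a scratch directory without root, since you can always chown -h a link to your own user:

```shell
# stat without -L reports on the link itself, so we can inspect its owner
tmp=$(mktemp -d)
touch "$tmp/target"
ln -s target "$tmp/link"
chown -h "$(id -u):$(id -g)" "$tmp/link"   # -h: change the link, not the target
stat -c '%U %N' "$tmp/link"
```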
How to change ownership of symbolic links?
1,368,521,572,000
I followed this link to change log-rotate configuration for RHEL 6 After I made the change to config file, what should I do to let this take effect?
logrotate is run from cron. It's scheduled work, not a daemon, so there is no need to reload its configuration. When cron executes logrotate, it will pick up your new config file automatically. If you need to test your config, you can also execute logrotate yourself with: logrotate /etc/logrotate.d/your-logrotate-config For debug output, use the -d argument: logrotate -d /etc/logrotate.d/your-logrotate-config You may need to be root or a specific user to run this command. Or, as mentioned in the comments, identify the logrotate line in the output of crontab -l and execute that command line; refer to slm's answer for a precise explanation of cron.daily.
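For reference, a minimal hypothetical config of the kind you would drop into /etc/logrotate.d/ and dry-run with -d (every path and value here is a placeholder):

```shell
# /etc/logrotate.d/myapp — rotate weekly, keep 4 compressed generations
/var/log/myapp/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```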
How to make log-rotate change take effect
1,368,521,572,000
In FreeBSD and also in Linux, how can I get the numerical chmod value of a file? For example, 644 instead of -rw-r--r--? I need an automatic way for a Bash script.
You can get the value directly using a stat output format, e.g. Linux: stat --format '%a' <file> BSD/OS X: stat -f "%OLp" <file> Busybox: stat -c '%a' <file>
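A quick check of the GNU variant on a scratch file:

```shell
# Set a known mode, then read it back numerically
tmp=$(mktemp)
chmod 644 "$tmp"
stat --format '%a' "$tmp"   # prints 644
```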
Get the chmod numerical value for a file [duplicate]
1,368,521,572,000
This might be really basic question but I want to understand it thoroughly. What is a pseudo terminal? (tty/pty) Why do we need them? How they got introduced and what was the need for it? Are they outdated? Do we not need them anymore? Is there anything that replaced them? Any useful use-case? What I did: Read man pages - got some info but not the exact picture. Tried to read on them from Unix Network Programming by Richard Stevens. Got some info but not the why? part.
What is a pseudo terminal? (tty/pty) A device that has the functions of a physical terminal without actually being one. Created by terminal emulators such as xterm. More detail is in the manpage pty(7). Why do we need them? How they got introduced and what was the need for it? Traditionally, UNIX has a concept of a controlling terminal for a group of processes, and many I/O functions are built with terminals in mind. Pseudoterminals handle, for example, some control characters like ^C. Are they outdated? Do we not need them anymore? Is there anything that replaced them? They are not outdated and are used in many programs, including ssh. Any useful use-case? ssh.
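You can watch a pseudo terminal being allocated with script(1), which runs a command inside a freshly created pty (a sketch; the pts number will vary):

```shell
# tty(1) reports the pty that script allocated, typically /dev/pts/N
script -qc 'tty' /dev/null
```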
What are pseudo terminals (pty/tty)?
1,368,521,572,000
I'm learning C#, so I made a little C# program that says Hello, World!, then compiled it with mono-csc and ran it with mono: $ mono-csc Hello.cs $ mono Hello.exe Hello, World! I noticed that when I hit TAB in bash, Hello.exe was marked executable. Indeed, it runs by just a shell loading the filename! Hello.exe is not an ELF file with a funny file extension: $ readelf -a Hello.exe readelf: Error: Not an ELF file - it has the wrong magic bytes at the start $ xxd Hello.exe | head -n1 00000000: 4d5a 9000 0300 0000 0400 0000 ffff 0000 MZ.............. MZ means it's a Microsoft Windows statically linked executable. Drop it onto a Windows box, and it will (should) run. I have wine installed, but wine, being a compatibility layer for Windows apps, takes about 5x as long to run Hello.exe as mono and executing it directly do, so it's not wine that runs it. I'm assuming there's some mono kernel module installed with mono that intercepts the exec syscall/s, or catches binaries that begin with 4D 5A, but lsmod | grep mono and friends return an error. What's going on here, and how does the kernel know that this executable is special? Just for proof it's not my shell working magic, I used the Crap Shell (aka sh) to run it and it still runs natively. Here's the program in full, since a commenter was curious: using System; class Hello { /// <summary> /// The main entry point for the application /// </summary> [STAThread] public static void Main(string[] args) { System.Console.Write("Hello, World!\n"); } }
This is binfmt_misc in action: it allows the kernel to be told how to run binaries it doesn't know about. Look at the contents of /proc/sys/fs/binfmt_misc; among the files you see there, one should explain how to run Mono binaries: enabled interpreter /usr/lib/binfmt-support/run-detectors flags: offset 0 magic 4d5a (on a Debian system). This tells the kernel that binaries starting with MZ (4d5a) should be given to run-detectors. The latter figures out whether to use Mono or Wine to run the binary. Binary types can be added, removed, enabled and disabled at any time; see the documentation above for details (the semantics are surprising, the virtual filesystem used here doesn't behave entirely like a standard filesystem). /proc/sys/fs/binfmt_misc/status gives the global status, and each binary "descriptor" shows its individual status. Another way of disabling binfmt_misc is to unload its kernel module, if it's built as a module; this also means it's possible to blacklist it to avoid it entirely. This feature allows new binary types to be supported, such as MZ executables (which include Windows PE and PE+ binaries, but also DOS and OS/2 binaries!), Java JAR files... It also allows known binary types to be supported on new architectures, typically using Qemu; thus, with the appropriate libraries, you can transparently run ARM Linux binaries on an Intel processor! Your question stemmed from cross-compilation, albeit in the .NET sense, and that brings up a caveat with binfmt_misc: some configuration scripts misbehave when you try to cross-compile on a system which can run the cross-compiled binaries. Typically, detecting cross-compilation involves building a binary and attempting to run it; if it runs, you're not cross-compiling, if it doesn't, you are (or your compiler's broken). autoconf scripts can usually be fixed in this case by explicitly specifying the build and host architectures, but sometimes you'll have to disable binfmt_misc temporarily...
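For the record, entries are registered by writing a one-line string of the form :name:type:offset:magic:mask:interpreter:flags to the register file. A hypothetical configuration sketch (requires root and a mounted binfmt_misc):

```shell
# Register MZ binaries to run under /usr/bin/mono (illustrative only)
echo ':mono-demo:M::MZ::/usr/bin/mono:' > /proc/sys/fs/binfmt_misc/register
# remove it again by writing -1 to its entry:
echo -1 > /proc/sys/fs/binfmt_misc/mono-demo
```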
How is Mono magical?
1,368,521,572,000
I have detached a process from my terminal, like this: $ process & That terminal is now long closed, but process is still running, and I want to send some commands to that process's stdin. Is that possible?
Yes, it is. First, create a pipe: mkfifo /tmp/fifo. Use gdb to attach to the process: gdb -p PID Then close stdin: call close (0); and open it again: call open ("/tmp/fifo", 0600) Finally, write away (from a different terminal, as gdb will probably hang): echo blah > /tmp/fifo
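The fifo part works independently of gdb and is worth understanding on its own: a reader blocked on the pipe receives whatever you later write into it. A self-contained sketch:

```shell
# A background reader stands in for the process's re-opened stdin
fifo=$(mktemp -u); out=$(mktemp)
mkfifo "$fifo"
cat "$fifo" > "$out" &   # blocks until a writer appears
echo blah > "$fifo"      # what you'd type from the other terminal
wait
cat "$out"               # prints blah
```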
How do I attach a terminal to a detached process?
1,368,521,572,000
Having migrated to Linux from Windows, I would like to find alternative software to WinMerge or rather learn command line tools to compare and sync two folders on Linux. I would be grateful if you could tell me how to do the following tasks on the command line... (I have studied diff and rsync, but I still need some help.) We have two folders: "/home/user/A" and "/home/user/B" Folder A is the place where regular files and folders are saved and folder B is a backup folder that serves as a complete mirror of folder A. (Nothing is directly saved or modified by the user in folder B.) My questions are: How to list files that exist only in folder B? (E.g. the ones deleted from folder A since the last synchronization.) How to copy files that exist only in folder B back into folder A? How to list files that exist in both folders but have different timestamps or sizes? (The ones that have been modified in folder A since the last synchronization. I would like to avoid using checksums, because there are tens of thousands of files and it'd make the process too slow.) How to make an exact copy of folder A into folder B? I mean, copy everything from folder A into folder B that exists only in folder A and delete everything from folder B that exists only in folder B, but without touching the files that are the same in both folders.
This puts folder A into folder B: rsync -avu --delete "/home/user/A" "/home/user/B" If you want the contents of folders A and B to be the same, put /home/user/A/ (with the slash) as the source. This takes not the folder A but all of its content and puts it into folder B. Like this: rsync -avu --delete "/home/user/A/" "/home/user/B" -a archive mode; equals -rlptgoD (no -H, -A, -X) -v run verbosely -u only copy files with a newer modification time (or size difference if the times are equal) --delete delete the files in target folder that do not exist in the source Manpage: https://download.samba.org/pub/rsync/rsync.html
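For question 1 in particular, a plain diff already does the job. Here is a minimal, self-contained sketch in a throwaway temp directory (no rsync required); rsync -n -i (dry run, itemized changes) gives similar information for the other questions:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/A" "$tmp/B"
echo shared > "$tmp/A/common"
cp "$tmp/A/common" "$tmp/B/common"
echo old > "$tmp/B/only-in-B"       # pretend this was deleted from A
diff -rq "$tmp/A" "$tmp/B" || true  # diff exits non-zero when trees differ
# prints: Only in .../B: only-in-B
rm -rf "$tmp"
```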
How to sync two folders with command line tools?
1,368,521,572,000
I'm aware that shared objects under Linux use "so numbers", namely that different versions of a shared object are given different extensions, for example: example.so.1 example.so.2 I understand the idea is to have two distinct files such that two versions of a library can exist on a system (as opposed to "DLL Hell" on Windows). I'd like to know how this works in practice? Often, I see that example.so is in fact a symbolic link to example.so.2 where .2 is the latest version. How then does an application depending on an older version of example.so identify it correctly? Are there any rules as to what numbers one must use? Or is this simply convention? Is it the case that, unlike in Windows where software binaries are transferred between systems, if a system has a newer version of a shared object it is linked to the older version automatically when compiling from source? I suspect this is related to ldconfig but I'm not sure how.
Binaries themselves know which version of a shared library they depend on, and request it specifically. You can use ldd to show the dependencies; mine for ls are:

$ ldd /bin/ls
linux-gate.so.1 => (0xb784e000)
librt.so.1 => /lib/librt.so.1 (0xb782c000)
libacl.so.1 => /lib/libacl.so.1 (0xb7824000)
libc.so.6 => /lib/libc.so.6 (0xb76dc000)
libpthread.so.0 => /lib/libpthread.so.0 (0xb76c3000)
/lib/ld-linux.so.2 (0xb784f000)
libattr.so.1 => /lib/libattr.so.1 (0xb76bd000)

As you can see, it points to e.g. libpthread.so.0, not just libpthread.so. The reason for the symbolic link is for the linker. When you want to link against libpthread.so directly, you give gcc the flag -lpthread, and it adds on the lib prefix and .so suffix automatically. You can't tell it to add on the .so.0 suffix, so the symbolic link points to the newest version of the lib to facilitate that.
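To see the mechanism end to end, here is a sketch that builds a hypothetical libdemo with an soname and creates both symlinks by hand (the library name is made up for the example; gcc and binutils assumed):

```shell
cd "$(mktemp -d)"
printf 'int answer(void) { return 42; }\n' > demo.c
# embed the soname the runtime linker will look for:
gcc -shared -fPIC -Wl,-soname,libdemo.so.1 -o libdemo.so.1.0 demo.c
ln -s libdemo.so.1.0 libdemo.so.1   # runtime link, matches the soname
ln -s libdemo.so.1 libdemo.so       # dev link, found by 'gcc -ldemo'
readelf -d libdemo.so.1.0 | grep SONAME
# -> ... Library soname: [libdemo.so.1]
```

A binary linked with -ldemo records libdemo.so.1 (the soname) as its dependency, which is exactly what ldd then shows.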
How do SO (shared object) numbers work?
1,368,521,572,000
Possible Duplicate: What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'? I always see pts and tty when I use the who command, but I never understand how they are different. Can somebody please explain this to me?
A tty is a native terminal device, the backend is either hardware or kernel emulated. A pty (pseudo terminal device) is a terminal device which is emulated by another program (example: xterm, screen, or ssh are such programs). A pts is the slave part of a pty. (More info can be found in man pty.) Short summary: A pty is created by a process through posix_openpt() (which usually opens the special device /dev/ptmx), and is constituted by a pair of bidirectional character devices: The master part, which is the file descriptor obtained by this process through this call, is used to emulate a terminal. After some initialization, the second part can be unlocked with unlockpt(), and the master is used to receive or send characters to this second part (slave). The slave part, which is anchored in the filesystem as /dev/pts/x (the real name can be obtained by the master through ptsname() ) behaves like a native terminal device (/dev/ttyx). In most cases, a shell is started that uses it as a controlling terminal.
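You can check which kind the current shell is attached to:

```shell
tty || echo "stdin is not a terminal here"
# prints e.g. /dev/tty1 on a text console, /dev/pts/0 under xterm/ssh/screen
ls -l /dev/pts/ 2>/dev/null || true   # slave ends of the currently open ptys
```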
Difference between pts and tty
1,368,521,572,000
I want to see the list of processes created by a specific user or group of users in Linux. Can I do it using the ps command, or is there any other command to achieve this?
To view only the processes owned by a specific user, use the following command:

top -U [username]

Replace [username] with the required username.

If you want to use ps, then:

ps -u [username]

OR

ps -ef | grep <username>

OR, for the extended listing:

ps -efl | grep <username>

Check out the man ps page for options. Another alternative is to use pstree, which prints the process tree of the user:

pstree <username or pid>
How to see process created by specific user in Unix/linux
1,368,521,572,000
What is this folder: /run/user/1000 on my Fedora system and what does it do? ~ $ df -h Filesystem Size Used Avail Use% Mounted on tmpfs 1.2G 20K 1.2G 1% /run/user/1000 EDIT: 7 June 2019. My two answers don't agree on which directory previously served the purpose of this place: Patrick: Prior to systemd, these applications typically stored their files in /tmp. And again here: /tmp was the only location specified by the FHS which is local, and writable by all users. Braiam: The purposes of this directory were once served by /var/run. In general, programs may continue to use /var/run to fulfill the requirements set out for /run for the purposes of backwards compatibility. And again here: Programs which have migrated to use /run should cease their usage of /var/run, except as noted in the section on /var/run. So which one is the predecessor of /run/user/1000? And why does neither answer mention what the other says about the directory used before /run/user?
/run/user/$uid is created by pam_systemd and used for storing files used by running processes for that user. These might be things such as your keyring daemon, pulseaudio, etc. Prior to systemd, these applications typically stored their files in /tmp. They couldn't use a location in /home/$user as home directories are often mounted over network filesystems, and these files should not be shared among hosts. /tmp was the only location specified by the FHS which is local, and writable by all users. However storing all these files in /tmp is problematic as /tmp is writable by everyone, and while you can change the ownership & mode on the files being created, it's more difficult to work with. So systemd came along and created /run/user/$uid. This directory is local to the system and only accessible by the target user. So applications looking to store their files locally no longer have to worry about access control. It also keeps things nice and organized. When a user logs out, and no active sessions remain, pam_systemd will wipe the /run/user/$uid directory out. With various files scattered around /tmp, you couldn't do this.
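You can inspect your own session's runtime directory; it may be absent in containers or in sessions not managed by pam_systemd:

```shell
echo "XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-<unset>}"
ls -ld "/run/user/$(id -u)" 2>/dev/null || echo "no per-user runtime dir here"
# on a normal desktop login this shows a mode-0700 tmpfs owned by you
```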
What is this folder /run/user/1000?
1,368,521,572,000
Possible Duplicate: How to tell what type of filesystem you’re on? Find filesystem of an unmounted partition from a script How can I quickly check the filesystem of the partition? Can I do that by using df?
Yes, according to man df you can: -T, --print-type print file system type Another way is to use the mount command. Without parameters it lists the currently mounted devices, including their file systems. In case you need to find out only one certain file system, it is easier to use the stat command's -f option instead of parsing out one value from the above-mentioned commands' output.
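For example, to print just the type of the filesystem holding / (GNU stat and df assumed):

```shell
stat -f -c %T /                      # e.g. ext2/ext3, btrfs, tmpfs, overlayfs
df -T / | awk 'NR == 2 {print $2}'   # same information via df
```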
How to show the filesystem type via the terminal? [duplicate]
1,368,521,572,000
How to show top five CPU consuming processes with ps?
Why use ps when you can do it easily with the top command? If you must use ps, try this: ps aux | sort -nrk 3,3 | head -n 5 If you want something that's truly 'top'esq with constant updates, use watch watch "ps aux | sort -nrk 3,3 | head -n 5"
Show top five CPU consuming processes with `ps`
1,368,521,572,000
This answer explains the actions taken by the kernel when an OOM situation is encountered based on the value of sysctl vm.overcommit_memory. When overcommit_memory is set to 0 or 1, overcommit is enabled, and programs are allowed to allocate more memory than is really available. Now what happens when we run out of memory in this situation? How does the OOM killer decide which process to kill first?
If memory is exhaustively used up by processes, to an extent that can threaten the stability of the system, then the OOM killer comes into the picture. NOTE: It is the task of the OOM Killer to continue killing processes until enough memory is freed for the smooth functioning of the rest of the processes that the kernel is attempting to run. The OOM Killer has to select the best process(es) to kill. Best here refers to that process which will free up the maximum memory upon killing and is also the least important to the system. The primary goal is to kill the smallest number of processes, minimizing the damage done while maximizing the amount of memory freed. To facilitate this, the kernel maintains an oom_score for each of the processes. You can see the oom_score of each of the processes in the /proc filesystem under the pid directory. $ cat /proc/10292/oom_score The higher the value of oom_score of any process, the higher is its likelihood of getting killed by the OOM Killer in an out-of-memory situation. How is the OOM_Score calculated? In David's patch set, the old badness() heuristics are almost entirely gone. Instead, the calculation turns into a simple question of what percentage of the available memory is being used by the process. If the system as a whole is short of memory, then "available memory" is the sum of all RAM and swap space available to the system. If instead, the OOM situation is caused by exhausting the memory allowed to a given cpuset/control group, then "available memory" is the total amount allocated to that control group. A similar calculation is made if limits imposed by a memory policy have been exceeded. In each case, the memory use of the process is deemed to be the sum of its resident set (the number of RAM pages it is using) and its swap usage.
This calculation produces a percent-times-ten number as a result; a process which is using every byte of the memory available to it will have a score of 1000, while a process using no memory at all will get a score of zero. There are very few heuristic tweaks to this score, but the code does still subtract a small amount (30) from the score of root-owned processes on the notion that they are slightly more valuable than user-owned processes. One other tweak which is applied is to add the value stored in each process's oom_score_adj variable, which can be adjusted via /proc. This knob allows the adjustment of each process's attractiveness to the OOM killer in user space; setting it to -1000 will disable OOM kills entirely, while setting to +1000 is the equivalent of painting a large target on the associated process. References http://www.queryhome.com/15491/whats-happening-kernel-starting-killer-choose-which-process https://serverfault.com/a/571326
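You can watch both knobs on any Linux system; raising your own oom_score_adj never needs privileges (only lowering it does):

```shell
cat /proc/self/oom_score       # current score of the reading process
cat /proc/self/oom_score_adj   # user-tunable adjustment, default 0
# /proc/self below refers to the shell (the redirection is done by it);
# child processes such as cat inherit the adjusted value:
{ echo 500 > /proc/self/oom_score_adj; } 2>/dev/null \
    || echo "write not permitted here"
cat /proc/self/oom_score_adj   # 500 if the write succeeded
```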
How does the OOM killer decide which process to kill first?
1,368,521,572,000
Is there a command that will list all partitions along with their labels? sudo fdisk -l and sudo parted -l don't show labels by default. EDIT: (as per comment below) I'm talking about ext2 labels - those that you can set in gparted upon partitioning. EDIT2: The intent is to list unmounted partitions (so I know which one to mount).
With udev, you can use ls -l /dev/disk/by-label to show the symlinks by label to at least some partition device nodes. Not sure what the logic of inclusion is; possibly the existence of a label.
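If lsblk is installed (availability varies by distribution), it also prints labels without relying on the udev symlinks:

```shell
lsblk -o NAME,LABEL,FSTYPE,MOUNTPOINT 2>/dev/null \
    || ls -l /dev/disk/by-label 2>/dev/null \
    || echo "no label information available here"
```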
List partition labels from the command line
1,368,521,572,000
I accidentally created over 1000 screens. How do I kill them all with one command? (Or a few)
You can use : pkill screen Or killall screen In OSX the process is called SCREEN in all caps. So, use: pkill SCREEN Or killall SCREEN
How do I kill all screens?
1,368,521,572,000
I read this up on this website and it doesn't make sense. http://rcsg-gsir.imsb-dsgi.nrc-cnrc.gc.ca/documents/basic/node32.html When UNIX was first written, /bin and /usr/bin physically resided on two different disks: /bin being on a smaller faster (more expensive) disk, and /usr/bin on a bigger slower disk. Now, /bin is a symbolic link to /usr/bin: they are essentially the same directory. But when you ls the /bin folder, it has far less content than the /usr/bin folder (at least on my running system). So can someone please explain the difference?
What? No, /bin is not a symlink to /usr/bin on any FHS-compliant system. Note that there are still popular Unices and Linuxes that ignore this - for example, /bin and /sbin are symlinked to /usr/bin on Arch Linux (the reasoning being that you don't need /bin for rescue/single-user mode, since you'd just boot a live CD).

/bin contains commands that may be used by both the system administrator and by users, but which are required when no other filesystems are mounted (e.g. in single user mode). It may also contain commands which are used indirectly by scripts.

/usr/bin/ is the primary directory of executable commands on the system.

Essentially, /bin contains executables which are required by the system for emergency repairs, booting, and single user mode. /usr/bin contains any binaries that aren't required.

I will note that they can be on separate disks/partitions: /bin must be on the same disk as /, while /usr/bin can be on another disk - although note that this configuration has been kind of broken for a while (this is why e.g. systemd warns about this configuration on boot).

For full correctness, some unices may ignore FHS, as I believe it is only a Linux standard; I'm not aware that it has yet been included in SUS, POSIX or any other UNIX standard, though it should be IMHO. It is a part of the LSB standard though.
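You can check which layout your own system uses:

```shell
ls -ld /bin
readlink /bin || echo "/bin is a real directory on this system"
# on usr-merged systems (Arch, modern Fedora/Debian) this prints usr/bin
```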
Difference between /bin and /usr/bin
1,368,521,572,000
I just want to know the difference between reboot, init 6 and shutdown -r now, and which is the safest and the best?
There is no difference in them. Internally they do exactly the same thing: reboot uses the shutdown command (with the -r switch). The shutdown command kills all the running processes, unmounts all the file systems and finally tells the kernel to issue the ACPI power command. The source can be found here. In older distros the reboot command was forcing the processes to exit by issuing the SIGKILL signal (still found in sources, can be invoked with the -f option); in most recent distros it defaults to the more graceful and init-friendly init 1 -> shutdown -r. This ensures that daemons clean up themselves before shutdown. init 6 tells the init process to shut down all of the spawned processes/daemons as written in the init files (in the inverse order they started) and lastly invoke the shutdown -r now command to reboot the machine. Today there is not much difference, as both commands do exactly the same thing, and they respect the init scripts used to start services/daemons by invoking the shutdown scripts for them - except for reboot -f -r now, as stated below. There is a small explanation taken from the manpages of why reboot -f is not safe: -f, --force Force immediate halt, power-off, reboot. Don't contact the init system. Edit: Forgot to mention, in upcoming RHEL distributions you should use the new systemctl command to issue poweroff/reboot. As stated in the manpages of reboot and shutdown, they are "a legacy command available for compatibility only." and the systemctl method will be the only safe one.
What is the difference between reboot , init 6 and shutdown -r now?
1,368,521,572,000
I just formatted stuff. One disk I format as ext2. The other I want to format as ext4. I want to test how they perform. Now, how do I know the kind of file system in a partition?
How do I tell what sort of data (what data format) is in a file? → Use the file utility. Here, you want to know the format of data in a device file, so you need to pass the -s flag to tell file not just to say that it's a device file but look at the content. Sometimes you'll need the -L flag as well, if the device file name is a symbolic link. You'll see output like this:

# file -sL /dev/sd*
/dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=63fa0104-4aab-4dc8-a50d-e2c1bf0fb188 (extents) (large files) (huge files)
/dev/sdb1: Linux rev 1.0 ext2 filesystem data, UUID=b3c82023-78e1-4ad4-b6e0-62355b272166
/dev/sdb2: Linux/i386 swap file (new style), version 1 (4K pages), size 4194303 pages, no label, UUID=3f64308c-19db-4da5-a9a0-db4d7defb80f

Given this sample output, the first disk has one partition and the second disk has two partitions. /dev/sda1 is an ext4 filesystem, /dev/sdb1 is an ext2 filesystem, and /dev/sdb2 is some swap space (about 4GB). You must run this command as root, because ordinary users may not read disk partitions directly: if needed, add sudo in front.
How do I know if a partition is ext2, ext3, or ext4?
1,368,521,572,000
The command id can be used to look up a user's uid, for example:

$ id -u ubuntu
1000

Is there a command to look up a username from a uid? I realize this can be done by looking at the /etc/passwd file, but I'm asking if there is an existing command to do this, especially if the user executing it is not root. I'm not looking for the current user's username, i.e. I am not looking for whoami or logname. This also made me wonder if on shared web hosting this is a security feature, or am I just not understanding something correctly? For examination, the /etc/passwd file from a shared web host:

root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
news:x:9:13:news:/etc/news:
uucp:x:10:14:uucp:/var/spool/uucp:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
gopher:x:13:30:gopher:/var/gopher:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:99:99:Nobody:/:/sbin/nologin
nscd:x:28:28:NSCD Daemon:/:/sbin/nologin
vcsa:x:69:69:virtual console memory owner:/dev:/sbin/nologin
pcap:x:77:77::/var/arpwatch:/sbin/nologin
rpc:x:32:32:Portmapper RPC user:/:/sbin/nologin
mailnull:x:47:47::/var/spool/mqueue:/sbin/nologin
smmsp:x:51:51::/var/spool/mqueue:/sbin/nologin
oprofile:x:16:16:Special user account to be used by OProfile:/home/oprofile:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
avahi:x:70:70:Avahi daemon:/:/sbin/nologin
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
haldaemon:x:68:68:HAL daemon:/:/sbin/nologin
xfs:x:43:43:X Font Server:/etc/X11/fs:/sbin/nologin
avahi-autoipd:x:100:104:avahi-autoipd:/var/lib/avahi-autoipd:/sbin/nologin
named:x:25:25:Named:/var/named:/sbin/nologin
mailman:x:32006:32006::/usr/local/cpanel/3rdparty/mailman/mailman:/usr/local/cpanel/bin/noshell
dovecot:x:97:97:dovecot:/usr/libexec/dovecot:/sbin/nologin
mysql:x:101:105:MySQL server:/var/lib/mysql:/bin/bash
cpaneleximfilter:x:32007:32009::/var/cpanel/userhomes/cpaneleximfilter:/usr/local/cpanel/bin/noshell
nagios:x:102:106:nagios:/var/log/nagios:/bin/sh
ntp:x:38:38::/etc/ntp:/sbin/nologin
myuser:x:1747:1744::/home/myuser:/usr/local/cpanel/bin/jailshell

And here is a sample directory listing of /tmp/:

drwx------ 3 root root 1024 Apr 16 02:09 spamd-22217-init/
drwxr-xr-x 2 665 664 1024 Apr 4 00:05 update-cache-44068ab4/
drwxr-xr-x 4 665 664 1024 Apr 17 15:17 update-extraction-44068ab4/
-rw-rw-r-- 1 665 664 43801 Apr 17 15:17 variable.zip
-rw-r--r-- 1 684 683 4396 Apr 17 07:01 wsdl-13fb96428c0685474db6b425a1d9baec

We can see root is the owner of some files, and root is also showing up in /etc/passwd; however, the other users/groups all show up as numbers.
ls already performs that lookup. You can perform a user information lookup from the command line with getent passwd. If ls shows a user ID instead of a user name, it's because there's no user by that name. Filesystems store user IDs, not user names. If you mount a filesystem from another system, or if a file belongs to a now-deleted user, or if you passed a numerical user ID to chown, you can have a file that belongs to a user ID that doesn't have a name. On a shared host, you may have access to some files that are shared between several virtual machines, each with their user database. This is a bit weird (why share files but not the users that own them?), but it's technically possible.
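getent honors all the name-service sources configured in /etc/nsswitch.conf (LDAP, NIS, ...), not just /etc/passwd, and works without root; id can do the reverse lookup too:

```shell
getent passwd 0 | cut -d: -f1   # -> root
id -nu 0                        # -> root
# for a UID with no matching user, getent prints nothing and exits non-zero
```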
How can I look up a username by id in linux?
1,368,521,572,000
I have command line access to a Linux machine which may or may not be virtualized. I want to determine what kind of virtualization technology it runs on, if any (VMWare, VirtualBox, KVM, OpenVZ, Xen, ). This isn't a hostile environment: I'm not trying to work against a VM that is trying to disguise itself, I'm diagnosing a flaky server that I know little about. More precisely, I'm helping someone diagnose the issue, I'm not sitting at the helm. So I have to convey instructions like “copy-paste this command” and not “poke around /proc somewhere”. Ideally, it would be something like lshw: an easily-installable (if not preinstalled) command that does the poking around and prints out relevant information. What's the easiest way of determining what virtualization technology this system may be a guest of? I'd appreciate if proposals mentioned which technologies (including bare hardware) can be conclusively detected and which can be conclusively eliminated. I'm mostly interested in Linux, but if it also works for other unices that's nice.
dmidecode -s system-product-name

I have tested on VMware Workstation, VirtualBox, QEMU with KVM, standalone QEMU with Ubuntu as the guest OS. Others have added additional platforms that they're familiar with as well.

Virtualization technologies:

VMware Workstation
root@router:~# dmidecode -s system-product-name
VMware Virtual Platform

VirtualBox
root@router:~# dmidecode -s system-product-name
VirtualBox

QEMU with KVM
root@router:~# dmidecode -s system-product-name
KVM

QEMU (emulated)
root@router:~# dmidecode -s system-product-name
Bochs

Microsoft VirtualPC
root@router:~# dmidecode | egrep -i 'manufacturer|product'
Manufacturer: Microsoft Corporation
Product Name: Virtual Machine

Virtuozzo
root@router:~# dmidecode
/dev/mem: Permission denied

Xen
root@router:~# dmidecode | grep -i domU
Product Name: HVM domU

On bare metal, this returns an identification of the computer or motherboard model.

/dev/disk/by-id

If you don't have the rights to run dmidecode then you can use:

Virtualization Technology: QEMU
ls -1 /dev/disk/by-id/

Output:
[root@host-7-129 ~]# ls -1 /dev/disk/by-id/
ata-QEMU_DVD-ROM_QM00003
ata-QEMU_HARDDISK_QM00001
ata-QEMU_HARDDISK_QM00001-part1
ata-QEMU_HARDDISK_QM00002
ata-QEMU_HARDDISK_QM00002-part1
scsi-SATA_QEMU_HARDDISK_QM00001
scsi-SATA_QEMU_HARDDISK_QM00001-part1
scsi-SATA_QEMU_HARDDISK_QM00002
scsi-SATA_QEMU_HARDDISK_QM00002-part1

References: How to detect virtualization at dmo.ca
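On systemd-based systems there is also a purpose-built tool, systemd-detect-virt, which probes the same (and more) markers without needing root; it may not be installed everywhere:

```shell
if command -v systemd-detect-virt >/dev/null 2>&1; then
    systemd-detect-virt || true   # exits non-zero when it prints "none"
else
    echo "systemd-detect-virt not installed"
fi
# prints e.g. kvm, qemu, vmware, oracle (VirtualBox), xen, lxc, docker,
# or "none" on bare metal
```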
Easy way to determine the virtualization technology of a Linux machine?
1,368,521,572,000
I tried to check what my DNS resolver is and I noticed this: user@ubuntu:~$ cat /etc/resolv.conf nameserver 127.0.0.53 options edns0 I was expecting 192.168.1.1, which is my default gateway, my router. I don't understand why it points at 127.0.0.53. When I hit that ip, apache2 serves me its contents. Could someone clear this up for me? Shouldn't the file point directly at my default gateway which acts as a DNS resolver - or even better directly at my preferred DNS which is 1.1.1.1? P.S: When I capture DNS packets with wireshark on port 53 all I see is 192.168.1.1 and not 127.0.0.53, as it should be.
You are likely running systemd-resolved as a service. systemd-resolved generates two configuration files on the fly, for optional use by DNS client libraries (such as the BIND DNS client library in C libraries): /run/systemd/resolve/stub-resolv.conf tells DNS client libraries to send their queries to 127.0.0.53. This is where the systemd-resolved process listens for DNS queries, which it then forwards on. /run/systemd/resolve/resolv.conf tells DNS client libraries to send their queries to IP addresses that systemd-resolved has obtained on the fly from its configuration files and DNS server information contained in DHCP leases. Effectively, this bypasses the systemd-resolved forwarding step, at the expense of also bypassing all of systemd-resolved's logic for making complex decisions about what to actually forward to, for any given transaction. In both cases, systemd-resolved configures a search list of domain name suffixes, again derived on the fly from its configuration files and DHCP leases (which it is told about via a mechanism that is beyond the scope of this answer). /etc/resolv.conf can optionally be: a symbolic link to either of these; a symbolic link to a package-supplied static file at /usr/lib/systemd/resolv.conf, which also specifies 127.0.0.53 but no search domains calculated on the fly; some other file entirely. It's likely that you have such a symbolic link. In which case, the thing that knows about the 192.168.1.1 setting, that is (presumably) handed out in DHCP leases by the DHCP server on your LAN, is systemd-resolved, which is forwarding query traffic to it as you have observed. Your DNS client libraries, in your applications programs, are themselves only talking to systemd-resolved. 
Ironically, although it could be that you haven't captured loopback interface traffic to/from 127.0.0.53 properly, it is more likely that you aren't seeing it because systemd-resolved also (optionally) bypasses the BIND DNS Client in your C libraries and generates no such traffic to be captured. There's an NSS module provided with systemd-resolved, named nss-resolve, that is a plug-in for your C libraries. Previously, your C libraries would have used another plug-in named nss-dns which uses the BIND DNS Client to make queries using the DNS protocol to the server(s) listed in /etc/resolv.conf, applying the domain suffixes listed therein. nss-resolve gets listed ahead of nss-dns in your /etc/nsswitch.conf file, causing your C libraries to not use the BIND DNS Client, or the DNS protocol, to perform name→address lookups at all. Instead, nss-resolve speaks a non-standard and idiosyncratic protocol over the (system-wide) Desktop Bus to systemd-resolved, which again makes back end queries of 192.168.1.1 or whatever your DHCP leases and configuration files say. To intercept that you have to monitor the Desktop Bus traffic with dbus-monitor or some such tool. It's not even IP traffic, let alone IP traffic over a loopback network interface, as the Desktop Bus is reached via an AF_LOCAL socket. If you want to use a third-party resolving proxy DNS server at 1.1.1.1, or some other IP address, you have three choices: 1. Configure your DHCP server to hand that out instead of handing out 192.168.1.1. systemd-resolved will learn of that via the DHCP leases and use it. 2. Configure systemd-resolved via its own configuration mechanisms to use that instead of what it is seeing in the DHCP leases. 3. Make your own /etc/resolv.conf file, an actual regular file instead of a symbolic link, list 1.1.1.1 there and remember to turn off nss-resolve so that you go back to using nss-dns and the BIND DNS Client.
The systemd-resolved configuration files are a whole bunch of files in various directories that get combined, and how to configure them for the second choice aforementioned is beyond the scope of this answer. Read the resolved.conf(5) manual page for that.
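To check which of the cases applies on a given machine:

```shell
readlink -f /etc/resolv.conf             # where the symlink really points
grep '^nameserver' /etc/resolv.conf 2>/dev/null || true
if command -v resolvectl >/dev/null 2>&1; then
    resolvectl status 2>/dev/null | head -n 6
else
    echo "resolvectl not available here"
fi
```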
Why does /etc/resolv.conf point at 127.0.0.53?
1,368,521,572,000
How can I find the time since a Linux system was first installed, provided that nobody has tried to hide it?
sudo tune2fs -l /dev/sda1 | grep 'Filesystem created:'

This will tell you when the file system was created. Replace /dev/sda1 with your root partition (/dev/sdb1 or whatever it is); the first column of df / shows the exact partition to use.
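If you cannot run tune2fs (it needs root and only works for ext filesystems), a rough, rootless cross-check is to look at the oldest inode-change times under /etc; upgrades and edits reset them, so treat the result as a hint, not proof:

```shell
# oldest ctimes last (GNU ls):
ls -clt --time-style=+%Y-%m-%d /etc | tail -n 3
```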
How do I find how long ago a Linux system was installed?
1,368,521,572,000
How can I use ls on Linux to get a listing of files with only their name, date, and size? I don't need to see the other info such as owner or permissions Is this possible?
Try stat instead of ls. Here with the GNU implementation of stat (beware the BSDs and zsh also have a stat command but with a completely different API): stat -c "%y %s %n" -- * To output in columnar format (assuming none of the file names contain comma or newline characters): stat -c "%n,%s" -- * | column -t -s, Beware that if there's a file called - in the current working directory, GNU stat will report information about the file opened on stdin instead of for that file. If you run into a Argument list too long error, with shells where printf is builtin, you can change it to: printf '%s\0' * | xargs -0 stat -c "%y %s %n" -- Or in ksh93: command -x stat -c "%y %s %n" -- * Which will run as many invocations of stat as necessary to work around the limit on the size of the arguments.
Linux ls to show only file name, date, and size
1,368,521,572,000
I just know that ls -t and ls -f give different sorting of files and subdirectories under a directory. What are the differences between timestamp, modification time, and created time of a file? How can I get and change these kinds of information with commands? In terms of what kind of information do people say a file is "newer" than the other? What kinds of changes to this information will not make the file different? For example, I saw someone write: By default, the rsync program only looks to see if the files are different in size and timestamp. It doesn't care which file is newer, if it is different, it gets overwritten. You can pass the '--update' flag to rsync which will cause it to skip files on the destination if they are newer than the file on the source, but only so long as they are the same type of file. What this means is that if, for example, the source file is a regular file and the destination is a symlink, the destination file will be overwritten, regardless of timestamp. On a side note, does the file type here mean only regular file and symlink, not the type such as pdf, jpg, htm, txt etc?
There are 3 kind of "timestamps": Access - the last time the file was read Modify - the last time the file was modified (content has been modified) Change - the last time meta data of the file was changed (e.g. permissions) To display this information, you can use stat which is part of the coreutils. stat will show you also some more information like the device, inodes, links, etc. Remember that this sort of information depends highly on the filesystem and mount options. For example if you mount a partition with the noatime option, no access information will be written. A utility to change the timestamps would be touch. There are some arguments to decide which timestamp to change (e.g. -a for access time, -m for modification time) and to influence the parsing of a new given timestamp. See man touch for more details. touch can become handy in combination with cp -u ("copy only when the SOURCE file is newer than the destination file or when the destination file is missing") or for the creation of empty marker files.
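A short demonstration of touch and stat together (GNU coreutils assumed):

```shell
f=$(mktemp)
stat -c 'access: %x%nmodify: %y%nchange: %z' "$f"
TZ=UTC0 touch -m -d '2001-01-01 00:00:00' "$f"   # rewrite only the mtime
stat -c %Y "$f"   # mtime as epoch seconds -> 978307200
rm -f "$f"
```

Note that setting the mtime with touch still updates the change time, since the inode's metadata was itself just modified.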
timestamp, modification time, and created time of a file
1,368,521,572,000
Under certain conditions, the Linux kernel may become tainted. For example, loading a proprietary video driver into the kernel taints the kernel. This condition may be visible in system logs, kernel error messages (oops and panics), and through tools such as lsmod, and remains until the system is rebooted. What does this mean? Does it affect my ability to use the system, and how might it affect my support options?
A tainted kernel is one that is in an unsupported state because it cannot be guaranteed to function correctly. Most kernel developers will ignore bug reports involving tainted kernels, and community members may ask that you correct the tainting condition before they can proceed with diagnosing problems related to the kernel. In addition, some debugging functionality and API calls may be disabled when the kernel is tainted. The taint state is indicated by a series of flags which represent the various reasons a kernel cannot be trusted to work properly. The most common reason for the kernel to become tainted is loading a proprietary graphics driver from NVIDIA or AMD, in which case it is generally safe to ignore the condition. However, some scenarios that cause the kernel to become tainted may be indicative of more serious problems such as failing hardware. It is a good idea to examine system logs and the specific taint flags set to determine the underlying cause of the issue. This feature is intended to identify conditions which may make it difficult to properly troubleshoot a kernel problem. For example, a proprietary driver can cause problems that cannot be debugged reliably because its source code is not available and its effects cannot be determined. Likewise, if a serious kernel or hardware error had previously occurred, the integrity of the kernel space may have been compromised, meaning that any subsequent debug messages generated by the kernel may not be reliable. Note that correcting the tainting condition alone does not remove the taint state because doing so does not change the fact that the kernel can no longer be relied on to work correctly or produce accurate debugging information. The system must be restarted to clear the taint flags. More information is available in the Linux kernel documentation, including what each taint flag means and how to troubleshoot a tainted kernel prior to reporting bugs. 
A partial list of conditions that can result in the kernel being tainted follows, each with their own flags. Note that some Linux vendors, such as SUSE, add additional taint flags to indicate conditions such as loading a module that is supported by a third party rather than directly by the vendor. Loading a proprietary (or non-GPL-compatible) kernel module. As noted above, this is the most common reason for the kernel to become tainted. The use of staging drivers, which are part of the kernel source code but are experimental and not fully tested. The use of out-of-tree modules that are not included with the Linux kernel source code. Forcibly loading or unloading modules. This can happen if one is trying to use a module that is not built for the current version of the kernel. (The Linux kernel module ABI is not stable across versions, or even differently-configured builds of the same version.) Running a kernel on certain hardware configurations that are specifically not supported, such as an SMP (multiprocessor) kernel on early AMD Athlon processors not supporting SMP operation. Overriding the ACPI DSDT in the kernel. This is sometimes needed to correct for firmware power-management bugs; see this Arch Linux wiki article for details. Certain critical error conditions, such as machine check exceptions and kernel oopses. Certain serious bugs in the BIOS, UEFI, or other system firmware which the kernel must work around.
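The kernel exposes the current taint state as a bitmask in /proc/sys/kernel/tainted, so the flags can be decoded with nothing but shell arithmetic. A minimal sketch (the bit meanings follow the kernel's tainted-kernels documentation; only the common ones are spelled out here):

```shell
#!/bin/sh
# decode_taint: print the set bits of a kernel taint value.
decode_taint() {
    taint=$1
    bit=0
    while [ "$taint" -gt 0 ]; do
        if [ $((taint & 1)) -eq 1 ]; then
            case $bit in
                0)  desc="proprietary module loaded" ;;
                1)  desc="module force loaded" ;;
                4)  desc="machine check exception occurred" ;;
                7)  desc="kernel died (oops or BUG)" ;;
                9)  desc="kernel issued a warning" ;;
                10) desc="staging driver loaded" ;;
                12) desc="out-of-tree module loaded" ;;
                13) desc="unsigned module loaded" ;;
                *)  desc="taint bit $bit set" ;;
            esac
            echo "bit $bit: $desc"
        fi
        taint=$((taint >> 1))
        bit=$((bit + 1))
    done
}

# Decode the live value (0 means the kernel is not tainted).
decode_taint "$(cat /proc/sys/kernel/tainted 2>/dev/null || echo 0)"
```

Running it on a machine with the NVIDIA driver loaded typically reports bit 0; no output at all means the kernel is not tainted.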
What is a tainted Linux kernel?
1,368,521,572,000
How can I pick which kernel GRUB 2 should load by default? I recently installed a Linux real-time kernel and now it loads by default. I'd like to load the regular one by default. So far I only managed to pick the default OS... and for some reason the /boot/grub.cfg already assumes that I want to load the real-time kernel and put it into the generic Linux menu entry (in my case Arch Linux).
I think most distributions have moved additional kernels into the advanced options sub menu at this point, as TomTom found was the case with his Arch. I didn't want to alter my top level menu structure in order to select a previous kernel as the default. I found the answer here. To summarize: Find the $menuentry_id_option for the submenu: $ grep submenu /boot/grub/grub.cfg submenu 'Advanced options for Debian GNU/Linux' $menuentry_id_option 'gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { Find the $menuentry_id_option for the menu entry for the kernel you want to use: $ grep gnulinux /boot/grub/grub.cfg menuentry 'Debian GNU/Linux' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { submenu 'Advanced options for Debian GNU/Linux' $menuentry_id_option 'gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-rt-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-rt-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-rt-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-rt-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.17.0-0.bpo.1-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 
'gnulinux-4.17.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.17.0-0.bpo.1-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.17.0-0.bpo.1-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.9.0-8-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.9.0-8-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.9.0-8-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.9.0-8-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { Comment out your current default grub in /etc/default/grub and replace it with the sub-menu's $menuentry_id_option from step one, and the selected kernel's $menuentry_id_option from step two separated by >. In my case the modified GRUB_DEFAULT is: #GRUB_DEFAULT=0 GRUB_DEFAULT="gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc>gnulinux-4.18.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc" Update grub to make the changes. For Debian this is done like so: $ sudo update-grub Done. Now when you boot, the advanced menu should have an asterisk and you should boot into the selected kernel. You can confirm this with uname. $ uname -a Linux NAME 4.18.0-0.bpo.1-amd64 #1 SMP Debian 4.18.0-0 (2018-09-13) x86_64 GNU/Linux Changing this back to the most recent kernel is as simple as commenting out the new line and uncommenting #GRUB_DEFAULT=0: GRUB_DEFAULT=0 #GRUB_DEFAULT="gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc>gnulinux-4.18.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc" then rerunning update-grub. Specifying IDs for all the entries from the top level menu is mandatory. The format for setting the default boot entry can be found in the documentation.
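The two grep steps above can be folded into a small helper that prints ready-to-paste GRUB_DEFAULT values. A sketch, assuming the grub.cfg layout shown above (one top-level submenu holding the per-kernel entries, installed at the usual Debian/Arch path):

```shell
#!/bin/sh
# grub_default_ids: print "submenu_id>entry_id" strings ready to paste
# into GRUB_DEFAULT in /etc/default/grub.
grub_default_ids() {
    cfg=$1
    # The quoted ids are the 2nd and 4th '-delimited fields on each line.
    sub=$(awk -F"'" '/^submenu /{print $4; exit}' "$cfg")
    awk -F"'" -v sub="$sub" '
        /^[[:space:]]+menuentry /{        # entries nested under the submenu
            print (sub != "" ? sub ">" : "") $4
        }' "$cfg"
}

# Typical use (path assumed; Debian and Arch both install it here):
[ -r /boot/grub/grub.cfg ] && grub_default_ids /boot/grub/grub.cfg || true
```

Pick the printed line for the kernel you want and paste it, quoted, into GRUB_DEFAULT before rerunning update-grub (or grub-mkconfig -o /boot/grub/grub.cfg on Arch).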
Set the default kernel in GRUB
1,368,521,572,000
I've used many variants of Linux (mostly Debian derivatives) for over a decade now. One problem that I haven't seen solved satisfactorily is the issue of horizontal tearing, or Vsync not being properly implemented. I say this because I have used 5 different distros on 4 different computers with various monitors and Nvidia/AMD/ATI/Intel graphics cards; every time, there has been an issue with video tearing with even slight motion. This is a big problem, especially since even Windows XP doesn't have these issues on modern hardware. If anyone is going to use Linux for anything, why would they want constant defects to show up when doing anything non-CLI? I'm guessing that either few developers know about this problem or care enough to fix it. I've tried just about every compositor out there, and usually the best they can do is minimize the issue but not eliminate it. Shouldn't it be as simple as synchronizing with the refresh rate of the monitor? Is there some politics among the OSS community that's preventing anyone from committing code that fixes this? Every time I've asked for help on this issue in the past, it either gets treated as an edge case (which I find difficult to believe it is, given the amount of times I've replicated the problem) or I get potential solutions that at most minimize the tearing.
This is all due to the fact that the X server is outdated and ill-suited to today's graphics hardware; basically all the direct video card communication is done as an extension ("patch") over the ancient bloated core. The X server provides no builtin means of synchronization between the client rendering a window and the screen displaying that window, so the content changes in the middle of rendering. This is one of the well-known issues of the X server (it has many; the entire model of what the server does is outdated - event handling in subwindows, metadata about windows, graphical primitives for direct drawing...). Widget toolkits mostly want to gloss over all this, but tearing is still a problem because there is no mechanism to handle it. Additional problems arise when you have multiple cards that require different drivers, and on top of all this, the OpenGL library has a hard-wired dependency on Xlib, so you can't really use it independently without going through X. Wayland, which is somewhat unenthusiastically trying to replace X, supports pedantic vsync synchronization in its core, and is advertised to have every frame exactly perfect. If you quickly google "wayland video tearing" you'll find more information on everything.
Why is video tearing such a problem in Linux?
1,368,521,572,000
If I have a large file and need to split it into 100 megabyte chunks I will do split -b 100m myImage.iso That usually gives me something like xaa xab xac xad And to get them back together I have been using cat x* > myImage.iso Seems like there should be a more efficient way than reading through each line of code in a group of files with cat and redirecting the output to a new file. Like a way of just opening two files, removing the EOF marker from the first one, and connecting them - without having to go through all the contents. Windows/DOS has a copy command for binary files. The help mentions that this command was designed to be able to combine multiple files. It works with this syntax: (/b is for binary mode) copy /b file1 + file2 + file3 outputfile Is there something similar or a better way to join large files on Linux than cat? Update It seems that cat is in fact the right way and best way to join files. Glad to know I was using the right command all along :) Thanks everyone for your feedback.
That's just what cat ("concatenate") was made for. Since it is one of the oldest Unix tools, I think it's very unlikely that any other tool does that faster/better. And it's not piping - it's only redirecting output.
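If you want to convince yourself that cat reassembles the chunks byte-for-byte, a quick round-trip with a throwaway file settles it (file names and sizes here are arbitrary):

```shell
#!/bin/sh
# Round-trip a file through split/cat and verify nothing changed.
set -e
work=$(mktemp -d)
trap 'rm -rf "$work"' EXIT

# Make a 1 MiB file of random bytes to stand in for myImage.iso.
head -c 1048576 /dev/urandom > "$work/image.iso"

# Split into 256 KiB chunks (xaa, xab, xac, xad), then join them back.
( cd "$work" && split -b 256k image.iso && cat x?? > rejoined.iso )

# cmp exits non-zero if the files differ by even one byte.
cmp "$work/image.iso" "$work/rejoined.iso" && echo "files are identical"
```

The glob expands in collation order (xaa, xab, ...), which is exactly the order split wrote the pieces in, so no bookkeeping is needed.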
What's the best way to join files again after splitting them?
1,368,521,572,000
I'm having some trouble uploading directories (which contain other directories a few levels deep) by sftp. I realize I could work around this by gzipping, but I don't see why that's necessary. Anyway, I try sftp> put bin/ Uploading bin/ to /home/earlz/blah/bin bin/ is not a regular file sftp> put -r bin/ Uploading bin/ to /home/earlz/blah/bin Couldn't canonicalise: No such file or directory Unable to canonicalise path "/home/earlz/blah/bin" I think the last error message is completely stupid. So the directory doesn't exist? Why not create the directory? Is there any way around this issue with sftp, or should I just use scp?
CORRECTED: I initially claimed wrongly that OpenSSH did not support put -r. It does, but it does it in a very strange way. It seems to expect the destination directory to already exist, with the same name as the source directory. sftp> put -r source Uploading source/ to /home/myself/source Couldn't canonicalize: No such file or directory etc. sftp> mkdir source sftp> put -r source Uploading source/ to /home/myself/source Entering source/ source/file1 source/file2 What's especially strange is that this even applies if you give a different name for the destination: sftp> put -r source dest Uploading source/ to /home/myself/dest Couldn't canonicalize: ... sftp> mkdir dest sftp> put -r source dest Uploading source/ to /home/myself/dest/source Couldn't canonicalize: ... sftp> mkdir dest/source sftp> put -r source dest Uploading source/ to /home/myself/dest/source Entering source/ source/file1 source/file2 For a better-implemented recursive put, you could use the PuTTY psftp command line tool instead. It's in the putty-tools package under Debian (and most likely Ubuntu). Alternately, Filezilla will do what you want, if you want to use a GUI.
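If you need this non-interactively, the same workaround can be scripted with sftp's batch mode; a sketch that writes the batch file (host and paths are made up):

```shell
#!/bin/sh
# Work around the quirk non-interactively: a batch file that creates the
# remote directory first, then recurses into it.
cat > putdirs.batch <<'EOF'
-mkdir bin
put -r bin
bye
EOF
echo "batch file written; run it with: sftp -b putdirs.batch user@host"
```

sftp -b aborts on the first failing command, so the leading dash on mkdir keeps an already-existing remote directory from killing the transfer.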
Uploading directories with sftp?
1,368,521,572,000
As part of doing some cold cache timings, I'm trying to free the OS cache. The kernel documentation (retrieved January 2019) says: drop_caches Writing to this will cause the kernel to drop clean caches, as well as reclaimable slab objects like dentries and inodes. Once dropped, their memory becomes free. To free pagecache: echo 1 > /proc/sys/vm/drop_caches To free reclaimable slab objects (includes dentries and inodes): echo 2 > /proc/sys/vm/drop_caches To free slab objects and pagecache: echo 3 > /proc/sys/vm/drop_caches This is a non-destructive operation and will not free any dirty objects. To increase the number of objects freed by this operation, the user may run `sync' prior to writing to /proc/sys/vm/drop_caches. This will minimize the number of dirty objects on the system and create more candidates to be dropped. This file is not a means to control the growth of the various kernel caches (inodes, dentries, pagecache, etc...) These objects are automatically reclaimed by the kernel when memory is needed elsewhere on the system. Use of this file can cause performance problems. Since it discards cached objects, it may cost a significant amount of I/O and CPU to recreate the dropped objects, especially if they were under heavy use. Because of this, use outside of a testing or debugging environment is not recommended. You may see informational messages in your kernel log when this file is used: cat (1234): drop_caches: 3 These are informational only. They do not mean that anything is wrong with your system. To disable them, echo 4 (bit 3) into drop_caches. I'm a bit sketchy about the details. Running echo 3 > /proc/sys/vm/drop_caches frees pagecache, dentries and inodes. Ok. So, if I want the system to start caching normally again, do I need to reset it to 0 first? My system has the value currently set to 0, which I assume is the default. Or will it reset on its own? 
I see at least two possibilities here, and I'm not sure which one is true: echo 3 > /proc/sys/vm/drop_caches frees pagecache, dentries and inodes. The system then immediately starts caching again. I'm not sure what I would expect the value in /proc/sys/vm/drop_caches to do if this is the case. Go back to 0 almost immediately? If /proc/sys/vm/drop_caches is set to 3, the system does not do any memory caching till it is reset to 0. Which case is true?
It isn't sticky - you just write to the file to make it drop the caches and then it immediately starts caching again. Basically when you write to that file you aren't really changing a setting, you are issuing a command to the kernel. The kernel acts on that command (by dropping the caches) then carries on as before.
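A small experiment makes the one-shot behaviour visible: read the page-cache size from /proc/meminfo, drop the caches, and read it again. Writing to drop_caches needs root, so this sketch degrades to read-only when it isn't (or when, say, a container mounts /proc/sys read-only):

```shell
#!/bin/sh
# Watch the page cache shrink when caches are dropped.
# The write needs root and a writable /proc/sys, so degrade gracefully.
cached_kb() { awk '/^Cached:/{print $2}' /proc/meminfo; }

echo "cached before: $(cached_kb) kB"

sync    # flush dirty pages first, so more of the cache is droppable
if echo 3 > /proc/sys/vm/drop_caches 2>/dev/null; then
    echo "cached after:  $(cached_kb) kB"
else
    echo "(couldn't write drop_caches -- not root, or /proc/sys is read-only)"
fi
```

If you run the "before" line again a moment after dropping, you will usually see the number climbing already - the kernel starts caching again immediately.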
Setting /proc/sys/vm/drop_caches to clear cache
1,368,521,572,000
How to change the system date in Linux ? I want to change: Only Year Only Month Only Date Any combination of above three
Use date -s: date -s '2014-12-25 12:34:56' Run that as root or under sudo. Changing only one of the year/month/day is more of a challenge and will involve repeating bits of the current date. There are also GUI date tools built in to the major desktop environments, usually accessed through the clock. To change only part of the time, you can use command substitution in the date string: date -s "2014-12-25 $(date +%H:%M:%S)" will change the date, but keep the time. See man date for formatting details to construct other combinations: the individual components are %Y, %m, %d, %H, %M, and %S.
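The same command-substitution trick works for any single component. A sketch that swaps in only the year, using date -d (a GNU extension) to sanity-check the string before anything touches the clock; 2016 is chosen here so the string still parses even if today happens to be Feb 29:

```shell
#!/bin/sh
# Build a "same day and time, different year" string, without setting anything.
new=$(date +"2016-%m-%d %H:%M:%S")    # only the year is replaced
echo "would set clock to: $new"

# date -d parses a string without changing the clock -- a handy dry run.
if date -d "$new" +%s > /dev/null 2>&1; then
    echo "string parses OK"
fi

# The actual change needs root:  sudo date -s "$new"
```

Swap the literal for %d, %m, %H, etc. to change any other single field while keeping the rest.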
Linux: set date through command line
1,368,521,572,000
ssh-add alone is not working: Error connecting to agent: No such file or directory How should I use that tool?
You need to initialize ssh-agent first. You can do this in multiple ways. Either by starting a new shell ssh-agent bash or by evaluating the script returned by ssh-agent in your current shell. eval "$(ssh-agent)" I suggest using the second method, because you keep all your history and variables.
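The whole lifecycle in one place - start, add, list, kill. This sketch assumes OpenSSH's ssh-agent is on the PATH and that a key exists at one of ssh-add's default paths (~/.ssh/id_rsa and friends); without one, ssh-add just reports there is nothing to add:

```shell
#!/bin/sh
# Start an agent for this shell, load the default key, then clean up.
# Assumes OpenSSH is installed; the key paths are ssh-add's defaults.
eval "$(ssh-agent -s)" > /dev/null     # exports SSH_AUTH_SOCK and SSH_AGENT_PID
echo "agent pid: ${SSH_AGENT_PID:-none}"

ssh-add 2>/dev/null || echo "(no default key found -- generate one with ssh-keygen)"
ssh-add -l 2>/dev/null || true         # show whatever got loaded

eval "$(ssh-agent -k)" > /dev/null     # kill the agent when you are done
```

The agent started this way dies with the eval'd kill (or reboot); to have one per login session, put the first eval line in your shell profile instead.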
ssh-add returns with: "Error connecting to agent: No such file or directory"
1,368,521,572,000
I'm interested in the difference between Highmem and Lowmem: Why is there such a differentiation? What do we gain by doing so? What features does each have?
On a 32-bit architecture, the address space range for addressing RAM is: 0x00000000 - 0xffffffff or 4'294'967'295 (4 GB). The linux kernel splits that up 3/1 (could also be 2/2, or 1/3 1) into user space (high memory) and kernel space (low memory) respectively. The user space range: 0x00000000 - 0xbfffffff Every newly spawned user process gets an address (range) inside this area. User processes are generally untrusted and therefore are forbidden to access the kernel space. Further, they are considered non-urgent; as a general rule, the kernel tries to defer the allocation of memory to those processes. The kernel space range: 0xc0000000 - 0xffffffff A kernel process gets its address (range) here. The kernel can directly access this 1 GB of addresses (well, not the full 1 GB, there are 128 MB reserved for high memory access). Processes spawned in kernel space are trusted, urgent and assumed error-free; the memory request gets processed instantaneously. Every kernel process can also access the user space range if it wishes to. And to achieve this, the kernel maps an address from the user space (the high memory) to its kernel space (the low memory); the 128 MB mentioned above are especially reserved for this. 1 Whether the split is 3/1, 2/2, or 1/3 is controlled by the CONFIG_VMSPLIT_... option; you can probably check under /boot/config* to see which option was selected for your kernel.
What are high memory and low memory on Linux?
1,368,521,572,000
I'm not using hosts.allow or hosts.deny, furthermore SSH works from my windows-machine (same laptop, different hard drive) but not my Linux machine. ssh -vvv root@host -p port gives: OpenSSH_6.6, OpenSSL 1.0.1f 6 Jan 2014 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 20: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to host [host] port <port>. debug1: Connection established. debug1: identity file /home/torxed/.ssh/id_dsa type -1 debug1: identity file /home/torxed/.ssh/id_dsa-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6 ssh_exchange_identification: read: Connection reset by peer On the windows machine, everything works fine, so I checked the security logs and the lines in there are identical, the server treats the two different "machines" no different and they are both allowed via public-key authentication. So that leads to the conclusion that this must be an issue with my local ArchLinux laptop.. but what? [torxed@archie ~]$ cat .ssh/known_hosts [torxed@archie ~]$ So that's not the problem... [torxed@archie ~]$ sudo iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination No conflicts with the firewall settings (for now).. [torxed@archie ~]$ ls -la .ssh/ total 20 drwx------ 2 torxed users 4096 Sep 3 2013 . drwx------ 51 torxed users 4096 May 11 11:11 .. -rw------- 1 torxed users 1679 Sep 3 2013 id_rsa -rw-r--r-- 1 torxed users 403 Sep 3 2013 id_rsa.pub -rw-r--r-- 1 torxed users 170 May 11 11:21 known_hosts Permissions appear to be fine (same on the server).. Also tried without configuring /etc/ssh/ssh_config with the same result except for a lot of auto-configuration going on in the client which ends up with the same error.
Originally posted on Ask Ubuntu If you have ruled out any "external" factors, the following set of steps usually helps to narrow it down. So while this doesn't directly answer your question, it may help tracking down the error cause. Troubleshooting sshd What I find generally very useful in any such cases is to start sshd without letting it daemonize. The problem in my case was that neither syslog nor auth.log showed anything meaningful. When I started it from the terminal I got: # $(which sshd) -Ddp 10222 /etc/ssh/sshd_config line 8: address family must be specified before ListenAddress. Much better! This error message allowed me to see what's wrong and fix it. Neither of the log files contained this output. NB: at least on Ubuntu the $(which sshd) is the best method to satisfy sshd requirement of an absolute path. Otherwise you'll get the following error: sshd re-exec requires execution with an absolute path. The -p 10222 makes sshd listen on that alternative port, overriding the configuration file - this is so that it doesn't clash with potentially running sshd instances. Make sure to choose a free port here. Finally: connect to the alternative port (ssh -p 10222 user@server). This method has helped me many many times in finding issues, be it authentication issues or other types. To get really verbose output to stdout, use $(which sshd) -Ddddp 10222 (note the added dd to increase verbosity). For more debugging goodness check man sshd. The main advantage of this method is that it allows you to check the sshd configuration without having to restart the sshd on the default port. Normally this should not interfere with existing SSH-connections, but I've seen it. So this allows one to validate the configuration file prior to - potentially - cutting off ones access to a remote server (for example I have that for some VPS and even for physical servers where I need to pay extra to get out-of-band access to the machine).
ssh_exchange_identification: Connection closed by remote host (not using hosts.deny)
1,368,521,572,000
The program is located in /usr/bin/mail. Upon execution, Version 8.1.2 01/15/2001 is shown. Entering list produces: Commands are: next, alias, print, type, Type, Print, visual, top, touch, preserve, delete, dp, dt, undelete, unset, mail, mbox, pipe, |, more, page, More, Page, unread, Unread, !, copy, chdir, cd, save, source, set, shell, version, group, write, from, file, folder, folders, ?, z, headers, help, =, Reply, Respond, reply, respond, edit, echo, quit, list, xit, exit, size, hold, if, else, endif, alternates, ignore, discard, retain, saveignore, savediscard, saveretain, core, #, inc, new Entering ? produces: Mail Command Description ------------------------- -------------------------------------------- t [message list] type message(s). n goto and type next message. e [message list] edit message(s). f [message list] give head lines of messages. d [message list] delete message(s). s [message list] <file> append message(s) to file. u [message list] undelete message(s). R [message list] reply to message sender(s). r [message list] reply to message sender(s) and all recipients. p [message list] print message list. pre [message list] make messages go back to /var/mail. m <recipient list> mail to specific recipient(s). q quit, saving unresolved messages in mbox. x quit, do not remove system mailbox. h print out active message headers. ! shell escape. | [msglist] command pipe message(s) to shell command. pi [msglist] command pipe message(s) to shell command. cd [directory] chdir to directory or home if none given fi <file> switch to file (%=system inbox, %user=user's system inbox). + searches in your folder directory for the file. set variable[=value] set Mail variable. Entering z shows the end of the list of messages - but that command is not presented in the ? help page. What program is this? Are there tutorials for its use? What are some common commands and helpful tricks for its use? How can the message list be navigated (the opposite of z) or refreshed? 
Clarification: This question is about the interactive program and not the script-able command - i.e. the result of typing mail with no flags or parameters into a terminal.
This page describes the interactive command in detail, and is in fact a fairly thorough tutorial. Describes commands such as z and z- : If there is more than a screenful of messages, then z will show the next screenful, and z- will show the previous screenful.
What is "mail", and how is it navigated?
1,368,521,572,000
This is in regard to linux, but if anyone knows of a general *nix method that would be good. I booted a system yesterday with an ethernet cable plugged in. "NetworkManager" is not installed, so once it started I went to look for the name of the ethernet interface with ifconfig to start a DHCP client manually, but it did not show anything other than lo. The NIC was listed via lspci, and the appropriate kernel driver was loaded. The system normally uses wifi, and I could remember the interface name for that was wlan0. When I tried ifconfig wlan0 up, wlan0 appeared. But the only ethernet interface names I could remember were eth[N] and em[N] -- neither of which worked. This document refers to "predictable interface names" but does not do a good job of explaining what they might be in simple terms. It does refer to a piece of source code which implies the name in this case might be deduced from the the PCI bus and slot numbers, which seems like an unnecessarily complicated hassle. Other searching around led me to believe that this might be determined by systemd in conjunction with udev, but there are almost 100 files in /usr/lib/udev/rules.d and spending an hour trying to determine where (and if) there's a systemd config file for this also seems ridiculous. It would also be nice to know for certain that they are available, not just how they might be named if they are, so I can rule out hardware problems, etc. Isn't there a simple way to find the names of available network interfaces on linux?
The simplest method I know to list all of your interfaces is ifconfig -a EDIT If you're on a system where that has been made obsolete, you can use ip link show
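If ifconfig and ip are both missing, the kernel itself lists every interface it knows about - up or down - as a directory under /sys/class/net. A small sketch that prints each one with its operational state:

```shell
#!/bin/sh
# List every network interface the kernel knows about, configured or not.
for dev in /sys/class/net/*; do
    [ -e "$dev" ] || continue          # glob matched nothing
    name=${dev##*/}
    state=$(cat "$dev/operstate" 2>/dev/null)
    echo "$name: ${state:-unknown}"
done
```

Because this reads sysfs directly, an interface whose driver is loaded shows up here even before anything has brought it up, which helps rule out hardware problems.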
How can I find available network interfaces?
1,332,892,117,000
We have some new hardware in our office which runs its own customized Linux OS. How do I go about figuring which distro it's based on?
A question very close to this one was posted on Unix.Stackexchange HERE Giles has a pretty complete | cool answer for the ways he describes. # cat /proc/version Linux version 2.6.32-71.el6.x86_64 ([email protected]) (gcc version 4.4.4 20100726 (Red Hat 4.4.4-13) (GCC) ) #1 SMP Fri May 20 03:51:51 BST 2011 # uname -a Linux system1.doofus.local 2.6.32-71.el6.x86_64 #1 SMP Fri May 20 03:51:51 BST 2011 x86_64 x86_64 x86_64 GNU/Linux # cat /etc/issue CentOS Linux release 6.0 (Final) Kernel \r on an \m cat /proc/config.gz cat /usr/src/linux/config.gz cat /boot/config* Though I did some checking and this was not very reliable except on SUSE. # zcat /proc/config.gz | grep -i kernel CONFIG_SUSE_KERNEL=y # CONFIG_KERNEL_DESKTOP is not set CONFIG_LOCK_KERNEL=y Release Files in /etc (from Unix.com) Novell SuSE---> /etc/SuSE-release Red Hat--->/etc/redhat-release, /etc/redhat_version Fedora-->/etc/fedora-release Slackware--->/etc/slackware-release, /etc/slackware-version Old Debian--->/etc/debian_release, /etc/debian_version New Debian--->/etc/os-release Mandrake--->/etc/mandrake-release Yellow dog-->/etc/yellowdog-release Sun JDS--->/etc/sun-release Solaris/Sparc--->/etc/release Gentoo--->/etc/gentoo-release There is also a bash script at the Unix.com link someone wrote to automate checking. Figuring out what package manager you have is a good clue. rpm yum apt-get zypper +many more Though this is by no means foolproof as the vendor could use anything they want. It really just gives you a place to start. # dmesg | less Linux version 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP 2010-05-20 11:14:20 +0200 pretty much the same information as cat /proc/version & uname
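Since this answer was written, most distributions have converged on /etc/os-release (a shell-sourceable file of KEY=value pairs), which makes the check scriptable. A sketch that tries it first and then falls back to a few of the per-vendor files from the table above:

```shell
#!/bin/sh
# Identify the running distribution, newest mechanism first.
if [ -r /etc/os-release ]; then
    # Shell-sourceable KEY=value pairs; PRETTY_NAME is the human-friendly one.
    . /etc/os-release
    echo "${PRETTY_NAME:-$NAME}"
else
    # Fall back to the old per-vendor release files.
    for f in /etc/redhat-release /etc/SuSE-release /etc/debian_version \
             /etc/slackware-version /etc/gentoo-release; do
        [ -r "$f" ] && { echo "$f: $(head -n1 "$f")"; break; }
    done
fi
```

A heavily customized vendor OS may still ship neither file, so treat an empty result as "go poke at the package manager" rather than proof of anything.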
How do I identify which Linux distro is running? [duplicate]
1,332,892,117,000
Is Kernel space used when the Kernel is executing on behalf of the user program, i.e. during a System Call? Or is it the address space for all the Kernel threads (for example the scheduler)? If it is the first one, then does it mean that a normal user program cannot have more than 3GB of memory (if the division is 3GB + 1GB)? Also, in that case how can the kernel use High Memory, because to what virtual memory address will the pages from high memory be mapped, as 1GB of kernel space will be logically mapped?
Is Kernel space used when Kernel is executing on the behalf of the user program i.e. System Call? Or is it the address space for all the Kernel threads (for example scheduler)? Yes and yes. Before we go any further, we should state this about memory. Memory gets divided into two distinct areas: The user space, which is a set of locations where normal user processes run (i.e. everything other than the kernel). The role of the kernel is to keep applications running in this space from messing with each other, and with the machine. The kernel space, which is the location where the code and data of the kernel is stored, and executes under. Processes running in user space have access only to a limited part of memory, whereas the kernel has access to all of the memory. Processes running in user space also don't have access to the kernel space. User space processes can only access a small part of the kernel via an interface exposed by the kernel - the system calls. If a process performs a system call, a software interrupt is sent to the kernel, which then dispatches the appropriate interrupt handler and continues its work after the handler has finished. Kernel space code runs in "kernel mode", which (on your typical desktop -x86- computer) is code that executes under ring 0. Typically in the x86 architecture, there are 4 rings of protection: Ring 0 (kernel mode), Ring 1 (may be used by virtual machine hypervisors or drivers), Ring 2 (may be used by drivers, I am not so sure about that though). Ring 3 is what typical applications run under. It is the least privileged ring, and applications running on it have access to a subset of the processor's instructions. Ring 0 (kernel space) is the most privileged ring, and has access to all of the machine's instructions. For example, a "plain" application (like a browser) cannot use the x86 assembly instruction lgdt to load the global descriptor table, nor hlt to halt the processor.
If it is the first one, then does it mean that a normal user program cannot have more than 3GB of memory (if the division is 3GB + 1GB)? Also, in that case how can the kernel use High Memory, because to what virtual memory address will the pages from high memory be mapped, as 1GB of kernel space will be logically mapped? For an answer to this, please refer to the excellent answer by wag to What are high memory and low memory on Linux?.
What is difference between User space and Kernel space?
1,332,892,117,000
Is it possible to block the (outgoing) network access of a single process?
With Linux 2.6.24+ (considered experimental until 2.6.29), you can use network namespaces for that. You need to have the 'network namespaces' enabled in your kernel (CONFIG_NET_NS=y) and util-linux with the unshare tool. Then, starting a process without network access is as simple as: unshare -n program ... This creates an empty network namespace for the process. That is, it is run with no network interfaces, including no loopback. In below example we add -r to run the program only after the current effective user and group IDs have been mapped to the superuser ones (avoid sudo): $ unshare -r -n ping 127.0.0.1 connect: Network is unreachable If your app needs a network interface you can set a new one up: $ unshare -n -- sh -c 'ip link set dev lo up; ping 127.0.0.1' PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data. 64 bytes from 127.0.0.1: icmp_seq=1 ttl=32 time=0.066 ms Note that this will create a new, local loopback. That is, the spawned process won't be able to access open ports of the host's 127.0.0.1. If you need to gain access to the original networking inside the namespace, you can use nsenter to enter the other namespace. The following example runs ping with network namespace that is used by PID 1 (it is specified through -t 1): $ nsenter -n -t 1 -- ping -c4 example.com PING example.com (93.184.216.119) 56(84) bytes of data. 64 bytes from 93.184.216.119: icmp_seq=1 ttl=50 time=134 ms 64 bytes from 93.184.216.119: icmp_seq=2 ttl=50 time=134 ms 64 bytes from 93.184.216.119: icmp_seq=3 ttl=50 time=134 ms 64 bytes from 93.184.216.119: icmp_seq=4 ttl=50 time=139 ms --- example.com ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3004ms rtt min/avg/max/mdev = 134.621/136.028/139.848/2.252 ms
Block network access of a process?
1,332,892,117,000
What benefit could I see by compiling a Linux kernel myself? Is there some efficiency you could create by customizing it to your hardware?
In my mind, the only benefit you really get from compiling your own linux kernel is: You learn how to compile your own linux kernel. It's not something you need to do for more speed / memory / xxx whatever. It is a valuable thing to do if that's the stage you feel you are at in your development. If you want to have a deeper understanding of what this whole "open source" thing is about, about how and what the different parts of the kernel are, then you should give it a go. If you are just looking to speed up your boot time by 3 seconds, then... what's the point... go buy an ssd. If you are curious, if you want to learn, then compiling your own kernel is a great idea and you will likely get a lot out of it. With that said, there are some specific reasons when it would be appropriate to compile your own kernel (as several people have pointed out in the other answers). Generally these arise out of a specific need you have for a specific outcome, for example: I need to get the system to boot/run on hardware with limited resources I need to test out a patch and provide feedback to the developers I need to disable something that is causing a conflict I need to develop the linux kernel I need to enable support for my unsupported hardware I need to improve performance of x because I am hitting the current limits of the system (and I know what I'm doing) The issue lies in thinking that there's some intrinsic benefit to compiling your own kernel when everything is already working the way it should be, and I don't think that there is. Though you can spend countless hours disabling things you don't need and tweaking the things that are tweakable, the fact is the linux kernel is already pretty well tuned (by your distribution) for most user situations.
What is the benefit of compiling your own linux kernel?
1,332,892,117,000
I've heard the term "mounting" when referring to devices in Linux. What is its actual meaning? How is it handled now, compared to older versions? I haven't done it manually via the command line. Can you give the steps (commands) for mounting a simple device in Linux?
Unix systems have a single directory tree. All accessible storage must have an associated location in this single directory tree. This is unlike Windows where (in the most common syntax for file paths) there is one directory tree per storage component (drive). Mounting is the act of associating a storage device to a particular location in the directory tree. For example, when the system boots, a particular storage device (commonly called the root partition) is associated with the root of the directory tree, i.e., that storage device is mounted on / (the root directory). It's worth noting that mounting not only associates the device containing the data with a directory, but also with a filesystem driver, which is a piece of code that understands how the data on the device is organized and presents it as files and directories. Let's say you now want to access files on a CD-ROM. You must mount the CD-ROM on a location in the directory tree (this may be done automatically when you insert the CD). Let's say the CD-ROM device is /dev/cdrom and the chosen mount point is /media/cdrom. The corresponding command is mount /dev/cdrom /media/cdrom After that command is run, a file whose location on the CD-ROM is /dir/file is now accessible on your system as /media/cdrom/dir/file. When you've finished using the CD, you run the command umount /dev/cdrom or umount /media/cdrom (both will work; typical desktop environments will do this when you click on the “eject” or ”safely remove” button). Mounting applies to anything that is made accessible as files, not just actual storage devices. For example, all Linux systems have a special filesystem mounted under /proc. That filesystem (called proc) does not have underlying storage: the files in it give information about running processes and various other system information; the information is provided directly by the kernel from its in-memory data structures.
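You can inspect the directory tree's current associations yourself, without root; /proc/mounts is present on any Linux, and findmnt (from util-linux) shows the same data in a friendlier form where available:

```shell
# The kernel's view of every mount: device, mount point, fs type, options
head -n 5 /proc/mounts

# Tree view of the same data, narrowed to the root filesystem
findmnt / 2>/dev/null || true
```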
What is meant by mounting a device in Linux?
1,332,892,117,000
We are hosting an application on remote server. We need to test it with a limited network bandwidth (for users with bad Internet access). Can I limit my internet bandwidth? For instance: 128 KB per second. This question focuses on system-wide or container-wide solutions on Linux. See Limiting a specific shell's internet bandwidth usage for process- or session-specific solutions.
You can throttle the network bandwidth on an interface using the tc command. Man page available at http://man7.org/linux/man-pages/man8/tc.8.html For a simple script, try wondershaper. An example using tc: tc qdisc add dev eth0 root tbf rate 1024kbit latency 50ms burst 1540
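Since tc needs root and its syntax is easy to get wrong, one approach is to build the command in a small wrapper first; the function names and the echo-based dry run below are just an illustration, not part of tc itself (drop the leading echo to actually apply the commands as root):

```shell
# Dry-run helpers that print the tc commands for capping an interface.
throttle() {  # usage: throttle <iface> <rate, e.g. 128kbit>
    echo tc qdisc add dev "$1" root tbf rate "$2" latency 50ms burst 1540
}
unthrottle() {  # usage: unthrottle <iface>
    echo tc qdisc del dev "$1" root
}

throttle eth0 128kbit
# prints: tc qdisc add dev eth0 root tbf rate 128kbit latency 50ms burst 1540
unthrottle eth0
# prints: tc qdisc del dev eth0 root
```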
How to limit network bandwidth?
1,332,892,117,000
I want to kill all running processes of a particular user from either a shell script or native code on a Linux system. Do I have to read the /proc directory and look for these? Any ideas? Is there a dynamic mapping of the PIDs under UIDs in Linux? Isn't this in /proc? If not, then where is this list maintained? Should I read from it? Also, where is the static list of all UIDs in the system, so I can validate that this user exists and then proceed to kill all processes running under it?
Use pkill -U UID or pkill -u UID, or a username instead of a UID. Sometimes skill -u USERNAME may work; another tool is killall -u USERNAME. skill was Linux-specific and is now outdated, while pkill is more portable (Linux, Solaris, BSD). pkill allows both numeric and symbolic UIDs, effective and real: http://man7.org/linux/man-pages/man1/pkill.1.html pkill - ... signal processes based on name and other attributes -u, --euid euid,... Only match processes whose effective user ID is listed. Either the numerical or symbolical value may be used. -U, --uid uid,... Only match processes whose real user ID is listed. Either the numerical or symbolical value may be used. The man page of skill says it only accepts a username, not a user ID: http://man7.org/linux/man-pages/man1/skill.1.html skill, snice ... These tools are obsolete and unportable. The command syntax is poorly defined. Consider using the killall, pkill -u, --user user The next expression is a username. killall is not marked as outdated in Linux, but it also will not work with a numeric UID; only a username: http://man7.org/linux/man-pages/man1/killall.1.html killall - kill processes by name -u, --user Kill only processes the specified user owns. Command names are optional. I think any utility that looks up processes in the Linux/Solaris-style /proc (procfs) will scan the full list of processes (doing a readdir of /proc), iterating over the numeric subfolders of /proc and checking every found process for a match. To get the list of users, use getpwent (it returns one user per call). The skill (procps & procps-ng) and killall (psmisc) tools both use the getpwnam library call to parse the argument of the -u option, so only a username will be parsed. pkill (procps & procps-ng) uses both atol and getpwnam to parse the -u/-U argument, allowing both numeric and textual user specifiers.
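To answer the /proc part of the question directly: yes, each /proc/PID/status file carries the process's UIDs, so you can enumerate a user's processes by hand before killing anything. This sketch uses only /proc and awk, which is conceptually what pkill -U does internally:

```shell
# List PIDs of all processes whose real UID matches a given user,
# read straight from /proc.
uid=$(id -u)   # substitute the target user's numeric UID here
for d in /proc/[0-9]*; do
    # The second field of the Uid: line is the real UID.
    puid=$(awk '/^Uid:/ {print $2; exit}' "$d/status" 2>/dev/null)
    if [ "$puid" = "$uid" ]; then
        echo "${d#/proc/}"
    fi
done
```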
How do I kill all a user's processes using their UID
1,332,892,117,000
I just ran df -h a minute ago and noticed a filesystem has been added that I'm not familiar with. Does anyone know why /run exists? Is this something that's been added by the kernel? By Arch Linux? run 10M 236K 9.8M 3% /run
Apparently, many tools (among them udev) will soon require a /run/ directory that is mounted early (as tmpfs). Arch developers introduced /run last month to prepare for this. The udev runtime data moved from /dev/.udev/ to /run/udev/. The /run mountpoint is supposed to be a tmpfs mounted during early boot, available and writable to for all tools at any time during bootup, it replaces /var/run/, which should become a symlink some day. [1] There is more detail here: http://www.h-online.com/open/news/item/Linux-distributions-to-include-run-directory-1219006.html [1] From thread on the Arch Projects ML
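You can check how /run looks on your own system; stat -f reports the backing filesystem type (tmpfs where the new layout is in effect), and /var/run shows whether the promised symlink has already materialised. Output varies per distribution and inside containers:

```shell
# Filesystem type backing /run (expected: tmpfs on the new layout)
stat -f -c 'fstype of /run: %T' /run

# Has /var/run become a symlink to /run yet?
ls -ld /var/run 2>/dev/null || echo "/var/run not present"
```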
What is this new /run filesystem?
1,332,892,117,000
After finding out that several common commands (such as read) are actually Bash builtins (and when running them at the prompt I'm actually running a two-line shell script which just forwards to the builtin), I was looking to see if the same is true for true and false. Well, they are definitely binaries. sh-4.2$ which true /usr/bin/true sh-4.2$ which false /usr/bin/false sh-4.2$ file /usr/bin/true /usr/bin/true: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=2697339d3c19235 06e10af65aa3120b12295277e, stripped sh-4.2$ file /usr/bin/false /usr/bin/false: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=b160fa513fcc13 537d7293f05e40444fe5843640, stripped sh-4.2$ However, what I found most surprising was their size. I expected them to be only a few bytes each, as true is basically just exit 0 and false is exit 1. sh-4.2$ true sh-4.2$ echo $? 0 sh-4.2$ false sh-4.2$ echo $? 1 sh-4.2$ However I found to my surprise that both files are over 28KB in size. sh-4.2$ stat /usr/bin/true File: '/usr/bin/true' Size: 28920 Blocks: 64 IO Block: 4096 regular file Device: fd2ch/64812d Inode: 530320 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2018-01-25 19:46:32.703463708 +0000 Modify: 2016-06-30 09:44:27.000000000 +0100 Change: 2017-12-22 09:43:17.447563336 +0000 Birth: - sh-4.2$ stat /usr/bin/false File: '/usr/bin/false' Size: 28920 Blocks: 64 IO Block: 4096 regular file Device: fd2ch/64812d Inode: 530697 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2018-01-25 20:06:27.210764704 +0000 Modify: 2016-06-30 09:44:27.000000000 +0100 Change: 2017-12-22 09:43:18.148561245 +0000 Birth: - sh-4.2$ So my question is: Why are they so big? What's in the executable other than the return code? PS: I am using RHEL 7.4
In the past, /bin/true and /bin/false in the shell were actually scripts. For instance, in a PDP/11 Unix System 7: $ ls -la /bin/true /bin/false -rwxr-xr-x 1 bin 7 Jun 8 1979 /bin/false -rwxr-xr-x 1 bin 0 Jun 8 1979 /bin/true $ $ cat /bin/false exit 1 $ $ cat /bin/true $ Nowadays, at least in bash, the true and false commands are implemented as shell built-in commands. Thus no executable binary files are invoked by default, both when using the false and true directives in the bash command line and inside shell scripts. From the bash source, builtins/mkbuiltins.c: char *posix_builtins[] = { "alias", "bg", "cd", "command", "false", "fc", "fg", "getopts", "jobs", "kill", "newgrp", "pwd", "read", "true", "umask", "unalias", "wait", (char *)NULL }; Also per @meuh comments: $ command -V true false true is a shell builtin false is a shell builtin So it can be said with a high degree of certainty that the true and false executable files exist mainly for being called from other programs. From now on, the answer will focus on the /bin/true binary from the coreutils package in Debian 9 / 64 bits. (/usr/bin/true on RedHat. RedHat and Debian both use the coreutils package; the compiled version of the latter was analysed, having it more at hand). As can be seen in the source file false.c, /bin/false is compiled with (almost) the same source code as /bin/true, just returning EXIT_FAILURE (1) instead, so this answer applies to both binaries. #define EXIT_STATUS EXIT_FAILURE #include "true.c" As can also be confirmed by both executables having the same size: $ ls -l /bin/true /bin/false -rwxr-xr-x 1 root root 31464 Feb 22 2017 /bin/false -rwxr-xr-x 1 root root 31464 Feb 22 2017 /bin/true Alas, the direct answer to the question "why are true and false so large?" could be: because there are no longer such pressing reasons to care about their top performance. They are not essential to bash performance, not being used anymore by bash (scripting).
Similar comments apply to their size: 26KB for the kind of hardware we have nowadays is insignificant. Space is not at a premium for the typical server/desktop anymore, and they do not even bother anymore to use the same binary for false and true, as it is just deployed twice in distributions using coreutils. Focusing, however, on the real spirit of the question: why does something that should be so simple and small get so large? The real distribution of the sections of /bin/true is as these charts show; the main code+data amounts to roughly 3KB out of a 26KB binary, which amounts to 12% of the size of /bin/true. The true utility did indeed gain more cruft code over the years, most notably the standard support for --version and --help. However, that is not the (only) main justification for it being so big; rather, while being dynamically linked (using shared libs), it also has part of a generic library commonly used by coreutils binaries linked in as a static library. The metadata for building an elf executable file also accounts for a significant part of the binary, it being a relatively small file by today's standards. The rest of the answer explains how we got to build the following charts detailing the composition of the /bin/true executable binary file and how we arrived at that conclusion. As @Maks says, the binary was compiled from C; as per my comment, it is also confirmed to be from coreutils.
We are pointing directly to the author(s)' git https://github.com/wertarbyte/coreutils/blob/master/src/true.c, instead of the gnu git as @Maks does (same sources, different repositories; this repository was selected as it has the full source of the coreutils libraries). We can see the various building blocks of the /bin/true binary here (Debian 9 - 64 bits from coreutils): $ file /bin/true /bin/true: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=9ae82394864538fa7b23b7f87b259ea2a20889c4, stripped $ size /bin/true text data bss dec hex filename 24583 1160 416 26159 662f true Of those: text (usually code) is around 24KB data (initialised variables, mostly strings) are around 1KB bss (uninitialized data) 0.5KB Of the 24KB, around 1KB is for fixing up the 58 external functions. That still leaves roughly 23KB for the rest of the code. We will show below that the actual main file - main()+usage() code - is around 1KB compiled, and explain what the other 22KB are used for. Drilling further down the binary with readelf -S true, we can see that while the binary is 26159 bytes, the actual compiled code is 13017 bytes, and the rest is assorted data/initialisation code.
However, true.c is not the whole story, and 13KB would seem rather excessive if it were only that file; we can see functions called in main() that are not listed among the external functions seen in the elf with objdump -T true; functions that are present at: https://github.com/coreutils/gnulib/blob/master/lib/progname.c https://github.com/coreutils/gnulib/blob/master/lib/closeout.c https://github.com/coreutils/gnulib/blob/master/lib/version-etc.c Those extra functions not linked externally in main() are: set_program_name() close_stdout() version_etc() So my first suspicion was partly correct: whilst the binary is using dynamic libraries, /bin/true is big because it has some static libraries included with it (but that is not the only cause). Compiling C code is not usually so inefficient as to leave such space unaccounted for, hence my initial suspicion that something was amiss. The extra space, almost 90% of the size of the binary, is indeed extra libraries/elf metadata. While using Hopper to disassemble/decompile the binary to understand where the functions are, it can be seen that the compiled binary code of the true.c/usage() function is actually 833 bytes, and that of the true.c/main() function is 225 bytes, which together is slightly less than 1KB. The logic for the version functions, which is buried in the static libraries, is around 1KB. The actual compiled main()+usage()+version()+strings+vars only use up around 3KB to 3.5KB. It is indeed ironic that such small and humble utilities have become bigger in size for the reasons explained above. Related question: Understanding what a Linux binary is doing true.c main() with the offending function calls: int main (int argc, char **argv) { /* Recognize --help or --version only if it's the only command-line argument.
*/ if (argc == 2) { initialize_main (&argc, &argv); set_program_name (argv[0]); <----------- setlocale (LC_ALL, ""); bindtextdomain (PACKAGE, LOCALEDIR); textdomain (PACKAGE); atexit (close_stdout); <----- if (STREQ (argv[1], "--help")) usage (EXIT_STATUS); if (STREQ (argv[1], "--version")) version_etc (stdout, PROGRAM_NAME, PACKAGE_NAME, Version, AUTHORS, <------ (char *) NULL); } exit (EXIT_STATUS); } The decimal size of the various sections of the binary: $ size -A -t true true : section size addr .interp 28 568 .note.ABI-tag 32 596 .note.gnu.build-id 36 628 .gnu.hash 60 664 .dynsym 1416 728 .dynstr 676 2144 .gnu.version 118 2820 .gnu.version_r 96 2944 .rela.dyn 624 3040 .rela.plt 1104 3664 .init 23 4768 .plt 752 4800 .plt.got 8 5552 .text 13017 5568 .fini 9 18588 .rodata 3104 18624 .eh_frame_hdr 572 21728 .eh_frame 2908 22304 .init_array 8 2125160 .fini_array 8 2125168 .jcr 8 2125176 .data.rel.ro 88 2125184 .dynamic 480 2125272 .got 48 2125752 .got.plt 392 2125824 .data 128 2126240 .bss 416 2126368 .gnu_debuglink 52 0 Total 26211 Output of readelf -S true $ readelf -S true There are 30 section headers, starting at offset 0x7368: Section Headers: [Nr] Name Type Address Offset Size EntSize Flags Link Info Align [ 0] NULL 0000000000000000 00000000 0000000000000000 0000000000000000 0 0 0 [ 1] .interp PROGBITS 0000000000000238 00000238 000000000000001c 0000000000000000 A 0 0 1 [ 2] .note.ABI-tag NOTE 0000000000000254 00000254 0000000000000020 0000000000000000 A 0 0 4 [ 3] .note.gnu.build-i NOTE 0000000000000274 00000274 0000000000000024 0000000000000000 A 0 0 4 [ 4] .gnu.hash GNU_HASH 0000000000000298 00000298 000000000000003c 0000000000000000 A 5 0 8 [ 5] .dynsym DYNSYM 00000000000002d8 000002d8 0000000000000588 0000000000000018 A 6 1 8 [ 6] .dynstr STRTAB 0000000000000860 00000860 00000000000002a4 0000000000000000 A 0 0 1 [ 7] .gnu.version VERSYM 0000000000000b04 00000b04 0000000000000076 0000000000000002 A 5 0 2 [ 8] .gnu.version_r VERNEED 0000000000000b80 
00000b80 0000000000000060 0000000000000000 A 6 1 8 [ 9] .rela.dyn RELA 0000000000000be0 00000be0 0000000000000270 0000000000000018 A 5 0 8 [10] .rela.plt RELA 0000000000000e50 00000e50 0000000000000450 0000000000000018 AI 5 25 8 [11] .init PROGBITS 00000000000012a0 000012a0 0000000000000017 0000000000000000 AX 0 0 4 [12] .plt PROGBITS 00000000000012c0 000012c0 00000000000002f0 0000000000000010 AX 0 0 16 [13] .plt.got PROGBITS 00000000000015b0 000015b0 0000000000000008 0000000000000000 AX 0 0 8 [14] .text PROGBITS 00000000000015c0 000015c0 00000000000032d9 0000000000000000 AX 0 0 16 [15] .fini PROGBITS 000000000000489c 0000489c 0000000000000009 0000000000000000 AX 0 0 4 [16] .rodata PROGBITS 00000000000048c0 000048c0 0000000000000c20 0000000000000000 A 0 0 32 [17] .eh_frame_hdr PROGBITS 00000000000054e0 000054e0 000000000000023c 0000000000000000 A 0 0 4 [18] .eh_frame PROGBITS 0000000000005720 00005720 0000000000000b5c 0000000000000000 A 0 0 8 [19] .init_array INIT_ARRAY 0000000000206d68 00006d68 0000000000000008 0000000000000008 WA 0 0 8 [20] .fini_array FINI_ARRAY 0000000000206d70 00006d70 0000000000000008 0000000000000008 WA 0 0 8 [21] .jcr PROGBITS 0000000000206d78 00006d78 0000000000000008 0000000000000000 WA 0 0 8 [22] .data.rel.ro PROGBITS 0000000000206d80 00006d80 0000000000000058 0000000000000000 WA 0 0 32 [23] .dynamic DYNAMIC 0000000000206dd8 00006dd8 00000000000001e0 0000000000000010 WA 6 0 8 [24] .got PROGBITS 0000000000206fb8 00006fb8 0000000000000030 0000000000000008 WA 0 0 8 [25] .got.plt PROGBITS 0000000000207000 00007000 0000000000000188 0000000000000008 WA 0 0 8 [26] .data PROGBITS 00000000002071a0 000071a0 0000000000000080 0000000000000000 WA 0 0 32 [27] .bss NOBITS 0000000000207220 00007220 00000000000001a0 0000000000000000 WA 0 0 32 [28] .gnu_debuglink PROGBITS 0000000000000000 00007220 0000000000000034 0000000000000000 0 0 1 [29] .shstrtab STRTAB 0000000000000000 00007254 000000000000010f 0000000000000000 0 0 1 Key to Flags: W (write), A 
(alloc), X (execute), M (merge), S (strings), I (info), L (link order), O (extra OS processing required), G (group), T (TLS), C (compressed), x (unknown), o (OS specific), E (exclude), l (large), p (processor specific) Output of objdump -T true (external functions dynamically linked on run-time) $ objdump -T true true: file format elf64-x86-64 DYNAMIC SYMBOL TABLE: 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __uflow 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 getenv 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 free 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 abort 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __errno_location 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strncmp 0000000000000000 w D *UND* 0000000000000000 _ITM_deregisterTMCloneTable 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 _exit 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __fpending 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 textdomain 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fclose 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 bindtextdomain 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 dcgettext 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __ctype_get_mb_cur_max 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strlen 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.4 __stack_chk_fail 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 mbrtowc 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strrchr 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 lseek 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 memset 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fscanf 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 close 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __libc_start_main 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 memcmp 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fputs_unlocked 
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 calloc 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strcmp 0000000000000000 w D *UND* 0000000000000000 __gmon_start__ 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.14 memcpy 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fileno 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 malloc 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fflush 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 nl_langinfo 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 ungetc 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __freading 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 realloc 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fdopen 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 setlocale 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3.4 __printf_chk 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 error 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 open 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fseeko 0000000000000000 w D *UND* 0000000000000000 _Jv_RegisterClasses 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __cxa_atexit 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 exit 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fwrite 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3.4 __fprintf_chk 0000000000000000 w D *UND* 0000000000000000 _ITM_registerTMCloneTable 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 mbsinit 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 iswprint 0000000000000000 w DF *UND* 0000000000000000 GLIBC_2.2.5 __cxa_finalize 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3 __ctype_b_loc 0000000000207228 g DO .bss 0000000000000008 GLIBC_2.2.5 stdout 0000000000207220 g DO .bss 0000000000000008 GLIBC_2.2.5 __progname 0000000000207230 w DO .bss 0000000000000008 GLIBC_2.2.5 program_invocation_name 0000000000207230 g DO .bss 0000000000000008 GLIBC_2.2.5 
__progname_full 0000000000207220 w DO .bss 0000000000000008 GLIBC_2.2.5 program_invocation_short_name 0000000000207240 g DO .bss 0000000000000008 GLIBC_2.2.5 stderr
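You can observe the builtin-versus-binary split described above from any shell; env(1) forces a PATH lookup, so the external coreutils programs run instead of the builtins. This is a small demonstration, nothing distribution-specific:

```shell
# The shell resolves a bare `true` or `false` to its builtin:
type true

# Prefixing with env(1) forces the external executables to run:
env true  && echo "external true exited 0"
env false || echo "external false exited 1"
```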
Why are true and false so large?
1,332,892,117,000
I have eth0 and wlan0 according to ifconfig and I can ping google.com. How can I find out (with a normal user, not root) what interface is active, as in, what interface did the ping (or whatever, ping is not mandatory) use? I am using Ubuntu 11.04 or Fedora 14
You can use route to find your default route: $ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.1.0 * 255.255.255.0 U 1 0 0 eth0 link-local * 255.255.0.0 U 1000 0 0 eth0 default 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 The Iface column in the line with destination default tells you which interface is used.
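If route (from the old net-tools) isn't installed, the same information can be read from the kernel directly or via iproute2; the awk one-liner below decodes /proc/net/route, where a destination of 00000000 marks the default route:

```shell
# Interface that owns the default route, straight from the kernel:
awk '$2 == "00000000" {print $1; exit}' /proc/net/route

# The modern equivalent with iproute2, if available:
ip route show default 2>/dev/null || true
```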
How to find out which interface am I using for connecting to the internet?
1,332,892,117,000
I am trying to control the volume from my own script. How can I do the following in Fedora 15 and Ubuntu Linux? Mute/unmute Volume up and volume down Note: Please note that I use a web USB microphone/speaker and also an analogue microphone/speaker. I want this to apply to all of them, to be sure.
You can use amixer. It's in the alsa-utils package on Ubuntu and Debian. Run amixer without parameters to get an overview about your controls for the default device. You can also use alsamixer without parameters (from the same package) to get a more visual overview. Use F6 to see and switch between devices. Commonly, you might have PulseAudio and a hardware sound card to select from. Then use amixer with the set command to set the volume. For example, to set the master channel to 50%: amixer set Master 50% Master is the control name and should match one that you see when running without parameters. Note the % sign, without it, it will treat the value as a 0 - 65536 level. If PulseAudio is not your default device, you can use the -D switch: amixer -D pulse set Master 50% Other useful commands pointed out in the comments: To increase/decrease the volume use +/- after the number, use amixer set Master 10%+ amixer set Master 10%- To mute, unmute or toggle between muted/unmuted state, use amixer set Master mute amixer set Master unmute amixer set Master toggle Also note that there might be two different percentage scales, the default raw and for some devices a more natural scale based on decibel, which is also used by alsamixer. Use -M to use the latter. Finally, if you're interested only in PulseAudio, you might want to check out pactl (see one of the other answers).
How to use command line to change volume?
1,332,892,117,000
I read some resources about the mount command for mounting devices on Linux, but none of them is clear enough (at least for me). On the whole this what most guides state: $ mount (lists all currently mounted devices) $ mount -t type device directory (mounts that device) for example (to mount a USB drive): $ mount -t vfat /dev/sdb1 /media/disk What's not clear to me: How do I know what to use for "device" as in $ mount -t type device directory? That is, how do I know that I should use "/dev/sdb1" in this command $ mount -t vfat /dev/sdb1 /media/disk to mount my USB drive? what does the "-t" parameter define here? type? I read the man page ($ man mount) a couple of times, but I am still probably missing something. Please clarify.
You can use fdisk to have an idea of what kind of partitions you have, for example: fdisk -l Shows: Device Boot Start End Blocks Id System /dev/sda1 * 63 204796619 102398278+ 7 HPFS/NTFS /dev/sda2 204797952 205821951 512000 83 Linux /dev/sda3 205821952 976773119 385475584 8e Linux LVM That way you know that you have sda1,2 and 3 partitions. The -t option is the filesystem type; it can be NTFS, FAT, EXT. In my example, sda1 is ntfs, so it should be something like: mount -t ntfs /dev/sda1 /mnt/ USB devices are usually vfat and Linux are usually ext.
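To find the right value for device before mounting, you can also ask the kernel which block devices it knows about; /proc/partitions works everywhere, while lsblk (from util-linux) gives a friendlier view with sizes, filesystem types and existing mount points:

```shell
# Raw list of block devices and partitions the kernel sees:
cat /proc/partitions

# Friendlier view, if lsblk is installed:
lsblk -f 2>/dev/null || true
```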
How to mount a device in Linux?
1,332,892,117,000
Is there such a thing as list of available D-Bus services? I've stumbled upon a few, like those provided by NetworkManager, Rhythmbox, Skype, HAL. I wonder if I can find a rather complete list of provided services/interfaces.
On Qt setups (short commands and clean, human readable output) you can run: qdbus will list the services available on the session bus and qdbus --system will list the services available on the system bus. On any setup you can use dbus-send dbus-send --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListNames Just like qdbus, if --session or no message bus is specified, dbus-send will send to the login session message bus. So the above will list the services available on the session bus. Use --system if you want instead to use the system wide message bus: dbus-send --system --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListNames
A list of available D-Bus services
1,332,892,117,000
I entered crontab -r instead of crontab -e and all my cron jobs have been removed. What is the best way (or is there one) to recover those jobs?
crontab -r removes the only file containing the cron jobs. So if you did not make a backup, your only recovery options are: On RedHat/CentOS, if your jobs have been triggered before, you can find the cron log in /var/log/cron. The file will help you rewrite the jobs again. Another option is to recover the file using a file recovery tool. This is less likely to be successful though, since the system partition is usually a busy one and corresponding sectors probably have already been overwritten. On Ubuntu/Debian, if your task has run before, try grep CRON /var/log/syslog
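Beyond recovery, you can guard against a repeat. The shell function below is a hypothetical wrapper for your ~/.bashrc (the name simply shadows the real command) that demands confirmation before -r; many crontab implementations also accept -i for the same purpose:

```shell
# Ask before letting `crontab -r` wipe the crontab; anything other
# than "y" aborts without invoking the real crontab binary.
crontab() {
    if [ "$1" = "-r" ]; then
        printf 'Really remove the crontab? [y/N] '
        read -r ans
        if [ "$ans" != "y" ]; then
            echo "aborted"
            return 1
        fi
    fi
    command crontab "$@"
}

# Simulate answering "n"; the real crontab binary is never invoked:
printf 'n\n' | crontab -r || true
```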
Recover cron jobs accidently removed with crontab -r
1,332,892,117,000
Is it possible to set up system mail on a linux box to be sent via a different smtp server - maybe even with authentication? If so, how do I do this? If that's unclear, let give an example. If I'm at the command line and type: cat body.txt | mail -s "just a test" [email protected] is it possible to have that be sent via an external SMTP server, like G-mail ? I'm not looking for "a way to send mail from gmail from the command line" but rather an option to configure the entire system to use a specific SMTP server, or possibly one account on an SMTP server (maybe overriding the from address).
I found sSMTP very simple to use. In Debian based systems: apt-get install ssmtp Then edit the configuration file in /etc/ssmtp/ssmtp.conf A sample configuration to use your gmail for sending e-mails: # root is the person who gets all mail for userids < 1000 [email protected] # Here is the gmail configuration (or change it to your private smtp server) mailhub=smtp.gmail.com:587 [email protected] AuthPass=yourGmailPass UseTLS=YES UseSTARTTLS=YES Note: Make sure the "mail" command is present in your system. mailutils package should provide this one in Debian based systems. Update: There are people (and bug reports for different Linux distributions) reporting that sSMTP will not accept passwords with a 'space' or '#' character. If sSMTP is not working for you, this may be the case.
Can I set up system mail to use an external SMTP server?
1,332,892,117,000
Is there a simple way to find out which initsystem is being used e.g by a recent Debian wheezy or Fedora system? I'm aware that Fedora 21 uses systemd initsystem but that is because I read that and because all relevant scripts/symlinks are stored in /etc/systemd/. However, I'm not sure about e.g Debian squeeze or CentOS 6 or 7 and so on. Which techniques exist to verify such initsystem?
You can poke around the system to find indicators. One way is to check for the existence of three directories: /usr/lib/systemd tells you you're on a systemd based system. /usr/share/upstart is a pretty good indicator that you're on an Upstart-based system. /etc/init.d tells you the box has SysV init in its history The thing is, these are heuristics that must be considered together, possibly with other data, not certain indicators by themselves. The Ubuntu 14.10 box I'm looking at right now has all three directories. Why? Because Ubuntu just switched to systemd from Upstart in that version, but keeps Upstart and SysV init for backwards compatibility. In the end, I think the best answer is "experience." You will see that you have logged into a CentOS 7 box and know that it's systemd. How do you learn this? Playing around, RTFMing, etc. The same way you gain all experience. I realize this is not a very satisfactory answer, but that's what happens when there is fragmentation in the market, creating nonstandard designs. It's like asking how you know whether ls accepts -C, or --color, or doesn't do color output at all. Again, the answer is "experience."
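Besides the directory heuristics, a quick check that works on most Linux boxes is to ask what PID 1 actually is. This is a sketch, not a guaranteed test (containers and chroots can mislead it):

```shell
# PID 1's command name usually names the init system directly.
cat /proc/1/comm            # e.g. "systemd" or "init"
# ps gives the same answer, if procps is installed:
#   ps -p 1 -o comm=

# systemd creates this directory at boot, so its presence means
# systemd is the *running* init, not merely an installed package:
if [ -d /run/systemd/system ]; then
    echo "systemd is running"
fi
```

Note that an "init" answer from /proc/1/comm is ambiguous (SysV init and Upstart both use that name), which is exactly where the directory heuristics above come back in.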
How to find out if a system uses SysV, Upstart or Systemd initsystem [duplicate]
1,332,892,117,000
If my target has one device connected and many drivers for that device loaded, how can I understand what device is using which driver?
Just use /sys. Example. I want to find the driver for my Ethernet card: $ sudo lspci ... 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01) $ find /sys | grep drivers.*02:00 /sys/bus/pci/drivers/r8169/0000:02:00.0 That is r8169. First I need to find coordinates of the device using lspci; then I find driver that is used for the devices with these coordinates.
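The grep over all of /sys can also be replaced by following the device's driver symlink directly. A sketch, reusing the PCI address from the lspci output above (shown commented since the address and driver name depend on your hardware):

```shell
# Each bound PCI device has a "driver" symlink in sysfs:
#   readlink /sys/bus/pci/devices/0000:02:00.0/driver
#   # -> ../../../../bus/pci/drivers/r8169

# Modern lspci can also print the driver for one device in one step:
#   lspci -k -s 02:00.0
```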
Linux: How to find the device driver used for a device?
1,332,892,117,000
I'm going to be doing a fair amount of PHP work shortly, and I'm interested in learning RoR, so I installed Linux Mint 12 in my VirtualBox. The most frustrating aspect of the switch, so far, has been dealing with Linux permissions. It seems like I can't do anything useful (like, say, copy the Symfony2 tarball from my Downloads directory to my document root and extract it) without posing as the root via sudo. Is there an easy way to tell linux to give me unfettered access to certain directories without simply blowing open all of their permissions?
Two options come to my mind: Own the directory you want by using chown: sudo chown your_username directory (replace your_username with your username and directory with the directory you want.) The other thing you can do is work as root as long as you KNOW WHAT YOU ARE DOING. To use root do: sudo -s and then you can do anything without having to type sudo before every command. To exit this sudo -s shell terminal, type exit and you will be returned to the previous shell terminal.
Is there a way to stop having to write 'sudo' for every little thing in Linux?
1,332,892,117,000
I have seen this command on many blogs, used to enable IP forwarding when working with various network security/sniffing tools on Linux: echo 1 > /proc/sys/net/ipv4/ip_forward Can anyone explain to me, in layman's terms, what this command essentially does? Does it turn your system into a router?
"IP forwarding" is a synonym for "routing." It is called "kernel IP forwarding" because it is a feature of the Linux kernel. A router has multiple network interfaces. If traffic comes in on one interface that matches a subnet of another network interface, a router then forwards that traffic to the other network interface. So, let's say you have two NICs, one (NIC 1) is at address 192.168.2.1/24, and the other (NIC 2) is 192.168.3.1/24. If forwarding is enabled, and a packet comes in on NIC 1 with a "destination address" of 192.168.3.8, the router will resend that packet out of NIC 2. It's common for routers functioning as gateways to the Internet to have a default route whereby any traffic that doesn't match any NICs will go through the default route's NIC. So in the above example, if you have an internet connection on NIC 2, you'd set NIC 2 as your default route and then any traffic coming in from NIC 1 that isn't destined for something on 192.168.2.0/24 will go through NIC 2. Hopefully there are other routers past NIC 2 that can further route it (in the case of the Internet, the next hop would be your ISP's router, and then their provider's upstream router, etc.) Enabling ip_forward tells your Linux system to do this. For it to be meaningful, you need two network interfaces (any 2 or more of wired NIC cards, Wifi cards or chipsets, PPP links over a 56k modem or serial, etc.). When doing routing, security is important and that's where Linux's packet filter, iptables, gets involved. So you will need an iptables configuration consistent with your needs. Note that enabling forwarding with iptables disabled and/or without taking firewalling and security into account could leave you open to vulnerabilities if one of the NICs is facing the Internet or a subnet you don't have control over.
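As a concrete illustration (the write operations need root, so they are shown commented):

```shell
# Check the current state: 0 = forwarding off, 1 = on.
cat /proc/sys/net/ipv4/ip_forward

# Enable it until the next reboot (as root); these are equivalent:
#   echo 1 > /proc/sys/net/ipv4/ip_forward
#   sysctl -w net.ipv4.ip_forward=1

# Make it permanent by putting this line in /etc/sysctl.conf
# (or a file under /etc/sysctl.d/) and reloading with "sysctl -p":
#   net.ipv4.ip_forward = 1
```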
What is kernel ip forwarding?
1,332,892,117,000
cp -r is meant to copy files recursively, and cp -R for copying directories recursively. But I've checked, and both appear to copy both files and directories, the same thing. So, what's the difference actually?
While -R is well-defined by POSIX, -r is not portable! On Linux, in the GNU and BusyBox implementations of cp, -r and -R are equivalent. On the other hand, as you can read in the POSIX manual page of cp, the behavior of -r is implementation-defined: * If neither the -R nor -r options were specified, cp shall take actions based on the type and contents of the file referenced by the symbolic link, and not by the symbolic link itself. * If the -R option was specified: * If none of the options -H, -L, nor -P were specified, it is unspecified which of -H, -L, or -P will be used as a default. * If the -H option was specified, cp shall take actions based on the type and contents of the file referenced by any symbolic link specified as a source_file operand. * If the -L option was specified, cp shall take actions based on the type and contents of the file referenced by any symbolic link specified as a source_file operand or any symbolic links encountered during traversal of a file hierarchy. * If the -P option was specified, cp shall copy any symbolic link specified as a source_file operand and any symbolic links encountered during traversal of a file hierarchy, and shall not follow any symbolic links. * If the -r option was specified, the behavior is implementation-defined.
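If the symlink handling matters, one option is to spell out the policy explicitly instead of relying on the default. A small demonstration with GNU cp (-P copies symlinks as symlinks, -L would follow them):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo hi > "$src/file"
ln -s file "$src/link"

# -R with an explicit -P: no implementation-defined guessing.
cp -R -P "$src" "$dst/copy"

[ -L "$dst/copy/link" ] && echo "symlink copied as a symlink"
rm -rf "$src" "$dst"
```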
Difference between cp -r and cp -R (copy command)
1,332,892,117,000
Is there a way to execute a command in a different directory without having to cd to it? I know that I could simply cd in and cd out, but I'm just interested in the possibilities of forgoing the extra steps :)
I don't know if this counts, but you can make a subshell: $ (cd /var/log && cp -- *.log ~/Desktop) The directory is only changed for that subshell, so you avoid the work of needing to cd - afterwards.
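A quick demonstration that the parent shell's working directory survives the subshell:

```shell
before=$(pwd)
(cd /tmp && pwd)        # prints: /tmp
after=$(pwd)
[ "$before" = "$after" ] && echo "parent shell never moved"
```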
Execute a specific command in a given directory without cd'ing to it?
1,332,892,117,000
How to limit process to one cpu core ? Something similar to ulimit or cpulimit would be nice. (Just to ensure: I do NOT want to limit percentage usage or time of execution. I want to force app (with all it's children, processes (threads)) to use one cpu core (or 'n' cpu cores)).
Under Linux, execute the sched_setaffinity system call. The affinity of a process is the set of processors on which it can run. There's a standard shell wrapper: taskset. For example, to pin a process to CPU #0 (you need to choose a specific CPU): taskset -c 0 mycommand --option # start a command with the given affinity taskset -c -pa 0 1234 # set the affinity of a running process There are third-party modules for both Perl (Sys::CpuAffinity) and Python (affinity) to set a process's affinity. Both of these work on both Linux and Windows (Windows may require other third-party modules with Sys::CpuAffinity); Sys::CpuAffinity also works on several other unix variants. If you want to set a process's affinity from the time of its birth, set the current process's affinity immediately before calling execve. Here's a trivial wrapper that forces the given command to execute on CPU 0: #!/usr/bin/env perl use POSIX; use Sys::CpuAffinity; Sys::CpuAffinity::setAffinity(getpid(), [0]); exec { $ARGV[0] } @ARGV;
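To merely inspect a process's affinity without changing it, the kernel exposes it in /proc, so no extra tools are needed (the taskset line assumes util-linux is installed and is shown commented):

```shell
# Which CPUs may the current shell run on?
grep Cpus_allowed_list /proc/self/status   # e.g. "Cpus_allowed_list: 0-3"

# taskset can query (or change) the same setting:
#   taskset -cp $$
```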
How to limit a process to one CPU core in Linux? [duplicate]
1,332,892,117,000
I understand that reads to /dev/random may block, while reading /dev/urandom is guaranteed not to block. Where does the letter u come into this? What does it signify? Userspace? Unblocking? Micro? Update: Based on the initial wording of the question, there has been some debate over the usefulness of /dev/random vs /dev/urandom. The link Myths about /dev/urandom has been posted three times below, and is summarised in this answer to the question When to use /dev/random vs /dev/urandom.
Unlimited. In Linux, comparing the kernel functions named random_read and random_read_unlimited indicates that the etymology of the letter u in urandom is unlimited. This is confirmed by line 114: The /dev/urandom device does not have this limit [...] Update: Regarding which came first for Linux, /dev/random or /dev/urandom, @Stéphane Chazelas gave the post with the original patch and @StephenKitt showed they were both introduced simultaneously.
What does the letter 'u' mean in /dev/urandom?
1,332,892,117,000
I've never really got how chmod worked up until today. I followed a tutorial that explained a great deal to me. For example, I've read that you've got three different permission groups: owner (u) group (g) everyone (o) Based on these three groups, I now know that: If the file is owned by the user, the user permissions determine the access. If the group of the file is the same as the user's group, the group permissions determine the access. If the user is not the file owner, and is not in the group, then the other permission is used. I've also learned that you've got the following permissions: read (r) write (w) execute (x) I created a directory to test my newly acquired knowledge: mkdir test Then I did some tests: chmod u+rwx test/ # drwx------ chmod g+rx test/ # drwxr-x--- chmod u-x test/ # drw-r-x--- After fooling around for some time I think I finally got the hang of chmod and the way you set permissions using this command. But... I still have a few questions: What does the d at the start stand for? What's the name and use of the containing slot and what other values can it hold? How can I set and unset it? What is the value for this d? (As each of the three digits only goes up to 7 = 4+2+1.) Why do people sometimes use 0777 instead of 777 to set their permissions? But as I shouldn't be asking multiple questions, I'll try to ask it in one question. In UNIX-based systems such as Linux distributions, concerning the permissions, what does the first part (d) stand for and what's the use of this part of the permissions?
I’ll answer your questions in three parts: file types, permissions, and use cases for the various forms of chmod. File types The first character in ls -l output represents the file type; d means it’s a directory. It can’t be set or unset, it depends on how the file was created. You can find the complete list of file types in the ls documentation; those you’re likely to come across are -: “regular” file, created with any program which can write a file b: block special file, typically disk or partition devices, can be created with mknod c: character special file, can also be created with mknod (see /dev for examples) d: directory, can be created with mkdir l: symbolic link, can be created with ln -s p: named pipe, can be created with mkfifo s: socket, can be created with nc -U D: door, created by some server processes on Solaris/openindiana. Permissions chmod 0777 is used to set all the permissions in one chmod execution, rather than combining changes with u+ etc. Each of the four digits is an octal value representing a set of permissions: suid, sgid and “sticky” (see below) user permissions group permissions “other” permissions The octal value is calculated as the sum of the permissions: “read” is 4 “write” is 2 “execute” is 1 For the first digit: suid is 4; binaries with this bit set run as their owner user (commonly root) sgid is 2; binaries with this bit set run as their owner group (this was used for games so high scores could be shared, but it’s often a security risk when combined with vulnerabilities in the games), and files created in directories with this bit set belong to the directory’s owner group by default (this is handy for creating shared folders) “sticky” (or “restricted deletion”) is 1; files in directories with this bit set can only be deleted by their owner, the directory’s owner, or root (see /tmp for a common example of this). See the chmod manpage for details. 
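A quick way to see those type characters for yourself, using a throwaway directory:

```shell
d=$(mktemp -d)
touch "$d/regular"          # will show as -
mkdir "$d/dir"              # will show as d
ln -s regular "$d/symlink"  # will show as l
mkfifo "$d/pipe"            # will show as p

# The first column of ls -l is the type character:
ls -l "$d" | tail -n +2 | cut -c1
rm -rf "$d"
```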
Note that in all this I’m ignoring other security features which can alter users’ permissions on files (SELinux, file ACLs...). Special bits are handled differently depending on the type of file (regular file or directory) and the underlying system. (This is mentioned in the chmod manpage.) On the system I used to test this (with coreutils 8.23 on an ext4 filesystem, running Linux kernel 3.16.7-ckt2), the behaviour is as follows. For a file, the special bits are always cleared unless explicitly set, so chmod 0777 is equivalent to chmod 777, and both commands clear the special bits and give everyone full permissions on the file. For a directory, the special bits are never fully cleared using the four-digit numeric form, so in effect chmod 0777 is also equivalent to chmod 777 but it’s misleading since some of the special bits will remain as-is. (A previous version of this answer got this wrong.) To clear special bits on directories you need to use u-s, g-s and/or o-t explicitly or specify a negative numeric value, so chmod -7000 will clear all the special bits on a directory. In ls -l output, suid, sgid and “sticky” appear in place of the x entry: suid is s or S instead of the user’s x, sgid is s or S instead of the group’s x, and “sticky” is t or T instead of others’ x. A lower-case letter indicates that both the special bit and the executable bit are set; an upper-case letter indicates that only the special bit is set. The various forms of chmod Because of the behaviour described above, using the full four digits in chmod can be confusing (at least it turns out I was confused). It’s useful when you want to set special bits as well as permission bits; otherwise the bits are cleared if you’re manipulating a file, preserved if you’re manipulating a directory. So chmod 2750 ensures you’ll get at least sgid and exactly u=rwx,g=rx,o=; but chmod 0750 won’t necessarily clear the special bits. 
Using numeric modes instead of text commands ([ugo][=+-][rwxXst]) is probably more a case of habit and the aim of the command. Once you’re used to using numeric modes, it’s often easier to just specify the full mode that way; and it’s useful to be able to think of permissions using numeric modes, since many other commands can use them (install, mknod...). Some text variants can come in handy: if you simply want to ensure a file can be executed by anyone, chmod a+x will do that, regardless of what the other permissions are. Likewise, +X adds the execute permission only if one of the execute permissions is already set or the file is a directory; this can be handy for restoring permissions globally without having to special-case files v. directories. Thus, chmod -R ug=rX,u+w,o= is equivalent to applying chmod -R 750 to all directories and executable files and chmod -R 640 to all other files.
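To see that the numeric and symbolic spellings land on the same bits, here is a small experiment (assumes GNU stat):

```shell
f=$(mktemp)
chmod 750 "$f"
stat -c '%a %A' "$f"        # prints: 750 -rwxr-x---
chmod u=rwx,g=rx,o= "$f"    # the symbolic spelling of the same mode
stat -c '%a' "$f"           # prints: 750
rm -f "$f"
```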
Understanding UNIX permissions and file types
1,332,892,117,000
I have done some research about this on Google, but the results were cloudy. Why is the / sign used to denote the root directory. Are there any solid reasons behind it?
The forward slash / is the delimiting character which separates directories in paths in Unix-like operating systems. This character seems to have been chosen sometime in the 1970s, and according to anecdotal sources, the reasons might be related to the fact that the predecessor to Unix, the Multics operating system, used the > character as path separator, but the designers of Unix had already reserved the characters > and < to signify I/O redirection on the shell command line well before they had a multi-level file system. So when the time came to design the filesystem, they had to find another character to signify pathname element separation. A thing to note here is that on the Lear-Siegler ADM-3A terminal in common use during the 1970s, from which amongst other things the practice of using the ~ character to represent the home directory originates, the / key is next to the > key. As for why the root directory is denoted by a single /, it is a convention most likely influenced by the fact that the root directory is the top-level directory of the directory hierarchy, and while other directories may be beneath it, there usually isn't a reason to refer to anything outside the root directory. Similarly the directory entry itself has no name, because it's the boundary of the visible directory tree.
Why is the root directory denoted by a / sign?
1,332,892,117,000
Within the output of top, there are two fields, marked "buff/cache" and "avail Mem" in the memory and swap usage lines. What do these two fields mean? I've tried Googling them, but the results only bring up generic articles on top, and they don't explain what these fields signify.
top’s manpage doesn’t describe the fields, but free’s does: buffers Memory used by kernel buffers (Buffers in /proc/meminfo) cache Memory used by the page cache and slabs (Cached and SReclaimable in /proc/meminfo) buff/cache Sum of buffers and cache available Estimation of how much memory is available for starting new applications, without swapping. Unlike the data provided by the cache or free fields, this field takes into account page cache and also that not all reclaimable memory slabs will be reclaimed due to items being in use (MemAvailable in /proc/meminfo, available on kernels 3.14, emulated on kernels 2.6.27+, otherwise the same as free) Basically, “buff/cache” counts memory used for data that’s on disk or should end up there soon, and as a result is potentially usable (the corresponding memory can be made available immediately, if it hasn’t been modified since it was read, or given enough time, if it has); “available” measures the amount of memory which can be allocated and used without causing more swapping (see How can I get the amount of available memory portably across distributions? for a lot more detail on that).
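The counters both tools aggregate come straight from /proc/meminfo; for example (free is part of procps, so it is shown commented in case it isn't installed):

```shell
# Buffers + Cached + SReclaimable is what top folds into buff/cache;
# MemAvailable is the "avail Mem" estimate (kernels 3.14+):
grep -E '^(MemAvailable|Buffers|Cached|SReclaimable)' /proc/meminfo

# Human-readable summary of the same data:
#   free -h
```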
What do the "buff/cache" and "avail mem" fields in top mean?
1,332,892,117,000
In the grub.conf configuration file I can specify command line parameters that the kernel will use, i.e.: kernel /boot/kernel-3-2-1-gentoo root=/dev/sda1 vga=791 After booting a given kernel, is there a way to display the command line parameters that were passed to the kernel in the first place? I've found sysctl, sysctl --all but sysctl shows up all possible kernel parameters.
$ cat /proc/cmdline root=/dev/xvda xencons=tty console=tty1 console=hvc0 nosep nodevfs ramdisk_size=32768 ip_conntrack.hashsize=8192 nf_conntrack.hashsize=8192 ro devtmpfs.mount=1 $
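Since the parameters are space-separated, they're easy to reformat or test for, e.g.:

```shell
# One boot parameter per line:
tr ' ' '\n' < /proc/cmdline

# Was a particular flag passed? (-w matches whole words only)
grep -qw ro /proc/cmdline && echo "root mounted read-only at boot" || true
```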
How to display the Linux kernel command line parameters given for the current boot?
1,332,892,117,000
In Linux Mint 17.3 / 18 iwconfig says the power management of my wireless card is turned on. I want to turn it off permanently or some workaround on this issue. sudo iwconfig wlan0 power off works, until I reboot the laptop. Also, if I randomly check iwconfig, sometimes it's on, despite I did run this command. I read some articles about making the fix permanent. All of them contained the first step "Go to directory /etc/pm/power.d", which in my case did not exist. I followed these steps: sudo mkdir -p /etc/pm/power.d sudo nano /etc/pm/power.d/wireless_power_management_off I entered these two lines into the file: #!/bin/bash /sbin/iwconfig wlan0 power off And I finished with setting proper user rights: sudo chmod 700 /etc/pm/power.d/wireless_power_management_off But after reboot the power management is back on. iwconfig after manually turning power management off eth0 no wireless extensions. wlan0 IEEE 802.11abgn ESSID:"SSID" Mode:Managed Frequency:2.462 GHz Access Point: 00:00:00:00:00:00 Bit Rate=24 Mb/s Tx-Power=22 dBm Retry short limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=42/70 Signal level=-68 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:2 Invalid misc:18 Missed beacon:0 lo no wireless extensions. I don't think this question applies only to Linux Mint, it is a general issue of particular wireless adapters.
Open this file with your favorite text editor, I use nano here: sudo nano /etc/NetworkManager/conf.d/default-wifi-powersave-on.conf By default there is: [connection] wifi.powersave = 3 Change the value to 2. Possible values for the wifi.powersave field are: NM_SETTING_WIRELESS_POWERSAVE_DEFAULT (0): use the default value NM_SETTING_WIRELESS_POWERSAVE_IGNORE (1): don't touch existing setting NM_SETTING_WIRELESS_POWERSAVE_DISABLE (2): disable powersave NM_SETTING_WIRELESS_POWERSAVE_ENABLE (3): enable powersave (Informal source on GitHub for these values.) To take effect, just run: sudo systemctl restart NetworkManager
How to turn off Wireless power management permanently
1,332,892,117,000
When I do a lspci -k on my Kubuntu with a 3.2.0-29-generic kernel I can see something like this: 01:00.0 VGA compatible controller: NVIDIA Corporation G86 [Quadro NVS 290] (rev a1) Subsystem: NVIDIA Corporation Device 0492 Kernel driver in use: nvidia Kernel modules: nvidia_current, nouveau, nvidiafb There is a kernel driver nvidia and kernel modules nvidia_current, nouveau, nvidiafb. Now I wondered what might be the difference between Kernel drivers and Kernel modules?
A kernel module is a bit of compiled code that can be inserted into the kernel at run-time, such as with insmod or modprobe. A driver is a bit of code that runs in the kernel to talk to some hardware device. It "drives" the hardware. Most every bit of hardware in your computer has an associated driver.¹ A large part of a running kernel is driver code.² A driver may be built statically into the kernel file on disk.³ A driver may also be built as a kernel module so that it can be dynamically loaded later. (And then maybe unloaded.) Standard practice is to build drivers as kernel modules where possible, rather than link them statically to the kernel, since that gives more flexibility. There are good reasons not to, however: Sometimes a given driver is absolutely necessary to help the system boot up. That doesn't happen as often as you might imagine, due to the initrd feature. Statically built drivers may be exactly what you want in a system that is statically scoped, such as an embedded system. That is to say, if you know in advance exactly which drivers will always be needed and that this will never change, you have a good reason not to bother with dynamic kernel modules. If you build your kernel statically and disable Linux's dynamic module loading feature, you prevent run-time modification of the kernel code. This provides additional security and stability at the expense of flexibility. Not all kernel modules are drivers. For example, a relatively recent feature in the Linux kernel is that you can load a different process scheduler. Another example is that the more complex types of hardware often have multiple generic layers that sit between the low-level hardware driver and userland, such as the USB HID driver, which implements a particular element of the USB stack, independent of the underlying hardware. Asides: One exception to this broad statement is the CPU chip, which has no "driver" per se. 
Your computer may also contain hardware for which you have no driver. The rest of the code in an OS kernel provides generic services like memory management, IPC, scheduling, etc. These services may primarily serve userland applications, as with the examples linked previously, or they may be internal services used by drivers or other intra-kernel infrastructure. The one in /boot, loaded into RAM at boot time by the boot loader early in the boot process.
What is the difference between kernel drivers and kernel modules?