I'm pretty new to using Linux, but I am setting up my MySQL databases on an Amazon EC2 instance. I followed some directions I found about resetting the user login password by using MySQL's --skip-grant-tables option. Now I am trying to add a user and can't figure out how to turn that option off. This is what I'm trying to do:

```
mysql> GRANT CREATE,SELECT,INSERT,UPDATE,DELETE ON ...my db username and pass
```

but I get this error:

```
ERROR 1290 (HY000): The MySQL server is running with the --skip-grant-tables option so it cannot execute this statement
```

How do I turn this option off?
Just stop and restart MySQL normally.
Turn off --skip-grant-tables in MySQL
I'm new to X11 and want to understand if it is really as dangerous as they say on the Internet. I will explain how I understand this. Any application launched from under the current user has access to the keyboard, mouse, display (e.g. taking a screenshot), and this is not good. But, if we install programs from the official repository (for example, for Debian), which are unlikely to contain keyloggers, etc., then the danger seems exaggerated. Am I wrong? Yes, you can open applications on separate servers (for example, Xephyr), but this is inconvenient, since there is no shared clipboard. Creating a clipboard based on tmp files is also inconvenient.
> Any application launched from under the current user has access to the keyboard, mouse, display (e.g. taking a screenshot), and this is not good.

All the X11 clients on a desktop can access each other in depth, including getting the content of any window, changing it, closing any window, faking key and mouse events to any other client, grabbing any input device, etc. The X11 protocol design is based on the idea that the clients are all trusted and will collaborate, not step on each other's toes (the latter completely broken by modern apps like Firefox, Chrome or Java).

> But, if we install programs from the official repository (for example, for Debian), which are unlikely to contain keyloggers, etc., then the danger seems exaggerated. Am I wrong?

Programs have bugs, which may be exploited. The X11 server and libraries may not be up to date. For instance, any X11 client can crash the X server in the current version of Debian (Buster 10) via innocuous Xkb requests. (That was fixed in the upstream sources, but didn't make it yet into Debian.) If it's able to crash it, then there's some probability that it's also able to execute code with the privileges of the X11 server (access to hardware, etc). For the problems with the lax authentication in Xwayland (and the regular Xorg X server in Debian), see the notes at the end of this answer.

> Yes, you can open applications on separate servers (for example, Xephyr), but this is inconvenient, since there is no shared clipboard. Creating a clipboard based on tmp files is also inconvenient.

Notice that unless you take extra steps, Xephyr allows any local user to connect to it by default. See this for a discussion about it. Creating a shared clipboard between multiple X11 servers is an interesting problem which deserves its own Q&A, rather than being mixed with this one.
Is X11 dangerous?
I want to run a command when the user becomes inactive (the system is idle). For example: echo "You started to be inactive." Also, when the user becomes active again (the system is not idle anymore): echo "You started to be active, again." I need a shell script that will do this. Is this possible without a timer/interval? Maybe some system events?
This thread on the ArchLinux forums contains a short C program that queries the X screensaver extension for how long the user has been idle; this seems to be quite close to your requirements:

```c
#include <X11/extensions/scrnsaver.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);

    if (!dpy) {
        return 1;
    }

    XScreenSaverInfo *info = XScreenSaverAllocInfo();
    XScreenSaverQueryInfo(dpy, DefaultRootWindow(dpy), info);
    printf("%u\n", info->idle);
    return 0;
}
```

Save this as getIdle.c and compile with

```shell
gcc -o getIdle getIdle.c -lXss -lX11
```

to get an executable file getIdle. This program prints the "idle time" (user does not move or click the mouse, does not use the keyboard) in milliseconds, so a bash script that builds upon it could look like this:

```shell
#!/bin/bash

idle=false
idleAfter=3000    # consider idle after 3000 ms

while true; do
    idleTimeMillis=$(./getIdle)
    echo $idleTimeMillis    # just for debug purposes.

    if [[ $idle = false && $idleTimeMillis -gt $idleAfter ]]; then
        echo "start idle"   # or whatever command(s) you want to run...
        idle=true
    fi

    if [[ $idle = true && $idleTimeMillis -lt $idleAfter ]]; then
        echo "end idle"     # same here.
        idle=false
    fi

    sleep 1    # polling interval
done
```

This still needs regular polling, but it does everything you need...
Run a command when system is idle and when is active again
Not long ago, I created a new software RAID array (mdadm) with 4 drives in RAID6. It seems to work just fine. mdstat follows:

```
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sda1[0] sde1[3] sdd1[2] sdb1[1]
      1953260544 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

unused devices: <none>
```

What is bugging me is the bitmap: 0/8 pages part, which I don't understand. The question is: is this a potential problem or not? And please, elaborate a little on what the bitmap is actually about. Full detail of this array follows:

```
/dev/md0:
        Version : 1.2
  Creation Time : Tue Nov  1 13:44:13 2016
     Raid Level : raid6
     Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
  Used Dev Size : 976630272 (931.39 GiB 1000.07 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Dec  2 13:05:18 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : backup-server:0  (local to host backup-server)
           UUID : 023f115d:212b130c:f05b072b:b14c2819
         Events : 1664

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
```
The bitmap line in /proc/mdstat indicates how much memory is being used to cache the write-intent bitmap. Basically, in RAID setups with redundant devices, mdadm can use a "bitmap" to keep track of which blocks may be out of sync (because they've been written to). When a block is written to the mdadm device, it is flagged in the bitmap, and then written to the underlying devices; once enough time has passed without activity in the block that mdadm can be sure that it's been written to all the devices, the flag is removed from the bitmap. It's useful to speed up resyncs after a system crash, or after a disk is removed and re-added (without being changed). In your case, 0/8 means that no memory is being used for the in-memory bitmap cache. This is a good thing: there's a good chance that all the underlying devices are synced. (In theory there could be entries in the on-disk bitmap that aren't cached in memory, but that's unlikely if the cache is completely empty.) md(4) has more information.
What is the bitmap's meaning in mdstat
What is the practical usage of the /etc/networks file? As I understand it, one can give names to networks in this file. For example:

```
root@fw-test:~# cat /etc/networks
default         0.0.0.0
loopback        127.0.0.0
link-local      169.254.0.0
google-dns      8.8.4.4
root@fw-test:~#
```

However, if I try to use this network name, for example with the ip utility, it does not work:

```
root@fw-test:~# ip route add google-dns via 104.236.63.1 dev eth0
Error: an inet prefix is expected rather than "google-dns".
root@fw-test:~# ip route add 8.8.4.4 via 104.236.64.1 dev eth0
root@fw-test:~#
```

What is the practical usage of the /etc/networks file?
As written in the manual page, the /etc/networks file describes symbolic names for networks. Here, "network" means the network address with a trailing .0 at the end; only simple class A, B, or C networks are supported.

In your example, the google-dns entry is wrong: 8.8.4.4 is not an A, B, or C network. It is an IP-address-to-hostname relationship, so it belongs in /etc/hosts. Actually, the default entry is also not conformant.

Let's imagine you have the IP address 192.168.1.5 on your corporate network. An entry in /etc/networks could then be:

```
corpname    192.168.1.0
```

When you use utilities like route or netstat, those networks are translated (if you don't suppress resolution with the -n flag). A routing table could then look like:

```
Kernel IP routing table
Destination     Gateway         Genmask         Flags   Metric Ref    Use Iface
default         192.168.1.1     0.0.0.0         UG      0      0        0 eth0
corpname        *               255.255.255.0   U       0      0        0 eth0
```
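The name-to-address translation that route and netstat perform can be sketched with awk against an /etc/networks-style file. The corpnet entry and the sample file below are made up for illustration:

```shell
# Build a sample /etc/networks-style file (name, whitespace, network address).
networks=$(mktemp)
cat > "$networks" <<'EOF'
default    0.0.0.0
loopback   127.0.0.0
corpnet    192.168.1.0
EOF

# Look up the address for a given symbolic network name.
awk -v name=corpnet '$1 == name { print $2 }' "$networks"   # prints 192.168.1.0
rm -f "$networks"
```

On a glibc system the equivalent lookup against the real file is done by `getent networks <name>`.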
practical usage of /etc/networks file
I am trying to find a way to remap keyboard keys forcefully. I tried using xmodmap and setxkbmap, but they do not work for one specific application (such commands do work for other normal windowed applications on X, though). I think the application may be reading the raw keyboard data and ignoring X input. So, how can I remap keys without using xmodmap and setxkbmap, if it is possible at all with some software?

I also tried xkeycaps and xkbcomp, but did not try loadkeys, as I am running X. I found here that I could try setkeycodes, "because after assigning kernel keycode the button should work in xorg", but I also found that "you can't use 'setkeycodes' on USB keyboards", which is my case (I am interested in case someone makes it work on PS/2, as I think I could use an adapter).

This seemed promising: "Map scancodes to keycodes". But after a few tests nothing changed. Here they are: I found keycode "36" ("j" key) at vt1 with showkey, and scancode "7e" (keypad ".") at vt1 with showkey --scancodes.

```
$ cat >/etc/udev/hwdb.d/90-custom-keyboard.hwdb
keyboard:usb:v*p*
keyboard:dmi:bvn*:bvr*:bd*:svn*:pn*:pvr*
 KEYBOARD_KEY_7e=36

$ udevadm hwdb --update    # updates file: /lib/udev/hwdb.bin
$ udevadm trigger          # should apply the changes, but nothing happened

$ cat /lib/udev/hwdb.bin | egrep "KEYBOARD_KEY_7e.{10}" -ao
KEYBOARD_KEY_7eleftmeta
$ # that cat on hwdb.bin did not change after the commands..
```

Obs.: it did not work with KEYBOARD_KEY_7e=j either.

Some more alternative ways (by @vinc17) to find the keys: evtest /dev/input/by-id/... or input-kbd 3 (put the id index found at ls -l /dev/input/by-id/* from e.g. event3).

PS: if you are interested in testing this yourself, the related thread for the application is this: http://forums.thedarkmod.com/topic/14266-keyboard-issue-in-new-version-108/ The issues I have are the same: some keys (KP_Decimal, DownArrow, UpArrow, RightArrow) are ignored and considered all with the same value there, "0x00".
First find the scancode of the key that needs to be remapped, e.g. with the evtest utility. A line like the following one (with MSC_SCAN in it) should be output:

```
Event: time 1417131619.686259, type 4 (EV_MSC), code 4 (MSC_SCAN), value 70068
```

followed by a second one giving the current key code. If no MSC_SCAN line is output, this is due to a kernel driver bug, but the scancode can still be found with the input-kbd utility; evtest should have given the key code, so that it should be easy to find the corresponding line in the input-kbd output (e.g. by using grep).

Once the scancodes of the keys to be remapped have been determined, create a file such as /etc/udev/hwdb.d/98-custom-keyboard.hwdb containing the remappings. The beginning of the file /lib/udev/hwdb.d/60-keyboard.hwdb gives some information. In my case (which works), I have:

```
evdev:input:b0003v05ACp0221*
 KEYBOARD_KEY_70035=102nd  # Left to z: backslash bar
 KEYBOARD_KEY_70064=grave  # Left to 1: grave notsign
 KEYBOARD_KEY_70068=insert # F13: Insert
```

(Before udev 220, I had to use keyboard:usb:v05ACp0221* for the first line.)

The evdev: string must be at the beginning of the line. Note that the letters in the vendor and product ID should be capital letters. Each KEYBOARD_KEY_ setting should have exactly one space before it (note: a line with no spaces will give an error message, and a line with two spaces was silently ignored with old udev versions). KEYBOARD_KEY_ is followed by the scancode in hexadecimal (like what both evtest and input-kbd give). Valid values can be obtained from either the evtest output or the input-kbd output, or even from the /usr/include/linux/input.h file: for instance, KEY_102ND would give 102nd (by removing KEY_ and converting to lower case), which I used above.

After the file is saved, type:

```shell
udevadm hwdb --update
```

to (re)build the database /etc/udev/hwdb.bin (you can check its timestamp). Then,

```shell
udevadm trigger --sysname-match="event*"
```

will take the new settings into account.
You can check with evtest. In 2014, the released udev had incomplete/buggy information in /lib/udev/hwdb.d/60-keyboard.hwdb, but you can look at the latest development version of the file and/or my bug report and discussion concerning the documentation and spacing issues. If this doesn't work, the problem might be found after temporarily increasing the log level of udevd with udevadm control (see the udevadm(8) man page for details). For old udev versions such as 204, this method should still work.
keyboard hard remap keys?
I'm trying to find a way to determine Linux distribution name and version that would work on most (or ideally, all) modern distributions. I noticed that /etc/os-release contains the info I need on the distributions I tried (CentOS, Debian), but how safe is it to rely on the presence of it? Commands such as uname -a don't really contain the same info, and lsb_release is apparently not present on e.g. minimal CentOS. Is there a quick way to find out exactly what distros come with /etc/os-release? Moreover, is /etc/os-release guaranteed to contain NAME, VERSION and PRETTY_NAME fields?
Any system running systemd should have /etc/os-release, which is specified as part of systemd. Some systems without systemd might have it too (e.g. Debian 8 where systemd is optional but /etc/os-release is installed in all cases). According to the specification, all fields are optional, and some have defaults ("Linux" for NAME and PRETTY_NAME). You’ll find more background in the /etc/os-release announcement.
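The defaulting behaviour described above can be sketched in plain sh: source an os-release style file in a subshell and apply the spec's "Linux" default when NAME or PRETTY_NAME is absent. The sample file contents here are hypothetical:

```shell
# Build a minimal os-release style sample that omits NAME and PRETTY_NAME.
sample=$(mktemp)
cat > "$sample" <<'EOF'
ID=debian
VERSION_ID="8"
EOF

# Source it in a subshell so the variables don't leak, then apply the
# spec's defaults for the missing fields.
name=$( unset NAME; . "$sample"; echo "${NAME:-Linux}" )
pretty=$( unset PRETTY_NAME; . "$sample"; echo "${PRETTY_NAME:-Linux}" )

echo "NAME=$name PRETTY_NAME=$pretty"   # prints NAME=Linux PRETTY_NAME=Linux
rm -f "$sample"
```

The same subshell-sourcing trick works against the real /etc/os-release on systems that have it.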
On what Linux distributions can I rely on the presence of /etc/os-release?
I have a question regarding making my own unit (service) file for systemd. I've read the documentation and, after searching around, found this very helpful answer that clears up some of the questions I had: How to write a systemd .service file running systemd-tmpfiles. Although I find that answer useful, there is still one part that I do not understand, mainly this:

> Since we actually want this service to run later rather than sooner, we then specify an "After" clause. This does not actually need to be the same as the WantedBy target (it usually isn't)

My understanding of After is pretty straightforward: the service (or whatever you are defining) will run after the unit listed in After. Similarly, WantedBy seems pretty straightforward: you are declaring that the unit you list has a Want for your service, so for a target like multi-user or graphical, your unit should be run in order for systemd to consider that target reached. Now, assuming my understanding of how these declarations work is correct so far, my question is this: why would it even work to list the same unit in the After and WantedBy clauses? For example, defining a unit that is After multi-user.target and also WantedBy multi-user.target seems to me like it would lead to an impossible situation where the unit needs to be started after the target is reached, but it also needs to be started for the target to be considered "reached". Am I misunderstanding something?
The systemd manual discusses the relationship between Before=/After= and Requires=/Wants=/BindsTo= in the section on Before= and After=:

> Note that this setting is independent of and orthogonal to the requirement dependencies as configured by Requires=, Wants= or BindsTo=. It is a common pattern to include a unit name in both the After= and Requires= options.

After= does not imply Wants= or WantedBy=, nor does it conflict with those settings. If both units are triggered to start, After= only affects the order, regardless of the dependency chain. If the unit listed in After= is not somewhere in the dependency chain, it won't be started at all, since After= does not imply any dependency.
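As a sketch of how the orthogonal clauses are commonly combined in one unit (the unit name myapp.service and the ExecStart path are placeholders, and network-online.target is just one typical choice):

```ini
[Unit]
Description=Hypothetical example service
# Ordering only: if both units are scheduled, start after the network target.
After=network-online.target
# Requirement: actually pull the network target into the transaction.
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp

[Install]
# Requirement in the other direction: multi-user.target wants this unit.
WantedBy=multi-user.target
```

Dropping the Wants= line here would leave only ordering: the service would still start under multi-user.target, but nothing would pull network-online.target in on its behalf.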
Systemd Unit File - WantedBy and After
I need to run tail -f against a log file, but only for specific amount of time, for example 20 seconds, and then exit. What would be an elegant way to achieve this?
With GNU timeout:

```shell
timeout 20 tail -f /path/to/file
```
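A quick way to see the behaviour without waiting 20 seconds: follow a scratch file with a 2-second limit and inspect timeout's exit status.

```shell
# Create a demo log file with one existing line.
logfile=$(mktemp)
echo "first line" > "$logfile"

# tail -f prints the line and then blocks; after 2 seconds, timeout
# terminates it. GNU timeout exits with status 124 on a timeout.
timeout 2 tail -f "$logfile"
echo "exit status: $?"    # prints: exit status: 124

rm -f "$logfile"
```

If tail exited on its own before the limit (e.g. the file was deleted), timeout would instead pass through tail's own exit status.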
'tail -f' for a specific amount of time [duplicate]
I have the following data (a list of R packages parsed from an Rmarkdown file) that I want to turn into a list I can pass to R to install:

```
d3heatmap
data.table
ggplot2
htmltools
htmlwidgets
metricsgraphics
networkD3
plotly
reshape2
scales
stringr
```

I want to turn it into a list of the form:

```
'd3heatmap', 'data.table', 'ggplot2', 'htmltools', 'htmlwidgets', 'metricsgraphics', 'networkD3', 'plotly', 'reshape2', 'scales', 'stringr'
```

I currently have a bash pipeline that goes from the raw file to the list above:

```shell
grep 'library(' Presentation.Rmd \
  | grep -v '#' \
  | cut -f2 -d\( \
  | tr -d ')' \
  | sort | uniq
```

I want to add a step to turn the newlines into the comma-separated list. I've tried adding tr '\n' '","', which fails. I've also tried a number of the following Stack Overflow answers, which also fail:

https://stackoverflow.com/questions/1251999/how-can-i-replace-a-newline-n-using-sed produces library(stringr)))phics) as the result.

https://stackoverflow.com/questions/10748453/replace-comma-with-newline-in-sed produces ,% as the result.

"Can sed replace new line characters?" (with the -i flag removed) produces output identical to the input.
You can add quotes with sed and then merge the lines with paste, like this:

```shell
sed 's/^\|$/"/g' | paste -sd, -
```

If you are running a GNU coreutils based system (i.e. Linux), you can omit the trailing -. If your input data has DOS-style line endings (as @phk suggested), you can modify the command as follows:

```shell
sed 's/\r//;s/^\|$/"/g' | paste -sd, -
```
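To see the pipeline's effect in isolation, you can feed it a few package names directly (GNU sed is assumed, since \| alternation is a GNU extension):

```shell
# Quote each input line, then join all lines with commas.
printf '%s\n' d3heatmap data.table ggplot2 \
  | sed 's/^\|$/"/g' \
  | paste -sd, -
# prints: "d3heatmap","data.table","ggplot2"
```

If you want the single quotes shown in the question instead, replace the `"` in the sed replacement with `'\''` (or run the pipeline inside double quotes).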
Turning separate lines into a comma separated list with quoted entries
We want to create 6 mount-point folders, for example:

```
/data/sdb
/data/sdc
/data/sdd
/data/sde
/data/sdf
/data/sdg
```

so we wrote this simple bash script using an array:

```shell
folder_mount_point_list="sdb sdc sdd sde sdf sdg"
folderArray=( $folder_mount_point_list )

counter=0
for i in disk1 disk2 disk3 disk4 disk4 disk5 disk6
do
    folder_name=${folderArray[counter]}
    mkdir /data/$folder_name
    let counter=$counter+1
done
```

Now we want to change the code to get rid of the counter and let counter=$counter+1. Is it possible to shift the array on each loop iteration in order to get the next array value, something like ${folderArray[++]}?
A general remark: it does not make sense to define an array like this:

```shell
folder_mount_point_list="sdb sdc sdd sde sdf sdg"
folderArray=( $folder_mount_point_list )
```

You would do this instead:

```shell
folderArray=(sdb sdc sdd sde sdf sdg)
```

Now to your question:

```shell
set -- sdb sdc sdd sde sdf sdg
for folder_name; do
    mkdir "/data/$folder_name"
done
```

or

```shell
set -- sdb sdc sdd sde sdf sdg
while [ $# -gt 0 ]; do
    mkdir "/data/$1"
    shift
done
```
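A runnable sketch of the set --/for variant, using a temporary directory instead of /data so it needs no root:

```shell
# Scratch prefix standing in for /data.
base=$(mktemp -d)

set -- sdb sdc sdd sde sdf sdg
for folder_name; do       # "for name" with no "in ..." iterates over "$@"
    mkdir "$base/$folder_name"
done

ls "$base"    # prints: sdb sdc sdd sde sdf sdg (one per line)
rm -rf "$base"
```

The shift variant behaves identically; the for form is just shorter because iterating over the positional parameters is the default.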
how to shift array value in bash
In terminal emulation applications, pressing CTRL + Left / Right arrows jumps from one word to the previous or next one. Is it possible to have the same functionality in a Linux console, whether it is in text or in framebuffer modes? In my configuration, the CTRL + arrow keys are transformed into escaped character sequences and not interpreted.
This is possible if and only if the terminal sends different escape sequences for Ctrl+Left vs Left. This is not the case by default on the Linux console (at least on my machine). You can make it so by modifying the keymap. The exact file to modify may depend on your distribution; on Debian lenny, the file to modify is /etc/console/boottime.kmap.gz. You need lines like:

```
control keycode 105 = F100
string F100 = "\033O5D"
control keycode 106 = F101
string F101 = "\033O5C"
```

You might as well choose the same escape sequences as your X terminal emulator. To find out what the control sequence is, type Ctrl+V Ctrl+Left in a shell; this inserts (on my machine) ^[O5D, where ^[ is an escape character. In the keymap file, \033 represents an escape character.

Configuring the application in the terminal to decode the escape sequence is a separate problem.
How do I jump to the next or previous word with CTRL + arrow keys in a console?
I was checking the unshare command and, according to its man page:

> unshare - run program with some namespaces unshared from parent

I also see there is a type of namespace listed as:

> mount namespace: mounting and unmounting filesystems will not affect rest of the system.

What exactly is the purpose of this mount namespace? I am trying to understand this concept with the help of some example.
Running unshare -m gives the calling process a private copy of its mount namespace, and also unshares file system attributes so that it no longer shares its root directory, current directory, or umask attributes with any other process.

So what does the above paragraph say? Let us try to understand using a simple example.

Terminal 1: I run the below commands in the first terminal:

```shell
# Create a new process
unshare -m /bin/bash

# Create a new directory
secret_dir=$(mktemp -d --tmpdir=/tmp)

# Create a new mount point for the directory created above
mount -n -o size=1m -t tmpfs tmpfs $secret_dir

# Check the available mount points
grep /tmp /proc/mounts
```

The last command gives me the output:

```
tmpfs /tmp/tmp.7KtrAsd9lx tmpfs rw,relatime,size=1024k 0 0
```

Now I ran the following commands as well:

```shell
cd /tmp/tmp.7KtrAsd9lx
touch hello
touch helloagain
ls -lFa
```

The output of the ls command is:

```
total 4
drwxrwxrwt  2 root root   80 Sep  3 22:23 ./
drwxrwxrwt. 16 root root 4096 Sep  3 22:22 ../
-rw-r--r--  1 root root    0 Sep  3 22:23 hello
-rw-r--r--  1 root root    0 Sep  3 22:23 helloagain
```

So what is the big deal in doing all this? Why should I do it? I open another terminal now (terminal 2) and run the below commands:

```shell
cd /tmp/tmp.7KtrAsd9lx
ls -lFa
```

The output is as below:

```
total 8
drwx------  2 root root 4096 Sep  3 22:22 ./
drwxrwxrwt. 16 root root 4096 Sep  3 22:22 ../
```

The files hello and helloagain are not visible, and I even logged in as root to check for these files. So the advantage is that this feature makes it possible for us to create a private temporary filesystem that even other root-owned processes cannot see or browse through.

From the man page of unshare:

> mount namespace: mounting and unmounting filesystems will not affect the rest of the system (CLONE_NEWNS flag), except for filesystems which are explicitly marked as shared (with mount --make-shared; see /proc/self/mountinfo for the shared flags).
It is recommended to use mount --make-rprivate or mount --make-rslave after unshare --mount to make sure that mount points in the new namespace are really unshared from the parental namespace. The memory used by the namespace consists of VFS data structures allocated by the kernel. And, if we set it up right in the first place, we can create entire virtual environments in which we are the root user without root permissions.

References: the example is framed using the details from this blog post, and the quotes in this answer are from this wonderful explanation from Mike. Another wonderful read regarding this can be found in the answer here.
per process private file system mount points
I am using the newest version of netcat (v1.10-41.1), which does not seem to have an option for IPv6 addresses (like the -6 in older versions of nc). If I type nc -lvnp 2222 and check the listening ports with netstat -punta, the server appears to be listening on port 2222 for IPv4 addresses only:

```
tcp    0    0 0.0.0.0:2222    0.0.0.0:*    LISTEN    2839/nc
```

tcp6 is not active like, for example, my apache2 server:

```
tcp6   0    0 :::80           :::*         LISTEN    -
```
There are at least 3 or 4 different implementations of netcat, as seen on Debian:

- netcat-traditional 1.10-41: the original, which doesn't support IPv6. Probably what you installed.
- netcat6: which was made to offer IPv6 (oldstable, superseded).
- netcat-openbsd 1.130-3: does support IPv6.
- ncat 7.70+dfsg1-3: probably a bit newer since it is not in Debian stable; provided by nmap, and does support IPv6.

I'd go for the openbsd one. Each version can have subtly different syntax, so take care. By the way: socat is a much better tool, able to do much more than netcat. You should try it!
Netcat - How to listen on a TCP port using IPv6 address?
I am partitioning eMMC using the following commands in a script:

```shell
parted /dev/mmcblk0 --script mklabel gpt
parted /dev/mmcblk0 --script mkpart primary ext4 32MB 132MB
parted /dev/mmcblk0 --script mkpart primary ext4 233MB 433MB
parted /dev/mmcblk0 --script mkpart primary ext4 433MB 533MB
parted /dev/mmcblk0 --script mkpart primary ext4 533MB 593MB
parted /dev/mmcblk0 --script mkpart primary ext4 593MB 793MB
parted /dev/mmcblk0 --script mkpart primary ext4 793MB 3800MB
parted /dev/mmcblk0 --script align-check min 1
```

Is this the correct way to create partitions in a script? Is there a better way? After creating the first partition I get the following warning:

```
Warning: The resulting partition is not properly aligned for best performance.
```

Do I need to worry about it? I tried parted /dev/mmcblk0 --script align-check min 1, but I am not sure that's the solution. Any pointers for that? I am going through this link; meanwhile, any other suggestions?

Edit: just a quick reference for frostschutz's reply:

```
MiB = Mebibyte = 1024 KiB
KiB = Kibibyte = 1024 Bytes
MB  = Megabyte = 1,000 KB
KB  = Kilobyte = 1,000 Bytes
```
It's correct in principle, but you might consider reducing it to a single parted call:

```shell
parted --script /device \
    mklabel gpt \
    mkpart primary 1MiB 100MiB \
    mkpart primary 100MiB 200MiB \
    ...
```

Your alignment issue is probably because you use MB instead of MiB. You should not need an actual align-check command when creating partitions on MiB boundaries on a known device.
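The reason MB starts trigger the warning is simple arithmetic: parted's MB is 10^6 bytes, which is not a multiple of the 1MiB (2^20 bytes) boundary commonly wanted for flash/eMMC. A quick shell check:

```shell
# parted's "MB" means 10^6 bytes; "MiB" means 2^20 = 1048576 bytes.
mb_start=$(( 32 * 1000000 ))     # the 32MB start offset used above
mib_start=$(( 32 * 1048576 ))    # a 32MiB start offset instead

# A start offset is 1MiB-aligned when the remainder is 0:
echo "32MB  % 1MiB = $(( mb_start  % 1048576 ))"   # prints 542720 -> misaligned
echo "32MiB % 1MiB = $(( mib_start % 1048576 ))"   # prints 0 -> aligned
```

So simply switching every MB suffix in the script to MiB makes each start offset land on a 1MiB boundary.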
Scripteable GPT partitions using parted
Between Debian 5 and 6, the default suggested value for kernel.printk in /etc/sysctl.conf was changed from kernel.printk = 4 4 1 7 to kernel.printk = 3 4 1 3. I understand that the first value corresponds to what is going to the console. What are the next three values for? Do the numerical values have the same meaning as the syslog log levels, or do they have different definitions? Am I missing some documentation in my searching, or is the only place to figure this out the kernel source?
Sysctl settings are documented in Documentation/sysctl/*.txt in the kernel source tree. On Debian, install linux-doc to have the documentation in /usr/share/doc/linux-doc-*/Documentation/ (most distributions have a similar package). From Documentation/sysctl/kernel.txt:

> The four values in printk denote: console_loglevel, default_message_loglevel, minimum_console_loglevel and default_console_loglevel respectively. These values influence printk() behavior when printing or logging error messages. See man 2 syslog for more info on the different loglevels.

- console_loglevel: messages with a higher priority than this will be printed to the console
- default_message_loglevel: messages without an explicit priority will be printed with this priority
- minimum_console_loglevel: minimum (highest) value to which console_loglevel can be set
- default_console_loglevel: default value for console_loglevel

I don't find any clear prose explanation of what default_console_loglevel is used for. In the Linux kernel source, the kernel.printk sysctl sets console_printk. The default_console_loglevel field doesn't seem to be used anywhere.
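The four fields above line up positionally with the sysctl value, which a small sh sketch makes concrete (using the Debian 5 default as a sample; on a real system you would read /proc/sys/kernel/printk instead):

```shell
# Split the sample "4 4 1 7" into its four positional fields.
set -- 4 4 1 7

echo "console_loglevel:         $1"   # what goes to the console
echo "default_message_loglevel: $2"   # priority for untagged printk messages
echo "minimum_console_loglevel: $3"   # lowest allowed console_loglevel
echo "default_console_loglevel: $4"   # default for console_loglevel
```

So the Debian 6 change from 4 4 1 7 to 3 4 1 3 lowers only the first and last fields: less goes to the console, and the console default is quieter.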
Description of kernel.printk values
There are many resources on the Internet for creating a custom Vagrant box from a VirtualBox instance, but I want to know a direct method to create a custom Vagrant box from a kvm/libvirt instance. Please don't suggest vagrant-mutate or anything else that converts a VirtualBox box to another provider.
After spending time with Vagrant, I got the solution for a custom box.

First of all, install any Linux OS in libvirt/KVM and log in to it for customization. Create a vagrant user with password vagrant:

```shell
adduser vagrant
```

The vagrant user should be able to run sudo commands without a password prompt:

```shell
sudo visudo -f /etc/sudoers.d/vagrant
```

and paste:

```
vagrant ALL=(ALL) NOPASSWD:ALL
```

Do whatever you want to customize your Vagrant box, and install openssh-server if it was not installed previously:

```shell
sudo apt-get install -y openssh-server
```

Put the SSH key for the vagrant user in place:

```shell
mkdir -p /home/vagrant/.ssh
chmod 0700 /home/vagrant/.ssh
wget --no-check-certificate \
    https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub \
    -O /home/vagrant/.ssh/authorized_keys
chmod 0600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant /home/vagrant/.ssh
```

Open sudo vi /etc/ssh/sshd_config and change:

```
PubKeyAuthentication yes
AuthorizedKeysFile %h/.ssh/authorized_keys
PermitEmptyPasswords no
PasswordAuthentication no
```

Restart the SSH service:

```shell
sudo service ssh restart
```

Install additional development packages for the tools to properly compile and install:

```shell
sudo apt-get install -y gcc build-essential linux-headers-server
```

Make any other changes that you want, and shut down the VM.
Now come to the host machine on which the guest VM is running, go to /var/lib/libvirt/images/, choose the raw image in which you made the changes, and copy it somewhere, for example /test:

```shell
cp /var/lib/libvirt/images/test.img /test
```

Create two files, metadata.json and Vagrantfile, in /test. In metadata.json put:

```json
{
  "provider"     : "libvirt",
  "format"       : "qcow2",
  "virtual_size" : 40
}
```

and in Vagrantfile:

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    libvirt.driver = "kvm"
    libvirt.host = 'localhost'
    libvirt.uri = 'qemu:///system'
  end

  config.vm.define "new" do |custombox|
    custombox.vm.box = "custombox"
    custombox.vm.provider :libvirt do |test|
      test.memory = 1024
      test.cpus = 1
    end
  end
end
```

Convert test.img to qcow2 format:

```shell
sudo qemu-img convert -f raw -O qcow2 test.img ubuntu.qcow2
```

Rename ubuntu.qcow2 to box.img:

```shell
mv ubuntu.qcow2 box.img
```

Note: currently, vagrant-libvirt supports only the qcow2 format, so don't change the format; just rename the file to box.img, because it expects the name box.img by default.

Create the box:

```shell
tar cvzf custom_box.box ./metadata.json ./Vagrantfile ./box.img
```

Add the box to Vagrant:

```shell
vagrant box add --name custom custom_box.box
```

Go to any directory where you want to initialize Vagrant and run the command below, which will create a Vagrantfile:

```shell
vagrant init custom
```

Start the Vagrant VM:

```shell
vagrant up --provider=libvirt
```

Enjoy!
how to create custom vagrant box from libvirt/kvm instance?
I am looking for a way to assign a filename to a variable in my shell script, but the file has the naming format file-1.2.0-SNAPSHOT.txt, where the numbers may change. How can I assign this filename to a variable? Can a regex be used, or grep, find, or file? My directory contains the following files:

```
file-1.2.0-SNAPSHOT.txt
newFile-1.0.0.txt
sample.txt
```

My script sc.sh:

```shell
file_path="/home/user/handsOn"
var=$file_path/file-1.2.0-SNAPSHOT.txt
newLocation=/new_path
cp $var $newLocation
```

The file version changes sometimes, and my script should work for any version number. How can I assign the matched filename to a variable? Help me out. TIA
Finally, I got the solution after much trial and error:

```shell
cd $file_path && fVar=$(find . -type f -name 'file-[0-9].[0-9].[0-9]-SNAPSHOT.txt')
echo $fVar     # output is like ./file-1.2.0-SNAPSHOT.txt

fT=${fVar:2}   # removing the first two characters './'
echo "$fT"     # output is file-1.2.0-SNAPSHOT.txt
```

Thanks Rakesh for contributing your answer; it helped me.
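The same idea can be tried self-contained in a scratch directory; this sketch also replaces the ${fVar:2} hack with the parameter expansion ${fVar#./}, which strips the leading ./ only when it is actually present:

```shell
# Recreate the directory layout from the question in a scratch directory.
dir=$(mktemp -d)
touch "$dir/file-1.2.0-SNAPSHOT.txt" "$dir/newFile-1.0.0.txt" "$dir/sample.txt"

cd "$dir"
fVar=$(find . -type f -name 'file-[0-9].[0-9].[0-9]-SNAPSHOT.txt')
fT=${fVar#./}            # safer than ${fVar:2}
echo "$fT"               # prints: file-1.2.0-SNAPSHOT.txt
```

Note the pattern matches exactly one digit per version component; a glob like file-*-SNAPSHOT.txt would also cover multi-digit versions such as file-1.10.0-SNAPSHOT.txt.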
Find a file matching with certain pattern and giving that file name as value to a variable in shell script?
1,306,859,455,000
Is it possible to mount a loopback file as read-only, and redirect all writes to RAM?
Update: It seems there are 2 other simpler ways to do this on Ubuntu (at least the later versions):

sudo apt-get install overlayroot followed by setting overlayroot="tmpfs:swap=1,recurse=0" in /etc/overlayroot.local.conf.
sudo apt-get install fsprotect followed by passing fsprotect as a kernel parameter

I finally figured out how to do this with the root filesystem (in Ubuntu 11.04)! The steps for making a system bootable are simple. I used this guide in combination with this guide and a bunch of web searches to figure out how to get it working properly, without bugs.

Summary:

Run: sudo apt-get install fsprotect apparmor-utils

Save this to /etc/initramfs-tools/scripts/init-bottom/__rootaufs. I don't think the name actually matters, but the beginning __ might be used for ordering purposes, so if you change the name, you might want to keep the underscores. (This is a copy of this file.)

#!/bin/sh -e

case $1 in
  prereqs)
    exit 0
    ;;
esac

for x in $(cat /proc/cmdline); do
  case $x in
    root=*)
      ROOTNAME=${x#root=}
      ;;
    aufs=*)
      UNION=${x#aufs=}
      case $UNION in
        LABEL=*)
          UNION="/dev/disk/by-label/${UNION#LABEL=}"
          ;;
        UUID=*)
          UNION="/dev/disk/by-uuid/${UNION#UUID=}"
          ;;
      esac
      ;;
  esac
done

if [ -z "$UNION" ]; then
  exit 0
fi

# make the mount points on the init root file system
mkdir /aufs /ro /rw

# mount read-write file system
if [ "$UNION" = "tmpfs" ]; then
  mount -t tmpfs rw /rw -o noatime,mode=0755
else
  mount $UNION /rw -o noatime
fi

# move real root out of the way
mount --move ${rootmnt} /ro
mount -t aufs aufs /aufs -o noatime,dirs=/rw:/ro=ro

# test for mount points on union file system
[ -d /aufs/ro ] || mkdir /aufs/ro
[ -d /aufs/rw ] || mkdir /aufs/rw
mount --move /ro /aufs/ro
mount --move /rw /aufs/rw

# strip fstab off of root partition
grep -v $ROOTNAME /aufs/ro/etc/fstab > /aufs/etc/fstab
mount --move /aufs /root
exit 0

In /etc/default/grub, find the line that starts with GRUB_CMDLINE_LINUX_DEFAULT, and inside the quotes that follow, add the parameter aufs=tmpfs.
Bonus: If you need to occasionally turn off the redirection temporarily, simply remove this argument from the kernel parameter list. You can probably do this by holding the Shift key when the system is booting, to show the GRUB menu; then press e to edit the parameters, and just erase the aufs=... parameter from the list.

Append these lines to /etc/sysctl.conf. (Warning: Potential security risk.)

kernel.yama.protected_nonaccess_hardlinks = 0
kernel.yama.protected_sticky_symlinks = 0

Run these lines:

sudo aa-complain dhclient3
sudo chmod 0755 /etc/initramfs-tools/scripts/init-bottom/__rootaufs
sudo update-initramfs -k all -u
sudo update-grub

If everything went well, when you reboot, you will be doing so into a temporary file system. The RAM part will be at /rw, and the disk image will be at /ro, but of course it will be read-only. Nevertheless, if you have booted into a temporary system but need to make a permanent change, you can re-mount the /ro file system by saying sudo mount -o remount,rw /ro to make it writable, and then you can make whatever modifications needed to that directory.
Mount a filesystem read-only, and redirect writes to RAM?
1,306,859,455,000
I'm running software that overloads disk IO sometimes. I don't need fast response from that software, I need fast response from other applications, so I could set low process priority for that. I want to ask how process priority affects disk IO priority for a process. I tried a small experiment: I set low priority (in System Monitor under GNOME) for a process and checked IO priority with ionice. Result: IO priority = 0 for normal process priority IO priority = 4 for low process priority But will this always work like this? Is IO priority always reduced when I reduce process priority?
Under Linux, by default, a process's IO priority is derived from its CPU priority according to the formula io_priority = (cpu_nice + 20) / 5 IO priority ranges from 0 to 7 with 0 being the highest priority. CPU niceness ranges from -20 to 19 with -20 being the highest priority. You can use the ionice command to change a process's IO priority. If you want that process to run only when the system isn't otherwise busy, make it run under the “idle” class rather than the default “best-effort” class: ionice -c 3 -p $PID ionice -c 3 mycommand --someoption Even with the lowest priority, a disk-intensive process tends to slow the system down, if nothing else because it pollutes the cache. See the ionice man page for more information.
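The mapping is easy to tabulate with shell arithmetic; this is just a sketch of the stated formula, not of how the kernel computes it internally:

```shell
# Evaluate io_priority = (cpu_nice + 20) / 5 for a few niceness values
for nice in -20 0 10 19; do
    echo "nice=$nice -> io_priority=$(( (nice + 20) / 5 ))"
done
```

So the highest CPU priority (nice -20) maps to the highest IO priority (0), and the default nice 0 maps to best-effort class priority 4.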
How disk IO priority is related with process priority?
1,306,859,455,000
What *nix command would cause the hard drive arm to rapidly switch between the centre and the edge of the platter? In theory it should soon cause a mechanical failure. It is for an experiment with old hard drives.
hdparm --read-sector N will issue a low-level read of sector N bypassing the block layer abstraction. Use -I to get the device's number of sectors.
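A dry-run sketch of the experiment: it only prints the hdparm invocations that would bounce the arm between the first and last sector. The device name and sector count below are placeholders — take the real count from hdparm -I before pointing this at a disk you are willing to destroy:

```shell
# Print (not execute) reads alternating between the first and last sector
DEV=/dev/sdX          # placeholder device
SECTORS=976773168     # placeholder; real value comes from: hdparm -I "$DEV"
i=0
while [ "$i" -lt 6 ]; do
    if [ $((i % 2)) -eq 0 ]; then
        target=0
    else
        target=$((SECTORS - 1))
    fi
    echo "hdparm --read-sector $target $DEV"
    i=$((i + 1))
done
```

Dropping the echo (and running as root) turns it into the actual stress loop.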
Command to force hard drive arm to move to a specific position on the platter
1,306,859,455,000
In my setup, I have two disks that are each formatted in the following way: (GPT) 1) 1MB BIOS_BOOT 2) 300MB LINUX_RAID 3) * LINUX_RAID The boot partitions are mapped in /dev/md0, the rootfs in /dev/md1. md0 is formatted with ext2, md1 with XFS. (I understand that formatting has to be done on the md devices and not on sd - please tell me if this is wrong). How do I setup GRUB correctly so that if one drive fails, the other will still boot? And by extension, that a replacement drive will automatically include GRUB, too? If this is even possible, of course.
If the two disks are /dev/sda and /dev/sdb, run both grub-install /dev/sda and grub-install /dev/sdb. Then both drives will be able to boot alone. Make sure that your Grub configuration doesn't hard-code disks like (hd0), but instead searches for the boot and root filesystems' UUIDs. I'm not aware of support in Grub to declare two disks as being in a RAID-1 array so that grub-install would automatically write to both. This means you'll need to run grub-install again if you replace one disk; it's one more thing to do in addition to adding new members to the RAID arrays.
How to correctly install GRUB on a soft RAID 1?
1,306,859,455,000
My desktop system is:

$ uname -a
Linux xmachine 3.0.0-13-generic #22-Ubuntu SMP Wed Nov 2 13:25:36 UTC 2011 i686 i686 i386 GNU/Linux

By running ps a | grep getty, I get this output:

  900 tty4     Ss+    0:00 /sbin/getty -8 38400 tty4
  906 tty5     Ss+    0:00 /sbin/getty -8 38400 tty5
  915 tty2     Ss+    0:00 /sbin/getty -8 38400 tty2
  917 tty3     Ss+    0:00 /sbin/getty -8 38400 tty3
  923 tty6     Ss+    0:00 /sbin/getty -8 38400 tty6
 1280 tty1     Ss+    0:00 /sbin/getty -8 38400 tty1
 5412 pts/1    S+     0:00 grep --color=auto getty

I think the ttyX processes are for input/output devices, but I am not quite sure. Based on this, I am wondering why there are six ttyX processes running. I have only one input device (keyboard), actually.
This is because one getty process is running on each virtual console (VC) between tty1 and tty6. You can access them by changing your active virtual console using Alt-F1 through Alt-F6 (Ctrl-Alt-F1 through Ctrl-Alt-F6 respectively if you are currently within X). For more information on what a TTY is, see this question, and for information on virtual consoles, see this Wikipedia article.
why there are six getty processes running on my desktop?
1,306,859,455,000
My laptop (an HP with an i3 chip) overheats like crazy every time I run a resource heavy process (like a large compilation, extracting large tarballs or ... playing Flash). I am currently looking into some cooling solutions but got the idea of limiting global CPU consumption. I figured that if the CPU is capped, chances are the temperature will stop increasing frantically, and I'm willing to sacrifice a little performance in order to get the job done. Am I wrong in my reasoning? How can I proceed to cap the CPU usage overall? If it helps, I'm running Debian.
I don't know that limiting CPU for the whole system is possible without a lot of hacking, but you can easily limit the amount of CPU used by a single process using cpulimit. The only way I can think of to use this effectively is to write a wrapper script (can't really call it a script, it's so small) for the applications which you know are resource hogs. Say, for example, you find google-chrome uses a lot of CPU; you could replace the google-chrome binary in your path with something like:

#! /bin/bash
exec cpulimit --limit 70 /usr/bin/google-chrome-bin "$@"

(The exec and "$@" make the wrapper replace itself with cpulimit and forward any arguments to the real binary.) I haven't tested this so take it with a grain of salt. From cpulimit's website, it seems like you might be able to set rules for CPU limits on different applications. I'm not sure, you'd have to take a look.
Is there a way to limit overall CPU consumption?
1,306,859,455,000
Short version: How to disable audit messages (dmesg) on a Fedora system? A Fedora system keeps logging "audit: success" messages in dmesg - in such an extreme way that dmesg has become unusable because it's filled up by these messages (dmesg | grep -v audit is empty). These messages are completely useless as they obviously want to inform the user that some every-day internal process has succeeded (which might be of interest when debugging something, but it's just noise in this case). Even the command line interface (when switching to a non-X tty with Ctrl + Alt + F2) has become unusable as it's always cluttered with these audit messages, it's impossible to read the output of the commands that are actually run by the user. For example, after entering the username (login), an audit message is spewed out (apparently telling the user that something was formatted/printed successfully): audit: type=1131 audit(1446913801.945:10129): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=fprintd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' It appears that most of these messages indicate "success", however there are also many audit messages which do not contain this keyword. Running Chromium triggers hundreds of these: audit: type=1326 audit(1446932349.568:10307): auid=500 uid=500 gid=500 ses=2 pid=1593 comm="chrome" exe="/usr/lib64/chromium/chrome" sig=0 arch=c000003e syscall=273 compat=0 ip=0x7f9a1d0a34f4 code=0x50000 Other messages include: audit: type=1131 audit(1446934361.948:10327): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' audit: type=1103 audit(1446926401.821:10253): pid=28148 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=p am_env,pam_unix acct="user" exe="/usr/sbin/crond" hostname=? addr=? 
terminal=cron res=success' Generally, the majority of recent audit messages (at the time of writing) contains the keyword "NetworkManager" or "chrome". How can these messages be disabled completely? Additional points: In case anyone might be thinking "you should read and analyze these audit messages, not disable them, they could be important", no they are not important, they're almost exclusively "success" messages. Nobody needs to be told that something which is supposed to work did in fact work. However, if one actually significant message was being logged, it would never be noticed in the storm of thousands of insignificant messages. In any case, no audit logging is wanted on this particular system (it's running in a controlled environment anyway). Clearly, something must be very misconfigured on this system. However, it was once a default Fedora installation which has been upgraded whenever a new release came out. Maybe it's just a simple setting that has to be changed, but as it did not happen changing the system configuration manually (on purpose), this stackexchange.com question will hopefully help others who happen to have gotten their system in the same state. It's now a Fedora 22 system, running Linux 4.0.6 (systemd 219). It's a standard Fedora desktop installation, currently running KDE. SELinux is disabled (/etc/selinux/config is set to "disabled"). Update: After upgrading to Fedora 23 (kernel 4.2.5, systemd 222), there are fewer audit messages than before.
Firstly, on Fedora, both auditd and auditctl come from the same package (unconfusingly named audit). So if you don't have auditctl, something else is wrong. Try this:

rpm -ql audit | grep ctl

If that gives you nothing, then you do not have the audit package installed at all.

Secondly, the first "human" language line in the grub.cfg file you mentioned says "DO NOT EDIT" on my system. This is a clue that any manual changes to the file can be lost. The correct place to edit the grub config on a fedora/redhat system is the one file you specifically suggested as not being necessary to change (/etc/default/grub). In reality, this is the only "safe" way to make the proposed change and survive kernel upgrades. This is because it is used as part of the source configuration during kernel upgrades, to regenerate a working grub.cfg. Look up the grub2-mkconfig command (and its friends). Details are here: https://fedoraproject.org/wiki/GRUB_2

Your answer is not wrong, but I found it a little confusing. I hate the grub command line, and IMHO anyone who is likely to miss adding a whitespace char on a kernel command line would probably not thank anyone for being led down that road. Still, some people like to learn the hard way, I know.

All commands below need to be run as root (which is in and of itself a dangerous thing to suggest). For a running system:

auditctl -e 0

If you cannot find auditctl, check your PATH and also consider:

dnf install audit

This should at least reduce if not disable the messages until such a time as you can reboot. To persist beyond reboots, edit /etc/default/grub and change the GRUB_CMDLINE_LINUX line to add "audit=0" to the end, then use grub2-mkconfig to regenerate the grub.cfg. This final step also puts a layer of validation between your change and the running system.
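As a sketch of the /etc/default/grub edit — done here on a temporary copy so nothing on your system is touched, and the assumed existing contents of GRUB_CMDLINE_LINUX are made up for the demonstration:

```shell
cfg=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="rhgb quiet"' > "$cfg"    # assumed existing line

# Append audit=0 inside the existing quotes
sed -i 's/^\(GRUB_CMDLINE_LINUX="[^"]*\)"/\1 audit=0"/' "$cfg"

result=$(cat "$cfg")
echo "$result"    # GRUB_CMDLINE_LINUX="rhgb quiet audit=0"
rm -f "$cfg"

# On the real file, follow up (as root) with:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```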
How to disable useless "audit success" log entries in dmesg
1,306,859,455,000
There are plenty of questions and answers about constraining the resources of a single process, e.g. RLIMIT_AS can be used to constrain the maximum memory allocated by a process that can be seen as VIRT in the likes of top. More on the topic e.g. here Is there a way to limit the amount of memory a particular process can use in Unix? setrlimit(2) documentation says: A child process created via fork(2) inherits its parent's resource limits. Resource limits are preserved across execve(2). It should be understood in the following way: If a process has a RLIMIT_AS of e.g. 2GB, then it cannot allocate more memory than 2GB. When it spawns a child, the address space limit of 2GB will be passed on to the child, but counting starts from 0. The 2 processes together can take up to 4GB of memory. But what would be the useful way to constrain the sum total of memory allocated by a whole tree of processes?
I am not sure if this answers your question, but I found this perl script that claims to do exactly what you are looking for. The script implements its own system for enforcing the limits by waking up and checking the resource usage of the process and its children. It seems to be well documented and explained, and has been updated recently. As slm said in his comment, cgroups can also be used for this. You might have to install the utilities for managing cgroups; assuming you are on Linux you should look for libcgroups.

sudo cgcreate -t $USER:$USER -a $USER:$USER -g memory:myGroup

Make sure $USER is your user. Your user should then have access to the cgroup memory settings in /sys/fs/cgroup/memory/myGroup. You can then set the limit to, let's say, 500 MB, by doing this:

echo 500000000 > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes

Now let's run Vim:

cgexec -g memory:myGroup vim

The vim process and all its children should now be limited to using 500 MB of RAM. However, I think this limit only applies to RAM and not swap. Once the processes reach the limit they will start swapping. I am not sure if you can get around this; I cannot find a way to limit swap usage using cgroups.
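On the swap question: cgroup v1 does expose a separate knob, memory.memsw.limit_in_bytes, that caps RAM plus swap together — but only when the kernel was booted with swap accounting enabled (swapaccount=1 on many distros). A sketch of the two control files involved (same group name as above; writing them needs root):

```shell
echo 500000000 > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes        # RAM only
echo 500000000 > /sys/fs/cgroup/memory/myGroup/memory.memsw.limit_in_bytes  # RAM + swap
```

Setting both to the same value effectively forbids the group from swapping past its RAM limit.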
How to limit the total resources (memory) of a process and its children
1,306,859,455,000
I am looking for specific details as to why isn't GNU/Linux currently SUS (Single UNIX Specification) v3 or even better SUS v4 compliant? What application APIs and user utilities does it miss or implement in a non-SUS compliant way?
To get a certification you need to pay, and it's actually really expensive. That's why BSD-like and GNU/Linux OS vendors don't apply for it. So there isn't even a reason to check whether GNU/Linux is compliant or not. http://en.wikipedia.org/wiki/Single_UNIX_Specification#Non-registered_Unix-like_systems Above all, GNU/Linux distributions follow the Linux Standard Base, which is free of charge and recognized by almost all Linux vendors. http://en.wikipedia.org/wiki/Linux_Standard_Base Edit: As my answer is not completely correct, I'll add @vonbrand's comments: Linus (and people involved in the development of other parts of Linux distributions) follow the pragmatic guideline to make it as close to POSIX as is worthwhile. There are parts of POSIX (like the (in)famous STREAMS) that are ill-conceived, impossible to implement efficiently, or just codifications of historic relics that should be replaced by something better. ... therefore, does it make it harder to obtain a certification? Sure. POSIX mandates some interfaces which Linux just won't ever have. Case closed.
Why isn't GNU/Linux SUS v3+ compliant?
1,306,859,455,000
Assuming I want to test if a library is installed and usable by a program. I can use ldconfig -p | grep mylib to find out if it's installed on the system. but what if the library is only known via setting LD_LIBRARY_PATH? In that case, the program may be able to find the library, but ldconfig won't. How can I check if the library is in the combined linker path? I'll add that I'm looking for a solution that will work even if I don't actually have the program at hand (e.g. the program isn't compiled yet), I just want to know that a certain library exists in ld's paths.
ldconfig can list all the libraries it has access to. These libraries are also stored in its cache.

/sbin/ldconfig -v -N

will crawl all the usual library paths and list all the available libraries, without reconstructing the cache (which is not possible if you're a non-root user). It does NOT take into account libraries in LD_LIBRARY_PATH (contrary to what this post said before the edit), but you can pass those additional directories on the command line like this:

/sbin/ldconfig -N -v $(sed 's/:/ /g' <<< "$LD_LIBRARY_PATH")
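Putting that together, a small helper can answer "is this library visible?" from a script. This is a sketch — the function name is my own invention, and it assumes a glibc system with ldconfig at /sbin/ldconfig:

```shell
# have_lib NAME -> success if NAME shows up in ldconfig's view plus any
# directories taken from LD_LIBRARY_PATH
have_lib() {
    # turn the colon-separated path list into space-separated words;
    # $extra is intentionally left unquoted so it splits into arguments
    extra=$(printf '%s' "${LD_LIBRARY_PATH-}" | sed 's/:/ /g')
    /sbin/ldconfig -N -v $extra 2>/dev/null | grep -q -- "$1"
}

have_lib libc && echo "libc: found"
have_lib libdoesnotexist || echo "libdoesnotexist: missing"
```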
Find out if library is in path
1,306,859,455,000
I have the following free spaces on 2 disks: SSD - 240G (sda) non-SSD - 240G (sdb) I understand that I should use SSD to install packages and non-SSD just for storing data. What's the best partitioning schema (including swap) in my case? When I tried automatic partitioning it installs only on 1 disk and dedicating 8G for swap. PS. I'm going to install Linux Mint as a dual-boot alongside with Windows 7, which is already installed. UPDATE: I have 8GB of RAM Windows has been installed on non-SSD drive.
On a hybrid solid-state and spinning disk system (like the one I'm typing this), you have two to three aims: Speed up your system: as much commonly used data as possible stays on the SSD. Keep volatile data off the SSD to reduce wear. Optional: have some level of redundancy by using an md(4) (‘software RAID’) setup across the SSD and HDD(s). If you're just meeting the first two goals, it's a simple task of coming up with a scheme somewhat like this (depending on which of these filesystems you use): Solid state: / (root filesystem), /usr, /usr/local, /opt Spinning disk: /var, /home, /tmp, swap Since you have two disks, though, you can read the Multi HDD/SSD article on the Debian wiki. It'll walk you through setting up md(4) devices with your SSD as a ‘mostly-read’ device (fast reads, fewer writes), your HDD as a ‘mostly-write’ device (no-wear writes, fewer reads). The filesystems that would normally go on the SSD alone can now go on this md device. The kernel will read mostly from the SSD (with occasional, brief forays into the HDD to increase read throughput even more). It'll write to the HDD, but handle SSD writes with care to avoid wearing out the device. You get the best of both worlds (almost), and you don't have to worry about SSD wear rendering your data useless. My laptop is running on a similar layout where /, /usr and /usr/local are on a RAID-1 device across a 64 GB SSD and a 64 GB partition on the 1TB HDD, and the rest of the filesystems are on the rest of the HDD. The rest of the HDD is one of two members of a RAID-1 setup, with one disk usually missing. When I'm at home, I plug in the second disk and let the md device synchronise. It's an added level of redundancy and an extra 1–7 day backup¹). You should also have a look at the basic SSD optimisation guide for Debian (and friends). Oh, and it's not guaranteed you'll be able to do this all via the installer. 
You may have to boot a rescue disk prior to installation, prepare (at least) the md(4) devices (I do the LVM PVs, VGs and LVs too because it's easier on the CLI), then boot the installer and just point out the volumes to it. ¹ RAID ≠ backup policy. I also have proper backups.
partitioning using 2 hard disks (SSD and non-SSD) in linux [closed]
1,306,859,455,000
journalctl -b | grep Supervising | wc -l 2819 Distro is Fedora 35, vanilla, with PipeWire running the show. I'm pretty sure all modern Linux distros are affected but people don't care. There's no rsyslog here and journald doesn't support filtering. This is getting ridiculous. I can patch it for sure but the question is how it can be done without applying patches and rebuilding. The thing, /usr/libexec/rtkit-daemon, doesn't even have a man page and nothing in its --help offers any clues. There's a related question which has never been answered as well: rtkit: list threads it is "supervising"? I can only think of running rtkit-daemon through some wrapper which simply disables all the features related to /dev/log/system logging. Has anyone seen anything like that? I've filed a bug report just in case.
As Artem rightly wrote, the systemd journal has very limited filtering capabilities. Hence the only solution to limit a service's or desktop application's verbosity is to organize the filtering earlier in the logging pipeline, before any further processing of any sort. In order to achieve this for a systemd service:

A/ Locate the directory associated with the service you want to tune. Usually found in (/usr)/lib/systemd/system for services distributed at package install time, such as rtkit-daemon. In this particular case: rtkit-daemon.service.d

B/ Within this directory (or better, in a system-wide configuration subdir /etc/systemd/system/rtkit-daemon.service.d, since it won't then be silently removed by further package upgrades), edit or create a log.conf file in order to insert the following statements:

[Service]
LogLevelMax=X

With X standing for the desired numeric loglevel or its associated symbolic name taken from the following list:

0 or emerg (highest priority messages)
1 or alert
2 or crit
3 or err
4 or warning
5 or notice
6 or info
7 or debug (lowest priority messages)

For a given level chosen, logs of all higher levels won't be output. Note that if no loglevel is specified in whatever systemd service .conf file, the loglevel of the daemon defaults to 7, in other words allowing the highest level of verbosity. Regarding your specific need as worded in the title, LogLevelMax=5 (notice) should suffice (6 as reported in comments).

C/ Save and exit your editor, then run the two following commands:

systemctl daemon-reload
systemctl restart rtkit-daemon.service

Nota Bene: Since "New style daemons" (sic)… will be executed in their own session, with standard input connected to /dev/null and standard output/error connected to the systemd-journald.service(8) logging service, logging can be achieved via whatever simple (f)print(f). It can then be possible to completely silence the daemon by simply redirecting its stdout and stderr to /dev/null.
While I imagine this is not recommended (since wisdom would dictate letting at least critical errors have their way to syslog), this redirection can be achieved via the following statements:

[Service]
StandardOutput=null
StandardError=null

Credits: Answer based on the systemd.exec documentation
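For convenience, steps A through C condensed into one shell session (run as root; notice is the level suggested above):

```shell
mkdir -p /etc/systemd/system/rtkit-daemon.service.d
cat > /etc/systemd/system/rtkit-daemon.service.d/log.conf <<'EOF'
[Service]
LogLevelMax=notice
EOF
systemctl daemon-reload
systemctl restart rtkit-daemon.service
```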
Stop rtkit-daemon from spamming logs with "Supervising X threads of Y processes of Z users"
1,306,859,455,000
Doing a ps on my Linux box shows that systemd runs with the command line options --switched-root and --deserialize. Nothing in the man page or /usr/share/doc/systemd mentions them, and Google hasn't been much help. So, what do they do? I'm guessing that --switched-root has something to do with pivot_root, but that's just a guess.
These are intentionally undocumented internal parts of systemd. Very simply, therefore: --deserialize is used to restore saved internal state that a previous invocation of systemd, exec()ing this one, has written out to a file. Its option argument is an open file descriptor for that process. --switched-root is used to tell this invocation of systemd that it has been invoked from systemd managing an initramfs, and so should behave accordingly — including turning off some of the behaviour otherwise caused by --deserialize.
What are the systemd command line options "--switched-root" and "--deserialize"?
1,344,351,634,000
I'm working on an embedded Linux system (128MB RAM) without any swap partition. Below is its top output: Mem: 37824K used, 88564K free, 0K shrd, 0K buff, 23468K cached CPU: 0% usr 0% sys 0% nic 60% idle 0% io 38% irq 0% sirq Load average: 0.00 0.09 0.26 1/50 1081 PID PPID USER STAT VSZ %MEM CPU %CPU COMMAND 1010 1 root S 2464 2% 0 8% -/sbin/getty -L ttyS0 115200 vt10 1081 1079 root R 2572 2% 0 1% top 5 2 root RW< 0 0% 0 1% [events/0] 1074 994 root S 7176 6% 0 0% sshd: root@ttyp0 1019 1 root S 13760 11% 0 0% /SecuriWAN/mi 886 1 root S 138m 112% 0 0% /usr/bin/rstpd 51234 <== 112% MEM?!? 1011 994 root S 7176 6% 0 0% sshd: root@ttyp2 994 1 root S 4616 4% 0 0% /usr/sbin/sshd 1067 1030 root S 4572 4% 0 0% ssh passive 932 1 root S 4056 3% 0 0% /sbin/ntpd -g -c /etc/ntp.conf 1021 1 root S 4032 3% 0 0% /SecuriWAN/HwClockSetter 944 1 root S 2680 2% 0 0% dbus-daemon --config-file=/etc/db 1030 1011 root S 2572 2% 0 0% -sh 1079 1074 root S 2572 2% 0 0% -sh 1 0 root S 2460 2% 0 0% init 850 1 root S 2460 2% 0 0% syslogd -m 0 -s 2000 -b 2 -O /var 860 1 root S 2460 2% 0 0% klogd -c 6 963 1 root S 2184 2% 0 0% /usr/bin/vsftpd /etc/vsftpd.conf 3 2 root SW< 0 0% 0 0% [ksoftirqd/0] 823 2 root SWN 0 0% 0 0% [jffs2_gcd_mtd6] ps (which doesn't understand any options besides -w on busybox) shows: PID USER VSZ STAT COMMAND 1 root 2460 S init 2 root 0 SW< [kthreadd] 3 root 0 SW< [ksoftirqd/0] 4 root 0 SW< [watchdog/0] 5 root 0 SW< [events/0] 6 root 0 SW< [khelper] 37 root 0 SW< [kblockd/0] 90 root 0 SW [pdflush] 91 root 0 SW [pdflush] 92 root 0 SW< [kswapd0] 137 root 0 SW< [aio/0] 146 root 0 SW< [nfsiod] 761 root 0 SW< [mtdblockd] 819 root 0 SW< [rpciod/0] 823 root 0 SWN [jffs2_gcd_mtd6] 850 root 2460 S syslogd -m 0 -s 2000 -b 2 -O /var/log/syslog 860 root 2460 S klogd -c 6 886 root 138m S /usr/bin/rstpd 51234 945 root 2680 S dbus-daemon --config-file=/etc/dbus-system.conf --for 964 root 2184 S /usr/bin/vsftpd /etc/vsftpd.conf 984 root 4616 S /usr/sbin/sshd 987 root 952 S /sbin/udhcpd 
/ftp/dhcpd.conf 1002 root 4056 S /sbin/ntpd -g -c /ftp/ntp.conf 1022 root 2464 S -/sbin/getty -L ttyS0 115200 vt102 1023 root 7176 S sshd: root@ttyp0 1028 root 2572 S -sh 1030 root 2572 R ps When you look at process 886, you see that it uses 112% of the availble memory and has VSZ (virtual memory size) of 138MB. That doesn't make any sense to me. In the top man page it says: %MEM -- Memory usage (RES) A task's currently used share of available physical memory. How can this process consume more than 100% memory? And if it's such a memory hog, why are there still 88564K RAM free on the system?
The man page you refer to comes from the procps version of top. But you're on an embedded system, so you have the busybox version of top. It looks like busybox top calculates %MEM as VSZ/MemTotal instead of RSS/MemTotal. The latest version of busybox calls that column %VSZ to avoid some confusion. commit log
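You can reproduce the 112% from the question's own numbers under that assumption — busybox's MemTotal is the "used" + "free" figures from the header line:

```shell
# Recompute %MEM for PID 886 assuming %MEM = VSZ / MemTotal
mem_total=$((37824 + 88564))   # "used" + "free" KiB from top's header line
vsz=$((138 * 1024))            # the 138m VSZ of /usr/bin/rstpd, in KiB
pct=$(( (100 * vsz + mem_total / 2) / mem_total ))   # rounded percentage
echo "${pct}%"                 # 112%
```

This also explains why 88 MB is still "free": VSZ counts mapped virtual address space, not resident RAM.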
What do top's %MEM and VSZ mean?
1,344,351,634,000
I have two Ubuntu-x86_64 systems. One is version 10.04, the other 12.04 and there is a difference in the structure of the lib directories. This doesn't surprise me, but I'm curious if anyone knows why. Is there a good™ reason why? 10.04 2.6.32-38-server #83-Ubuntu SMP Wed Jan 4 11:26:59 UTC 2012 x86_64 GNU/Linux /usr/lib /usr/lib32 /usr/lib64 12.04 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 GNU/Linux /usr/lib /usr/lib/x86_64-linux-gnu
Debian and Ubuntu are moving to a new multiarch implementation (spec). Among other things, this involves moving arch-specific libraries into /usr/lib/<triplet>, dropping the limitations of lib32 and lib64 (where will the new x32 ABI go? where do qemulated binaries live? etc.) as well as extending the package manager to handle mixed-architecture installations much more sanely.
Where did /usr/lib64 go and what is /usr/lib/x86_64-linux-gnu?
1,344,351,634,000
What differences do these file systems have that might be relevant to someone choosing between them?
I'll just name a few pro and con points for each. This is by no means an exhaustive list, just an indication. If there are some big omissions that need to be in this list, leave a comment and I'll add them, so we get a nice, big list in one place.

ext4

Pro:
supported by all distros, commercial and not, and based on ext3, so it's widely tested, stable and proven
all kinds of nice features (like extents, subsecond timestamps) which ext3 does not have
ability to shrink the filesystem

Con:
rumor has it that it is slower than ext3
the fsync dataloss soap

XFS

Pro:
support for massive filesystems (up to 8 exabytes (yes, 'exa') on 64-bit systems)
online defrag
supported on the upcoming RHEL6 as the 'large filesystem' option
proven track record: xfs has been around for ages

Con:
wikipedia mentions slow metadata operations, but I wouldn't know about that
potential dataloss on power cut; a UPS is recommended, so not really suitable for home systems
unable to shrink the filesystem - see https://xfs.org/index.php/Shrinking_Support

JFS

Pro:
said to be fast (I have little experience with JFS)
originated in AIX: proven technology

Con:
used and supported by virtually no-one, except IBM (correct me if I'm wrong; I have never seen or heard about JFS used in production, though it obviously must be, somewhere)

ReiserFS

Pro:
fast with small files
very space efficient
stable and mature

Con:
not a very active project anymore; its next generation, reiser4, has succeeded it
no online defragmenter

Reiser 4

Pro:
very fast with small files
atomic transactions
very space efficient
metadata namespaces
plugin architecture (crypto, compression, dedup and metadata plugins possible)

Con:
Reiser4 has a very uncertain future and has not been merged yet
main supporting distro (SuSE) dropped it years ago
Hans Reiser's 'legal issues' are not really helping

I recommend this page for further reading.
What are the differences between ext4, ReiserFS, JFS, and XFS? [closed]
1,344,351,634,000
Getfattr dumps a listing of extended attributes for a selected file. However, getfattr --dump filename only to dumps the user.* namespace and not the security.*, system.*, and trusted.* namespaces. Generally, there are no user namespace attributes unless you attached one to a file manually. Yes I know I can get the SELinux information by using getfattr -n security.selinux filename. In this case, I know the specific identification of the extended attribute. I have tried this as the root user. I'd assume that the root user with full capabilities is able to access this information. But you only get the user.* namespace dump. The question is how can I easily get a full dump of all the extended attribute namespaces of a file without knowing the names of all the keys in all the namespaces?
I hate to do this but the answer is (after more research): getfattr -d -m - file I apparently missed this in my reading of the man page: -m pattern, --match=pattern    Only include attributes with names matching the regular expression pattern. [...] Specify "-" for including all attributes.
How do I get a dump of all extended attributes for a file?
1,344,351,634,000
In a shell script, how can I test programmatically whether or not the terminal supports 24-bit or true color? Related: This question is about printing a 24-bit / truecolor test pattern for eyeball verification
This source says to check if $COLORTERM contains 24bit or truecolor. sh [ "$COLORTERM" = truecolor ] || [ "$COLORTERM" = 24bit ] bash / zsh: [[ $COLORTERM =~ ^(truecolor|24bit)$ ]]
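Putting the two checks from the source above into one small function might look like this (a sketch; note that an unset or empty COLORTERM only means the terminal did not advertise truecolor, not that it necessarily lacks it):

```shell
# Report whether the terminal advertises 24-bit color support via the
# COLORTERM environment variable (values "truecolor" or "24bit").
supports_truecolor() {
    case "${COLORTERM:-}" in
        truecolor|24bit) return 0 ;;
        *)               return 1 ;;
    esac
}

if supports_truecolor; then
    echo "truecolor advertised"
else
    echo "truecolor not advertised"
fi
```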
Check if terminal supports 24-bit / true color
1,344,351,634,000
So far there is no answer for this problem. Usually, after some read or write problems on a block device, the kernel decides to flag the WHOLE DEVICE as read-only. After this, any write to any partition/filesystem located on this device causes it to be switched read-only together with the device state, because no writes are possible. Example from dmesg; this is a simulation for a Linux guest on Windows 8 using VirtualBox, while a defrag runs over the guest's device image:

[11903.002030] ata3.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x6 frozen
[11903.003179] ata3.00: failed command: READ FPDMA QUEUED
[11903.003364] ata3.00: cmd 60/08:00:a8:77:57/00:00:00:00:00/40 tag 0 ncq 4096 in
[11903.003385]          res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[11903.004074] ata3.00: status: { DRDY }
[11903.004248] ata3: hard resetting link
[11903.325703] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[11903.327097] ata3.00: configured for UDMA/133
[11903.328025] ata3.00: device reported invalid CHS sector 0
[11903.329664] ata3: EH complete
[11941.000472] ata3.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x6 frozen
[11941.000769] ata3.00: failed command: READ FPDMA QUEUED
[11941.000952] ata3.00: cmd 60/08:00:c8:77:57/00:00:00:00:00/40 tag 0 ncq 4096 in
[11941.000961]          res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[11941.001353] ata3.00: status: { DRDY }
[11941.001504] ata3: hard resetting link
[11941.320297] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[11941.321252] ata3.00: configured for UDMA/133
[11941.321379] ata3.00: device reported invalid CHS sector 0
[11941.321553] ata3: EH complete
[11980.001746] ata3.00: exception Emask 0x0 SAct 0x11fff SErr 0x0 action 0x6 frozen
[11980.002070] ata3.00: failed command: WRITE FPDMA QUEUED
[11980.002255] ata3.00: cmd 61/18:00:28:23:59/00:00:00:00:00/40 tag 0 ncq 12288 out
[11980.002265]          res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)

-------------------

There are many other errors, like
"lost write page", "Journal has aborted", "Buffer I/O error", "hard resetting link" and many others. After this, a remount gives:

mount / -o remount,rw
mount: cannot remount block device /dev/sda1 read-write, is write-protected

because the WHOLE device sda, holding the rootfs sda1, is READ-ONLY. In my experience this occurs in these situations:

- The HDD is really damaged. The write errors returned depend on the HDD's condition.
- The host machine is overloaded, so writes to the Linux guest's virtual HDD time out.
- An FC cable or SAN device (disk array over Fibre Channel) is overloaded.
- Momentary lost connection over FC or FCoE; maybe a lost/timed-out FC packet.

In these situations the device is really read-write, but the Linux kernel marks the device internally as read-only and uses it as read-only. This is kernel functionality meant for damage prevention, but it is only appropriate in case 1.

The question is: how do I manually tell the kernel that the HDD block device operates normally again? Without this, the kernel serves the device as read-only, like a 'CD-ROM', and no other command has a chance to work properly, including mount/remount -o read-write, fsck and others.

Unusable answers, really qualifying as spam from people who want to help but don't understand the nature of the problem:

- Try remounting as read-write (impossible, the device is R-O)
- fsck it (what for? the device is R-O, no repair is possible)
- 'I don't know' (the first with sense, but unusable)
- 'Replace your device' (usually the problem is something else)

Has anybody a recipe for the question above — a way to switch the flag on a writable block device that reverts it from the read-only to the read-write state? So far it seems that no-one knows how. There are some workarounds, but they are usually semi-usable or unusable:

1. Remove the module that provides access to the specified HDD or storage array. Unfortunately the damaged device usually holds the rootfs, or the driver serves both the damaged device and the device holding the rootfs.
2. Remove FC access to the device and join it again (fctools); not always possible, doesn't always work.
3. Restart the WHOLE machine. Usually only this is always possible, and we are always forced to do it.

With points 1 and 2 we tell the kernel that we have completely disconnected the device and connected it again. The kernel recognizes this as a newly joined, properly operating device. We can simulate this with a USB device by momentarily removing power. Point 3 is the last resort and usually works. But why should we restart everything? Unfortunately in all cases we lose all journal updates and dirty buffers.

Note that in the same situations I have no problems with Windows (desktop and server).
Try blockdev --setrw /dev/sdX to clear the kernel's read-only flag on the whole block device, or hdparm -r 0 /dev/sdX to clear the drive's read-only setting. After that, a mount -o remount,rw may succeed.
Linux, how to change HDD state from ReadOnly after temporarly crash?
1,344,351,634,000
I've been building a Linux distro, and I've stripped the binaries, etc. The system won't use GCC or development tools, as it will be a Chrome kiosk, so it would greatly help if I could strip down the system... I was wondering, is there a way that I can delete all of the unused system files (like binaries, etc.) by watching what files/libraries are used during runtime? Maybe another method is preferred, but is there a way to accomplish something like this?
There are programs like Bootchart that can be used to show what programs you ran during startup - you can probably keep it going after boot to see what's been invoked during a session. A better solution may be to use remastering tools. There are remastering tools for Fedora, Ubuntu, and others; you can use these to customize a distribution. You might want to look at Tiny Core Linux. There is a guy working on a remaster script for that as well.
How to strip a Linux system?
1,344,351,634,000
I have a fairly common Linux machine here. It has a PCI (PCI-X, etc.) bus with some USB controllers on it, and I have USB devices on these USB controllers. Similar to this:

$ lspci|grep USB
00:12.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
00:12.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB EHCI Controller
00:13.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
00:13.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB EHCI Controller
00:14.5 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI2 Controller
00:16.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
00:16.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB EHCI Controller
02:00.0 USB controller: VIA Technologies, Inc. VL805 USB 3.0 Host Controller (rev 01)

And there is also a USB device tree, like this:

$ lsusb -t
/: Bus 07.Port 1: Dev 1, Class=root_hub, Driver=ohci-pci/4p, 12M
/: Bus 06.Port 1: Dev 1, Class=root_hub, Driver=ohci-pci/2p, 12M
/: Bus 05.Port 1: Dev 1, Class=root_hub, Driver=ohci-pci/5p, 12M
    |__ Port 5: Dev 4, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
/: Bus 04.Port 1: Dev 1, Class=root_hub, Driver=ohci-pci/5p, 12M
    |__ Port 2: Dev 2, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
    |__ Port 2: Dev 2, If 1, Class=Human Interface Device, Driver=usbhid, 1.5M
    |__ Port 3: Dev 12, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
    |__ Port 4: Dev 4, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
    |__ Port 4: Dev 4, If 1, Class=Human Interface Device, Driver=usbhid, 1.5M
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/4p, 480M
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/5p, 480M
    |__ Port 2: Dev 2, If 0, Class=Hub, Driver=hub/4p, 480M
        |__ Port 1: Dev 4, If 0, Class=Wireless, Driver=btusb, 12M
        |__ Port 1: Dev 4, If 1, Class=Wireless, Driver=btusb, 12M
        |__ Port 4: Dev 11, If 0, Class=Vendor Specific Class, Driver=r8712u, 480M
        |__ Port 3: Dev 3, If 0, Class=Vendor Specific Class, Driver=MOSCHIP usb-ethernet driver, 480M
/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/5p, 480M
    |__ Port 1: Dev 6, If 0, Class=Video, Driver=uvcvideo, 480M
    |__ Port 1: Dev 6, If 1, Class=Video, Driver=uvcvideo, 480M
    |__ Port 1: Dev 6, If 2, Class=Audio, Driver=snd-usb-audio, 480M
    |__ Port 1: Dev 6, If 3, Class=Audio, Driver=snd-usb-audio, 480M

So I see my USB controllers on the PCI bus, and also my USB devices on the USB controllers. But I don't know which USB controller number (i.e. bus number on the USB side) belongs to which PCI device! How can I find that?
This information can be retrieved from the iSerial entry of the verbose output of lsusb. Easiest is to pass the output to the less viewer and search manually with /, or for example with grep:

$ lsusb -v 2>/dev/null | grep '^Bus\|iSerial'
Bus 001 Device 029: ID 12d1:1506 Huawei Technologies Co., Ltd. Modem/Networkcard
  iSerial 0
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
  iSerial 1 0000:00:1d.7
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
  iSerial 1 0000:00:1d.3
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
  iSerial 1 0000:00:1d.2
...
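The same mapping can also be read from sysfs: each USB root hub entry under /sys/bus/usb/devices resolves to a path that contains the PCI address of the providing controller. A sketch (on machines or containers with no USB host controllers it simply says so):

```shell
# Map each USB bus (root hub usbN) to the PCI device providing it.
# /sys/bus/usb/devices/usbN resolves to something like
# /sys/devices/pci0000:00/0000:00:12.2/usb3 -- the parent directory
# name is the PCI address.
usb_to_pci() {
    local hub pci found=0
    for hub in /sys/bus/usb/devices/usb*; do
        [ -e "$hub" ] || continue
        found=1
        pci=$(basename "$(dirname "$(readlink -f "$hub")")")
        printf '%s -> PCI %s\n' "$(basename "$hub")" "$pci"
    done
    [ "$found" -eq 1 ] || echo "no USB host controllers visible in sysfs"
}

usb_to_pci
```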
How to find the pci slot of an usb controller in Linux?
1,344,351,634,000
Specifically, I am trying test something on my build server by switching to the "jenkins" user: sudo su - jenkins No passwd entry for user 'jenkins'
The error message is pretty much self-explanatory. It says that the user jenkins has no entry in the /etc/passwd file, i.e. the user does not exist on the system. When you do any user-related operation that requires username, password, home directory, or shell information, the /etc/passwd file is consulted first. No entry in that file leads to the very error you are getting. So you need to create the user first (useradd/adduser). As a side note, unless necessary, you should create any service-specific user (non-human), e.g. jenkins, as a system user.
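Before switching users, you can check whether the account exists at all (a sketch; note that getent consults every source configured in /etc/nsswitch.conf, such as LDAP or NIS, not just /etc/passwd):

```shell
# Return success if the named account is known to the system
# (via any configured name service, not only /etc/passwd).
user_exists() {
    getent passwd "$1" > /dev/null
}

if user_exists jenkins; then
    echo "jenkins exists"
else
    echo "jenkins missing -- create it first, e.g. useradd --system jenkins"
fi
```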
Using "su - " to change user gives "No passwd entry for user"
1,344,351,634,000
I found a similar question but still it doesn't answer my questions Do the virtual address spaces of all the processes have the same content in their "Kernel" parts? First off, considering user processes don't have access to this part and I guess if they try to access it, it would lead to an error, then why even include this part in the user process virtual space? Can you guys give me a real life scenario of this part being essential and useful? Also, one more question is I always thought the kernel part of memory is dynamic, meaning it might grow for example when we use dynamic libraries in our programs, so is it true? If so, then how can the OS determine how big the size of kernel is in the virtual space of our processes? And when our kernel in physical memory grows or changes, does the same effect happens in the kernel part of virtual memory for all the processes? Is the mapping of this virtual kernel to real kernel a one to one mapping?
The kernel mapping exists primarily for the kernel’s purposes, not user processes’. From the CPU’s perspective, any physical memory address which isn’t mapped as a linear address might as well not exist. But the CPU does need to be able to call into the kernel: to service interrupts, to handle exceptions... It also needs to be able to call into the kernel when a user process issues a system call (there are various ways this can happen so I won’t go into details). On most if not all architectures, this happens without the opportunity to switch page tables — see for example SYSENTER. So at minimum, entry points into the kernel have to be mapped into the current address space at all times. Kernel allocations are dynamic, but the address space isn’t. On 32-bit x86, various splits are available, such as the 3/1 GiB split shown in your diagram; on 64-bit x86, the top half of the address space is reserved for the kernel (see the memory map in the kernel documentation). That split can’t move. (Note that libraries are loaded into user space. Kernel modules are loaded into kernel space, but again that only changes the allocations, not the address space split.) In user mode, there is a single mapping for the kernel, shared across all processes. When a kernel-side page mapping changes, that change is reflected everywhere. When KPTI is enabled, the kernel has its own private mappings, which aren’t exposed when running user-space code; so with KPTI there are two mappings, and changes to the kernel-private one won’t be visible to user-space (which is the whole point of KPTI). The kernel memory map always maps all the kernel (in kernel mode when running KPTI), but it’s not necessarily one-to-one — on 64-bit x86 for example it includes a full map of physical memory, so all kernel physical addresses are mapped at least twice.
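You can observe the fixed split from user space: every mapping listed in a process's /proc/<pid>/maps lies in the user half of the address space, and on x86-64 the only kernel-half address that may appear is the legacy fixed-address [vsyscall] page. A sketch (the specific address boundaries mentioned are Linux/x86-64 assumptions):

```shell
# Show the highest mappings of the current shell process.
# On x86-64, ordinary user mappings all sit below 0x0000800000000000;
# the kernel half (0xffff800000000000 and up) never shows up here,
# apart from the legacy [vsyscall] page on kernels that still expose it.
tail -n 5 /proc/self/maps
```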
What's the use of having a kernel part in the virtual memory space of Linux processes?
1,344,351,634,000
I would like to install another distribution but keep my home directory. Is there a way to move the home directory to a separate partition? I don't have an external hard drive available to back up my data. I would like to set up my partitions as suggested here.
Your question is distro-neutral, so if I mention anything specific that you don't have, just use the equivalent on your side. I really recommend you buy an external drive for backups; trust me, losing your data is the worst. Proceed at your own risk - but if you can't get one, here's what you can do.

What you need

- the size of your /home directory
- free space, more than the size of your /home directory
- a disk partitioning tool, I recommend gparted

What to do

Check the size of your /home directory (the last result will be home's total):

du -h /home

Check if you have enough free space for the new partition:

df -h

Install gparted:

sudo apt-get install gparted

You need more free space than the size of your /home directory. If you don't have the free space, then you won't be able to create that new partition, and need to move your data onto an external anyway. If you have the space, use gparted to shrink your existing partition, and then create a new partition with the freed unallocated space. Once your new partition is ready, note its /dev/sdaX (use sudo fdisk -l to see this), and copy your /home files to it.

Using the partition in a new distro

You mentioned installing another distro. If you plan to override your current distro, then during installation you should be asked to set up partitions. At that point you can specify this partition as /home, choose not to format it, and all will be well; you can skip this next section. If however you want your current distro to work with the new /home partition, follow this section:

Mount the partition in an existing distro

We have to tell your OS to use the partition as your new /home. We do this in fstab, but first let us find the UUID of this new partition:

ls -l /dev/disk/by-uuid

Cross-reference your new partition's /dev/sdaX and copy its UUID; mine looks like 3d866059-4b4c-4c71-a69c-213f0e4fbf32.
Backup fstab:

sudo cp /etc/fstab /etc/fstab.bak

Edit fstab:

sudoedit /etc/fstab

The idea is to add a new line that mounts the partition at /home. Use your own UUID, not the one I post here ;)

# <file system> <mount point> <type> <options> <dump> <pass>
UUID=3d866059.. /home auto defaults 0 2

(Note the mount option is spelled defaults, and a non-root filesystem conventionally gets fsck pass number 2; pass 1 is reserved for the root filesystem.) Save and restart, and test if the new partition mounts to /home. Run df -h to list all mounted partitions; /home should now be in that list.

Notes

It might be a good idea to familiarize yourself with fstab if you don't know it well. Just take your time and think about each step. If you install a new distro, and use the same login name, your old /home files will automatically fall under your ownership. This is not a trivial topic to cover in one post, but I think I got most of it. :)
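The steps above say to copy your /home files to the new partition but give no command. One careful way is cp -a, which preserves ownership, permissions, timestamps and symlinks (a sketch; /mnt/newhome is a placeholder mount point, and rsync -aHAX is a common alternative if rsync is installed):

```shell
# Copy a home directory tree to a newly mounted partition, preserving
# ownership, permissions, timestamps and symlinks (including dotfiles,
# thanks to the trailing "/.").
# Usage: copy_home SRC DST   -- DST must already exist (the mount point).
copy_home() {
    cp -a "$1/." "$2/"
}

# For the real thing, run as root, e.g.:
#   mount /dev/sdaX /mnt/newhome
#   copy_home /home /mnt/newhome
```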
How can I move the home directory to a separate partition?
1,344,351,634,000
For a socket file likes this: # ls -alti socket 14112 srw------- 1 root root 0 Nov 15 20:03 socket # cat socket cat: socket: No such device or address Since cat command is useless here, is there any method to get more info about the socket file? Such as which port it is listening on? etc.
A socket is a file for processes to exchange data. You can see more data about it using the netstat, lsof, and fuser commands. From Wikipedia: A Unix domain socket or IPC socket (inter-process communication socket) is a data communications endpoint for exchanging data between processes executing on the same host operating system. Like named pipes, Unix domain sockets support transmission of a reliable stream of bytes (SOCK_STREAM, compare to TCP).
How to get more info about socket file?
1,344,351,634,000
How can I find out that my CPU supports 64bit operating systems under Linux, e.g.: Ubuntu, Fedora?
Execute:

grep flags /proc/cpuinfo

Look for the 'lm' flag. If it's present, your CPU is 64-bit and supports a 64-bit OS. 'lm' stands for long mode. Alternatively, execute:

grep flags /proc/cpuinfo | grep " lm "

Note the spaces in " lm ". If it gives any output at all, your CPU is 64-bit. Update: You can use the following in a terminal too:

lshw -C processor | grep width

This works on Ubuntu; not sure if you need to install additional packages for Fedora.
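As a small script, the flag check might look like this (a sketch; the lm flag is x86-specific, so on other architectures uname -m is the simpler test, and note this reports what the CPU can do, not what the currently running kernel is):

```shell
# Report whether an x86 CPU can run a 64-bit operating system.
# "lm" (long mode) among the /proc/cpuinfo flags means 64-bit capable.
cpu_is_64bit() {
    grep -qw lm /proc/cpuinfo
}

if cpu_is_64bit; then
    echo "CPU supports 64-bit operating systems"
else
    echo "CPU is 32-bit only (or not x86)"
fi

# What the running kernel is, which may be 32-bit even on a 64-bit CPU:
echo "running kernel: $(uname -m)"
```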
How do I know that my CPU supports 64bit operating systems under Linux?
1,344,351,634,000
I purchased a Human Machine Interface (Exor Esmart04). Running on Linux 3.10.12, however this Linux is stripped down and does not have a C compiler. Another problem is the disk space: I've tried to install GCC on it but I do not have enough disk space for this, does anyone have other solutions or other C compilers which require less disk space?
Usually, for an embedded device, one doesn't compile software directly on it. It's more comfortable to do what is called cross-compilation, which is, in short, compiling on your regular PC for an architecture other than x86. You said you're new to Linux; just for your information, you're facing a huge problem: cross-compiling for embedded devices is not an easy job. I researched your HMI system and noticed some results talking about Yocto. Yocto is, in short, a whole framework for building firmware for embedded devices. Since your HMI massively uses open source projects (Linux, probably busybox, etc.), the manufacturer must provide you a way to rebuild all the open source components by yourself. Usually, what you need for that is the BSP (Board Support Package). Hardware manufacturers usually ship it:

- using the buildroot project, which allows you to rebuild your whole firmware from scratch;
- using a Yocto meta layer that, added to a fresh copy of the corresponding Yocto project, will allow you to rebuild your whole firmware too;
- more rarely, as a bunch of crappy scripts and a pre-built compiler.

So, if I were you, I would:

- contact the manufacturer's support to ask for the material needed to rebuild the firmware, as implied by the use of open source;
- in parallel, search Google for "your HMI + yocto", "your HMI + buildroot", etc.

After Googling even more, I found a Yocto meta on github. You can check the machines implemented by this meta under the directory conf/machine of the meta. There are currently five machines defined under the following codenames:

us01-kit
us02-kit
us03-kit
usom01
usom02

So I suggest that you dig into this. This is probably the way you can build software by yourself. You can also check this page on the github account that may give you some more clues.
How do I compile something for Linux if I don't have enough space for installing GCC?
1,344,351,634,000
I'm using CentOS 7. I want to get the PID (if one exists) of the process running on port 3000. I would like to get this PID for the purposes of saving it to a variable in a shell script. So far I have [rails@server proddir]$ sudo ss -lptn 'sport = :3000' State Recv-Q Send-Q Local Address:Port Peer Address:Port Cannot open netlink socket: Protocol not supported LISTEN 0 0 *:3000 *:* users:(("ruby",pid=4861,fd=7),("ruby",pid=4857,fd=7),("ruby",pid=4855,fd=7),("ruby",pid=4851,fd=7),("ruby",pid=4843,fd=7)) but I can't figure out how to isolate the PID all by itself without all this extra information.
Another possible solution: lsof -t -i :<port> -s <PROTO>:LISTEN For example: # lsof -i :22 -s TCP:LISTEN COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME sshd 1392 root 3u IPv4 19944 0t0 TCP *:ssh (LISTEN) sshd 1392 root 4u IPv6 19946 0t0 TCP *:ssh (LISTEN) # lsof -t -i :22 -s TCP:LISTEN 1392
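If you'd rather stay with ss, the pid= fields can be cut out of its output with standard text tools (a sketch; the sample line below is the one from the question):

```shell
# Extract the numeric pid= values from `ss -lptn` output,
# one per line, deduplicated and numerically sorted.
extract_pids() {
    grep -o 'pid=[0-9]*' | cut -d= -f2 | sort -un
}

# Real usage would be:  pids=$(ss -lptn 'sport = :3000' | extract_pids)
# Demo on a canned line from the question:
echo 'users:(("ruby",pid=4861,fd=7),("ruby",pid=4857,fd=7))' | extract_pids
```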
How do I get only the PID, without any extra information, of a process running on port 3000?
1,344,351,634,000
Is it possible to get current umask of a process? From /proc/<pid>/... for example?
Beginning with Linux kernel 4.7 (commit), the umask is available in /proc/<pid>/status. $ grep '^Umask:' "/proc/$$/status" Umask: 0022
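A small wrapper around that /proc field might look like this (a sketch; it requires kernel ≥ 4.7, and for your own shell the umask builtin already reports the value directly):

```shell
# Print the umask of an arbitrary process by PID (Linux >= 4.7).
# For the current shell, the `umask` builtin is simpler.
umask_of() {
    awk '/^Umask:/ {print $2}' "/proc/$1/status"
}

umask_of $$
```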
Current umask of a process with <pid>
1,344,351,634,000
I'd like to know if it's possible to just repeat part of a command. I.e. if I do ls /path/to/somewhere -a, I only want to remove ls and -a. I know that if I do !! it repeats the previous command (appending the last command to whichever command you write before it) and that if I do !$ it includes the last part of the string, but I'd like to know if it's possible to re-use only the e.g. middle part of the previous command.
Sure, use !^ e.g. $ ls /path/to/somewhere -a ls: cannot access '/path/to/somewhere': No such file or directory $ echo !^ echo /path/to/somewhere /path/to/somewhere $ Alternatively (incurring an extra keystroke) you could use !:1. $ ls /path/to/somewhere -a ls: cannot access '/path/to/somewhere': No such file or directory $ echo !:1 echo /path/to/somewhere /path/to/somewhere $ This is fully documented in the Event Designators and Word Designators sections of the bash man page.
How can I repeat only a part of a command in bash?
1,344,351,634,000
I was wondering what the fastest way to run a script is. I've been reading that there is a difference in speed between showing the output of the script on the terminal, redirecting it to a file, or perhaps /dev/null. So if the output is not important, what is the fastest way to get the script to run, even if the gain is minimal?

bash ./myscript.sh

-or-

bash ./myscript.sh > myfile.log

-or-

bash ./myscript.sh > /dev/null
Terminals these days are slower than they used to be, mainly because graphics cards no longer care about 2D acceleration. So indeed, printing to a terminal can slow down a script, particularly when scrolling is involved. Consequently ./script.sh is slower than ./script.sh >script.log, which in turn is slower than /script.sh >/dev/null, because the latter involve less work. However whether this makes enough of a difference for any practical purpose depends on how much output your script produces and how fast. If your script writes 3 lines and exits, or if it prints 3 pages every few hours, you probably don't need to bother with redirections. Edit: Some quick (and completely broken) benchmarks: In a Linux console, 240x75: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done) real 3m52.053s user 0m0.617s sys 3m51.442s In an xterm, 260x78: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done) real 0m1.367s user 0m0.507s sys 0m0.104s Redirect to a file, on a Samsung SSD 850 PRO 512GB disk: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done >file) real 0m0.532s user 0m0.464s sys 0m0.068s Redirect to /dev/null: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done >/dev/null) real 0m0.448s user 0m0.432s sys 0m0.016s
What is the fastest way to run a script?
1,344,351,634,000
I've put together a small system with busybox, a Linux kernel, and a small file system, putting stuff in as it seemed necessary -- I don't know if I've been learning much from this, but I started out pretty clueless, so it sure hasn't been a smooth ride. So I suspect I might be missing some stuff in my filesystem, but I'm really not sure what I might need to add next. I can boot into my system by typing in the following grub commands: Once the boot messages stop, I'm left with this (I'm not sure if it's related but there's a line there that says: VFS: Mounted root (ext3 filesystem) readonly on device 8:1): I can't modify the filesystem: It's funny because I can manually mount /proc just fine: Why is my file system read-only? What would I need to set up to get it to work?
Try searching through dmesg | less for I/O or filesystem errors. Also note that the kernel initially mounts the root filesystem read-only (that's your "Mounted root (ext3 filesystem) readonly" message); on a normal distro the boot scripts remount it read-write after a filesystem check, so a minimal hand-built system has to do that step itself. If you would like to remount it read-write, use:

mount -o remount,rw /
Why is my file system mounted as read-only?
1,344,351,634,000
There's a bzip2 process running in the background and I have no idea where it came from. It's eating up a lot of resources. Can I do a reverse lsof to see which files are being accessed by this process? I've suspended the process for the time being.
I'm not sure why that'd be a "reverse lsof" -- lsof does exactly that. You can pass it the -p flag to specify which PIDs to include/exclude in the results:

$ lsof -p $(pidof bzip2)

Note that if several bzip2 processes are running, pidof prints their PIDs separated by spaces, while lsof -p expects a comma-separated list; in that case use lsof -p "$(pidof bzip2 | tr ' ' ,)".
lsof for a specific process?
1,344,351,634,000
How can I view the priority of a specific process?
The top command lists the priority of running processes under the PR heading. If you have it installed, you can also search for a process and sort by priority in htop.
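For a one-shot, scriptable view you can also ask ps directly (a sketch; the column keywords are from procps: NI is the nice value, PRI the priority, CLS the scheduling class):

```shell
# Show scheduling information for a single process by PID.
show_priority() {
    ps -o pid,ni,pri,cls,comm -p "$1"
}

# e.g. for the current shell:
show_priority $$
```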
What is a command to find priority of process in Linux?
1,344,351,634,000
I need to view the members of a group related to an oracle installation.
You can use getent to display the group's information. getent uses library calls to fetch the group information, so it will honour settings in /etc/nsswitch.conf as to the sources of group data. Example: $ getent group simpsons simpsons:x:742:homer,marge,bart,lisa,maggie The fields, separated by :, are— Group name Encrypted password (not normally used) Numerical group ID Comma-separated list of members
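To get just the member list, one name per line, cut out the fourth field (a sketch; note this lists only supplementary members — a user whose primary group is this group does not appear in this field):

```shell
# List the (supplementary) members of a group, one per line.
group_members() {
    getent group "$1" | cut -d: -f4 | tr ',' '\n'
}

# e.g. for the example group above:
#   group_members simpsons
```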
How do I view the members of a group? [closed]
1,344,351,634,000
I'm in an operating systems class. Coming up, we have to do some work modifying kernel code. We have been advised not to use personal machines to test (I suppose this means install it) as we could write bad code and write over somewhere we shouldn't. We are given access to a machine in a lab to be safe. If I were to test using a VM, would that protect the host system from potentially unsafe code? I really want to not have to be stuck to a system at school and snapshots will be useful. If it is still high risk, any suggestions on what I need to consider to test safely? We will be using something like linuxmint to start with. If anyone wants to see what will be in the current project: http://www.cs.fsu.edu/~cop4610t/assignments/project2/writeup/specification.pdf
The main risks developing kernel modules are that you can crash your system much more easily than with regular code, and you'll probably find that you sometimes create modules that can't be unloaded which means you'll have to reboot to re-load them after you fix what's wrong. Yes, a VM is fine for this kind of development and it's what I use when I'm working on kernel modules. The VM nicely isolates your test environment from your running system. If you're going to take and restore snapshots, you should keep your source code checked in to a version control repository outside the VM so you don't accidentally lose your latest code when you discard the VM's current state.
Is developing/testing a linux module safe using a virtual machine?
1,344,351,634,000
when I try: $ ip -6 addr I get something like: inet6 fe80::d773:9cf0:b0fd:572d/64 scope link if I try to ping that from the machine itself: $ ping6 fe80::d773:9cf0:b0fd:572d/64 unknown host $ ping6 fe80::d773:9cf0:b0fd:572d connect: Invalid argument What am I doing wrong?
From man ping6, you must tell ping which interface you are using: -I interface address Set source address to specified interface address. Argument may be numeric IP address or name of device. When pinging IPv6 link-local address this option is required. For example, if your interface is eth0: ping6 -I eth0 fe80::xxxxxx or, without the -I option: ping6 fe80:xxxxxx%eth0
How do I get the pingable IPv6 address of my machine?
1,344,351,634,000
From this answer to Linux: Difference between /dev/console , /dev/tty and /dev/tty0 From the documentation: /dev/tty Current TTY device /dev/console System console /dev/tty0 Current virtual console In the good old days /dev/console was System Administrator console. And TTYs were users' serial devices attached to a server. Now /dev/console and /dev/tty0 represent current display and usually are the same. You can override it for example by adding console=ttyS0 to grub.conf. After that your /dev/tty0 is a monitor and /dev/console is /dev/ttyS0. By "System console", /dev/console seems like the device file of a text physical terminal, just like /dev/tty{1..63} are device files for the virtual consoles. By "/dev/console and /dev/tty0 represent current display and usually are the same", /dev/console seems to me that it can also be the device file of a virtual console. /dev/console seems more like /dev/tty0 than like /dev/tty{1..63} (/dev/tty0 is the currently active virtual console, and can be any of /dev/tty{1..63}). What is /dev/console? What is it used for? Does /dev/console play the same role for Linux kernel as /dev/tty for a process? (/dev/tty is the controlling terminal of the process session of the process, and can be a pts, /dev/ttyn where n is from 1 to 63, or more?) The other reply mentions: The kernel documentation specifies /dev/console as a character device numbered 5:1. Opening this character device opens the "main" console, which is the last tty in the list of consoles. Does "the list of consoles" mean all the console='s in the boot option? By "/dev/console as a character device numbered 5:1", does it mean that /dev/console is the device file of a physical text terminal i.e. a system console? (But again, the first reply I quoted above says /dev/console can be the same as /dev/tty0 which is not a physical text terminal, but a virtual console) Thanks.
/dev/console exists primarily to expose the kernel's console to userspace. The Linux kernel's documentation on devices now says:

> The console device, /dev/console, is the device to which system messages should be sent, and on which logins should be permitted in single-user mode. Starting with Linux 2.1.71, /dev/console is managed by the kernel; for previous versions it should be a symbolic link to either /dev/tty0, a specific virtual console such as /dev/tty1, or to a serial port primary (tty*, not cu*) device, depending on the configuration of the system.

/dev/console, the device node with major 5 and minor 1, provides access to whatever the kernel considers to be its primary means of interacting with the system administrator; this can be a physical console connected to the system (with the virtual console abstraction on top, so it can use tty0 or any ttyN where N is between 1 and 63), or a serial console, or a hypervisor console, or even a Braille device. Note that the kernel itself doesn't use /dev/console: device nodes are for userspace, not for the kernel; it does, however, check that /dev/console exists and is usable, and sets init up with its standard input, output and error pointing to /dev/console.

As described here, /dev/console is a character device with a fixed major and minor because it's a separate device (as in, a means of accessing the kernel; not a physical device), not equivalent to /dev/tty0 or any other device. This is somewhat similar to the situation with /dev/tty, which is its own device (5:0) because it provides slightly different features than the other virtual console or terminal devices.

The "list of consoles" is indeed the list of consoles defined by the console= boot parameters (or the default console, if there are none). You can see the consoles defined in this way by looking at /proc/consoles. /dev/console does indeed provide access to the last of these:

> You can specify multiple console= options on the kernel command line. Output will appear on all of them. The last device will be used when you open /dev/console.
What is `/dev/console` used for?
1,344,351,634,000
It is written in the Linux kernel Makefile that:

    clean     - Remove most generated files but keep the config and
                enough build support to build external modules
    mrproper  - Remove all generated files + config + various backup files

And it is stated in the Arch docs that:

> To finalise the preparation, ensure that the kernel tree is absolutely clean;
>
>     $ make clean && make mrproper

So if make mrproper does a more thorough removal, why is make clean used?
Cleaning is done on three levels, as described in a comment in the Linux kernel Makefile:

    ###
    # Cleaning is done on three levels.
    # make clean     Delete most generated files
    #                Leave enough to build external modules
    # make mrproper  Delete the current configuration, and all generated files
    # make distclean Remove editor backup files, patch leftover files and the like

According to the Makefile, the mrproper target depends on the clean target (see line 1421). Additionally, the distclean target depends on mrproper. Executing make mrproper will therefore be enough, as it would also remove the same things as what the clean target would do (and more).

The mrproper target was added in 1993 (Linux 0.97.7) and has always depended on the clean target. This means that it was never necessary to use both targets as in make clean && make mrproper.

Historic reference: https://archive.org/details/git-history-of-linux
Why both `make clean` and `make mrproper` are used?
Is there any way to create a log file for maintaining some data in /var/log/ with the help of some library function or system call in C on Linux? I would also like to know the standards we should follow to write and process logs. Thanks.
The standard way to log from a C program is syslog.

Start by including the header file:

    #include <syslog.h>

Then early in your program, you should configure syslog by calling openlog:

    openlog("programname", 0, LOG_USER);

The first argument is the identification or the tag, which is automatically added at the start of each message. Put your program's name here.

The second argument is the options you want to use, or 0 for the normal behavior. The full list of options is in man 3 syslog. One you might find useful is LOG_PID, which makes syslog also record the process ID in the log message.

Then, each time you want to write a log message, you call syslog:

    syslog(LOG_INFO, "%s", "Message");

The first argument is the priority. The priority ranges from DEBUG (least important) to EMERG (only for emergencies), with DEBUG, INFO, and ERR being the most commonly used. See man 3 syslog for your options.

The second and third arguments are a format and a message, just like printf.

Which log file this appears in depends on your syslog settings. With a default setup, it probably goes into /var/log/messages.

You can set up a custom log file by using one of the facilities in the range LOG_LOCAL0 to LOG_LOCAL7. You use them by changing:

    openlog("programname", 0, LOG_USER);

to

    openlog("programname", 0, LOG_LOCAL0);

or

    openlog("programname", 0, LOG_LOCAL1);

etc., and adding a corresponding entry to /etc/syslog.conf, e.g.

    local1.info    /var/log/programname.log

and restarting the syslog server, e.g.

    pkill -HUP syslogd

The .info part of local1.info above means that all messages that are INFO or more important will be logged, including INFO, NOTICE, ERR (error), CRIT (critical), etc., but not DEBUG.

Or, if you have rsyslog, you could try a property-based filter, e.g.

    :syslogtag, isequal, "programname:"    /var/log/programname.log

The syslogtag should contain a ":".

Or, if you are planning on distributing your software to other people, it's probably not a good idea to rely on using LOG_LOCAL or an rsyslog filter. In that case, you should use LOG_USER (if it's a normal program) or LOG_DAEMON (if it's a server), write your startup messages and error messages using syslog, but write all of your log messages to a file outside of syslog. For example, Apache HTTPd logs to /var/log/apache2/* or /var/log/httpd/*, I assume using regular open/fopen and write/printf calls.
make a log file
Executing kill -l on Linux gives:

     1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
     6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
    11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
    16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
    21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
    26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
    31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
    38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
    43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
    48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
    53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
    58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
    63) SIGRTMAX-1  64) SIGRTMAX

What happened to 32 and 33? Why are they not listed? The numbering could have started at 1 and ended at 62 instead of skipping two in the middle.
It is because of NPTL. Since it is part of the GNU C library, nearly every modern Linux distribution no longer uses the first two real-time signals. NPTL is an implementation of POSIX threads, and it makes internal use of the first two real-time signals.

This part of the signal man page is very interesting:

> The Linux kernel supports a range of 33 different real-time signals, numbered 32 to 64. However, the glibc POSIX threads implementation internally uses two (for NPTL) or three (for LinuxThreads) real-time signals (see pthreads(7)), and adjusts the value of SIGRTMIN suitably (to 34 or 35). Because the range of available real-time signals varies according to the glibc threading implementation (and this variation can occur at run time according to the available kernel and glibc), and indeed the range of real-time signals varies across UNIX systems, programs should never refer to real-time signals using hard-coded numbers, but instead should always refer to real-time signals using the notation SIGRTMIN+n, and include suitable (run-time) checks that SIGRTMIN+n does not exceed SIGRTMAX.

I also checked the source code for glibc; see line 22. __SIGRTMIN is increased by 2, so the first two real-time signals are excluded from the range of real-time signals.
Why does `kill -l` not list signal numbers of 32 and 33?
I need to test aspects of my software that only happen at certain times of the day. Rather than waiting whole days (and getting here at 2:00 AM), I'd like to change the time. But I'd rather not change it permanently. I know I can change the time using date, and then change it back again, but is there a better way? OS in question is RHEL6 running in a VM.
There's a library called libfaketime (also on GitHub) which allows you to make the system report a given time to your application. You can either have the system report a fixed time for the duration of the program execution, or start the clock at some specific time (for example, 01:59:30). Basically, you hook the faketime library into your program's in-memory image through the library loader, and it captures and handles in its own way all system calls which relate to system time. It doesn't exactly change the system time, but it changes what time is reported to your specific application without affecting anything else that is running, which is probably what you really are after (otherwise, I see no reason to not just change the global system time). There's a number of possible variants on how to use it, but it looks like Changing what time a process thinks it is with libfaketime has a pretty thorough listing along with sample code to try them out. Google should also be able to unearth some examples given that you know what to search for. It would appear that it isn't available prepackaged through the RHEL repositories, but for example Debian provides it under the package name faketime. It also looks straight forward to build from source code (it apparently doesn't even need a configure step or anything like that).
Temporarily change time
Does the OS reserve a fixed amount of valid virtual address space for the stack, or something else? Can I produce a stack overflow just by using big local variables?

I wrote a small C program to test my assumption. It is running on x86-64 CentOS 6.5:

    #include <string.h>
    #include <stdio.h>

    int main()
    {
        int n = 10240 * 1024;
        char a[n];
        memset(a, 'x', n);
        printf("%x\n%x\n", &a[0], &a[n-1]);
        getchar();
        return 0;
    }

Running the program gives &a[0] = f0ceabe0 and &a[n-1] = f16eabdf. The proc maps show the stack: 7ffff0cea000-7ffff16ec000 (10248 * 1024 B).

Then I tried to increase n to 11240 * 1024. Running the program gives &a[0] = b6b36690 and &a[n-1] = b763068f. The proc maps show the stack: 7fffb6b35000-7fffb7633000 (11256 * 1024 B).

ulimit -s prints 10240 on my PC. As you can see, in both cases the stack size is bigger than what ulimit -s gives, and the stack grows with the bigger local variable. The top of the stack is somehow 3-5 kB beyond &a[0] (AFAIK the red zone is 128 B). So how does this stack mapping get allocated?
It appears that the stack memory limit is not allocated up front (and it couldn't be, with an unlimited stack). https://www.kernel.org/doc/Documentation/vm/overcommit-accounting says:

> The C language stack growth does an implicit mremap. If you want absolute guarantees and run close to the edge you MUST mmap your stack for the largest size you think you will need. For typical stack usage this does not matter much but it's a corner case if you really really care

However, mmapping the stack would be the job of a compiler (if it has an option for that).

EDIT: After some tests on an x86_64 Debian machine, I've found that the stack grows without any system call (according to strace). So this means that the kernel grows it automatically (this is what the "implicit" means above), i.e. without an explicit mmap/mremap from the process.

It was quite hard to find detailed information confirming this. I recommend Understanding The Linux Virtual Memory Manager by Mel Gorman. I suppose that the answer is in Section 4.6.1, Handling a Page Fault, with the exception "Region not valid but is beside an expandable region like the stack" and the corresponding action "Expand the region and allocate a page". See also D.5.2, Expanding the Stack.

Other references about Linux memory management (but with almost nothing about the stack):

- Memory FAQ
- What every programmer should know about memory, by Ulrich Drepper

EDIT 2: This implementation has a drawback: in corner cases, a stack-heap collision may not be detected, even in the case where the stack would be larger than the limit! The reason is that a write to a variable on the stack may end up in allocated heap memory, in which case there is no page fault and the kernel cannot know that the stack needed to be extended. See my example in the discussion Silent stack-heap collision under GNU/Linux that I started on the gcc-help list.

To avoid that, the compiler needs to add some code at each function call; this can be done with -fstack-check for GCC (see Ian Lance Taylor's reply and the GCC man page for details).
How does stack allocation work in Linux?
I'm trying to diagnose some random segfaults on a headless server, and one thing that seems curious is that they only seem to happen under memory pressure, yet my swap usage will not go above 0. How can I force my machine to swap, to make sure that it is working properly?

    orca ~ # free
                 total       used       free     shared    buffers     cached
    Mem:       1551140    1472392      78748          0     333920    1046368
    -/+ buffers/cache:      92104    1459036
    Swap:      1060280          0    1060280

    orca ~ # swapon -s
    Filename                Type        Size     Used    Priority
    /dev/sdb2               partition   1060280  0       -1
Is this Linux? If so, you could try the following:

    # sysctl vm.swappiness=100

(You might want to run sysctl vm.swappiness first to see the default value; on my system it was 10.)

And then either use a program (or programs) that uses lots of RAM, or write a small application that just eats up RAM. The following will do that (source: Experiments and fun with the Linux disk cache):

    #include <stdlib.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char** argv) {
        int max = -1;
        int mb = 0;
        int multiplier = 1; // allocate 1 MB every time unit. Increase this to e.g. 100 to allocate 100 MB every time unit.
        char* buffer;

        if(argc > 1)
            max = atoi(argv[1]);

        while((buffer=malloc(multiplier * 1024*1024)) != NULL && mb != max) {
            memset(buffer, 1, multiplier * 1024*1024);
            mb++;
            printf("Allocated %d MB\n", multiplier * mb);
            sleep(1); // time unit: 1 second
        }
        return 0;
    }

I coded the memset line to initialise blocks with 1s rather than 0s, because the Linux virtual memory manager may be smart enough not to actually allocate any RAM otherwise. I added the sleep(1) in order to give you more time to watch the process as it gobbles up RAM and swap. The OOM killer should kill it once you are out of RAM and swap to give to the program.

You can compile it with

    gcc filename.c -o memeater

where filename.c is the file you saved the above program in. Then you can run it with ./memeater.

I wouldn't do this on a production machine.
How to test swap partition
Using ps -aux or top, I can list other users' running processes, even though I'm neither running as root nor making use of sudo. Why?
By default, you can always list other users' processes on Linux. To change that, you need to mount proc in /etc/fstab with hidepid=2:

    proc    /proc    proc    defaults,hidepid=2

This functionality is supported from kernel v3.2 onwards. It hides /proc entries, and consequently ps activity, from all users except root.

Taken from this article about hidepid:

> hidepid=2 - It means hidepid=1 plus all /proc/PID/ will be invisible to other users. It complicates an intruder's task of gathering info about running processes, whether some daemon runs with elevated privileges, whether another user runs some sensitive program, whether other users run any program at all, etc.
Why can I list other users processes without root permission?
sched_setscheduler says:

> All scheduling is preemptive: if a process with a higher static priority becomes ready to run, the currently running process will be preempted and returned to the wait list for its static priority level.

while setpriority says:

> This causes very low nice values (+19) to truly provide little CPU to a process whenever there is any other higher priority load on the system, and makes high nice values (-20) deliver most of the CPU to applications that require it

So, how is changing the nice value going to influence the execution of programs? Is it similar to RT scheduling (where a program with a higher nice value interrupts a program with a lower nice value)?

All the information on the internet is about how to use nice and how to change the priority of a process. No link explains how exactly processes with different priorities actually behave. I couldn't even find the source code.
The proportion of the processor time a particular process receives is determined by the relative difference in niceness between it and other runnable processes. The Linux Completely Fair Scheduler (CFS) calculates a weight based on the niceness. The weight is roughly equivalent to 1024 / (1.25 ^ nice_value). As the nice value decreases, the weight increases exponentially. The timeslice allocated for the process is proportional to the weight of the process divided by the total weight of all runnable processes. The implementation of the CFS is in kernel/sched/fair.c. The CFS has a target latency for the scheduling duration. Smaller target latencies yield better interactivity, but as the target latency decreases, the switching overhead increases, thus decreasing the overall throughput. Given for instance a target latency of 20 milliseconds and two runnable processes of equal niceness, both processes will run for 10 milliseconds each before being pre-empted in favour of the other process. If there are 10 processes of equal niceness, each runs for 2 milliseconds. Now consider two processes, one with a niceness of 0 (the default), the other with a niceness of 5. The proportional difference between the corresponding weights is roughly 1/3, meaning that the higher priority process receives a timeslice of approximately 15 milliseconds while the lower priority process receives a timeslice of 5 milliseconds. Lastly consider two processes with the niceness values of 5 and 10 respectively. While the absolute niceness is larger in this case, the relative difference between the niceness values is the same as in the previous example, yielding an identical timeslice division.
How is nice working?
/dev/log is the default entry point for system logging. In the case of a systemd implementation (this case) it's a symlink to /run/systemd/journal/dev-log. It used to be the receiving end of a unix socket handled by the syslog daemon.

    ~$ echo "hello" > /dev/log
    bash: /dev/log: No such device or address
    ~$ fuser /dev/log
    ~$ ls -la /dev/log
    lrwxrwxrwx 1 root root 28 Aug 23 07:13 /dev/log -> /run/systemd/journal/dev-log

What is the explanation for the error that pops up when you try to write to it, and why isn't there a process holding that file (the output from fuser /dev/log is empty)? Logging does work normally on the system:

    ~$ logger test
    ~$ journalctl --since=-1m
    -- Logs begin at Thu 2018-05-24 04:23:46 CEST, end at Thu 2018-08-23 13:07:25 CEST. --
    Aug 23 13:07:24 alan-N551JM alan[12962]: test

Extending with comment suggestions:

    ~$ sudo fuser /dev/log
    /run/systemd/journal/dev-log:     1   311
    ~$ ls -lL /dev/log
    srw-rw-rw- 1 root root 0 Aug 23 07:13 /dev/log
To add some additional info to the accepted (correct) answer, you can see the extent to which /dev/log is simply a UNIX socket by writing to it as such:

    lmassa@lmassa-dev:~$ echo 'This is a test!!' | nc -u -U /dev/log
    lmassa@lmassa-dev:~$ sudo tail -1 /var/log/messages
    Sep  5 16:50:33 lmassa-dev journal: This is a test!!

On my system, you can see that the journald process is listening on this socket:

    lmassa@lmassa-dev:~$ sudo lsof | grep '/dev/log'
    systemd       1    root   29u  unix 0xffff89cdf7dd3740   0t0  1445 /dev/log
    systemd-j   564    root    5u  unix 0xffff89cdf7dd3740   0t0  1445 /dev/log

It got my message and did its thing with it (i.e. appending to the /var/log/messages file).

Note that because the syslog protocol that journald is speaking expects datagrams (think UDP), not streams (think TCP), if you simply try writing into the socket directly with nc you'll see an error in the syscall (and no log will show up). Compare:

    lmassa@lmassa-dev:~$ echo 'This is a test!!' | strace nc -u -U /dev/log 2>&1 | grep connect -B10 | egrep '^(socket|connect)'
    socket(AF_UNIX, SOCK_DGRAM, 0)          = 4
    connect(4, {sa_family=AF_UNIX, sun_path="/dev/log"}, 10) = 0
    lmassa@lmassa-dev:~$ echo 'This is a test!!' | strace nc -U /dev/log 2>&1 | grep connect -B10 | egrep '^(socket|connect)'
    socket(AF_UNIX, SOCK_STREAM, 0)         = 3
    connect(3, {sa_family=AF_UNIX, sun_path="/dev/log"}, 10) = -1 EPROTOTYPE (Protocol wrong type for socket)

Note I elided some syscalls for clarity. The important point here is that the first call specified SOCK_DGRAM, which is what the /dev/log socket expects (since this is how the /dev/log socket was created originally), whereas the second did not, so we got an error.
Examining /dev/log
I understand that Linux uses the shebang line to determine what interpreter to use for scripting languages, but how does it work for binaries? I mean, I can run Linux binaries and, having installed both Wine and Mono, Windows native and .NET binaries too. And for all of them it's just ./binary-name (if not in PATH) to run them. How does Linux determine that a given binary must be run as a Linux native binary, as a Windows native binary (using Wine facilities), or as a Windows .NET binary (using Mono facilities)?
In a word: binfmt_misc. It's a Linux-specific, non-portable, facility. There are a couple of formats that are recognized by the kernel with built-in logic. Namely, these are the ELF format (for normal binaries) and the shebang convention (for scripts). (thanks to zwol for the following part of the answer). In addition, Linux recognizes a couple of esoteric or obsolete or compatibility builtin formats. You probably won't encounter them. They are a.out, "em86", "flat", and "elf_fdpic". Everything else must be registered through the binfmt_misc system. This system allows you to register with the kernel a simple pattern check based on a magic number, and the corresponding interpreter.
How does Linux determine what facilities to use to run a (non-text) binary?
    $ touch dir/{{1..8},{a..p}}
    $ tar cJvf file.tar.xz dir/
    dir/
    dir/o
    dir/k
    dir/b
    dir/3
    dir/1
    dir/i
    dir/7
    dir/4
    dir/e
    dir/a
    dir/g
    dir/2
    dir/d
    dir/5
    dir/8
    dir/c
    dir/n
    dir/f
    dir/h
    dir/6
    dir/l
    dir/m
    dir/j
    dir/p

I would have expected it to be alphabetical, but apparently it's not. What's the formula here?
As @samiam has stated, the list is returned to you in a semi-random order via readdir(). I'll just add the following.

The list returned is what I would call the directory order. On older filesystems, the order is often the creation order in which the file entries were added to the directory's table. There is of course a caveat to this: when a directory entry is deleted, that entry is recycled, so any subsequent file stored will replace the previous entry, and the order will no longer be based solely on creation time. On modern filesystems, where the directory data structures are based on a search tree or hash table, the order is practically unpredictable.

Examples

Poking at the files created when you run your touch command reveals the following inodes were assigned:

    $ touch dir/{{1..8},{a..p}}
    $ stat --printf="%n -- %i\n" dir/*
    dir/1 -- 10883235
    dir/2 -- 10883236
    dir/3 -- 10883242
    dir/4 -- 10883243
    dir/5 -- 10883244
    dir/6 -- 10883245
    dir/7 -- 10883246
    dir/8 -- 10883247
    dir/a -- 10883248
    dir/b -- 10883249
    dir/c -- 10883250
    dir/d -- 10883251
    dir/e -- 10883252
    dir/f -- 10883253
    dir/g -- 10883254
    dir/h -- 10883255
    dir/i -- 10883256
    dir/j -- 10883299
    dir/k -- 10883302
    dir/l -- 10883303
    dir/m -- 10883311
    dir/n -- 10883424
    dir/o -- 10883426
    dir/p -- 10883427

So we can see that the brace expansion used by touch creates the filenames in alphabetical order, and so they're assigned sequential inode numbers when written to the disk. (That however does not influence the order in the directory.)

Running your tar command multiple times would seem to indicate that there is an order to the list, since running it multiple times yields the same list each time. Here I've run it 100 times and then compared the runs, and they're all identical:

    $ for i in {1..100}; do tar cJvf file.tar.xz dir/ > run${i}; done
    $ for i in {1..100}; do cmp run1 run${i}; done
    $

If we strategically delete, say, dir/e and then add a new file dir/ee, we can see that this new file takes the place that dir/e occupied in the directory's entry table:

    $ rm dir/e
    $ touch dir/ee

Now let's keep the output from one of the runs of the for loop above, just the first one:

    $ mv run1 r1A

Now if we re-run the for loop that runs the tar command 100 times, and compare this second run with the previous one:

    $ sdiff r1A run1
    dir/          dir/
    ...
    dir/c         dir/c
    dir/f         dir/f
    dir/e       | dir/ee
    dir/o         dir/o
    dir/2         dir/2
    ...

we notice that dir/ee has taken dir/e's place in the directory's table.
How is the order in which tar works on files determined?
On Linux 3.11.0-13-generic running on top of a dual-socket Xeon X5650 hexa-core board, htop shows different kworker threads. Sorted by name (I tweaked the result shown here a little to have the threads on core 2 before the ones on core 10), here is the result:

    kworker/0:0H
    kworker/0:1
    kworker/0:2
    kworker/1:0
    kworker/1:0H
    kworker/1:1
    kworker/2:0
    kworker/2:0H
    kworker/2:1
    .....
    kworker/11:0
    kworker/11:0H
    kworker/11:1
    kworker/u48:0
    kworker/u49:4
    kworker/u49:5
    kworker/u50:1
    kworker/u50:2
    .......

The threads whose names start with a number are pinned to the core with the same number. So the first number is the core running the thread, and I am wondering what the symbol after the : (0, 0H or 1) means for these threads. I am also wondering what the uXX:Y names mean.

I have only a vague knowledge of what kworker threads do: they handle asynchronous events caused by system calls performing I/O. Are they documented somewhere?
According to kernel.org, the syntax is kworker/%u:%d%s (cpu, id, priority). The u designates a special CPU, the unbound cpu, meaning that the kthread is currently unbound. The workqueue workers which have negative nice value have 'H' postfixed to their names. (source)
How interpret kworker threads names?
I'm running Ubuntu 12.04 LTS as a home NAS server, without X. Recently I got into tuning it to serve as a video-playing media device too. It might have been easier at this point to install X, but I decided to try mplayer with framebuffer playback. It worked, and everything was fine and good. However, out of curiosity, and maybe for practical consequences too, I can't stop thinking about framebuffers.

There seems to be only one framebuffer device, /dev/fb0. (Btw, I'm using the vesafb driver.) If I run multiple programs that use framebuffers, chaos ensues. For example, running mplayer from fbterm just crashes it. Curiously, the fbi image viewer manages to view images somehow. Obviously the programs can't share the device; there's no windowing system, after all.

So, is the number of (VESA) fb devices limited to the hardware display devices? Or could there be more in principle, like there are multiple ttys? Would adding more of them help to run software that uses them simultaneously? How could I add more?

Also, the logic of how the framebuffers are connected to ttys isn't quite clear to me... for example, mplayer shows its video frame on every tty, but fbi doesn't. Furthermore, the Ubuntu default console (fbcon?) shows behind the video overlay, which strikes me as odd. What is this all about?
Since nobody's answered yet, and after tedious hours of googling and testing I got some grasp of the subject, I'm going to answer it...

Since the framebuffer device interface is quite a general one, there could be more fb devices in principle. However, as the VESA driver I used provides a direct connection between a certain hardware device and the framebuffer device file, it doesn't make sense to have more of them than one has real devices.

There's a driver for virtual framebuffer devices, vfb. (Note: this is different from xvfb, which is a virtual framebuffer for X.) I haven't tested this myself, but one could have as many fb devices as one wants using the virtual device. I also think that nothing in principle prevents one from piping a virtual device to a hardware framebuffer device, allowing one to build a framebuffer multiplexer.

About the connection between framebuffers and ttys: there is none. The framebuffer is simply drawn to the screen, disregarding anything. What got me originally confused is the behavior of the fbi image viewer. It turns out that it cleverly checks whether the tty it's running in is active or not, and draws to the framebuffer or not according to that. (That's why it refuses to run over SSH, unlike mplayer: it doesn't accept a pseudo-terminal.) But this multiplexer-like functionality has got NOTHING to do with the framebuffer itself.

If there are multiple processes writing to the framebuffer, they do not block each other. It turns out that my earlier problems (crashes and such) using multiple fb programs simultaneously were not about the framebuffer at all. Take the fbterm terminal and run mplayer from it: no problem. The fbterm and fbcon terminals and the fbi image viewer draw to the buffer only when something is updated, so mplayer dominates the screen virtually 100% of the time. But if you try to run two mplayers, you are going to get a view that flickers between frames of the one and the other, as they race to draw to the buffer.

Some useful links:

- http://moi.vonos.net/linux/framebuffer-drivers/
- https://www.kernel.org/doc/Documentation/fb/framebuffer.txt
How can I add an additional framebuffer device in Linux?
I typed man sudoers but got:

    man: can't set the locale; make sure $LC_* and $LANG are correct
    No manual entry for sudoers

What does this mean?
Your locale isn't set. On Debian-based systems you should use

    dpkg-reconfigure locales

to set it. Some packages depend on the locales package and its variables, such as the LC_* series. The error means that $LANG is empty.
what do I need to do with "man: can't set the locale; make sure $LC_* and $LANG are correct"
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
              11.50    0.02    5.38    0.07    0.00   83.04

    Device: rrqm/s wrqm/s   r/s   w/s   rkB/s    wkB/s avgrq-sz avgqu-sz await svctm %util
    sdc       0.01  89.92  0.26 41.59    3.36   457.19    22.01     0.23  5.60  0.09  0.38
    sdb       0.10  15.59  0.40 14.55    8.96   120.57    17.33     0.04  2.91  0.07  0.11
    sda       0.13  45.37  0.96  8.09   20.06   213.56    51.63     0.02  2.64  0.16  0.14
    sde       0.01  31.83  0.09 11.34    0.94   103.56    18.29     0.04  3.52  0.14  0.16
    sdd       0.01  48.01  0.13 19.81    1.58   202.16    20.44     0.11  5.62  0.13  0.25

Is there a way to know what files are being written (457 kB/s)?

Also, this other Linux machine has the same problem:

    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
              20.50    0.00   46.48   20.74    0.00   12.28

    Device: rrqm/s wrqm/s   r/s   w/s   rkB/s    wkB/s avgrq-sz avgqu-sz await svctm %util
    sda       0.17  11.61  0.99  3.51   36.65    59.43    42.70     0.10 23.20  3.84  1.73
    sdb       0.55 224.18 24.30 97.45  246.48  1287.12    25.19     3.96 32.53  7.88 95.91
    sdd       0.53 226.75 25.56 90.96  283.50  1271.69    26.69     3.43 29.44  8.22 95.75
    sdc       0.00   1.76  0.28  0.06    4.73     7.26    70.41     0.00 12.00  2.12  0.07
    dm-0      0.00   0.00  1.11 14.77   36.41    58.92    12.01     1.00 62.86  1.09  1.74
    dm-1      0.00   0.00  0.04  0.12    0.17     0.49     8.00     0.00 21.79  2.47  0.04
    dm-2      0.00   0.00  0.01  0.00    0.05     0.01     8.50     0.00  7.90  2.08  0.00

1200 write requests per second for a server that hosts nothing.
Well, you could try the following commands, which worked for me on RHEL 6.

Whatever device you see performing the most I/O in the iostat output, use it with the fuser command (from the psmisc package) as follows:

    fuser -uvm device

You will get a list of processes, with user names, causing the most I/O. Take those PIDs and use them in the lsof command as follows:

    lsof -p PID | more

You will get a list of files/directories, along with the user, involved in the I/O.
I have high io stat. High writes. But what files are being written?
Using Fedora 24, I had configured an external USB drive in /etc/fstab:

    UUID=6826692e-79f4-4423-8467-cef4d5e840c5 /backup/external ext4 defaults 0 0

When I unplugged the USB disk and rebooted, the system did not boot. This is the error message:

    [ TIME ] Timed out waiting for device dev-disk-by\x2duuid-6826692e\x2d79f4\x2d4423\x2d8467\x2dcef4d5e840c5.device.
    [DEPEND] Dependency failed for /backup/external.
    [DEPEND] Dependency failed for Local File Systems.
    [DEPEND] Dependency failed for Relabel all filesystems, if necessary.
    [DEPEND] Dependency failed for Mark the need to relabel after reboot.

Why does it not boot? Is it a bug or a feature of systemd? I know it was my mistake (I should have set the "noauto" option), but anyway: why does the boot process stop because a non-critical FHS directory is missing?
Using the nofail mount option will make missing drives be ignored during boot. See the fstab(5) and mount(8) man pages:

    nofail    Do not report errors for this device if it does not exist.

So your fstab line should instead look like:

    UUID=6826692e-79f4-4423-8467-cef4d5e840c5 /backup/external ext4 defaults,nofail 0 0
Cannot boot because missing external disk
If I do

    watch cat /proc/sys/kernel/random/entropy_avail

I see that my system's entropy slowly increases over time, until it reaches the 180-190 range, at which point it drops down to around 120-130. The drops in entropy seem to occur about every twenty seconds. I observe this even when lsof says that no process has /dev/random or /dev/urandom open. What is draining away the entropy? Does the kernel need entropy as well, or is it maybe reprocessing the larger pool into a smaller, better-quality pool?

This is on a bare-metal machine, with no SSL/SSH/WPA connections.
Entropy is not only lost via /dev/{,u}random, the kernel also takes some. For example, new processes have randomized addresses (ASLR) and network packets need random sequence numbers. Even the filesystem module may remove some entropy. See the comments in drivers/char/random.c. Also note that entropy_avail refers to the input pool, not the output pools (basically the non-blocking /dev/urandom and the blocking /dev/random). If you need to watch the entropy pool, do not use watch cat, that will consume entropy at every invocation of cat. In the past I also wanted to watch this pool as GPG was very slow at generating keys, therefore I wrote a C program with the sole purpose to watch the entropy pool: https://git.lekensteyn.nl/c-files/tree/entropy-watcher.c. Note that there may be background processes which also consume entropy. Using tracepoints on an appropriate kernel you can see the processes that modify the entropy pool. Example usage that records all tracepoints related to the random subsystem including the callchain (-g) on all CPUs (-a) starting measuring after 1 second to ignore its own process (-D 1000) and including timestamps (-T): sudo perf record -e random:\* -g -a -D 1000 -T sleep 60 Read it with either of these commands (change owner of perf.data as needed): perf report # opens an interactive overview perf script # outputs events after each other with traces The perf script output gives an interesting insight and shows when about 8 bytes (64 bits) of entropy is periodically drained on my machine: kworker/0:2 193 [000] 3292.235908: random:extract_entropy: ffffffff8173e956 pool: nbytes 8 entropy_count 921 caller _xfer_secondary_pool 5eb857 extract_entropy (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 5eb984 _xfer_secondary_pool (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 5ebae6 push_to_pool (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 293a05 process_one_work (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 293ce8 worker_thread (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 
299998 kthread (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 7c7482 ret_from_fork (/lib/modules/4.6.2-1-ARCH/build/vmlinux) kworker/0:2 193 [000] 3292.235911: random:debit_entropy: ffffffff8173e956: debit_bits 64 5eb3e8 account.part.12 (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 5eb770 extract_entropy (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 5eb984 _xfer_secondary_pool (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 5ebae6 push_to_pool (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 293a05 process_one_work (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 293ce8 worker_thread (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 299998 kthread (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 7c7482 ret_from_fork (/lib/modules/4.6.2-1-ARCH/build/vmlinux) ... swapper 0 [002] 3292.507720: random:credit_entropy_bits: ffffffff8173e956 pool: bits 2 entropy_count 859 entropy_total 2 caller add_interrupt_randomness 5eaab6 credit_entropy_bits (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 5ec644 add_interrupt_randomness (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 2d5729 handle_irq_event_percpu (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 2d58b9 handle_irq_event (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 2d8d1b handle_edge_irq (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 230e6a handle_irq (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 7c9abb do_IRQ (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 7c7bc2 ret_from_intr (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 6756c7 cpuidle_enter (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 2bd9fa call_cpuidle (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 2bde18 cpu_startup_entry (/lib/modules/4.6.2-1-ARCH/build/vmlinux) 2510e5 start_secondary (/lib/modules/4.6.2-1-ARCH/build/vmlinux) Apparently this happens to prevent waste of entropy by transferring entropy from the input pool to outputs pools: /* * Credit (or debit) the entropy store with n bits of entropy. * Use credit_entropy_bits_safe() if the value comes from userspace * or otherwise should be checked for extreme values. 
*/ static void credit_entropy_bits(struct entropy_store *r, int nbits) { ... /* If the input pool is getting full, send some * entropy to the two output pools, flipping back and * forth between them, until the output pools are 75% * full. */ ... schedule_work(&last->push_work); } /* * Used as a workqueue function so that when the input pool is getting * full, we can "spill over" some entropy to the output pools. That * way the output pools can store some of the excess entropy instead * of letting it go to waste. */ static void push_to_pool(struct work_struct *work) { ... }
What keeps draining entropy?
1,344,351,634,000
I'm trying to find the speed of a network interface using a file descriptor. It's easy to do for ethX, just by calling cat /sys/class/net/eth0/speed. Unfortunately, this method doesn't work with wireless interfaces. When I read /sys/class/net/wlan0/speed I get an error: invalid argument. So, is there an analog of /sys/class/net/eth0/speed for WLAN interfaces?
You can use the iwconfig tool to find this info out: $ iwconfig wlan0 wlan0 IEEE 802.11bg ESSID:"SECRETSSID" Mode:Managed Frequency:2.462 GHz Access Point: 00:10:7A:93:AE:BF Bit Rate=48 Mb/s Tx-Power=14 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=55/70 Signal level=-55 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 If you want the bit rate from /sys, directly try this: $ cat /sys/class/net/wlan0/wireless/link 51 Or from /proc: $ cat /proc/net/wireless Inter-| sta-| Quality | Discarded packets | Missed | WE face | tus | link level noise | nwid crypt frag retry misc | beacon | 22 wlan0: 0000 56. -54. -256 0 0 0 0 0 0 NOTE: The value for the link in the 2nd example is 56, for e.g. I believe the MB/s is a calculated value, so it won't be stored anywhere specifically for the wlan0 device. I think it's taking the aggregate bits transferred over the interface and dividing it by the time it took said data to be transferred. One additional way to get this information is using the tool iw. This tool is a nl80211 based CLI configuration utility for wireless devices. It should be on any recent Linux distro. $ iw dev wlan0 link Connected to 00:10:7A:93:AE:BF (on wlan0) SSID: SECRETSSID freq: 2462 RX: 89045514 bytes (194863 packets) TX: 34783321 bytes (164504 packets) signal: -54 dBm tx bitrate: 48.0 MBit/s This also shows the amount of sent and received packets (RX/TX).
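If you need just one field from /proc/net/wireless in a script, a small awk one-liner will do. The sketch below is fed the sample line from above so it runs even without a wireless device; on a live system you would read /proc/net/wireless instead:

```shell
# Pull the link-quality column (3rd field) out of /proc/net/wireless.
# On a real system:  awk '/wlan0:/ { gsub(/\./, ""); print $3 }' /proc/net/wireless
printf ' wlan0: 0000   56.  -54.  -256        0      0      0      0      0        0\n' |
awk '/wlan0:/ { gsub(/\./, ""); print $3 }'
# prints 56
```

The gsub strips the trailing dots that the kernel prints after the quality and level columns.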
How to find speed of WLAN interface?
1,409,341,045,000
I'm trying to determine if, in Linux, environment variables for a process are observable by other (non-root) users. The immediate use case is putting secrets into environment variables. This is discussed in many places throughout the web as being insecure, but I haven't been able to zero in on the exact exposure point in Linux. Note that I am not talking about putting cleartext secrets into files. Also note that I am not talking about exposure to the root account (I view attempting to hide secrets from an adversary with root as a nonstarter). This question appears to address mine, with comments that classify environment variables as being completely without security, or only simply being obfuscated, but how does one access them? In my tests one unprivileged user can't observe environment variables for another user through the process table ('ps auxwwe'). The commands that set environment variables (e.g. export) are shell builtins which don't make it onto the process table and by extension aren't in /proc/$pid/cmdline. /proc/$pid/environ is only readable by the UID of the process owner. Perhaps the confusion is between different operating systems or versions. Various (recent) sources across the web decry the insecurity of environment variables, but my spot-checking of different linux versions seems to indicate that this isn't possible going back at least to 2007 (probably further but I don't have boxes on hand to test). In Linux, how can a non-privileged user observe environment variables for another's processes?
As Gilles explained in a very comprehensive answer to a similar question on security.stackexchange.com, process environments are only accessible to the user that owns the process (and root of course).
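You can verify the enforcement point yourself: the kernel creates /proc/&lt;pid&gt;/environ with mode 0400, so only the owning user (and root) can read it. A quick check on your own process:

```shell
# /proc/<pid>/environ is created mode 0400 (owner read-only) by the kernel,
# which is exactly why other unprivileged users get "permission denied" on it.
stat -c '%a' /proc/self/environ                  # prints 400
tr '\0' '\n' < /proc/self/environ | head -n 3    # works only on your own process
```

Trying the same tr command on another user's /proc/&lt;pid&gt;/environ fails with EACCES.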
Are environment variables visible to unprivileged users on Linux?
1,409,341,045,000
I changed my MAC address with macchanger -A wlp68s0b1 at boot via crontab. Here is what happens when I disconnect and reconnect: While connecting after boot: rahman@debian:~$ macchanger -s wlp68s0b1 Current MAC: 00:22:31:c6:38:45 (SMT&C Co., Ltd.) Permanent MAC: 00:00:00:00:00:00 (FAKE CORPORATION) After disconnecting: rahman@debian:~$ macchanger -s wlp68s0b1 Current MAC: 16:7b:e7:3c:d3:cd (unknown) Permanent MAC: 00:00:00:00:00:00 (FAKE CORPORATION) After reconnecting: rahman@debian:~$ macchanger -s wlp68s0b1 Current MAC: 00:00:00:00:00:00 (FAKE CORPORATION) Permanent MAC: 00:00:00:00:00:00 (FAKE CORPORATION) And so on. With every disconnect I get a different random MAC address, which fades on reconnecting, giving me my real MAC address. What causes that, and how do I stop it? Some outputs: rahman@debian:~$ lspci -nn |grep 14e4 44:00.0 Network controller [0280]: Broadcom Limited BCM4313 802.11bgn Wireless Network Adapter [14e4:4727] (rev 01) rahman@debian:~$ uname -a Linux debian 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u5 (2017-09-19) x86_64 GNU/Linux rahman@debian:~$ sudo ifconfig enp0s25: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 ether 00:24:c0:7b:a8:8b txqueuelen 1000 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 device interrupt 20 memory 0xd4800000-d4820000 enp0s25:avahi: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 169.254.9.109 netmask 255.255.0.0 broadcast 169.254.255.255 ether 00:24:c0:7b:a8:8b txqueuelen 1000 (Ethernet) device interrupt 20 memory 0xd4800000-d4820000 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1 (Local Loopback) RX packets 9436 bytes 6584515 (6.2 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 9436 bytes 6584515 (6.2 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 wlp68s0b1:
flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.1.5 netmask 255.255.255.0 broadcast 192.168.1.255 inet6 fe80::6711:9875:eb78:24fc prefixlen 64 scopeid 0x20<link> inet6 fd9c:c172:b03b:ce00:f1e0:695e:7da0:91a prefixlen 64 scopeid 0x0<global> ether 00:00:00:00:00:00 txqueuelen 1000 (Ethernet) RX packets 484346 bytes 641850809 (612.1 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 368394 bytes 44259668 (42.2 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 rahman@debian:~$ sudo iwconfig lo no wireless extensions. enp0s25 no wireless extensions. wlp68s0b1 IEEE 802.11 ESSID:"3bdo" Mode:Managed Frequency:2.447 GHz Access Point: 9C:C1:72:B0:3B:D4 Bit Rate=65 Mb/s Tx-Power=30 dBm Retry short limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off Link Quality=54/70 Signal level=-56 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:4 Invalid misc:183 Missed beacon:0
NetworkManager will reset your MAC address during the Wi-Fi scanning. To permanently change your MAC address: Edit your /etc/NetworkManager/NetworkManager.conf as follows: [main] plugins=ifupdown,keyfile [ifupdown] managed=false [device] wifi.scan-rand-mac-address=no [keyfile] Edit your /etc/network/interfaces by adding the following line: pre-up ifconfig wlp68s0b1 hw ether xx:xx:xx:yy:yy:yy The xx:xx:xx:yy:yy:yy is the new MAC address obtained from the output of macchanger -A wlp68s0b1. Reboot and verify your settings. From Configuring MAC Address Randomization in the Arch Linux wiki: Randomization during Wi-Fi scanning is enabled by default, but it may be disabled by adding the following lines to /etc/NetworkManager/NetworkManager.conf or a dedicated configuration file under /etc/NetworkManager/conf.d: [device] wifi.scan-rand-mac-address=no Setting it to yes results in a randomly generated MAC address being used when probing for wireless networks.
How to stop MAC address from changing after disconnecting?
1,409,341,045,000
Is it enough to see getfacl giving no error, or do I have to check some other place to see whether or not ACLs are supported by the file systems?
If you're talking about a mounted filesystem, I don't know of any intrinsic way to tell whether ACL are possible. Note that “are ACL supported?” isn't a very precise question since there are several types of ACL around (Solaris/Linux/not-POSIX-after-all, NFSv4, OSX, …). Note that getfacl is useless as a test since it will happily report Unix permissions if that's all there is: you need to try setting an ACL to test. Still on mounted filesystem, you can check for the presence of acl in the mount options (which you can find in /proc/mount). Note that this isn't enough: you also need to take the kernel version and the filesystem type in consideration. Some filesystem types always have ACL available, regardless of mount options; this is the case for tmpfs, xfs and zfs. Some filesystems have ACL unless explicitly excluded; this is the case for ext4 since kernel 2.6.39.
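A practical probe along those lines (a sketch, not a definitive check): actually try to set an ACL on a scratch file on the filesystem in question, since getfacl succeeding proves nothing:

```shell
# Try to *set* an ACL on a scratch file; reading one back is not a valid test.
f=$(mktemp)    # put the scratch file on the filesystem you want to probe
if setfacl -m u:root:r "$f" 2>/dev/null; then
    echo "ACLs can be set here"
else
    echo "ACLs cannot be set here (filesystem, mount options, or setfacl missing)"
fi
rm -f "$f"
```

To probe a different filesystem, create the scratch file there instead (e.g. mktemp -p /mnt/point).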
How do I know ACLs are supported on my file system?
1,409,341,045,000
According to the Filesystem Hierarchy Standard the /bin directory should contain utilities needed in single user mode. In practice, many Linux distributions make the directory a symbolic link to /usr/bin. Similarly, /sbin is nowadays often a symbolic link to /usr/bin as well. What's the rationale behind the symlinks?
Short summary of the page suggested by don_crissti: Scattering utilities over different directories is no longer necessary and storing them all in /usr/bin simplifies the file system hierarchy. Also, the change makes Unix and Linux scripts / programmes more compatible. Historically the utilities in the /bin and /sbin directories were used to mount the usr partition. This job is nowadays done by initramfs, and splitting the directories therefore no longer serves any purpose. The simplified file system hierarchy means, for instance, that distributions no longer need to fix paths to binaries (as they're now all in /usr/bin).
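You can see whether your own system has been merged with a harmless one-line check; on merged distributions /bin is literally a symlink:

```shell
# On a usr-merged system this prints the symlink target (usually usr/bin);
# on an unmerged one it reports a real directory.
if [ -L /bin ]; then
    printf '/bin -> %s\n' "$(readlink /bin)"
else
    echo '/bin is a real directory (not usr-merged)'
fi
```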
Why is /bin a symbolic link to /usr/bin?
1,409,341,045,000
I have a long-term running script and I forgot to redirect its output to a file. I can see it in a terminal, but can I save it to a file? I'm not asking for tee, output redirection (e.g. >, >>) etc - the command has started, and I can't run it again. I need to save already generated output. If I can see it on my display, it is somewhere stored/cached/buffered. Where? screendump, /dev/vcsX and so on allows me to save only last screen on terminal output (not the current! - scrolling terminal doesn't help). This is on a Linux virtual console, not a X11 terminal emulator like gnome-terminal with mouse and other goodies.
/dev/vcs[a]<n> will only get you the last screen-full even if you've scrolled up, but the selection ioctl()s as used by gpm will allow you to dump the currently displayed screen even when you've scrolled up. So you can do: sleep 3; perl -e ' require "sys/ioctl.ph"; # copy: ioctl(STDIN, &TIOCLINUX, $arg = pack("CS5", 2, 1, 1, 80, 25, 2)); # paste: ioctl(STDIN, &TIOCLINUX, $arg = "\3")'; cat > file Adjust the 80 and 25 to your actual screen width and height. The sleep 3 gives you time to scroll up (with Shift+PageUp) to the actual screen you want to dump. cat > file redirects the paste to file. Finish it with Ctrl+D. See console_ioctl(4) for details. If you have gpm installed and running, you can do that selection with the mouse. The Linux virtual console scrollback and selection are very limited and quite annoying (in that when you switch console, you lose the whole scrollback). Going forward, I'd suggest you use things like GNU screen or tmux within it (I personally use them in even more capable terminals). With them, you can have larger searchable scrollbacks and easily dump them to files (and even log all the terminal output, plus all the other goodies that come with those terminal multiplexers). As to automating the process to dump the whole scrollback buffer, it should be possible under some conditions, but quite difficult as the API is very limited. There is an undocumented ioctl (TIOCLINUX, subcode=13) to scroll the current virtual console by some offset (negative for scrolling up, positive for scrolling down). There is however no way (that I know) to know the current size of the scrollback buffer. So it's hard to know when you've reached the top of that buffer. If you attempt to scroll past it, the screen will not be shifted by as much and there's no reliable way to know by how much the screen has actually scrolled.
I also find the behaviour of the scrolling ioctl erratic (at least with the VGA console), where scrolling by less than 4 lines works only occasionally. The script below seems to work for me on frame buffer consoles (and occasionally on VGA ones) provided the scrollback buffer doesn't contain sequences of identical lines longer than one screen plus one line. It's quite slow because it scrolls one line at a time, and needs to wait 10ms for eof when reading each screen dump. To be used as that-script > file from within the virtual console. #! /usr/bin/perl require "sys/ioctl.ph"; ($rows,$cols) = split " ", `stty size`; $stty = `stty -g`; chomp $stty; system(qw(stty raw -echo icrnl min 0 time 1)); sub scroll { ioctl(STDIN, &TIOCLINUX, $arg = pack("Cx3l", 13, $_[0])) or die "scroll: $!"; } sub grab { ioctl(STDIN, &TIOCLINUX, $arg = pack("CS5", 2, 1, 1, $cols, $rows, 2)) or die "copy: $!"; ioctl(STDIN, &TIOCLINUX, $arg = "\3") or die "paste: $!"; return <STDIN>; } for ($s = 0;;$s--) { scroll $s if $s; @lines = grab; if ($s) { last if "@lines" eq "@lastlines"; unshift @output, $lines[0]; } else { @output = @lines; } @lastlines = @lines; } print @output; exec("stty", $stty);
Is it possible to save Linux virtual console content and scrollback in a file?
1,409,341,045,000
ls --hide and ls --ignore provide the possibility of leaving out files matched by the pattern given after the --hide=/--ignore= part. The latter makes sure the option isn't turned off via -a or -A. The command's man and info pages mention regular expressions. Question: which wildcards or regular expressions are supported in ls --hide= and ls --ignore=? I found out that *, $ and ? seem to be supported, as well as POSIX bracket expressions, but this doesn't seem to work properly all the time and has been more a game of trial and error for me. Did I miss something important here?
From the manual: -I pattern, --ignore=pattern In directories, ignore files whose names match the shell pattern (not regular expression) pattern. As in the shell, an initial . in a file name does not match a wildcard at the start of pattern. Sometimes it is useful to give this option several times. For example, $ ls --ignore='.??*' --ignore='.[^.]' --ignore='#*' The first option ignores names of length 3 or more that start with ., the second ignores all two-character names that start with . except .., and the third ignores names that start with #. You can use only shell glob patterns: * matches any number of characters, ? matches any one character, […] matches the characters within the brackets and \ quotes the next character. The character $ stands for itself (make sure it's within single quotes or preceded by a \ to protect it from shell expansion).
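A quick demonstration in a scratch directory (assuming GNU ls); note the quoting, which keeps the shell itself from expanding the globs so that ls sees them:

```shell
# The patterns are shell globs, and must be quoted so ls (not the shell) sees them.
d=$(mktemp -d) && cd "$d"
touch report.txt notes.log '#autosave#' data.csv
ls --ignore='#*' --ignore='*.log'
# lists only data.csv and report.txt
```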
syntax of ls --hide= and ls --ignore=
1,409,341,045,000
As I was investigating a server that is rebooting in a regular fashion, I started looking through the "last" utility but the problem is that I am unable to find what the columns mean exactly. I have, of course, looked through the man but it does not contain this information. root@webservice1:/etc# last reboot reboot system boot 3.2.13-grsec-xxx Thu Apr 12 09:44 - 09:58 (00:13) reboot system boot 3.2.13-grsec-xxx Thu Apr 12 09:34 - 09:43 (00:08) reboot system boot 3.2.13-grsec-xxx Thu Apr 12 09:19 - 09:33 (00:13) reboot system boot 3.2.13-grsec-xxx Thu Apr 12 08:51 - 09:17 (00:25) reboot system boot 3.2.13-grsec-xxx Thu Apr 12 00:11 - 09:17 (09:05) reboot system boot 3.2.13-grsec-xxx Wed Apr 11 19:40 - 09:17 (13:36) reboot system boot 3.2.13-grsec-xxx Sun Apr 8 22:06 - 09:17 (3+11:10) reboot system boot 3.2.13-grsec-xxx Sat Apr 7 14:31 - 09:17 (4+18:45) reboot system boot 3.2.13-grsec-xxx Fri Apr 6 10:20 - 09:17 (5+22:56) reboot system boot 3.2.13-grsec-xxx Thu Apr 5 00:16 - 09:17 (7+09:01) reboot system boot 3.2.13-grsec-xxx Tue Apr 3 07:34 - 09:17 (9+01:42) reboot system boot 3.2.13-grsec-xxx Tue Apr 3 02:31 - 09:17 (9+06:45) reboot system boot 3.2.13-grsec-xxx Mon Apr 2 23:17 - 09:17 (9+09:59) The first columns makes sense up to the kernel versions included. What do these times represent exactly ? The last one seems to be the uptime. Secondly, this is supposed to be a server on 24/7 except the times don't seem to match which could mean that it is experiencing downtime or somthing similar. For example, if we look at the two last lines, does it mean that my server was off from Apr 2 09:17 until Apr3 02:31 ? As for the background information, this is a Debian Squeeze server. 
EDIT If the last columns are start time, stop time and uptime, how can you interpret these two lines: reboot system boot 3.2.13-grsec-xxx Tue Apr 3 07:34 - 09:17 (9+01:42) reboot system boot 3.2.13-grsec-xxx Tue Apr 3 02:31 - 09:17 (9+06:45) The second session seems to end after the first one starts, which doesn't make sense to me.
I guess this is a three year old post, but I'll respond anyway, for the benefit of anyone else who happens across it in the future, like I just did recently. From reading other posts and monitoring the output myself over a period of time, it looks like each line lists the start date and time of the session, the end time of the session (but not the end date), and the duration of the session (how long they were logged in) in a format like (days+hours:minutes) The reboot user appears to be noted as having logged in whenever the system is started, and off when the system was rebooted or shutdown, and on those lines, the "session duration" information is the length of time (days+hours:minutes) that "session" lasted, that is, how long the system was up before it was shutdown. For me, the most recent reboot entry shows the current time as the "logged off" time, and the session duration data for that entry matches the current uptime output. So on this line: reboot system boot 3.2.13-grsec-xxx Tue Apr 3 07:34 - 09:17 (9+01:42) The system was started on Tuesday, April 3rd, at 7:34 am, and it was shutdown 9 days and 1 hour and 42 minutes later (on April 12th), at 9:17 in the morning. (Or, this output was gathered at that time, and this is the most recent reboot entry, and "reboot" hasn't actually "logged off" yet. In which case the output will change if you run the last command again.) Why you would have 2 entries for the reboot user, on April 3rd, that were both 9 days long, is a mystery to me; my systems don't do that.
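If you want to post-process that last column programmatically, the "(days+HH:MM)" format is easy to normalize to minutes. A small sketch (the helper name is made up):

```shell
# Hypothetical helper: convert last's session-duration field to whole minutes.
# Handles both "(9+01:42)" (days+HH:MM) and "(00:13)" (HH:MM).
last_minutes() {
    printf '%s\n' "$1" | tr -d '()' | awk -F'[+:]' \
        'NF == 3 { print ($1*24 + $2)*60 + $3 }
         NF == 2 { print $1*60 + $2 }'
}
last_minutes '(9+01:42)'   # prints 13062
last_minutes '(00:13)'     # prints 13
```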
Meanings of the columns in "last" command
1,409,341,045,000
I'm getting these error messages every single time I reboot my desktop (and a couple of more I don't know how to retain when it's shutting down, but those are not relevant to this question so far): [gorre@uplink ~]$ journalctl -p err..alert ... -- Reboot -- May 11 21:47:03 uplink kernel: ACPI BIOS Error (bug): Failure looking up [\_SB.PCI0.RP04.PXSX._SB.PCI0.RP05.PXSX], AE_NOT_FOUND (20180105/dswload2-194) May 11 21:47:03 uplink kernel: ACPI Error: AE_NOT_FOUND, During name lookup/catalog (20180105/psobject-252) May 11 21:47:03 uplink kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.RP04.PXSX, AE_NOT_FOUND (20180105/psparse-550) May 11 21:47:03 uplink kernel: ACPI BIOS Error (bug): Failure looking up [\_SB.PCI0.RP08.PXSX._SB.PCI0.RP09.PXSX], AE_NOT_FOUND (20180105/dswload2-194) May 11 21:47:03 uplink kernel: ACPI Error: AE_NOT_FOUND, During name lookup/catalog (20180105/psobject-252) May 11 21:47:03 uplink kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.RP08.PXSX, AE_NOT_FOUND (20180105/psparse-550) May 12 07:09:30 uplink kernel: rtc_cmos 00:03: Alarms can be up to one month in the future -- Reboot -- May 12 07:10:32 uplink kernel: ACPI BIOS Error (bug): Failure looking up [\_SB.PCI0.RP04.PXSX._SB.PCI0.RP05.PXSX], AE_NOT_FOUND (20180105/dswload2-194) May 12 07:10:32 uplink kernel: ACPI Error: AE_NOT_FOUND, During name lookup/catalog (20180105/psobject-252) May 12 07:10:32 uplink kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.RP04.PXSX, AE_NOT_FOUND (20180105/psparse-550) May 12 07:10:32 uplink kernel: ACPI BIOS Error (bug): Failure looking up [\_SB.PCI0.RP08.PXSX._SB.PCI0.RP09.PXSX], AE_NOT_FOUND (20180105/dswload2-194) May 12 07:10:32 uplink kernel: ACPI Error: AE_NOT_FOUND, During name lookup/catalog (20180105/psobject-252) May 12 07:10:32 uplink kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.RP08.PXSX, AE_NOT_FOUND (20180105/psparse-550) I found this article that states someone can add this line: echo "disable" > 
/sys/firmware/acpi/interrupts/gpe6F to /etc/rc.local, but I'm not sure if that's the correct solution...moreover, if that's only "patching" the error messages, but not fixing the underlying problem ‒ if any. Or maybe should I wait for an upgrade? I'm using: [gorre@uplink ~]$ uname -a Linux uplink 4.16.8-1-ARCH #1 SMP PREEMPT Wed May 9 11:25:02 UTC 2018 x86_64 GNU/Linux ...and this is my hardware: Corsair RMX750 (750 Watt) 80+ Gold Fully Modular Power Supply Intel Core i7-8700 (BX80684I78700) Processor Asus Prime Z370-P Corsair Force MP500 M.2 2280 240GB NVMe PCI-Express 3.0 x4 MLC SSD Corsair Vengeance LPX 32GB (2 x 16GB) 288-Pin DDR4 SDRAM DDR4 2666 (PC4 21300) UPDATE New kernel 4.19.13-1-lts update: $ uname -a Linux uplink 4.19.13-1-lts #1 SMP Sun Dec 30 07:38:47 CET 2018 x86_64 GNU/Linux ...and the error/warning messages are finally gone! -- Reboot -- Dec 28 09:40:42 uplink kernel: ACPI Error: [_SB_.PCI0.RP05.PXSX] Namespace lookup failure, AE_NOT_FOUND (20170728/dswload2-191) Dec 28 09:40:42 uplink kernel: ACPI Exception: AE_NOT_FOUND, During name lookup/catalog (20170728/psobject-252) Dec 28 09:40:42 uplink kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.RP04.PXSX, AE_NOT_FOUND (20170728/psparse-550) Dec 28 09:40:42 uplink kernel: ACPI Error: [_SB_.PCI0.RP09.PXSX] Namespace lookup failure, AE_NOT_FOUND (20170728/dswload2-191) Dec 28 09:40:42 uplink kernel: ACPI Exception: AE_NOT_FOUND, During name lookup/catalog (20170728/psobject-252) Dec 28 09:40:42 uplink kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.RP08.PXSX, AE_NOT_FOUND (20170728/psparse-550) Dec 28 09:41:08 uplink gnome-session-binary[712]: Unrecoverable failure in required component org.gnome.Shell.desktop Dec 28 11:48:13 uplink flatpak[7192]: libostree HTTP error from remote flathub for <https://dl.flathub.org/repo/objects/3d/b5370c04103b9acd46bca2f315fb4855649926120b099a> Dec 28 11:48:16 uplink flatpak[7192]: libostree HTTP error from remote flathub for 
<https://dl.flathub.org/repo/objects/e0/a43c4cbae106fc801d3c7bcc004b8222e9bf0528beef04> Dec 29 12:19:37 uplink kernel: rtc_cmos 00:03: Alarms can be up to one month in the future Dec 30 09:03:02 uplink kernel: rtc_cmos 00:03: Alarms can be up to one month in the future Dec 30 19:07:11 uplink kernel: [drm:intel_pipe_update_end [i915]] *ERROR* Atomic update failure on pipe A (start=952715 end=952716) time 142 us, min 1073, max 1079, scan> Dec 31 08:11:28 uplink kernel: rtc_cmos 00:03: Alarms can be up to one month in the future -- Reboot -- Jan 01 10:23:42 uplink gnome-session-binary[516]: Unrecoverable failure in required component org.gnome.Shell.desktop
Your hardware is too new, so to speak. The bugs you are seeing are harmless and may persist for some time. You could try upgrading your BIOS; that is the top priority. Then, you could try installing the non-free intel-microcode package. See if these two options work for you first. Today, I assembled a computer with the very same CPU and am seeing the same bugs, on just another motherboard. Update 2018-Dec-1 The error on my Dell laptop with a very recent UEFI BIOS update is still present, as per the log: Dec 01 06:27:07 dell-7577 kernel: ACPI Error: [\_SB_.PCI0.XHC_.RHUB.HS11] Namespace lookup failure, AE_NOT_FOUND (20170831/dswload-210) Dec 01 06:27:07 dell-7577 kernel: ACPI Exception: AE_NOT_FOUND, During name lookup/catalog (20170831/psobject-252) Dec 01 06:27:07 dell-7577 kernel: ACPI Exception: AE_NOT_FOUND, (SSDT:xh_OEMBD) while loading table (20170831/tbxfload-228) Dec 01 06:27:07 dell-7577 kernel: ACPI Error: 1 table load failures, 13 successful (20170831/tbxfload-246)
ACPI BIOS Error / AE_NOT_FOUND
1,409,341,045,000
I tested this on different GNU/Linux installations: perl -e 'while(1){open($a{$b++}, "<" ,"/dev/null") or die $b;print " $b"}' System A and D The first limit I hit is 1024. It is easily raised by putting this into /etc/security/limits.conf: * hard nofile 1048576 and then run: ulimit -n 1048576 echo 99999999 | sudo tee /proc/sys/fs/file-max Now the test goes to 1048576. However, it seems I cannot raise it above 1048576. If I put 1048577 in limits.conf it is simply ignored. What is causing that? System B On system B I cannot even get to 1048576: echo 99999999 | sudo tee /proc/sys/fs/file-max /etc/security/limits.conf: * hard nofile 1048576 Here I get: $ ulimit -n 65537 bash: ulimit: open files: cannot modify limit: Operation not permitted $ ulimit -n 65536 #OK Where did that limit come from? System C This system also has the 1048576 limit in limits.conf and 99999999 in /proc/sys/fs/file-max. But here the limit is 4096: $ ulimit -n 4097 -bash: ulimit: open files: cannot modify limit: Operation not permitted $ ulimit -n 4096 # OK How do I raise that to (at least) 1048576? (Note to self: Don't do: echo 18446744073709551616 | sudo tee /proc/sys/fs/file-max)
Check that /etc/ssh/sshd_config contains: UsePAM=yes and that /etc/pam.d/sshd contains: session required pam_limits.so In the comment below @venimus states the 1M limit is hardcoded: the kernel 2.6.x source has ./fs/file.c:30:int sysctl_nr_open __read_mostly = 1024*1024; which is 1048576 (on more recent kernels this default can itself be raised via the fs.nr_open sysctl). The 1048576 limit is per process, so by running multiple processes this limit can be overcome.
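Note the soft/hard distinction when testing: an unprivileged process may move its soft limit anywhere up to the hard limit, but can only lower the hard limit. A sketch you can run without root:

```shell
# Soft limits are freely adjustable below the hard limit; use a subshell
# so the invoking shell keeps its own limits.
( ulimit -S -n 1024; ulimit -S -n )   # typically prints 1024 (if the hard limit allows)
ulimit -H -n                          # the ceiling the soft limit can reach
```

If the first command complains "cannot modify limit", your hard limit is below 1024, which itself tells you where the cap is coming from.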
Fixing ulimit: open files: cannot modify limit: Operation not permitted
1,409,341,045,000
A month ago I wrote a Python script to map MAC and IP addresses from stdin. Two days ago I remembered it and used it to filter the output of tcpdump, but it went wrong because of a typo. I typed tcpdump -ne > ./mac_ip.py and the output was nothing. But the output should be "Unknown" if it can't parse the input, so I did cat ./mac_ip.py and found all the tcpdump data instead of the program. Then I realized that I should have used tcpdump -ne | ./mac_ip.py Is there any way to get my program back? I can write the program again, but if this happens again with a more important program I'd like to be able to do something about it. Or is there any way to tell output redirection to check the target file and warn if it is an executable?
Sadly I suspect you'll need to rewrite it. (If you have backups, this is the time to get them out. If not, I would strongly recommend you set up a backup regime for the future. Lots of options available, but off topic for this answer.) I find that putting executables in a separate directory, and adding that directory to the PATH is helpful. This way I don't need to reference the executables by explicit path. My preferred programs directory for personal (private) scripts is "$HOME"/bin and it can be added to the program search path with PATH="$HOME/bin:$PATH". Typically this would be added to the shell startup scripts .bash_profile and/or .bashrc. Finally, there's nothing stopping you removing write permission for yourself on all executable programs: touch some_executable.py chmod a+x,a-w some_executable.py # chmod 555, if you prefer ls -l some_executable.py -r-xr-xr-x+ 1 roaima roaima 0 Jun 25 18:33 some_executable.py echo "The hunting of the Snark" > ./some_executable.py -bash: ./some_executable.py: Permission denied
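The effect of dropping write permission is easy to check in a scratch directory (note that root bypasses the permission bits, so run this as a regular user to see the redirection refused):

```shell
d=$(mktemp -d) && cd "$d"
printf '#!/bin/sh\necho hello\n' > mac_ip_demo.sh   # a stand-in for your script
chmod 555 mac_ip_demo.sh
stat -c '%a' mac_ip_demo.sh    # prints 555
# As a non-root user, an accidental `>` now fails instead of truncating:
echo "tcpdump noise" > mac_ip_demo.sh 2>/dev/null || echo "clobber refused"
```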
Accidentally used the output redirection > instead of a pipe |
1,409,341,045,000
One program created lots of nested sub-folders. I tried to use command rm -fr * to remove them all. But it's very slow. I'm wondering is there any faster way to delete them all?
The fastest way to remove them from that directory is to move them out of there, after that just remove them in the background:
mkdir ../.tmp_to_remove
mv -- * ../.tmp_to_remove
rm -rf ../.tmp_to_remove &
This assumes that your current directory is not the toplevel of some mounted partition (i.e. that ../.tmp_to_remove is on the same filesystem). The -- after mv (as edited in by Stéphane) is necessary if you have any file/directory names starting with a -. The above removes the files from your current directory in a fraction of a second, as it doesn't have to recursively handle the subdirectories. The actual removal of the tree from the filesystem takes longer, but since it is out of the way, its actual efficiency shouldn't matter that much.
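Put together as a small self-contained demo (the scratch directory and the .tmp_to_remove.$$ name are illustrative; in real use you would run the last few commands from inside the directory you want to empty):

```shell
# Build a scratch directory so the demo is safe to run anywhere.
base=$(mktemp -d)
mkdir "$base/dir"; touch "$base/dir/a" "$base/dir/b" "$base/dir/c"
cd "$base/dir"

tmp=../.tmp_to_remove.$$     # $$ avoids name collisions between runs
mkdir "$tmp"
mv -- * "$tmp"               # near-instant: just renames on the same filesystem
rm -rf "$tmp" &              # the slow recursive delete runs in the background
wait                         # (only so this demo cleans up before exiting)
ls -A                        # the directory is now empty
cd /; rm -rf "$base"
```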
What's the fastest way to remove all files & subfolders in a directory? [duplicate]
1,409,341,045,000
It's said that compiling GNU tools and Linux kernel with -O3 gcc optimization option will produce weird and funky bugs. Is it true? Has anyone tried it or is it just a hoax?
It's used in Gentoo, and I didn't notice anything unusual.
Compiling GNU/Linux with -O3 optimization
1,409,341,045,000
I can use ls -l to get the logical size of a file, but is there a way to get the physical size of a file?
ls -l will give you the apparent size of the file, which is the number of bytes a program would read if it read the file from start to finish. du would give you the size of the file "on disk". By default, du gives you the size of the file in number of disk blocks, but you may use -h to get a human readable unit instead. See also the manual for du on your system. Note that with GNU coreutils' du (which is probably what you have on Linux), using -b to get bytes implies the --apparent-size option. This is not what you want to use to get the number of bytes actually used on disk. Instead, use --block-size=1 or -B 1. With GNU ls, you may also do ls -s --block-size=1 on the file. This will give the same number as du -B 1 for the file. Example:
$ ls -l file
-rw-r--r-- 1 myself wheel 536870912 Apr 8 11:44 file
$ ls -lh file
-rw-r--r-- 1 myself wheel 512M Apr 8 11:44 file
$ du -h file
24K file
$ du -B 1 file
24576 file
$ ls -s --block-size=1 file
24576 file
This means that this is a 512 MB file that takes about 24 KB on disk. It is a sparse file (mostly zeros that are not actually written to disk but represented as logical "holes" in the file). Sparse files are common when working with pre-allocated large files, e.g. disk images for virtual machines or swap files etc. Creating a sparse file is quick, while filling it with zeros is slow (and unnecessary). See also the manual for fallocate on your Linux system.
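The sparse-file situation in the example can be reproduced in a few commands (the mktemp path and the 512M size are illustrative):

```shell
# Demonstrate apparent size vs. size on disk by creating a sparse file
# (all "holes", so almost nothing is actually allocated).
f=$(mktemp)
truncate -s 512M "$f"
ls -l "$f"        # apparent size: 536870912 bytes
du -B 1 "$f"      # bytes actually allocated on disk (near zero)
rm -f "$f"
```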
How to get the physical size of a file in Linux?
1,409,341,045,000
In Linux, how do /etc/hosts and DNS work together to resolve hostnames to IP addresses? If a hostname can be resolved in /etc/hosts, does DNS apply after /etc/hosts to resolve the hostname, or does it treat the IP address resolved via /etc/hosts as a "hostname" to resolve recursively? In my browsers (Firefox and Google Chrome), when I add to /etc/hosts: 127.0.0.1 google.com www.google.com typing www.google.com into the address bar and pressing Enter won't connect to the website. After I remove that line from /etc/hosts, I can connect to the website. Does it mean that /etc/hosts overrides DNS for resolving hostnames? After I re-add the line to /etc/hosts, I can still connect to the website, even after refreshing the webpage. Why doesn't /etc/hosts apply again, so that I can't connect to the website? Thanks.
This is dictated by the NSS (Name Service Switch) configuration i.e. /etc/nsswitch.conf file's hosts directive. For example, on my system: hosts: files mdns4_minimal [NOTFOUND=return] dns Here, files refers to the /etc/hosts file, and dns refers to the DNS system. And as you can imagine whichever comes first wins. Also, see man 5 nsswitch.conf to get more idea on this. As an aside, to follow the NSS host resolution orderings, use getent with hosts as database e.g.: getent hosts example.com
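To pull the lookup order out of such a hosts: line programmatically, a small sketch (the sample line is the one quoted above; on a real system you would read /etc/nsswitch.conf instead):

```shell
# Print the NSS sources for host lookups, in the order they are tried,
# skipping action specifiers like [NOTFOUND=return].
line='hosts: files mdns4_minimal [NOTFOUND=return] dns'
printf '%s\n' "$line" | tr ' ' '\n' | grep -v -e '^hosts:' -e '^\['
```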
How do `/etc/hosts` and DNS work together to resolve hostnames to IP addresses?
1,409,341,045,000
I have a personal folder /a/b on the server with permission 700. I don't want others to list the contents of /a/b. The owner of /a is root. Now I need to grant full access to the directory /a/b/c to all users. I changed the permissions of /a/b/c to 777 but it is still inaccessible to others.
You can. You just have to set the execute bit on the /a/b directory. That will prevent being able to see anything in b, but you can still do everything if you go directly to a/b/c.
% mkdir -p a/b/c
% chmod 711 a/b
% sudo chown root a/b
% ll a/b
ls: cannot open directory a/b: Permission denied
% touch a/b/c/this.txt
% ls a/b/c
this.txt
Beware that while others cannot list the contents of /a/b, they can access files in that directory if they guess the name of the file.
% echo hello | sudo tee a/b/f
% cat a/b/f
hello
% cat a/b/doesntexist
cat: a/b/doesntexist: No such file or directory
So be sure to maintain proper permissions (no group/world) on all other files/directories within the b directory, as this will avoid this caveat.
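The layout above can be rebuilt and checked in a scratch directory (verifying the behaviour for *other* users needs a second account, so this only inspects the permission bits):

```shell
# Recreate the a/b/c layout and confirm the modes: 711 on b lets
# others traverse it without listing it; 777 opens c fully.
base=$(mktemp -d)
mkdir -p "$base/a/b/c"
chmod 711 "$base/a/b"    # others may traverse b but not list it
chmod 777 "$base/a/b/c"  # c itself is fully accessible
stat -c '%a %n' "$base/a/b" "$base/a/b/c"
rm -rf "$base"
```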
Can I make a public directory under a private directory?
1,409,341,045,000
I'd like to price some new RAM for our in-house VMware testing server. It's a consumer box we use for testing our software on and running business VMs. I've forgotten what kind of RAM it has and I'd rather not reboot the machine and fire up memtest86+ just to get the specs of the RAM. Is there any way I can know what kind of RAM to buy without shutting down Linux and kicking everyone off? For example, is the information somewhere in /proc?
You could try running (as root) dmidecode -t memory. I believe that's what lshw uses (as described in the other Answer), but it provides information in another form, and lshw isn't available on every linux distro. Also, in my case, dmidecode produces the Asset number, useful for plugging into Dell's support web site.
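Since dmidecode needs root and real SMBIOS tables, here is a sketch that parses a captured sample instead; the field values below are made up, and on a real machine you would feed it from sudo dmidecode -t memory:

```shell
# Extract the fields that matter when buying RAM (size, type, speed)
# from dmidecode-style output. Replace the here-document with
# `sudo dmidecode -t memory` on a live system.
awk -F': *' '/^[ \t]*(Size|Type|Speed):/ {
    gsub(/^[ \t]+/, "", $1)
    print $1 "=" $2
}' <<'EOF'
Memory Device
  Size: 8192 MB
  Form Factor: DIMM
  Type: DDR3
  Speed: 1600 MT/s
EOF
```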
Can I identify my RAM without shutting down Linux?
1,409,341,045,000
If my desktop runs out of memory and swaps a lot, I free memory or kill the application wasting my RAM. But after that, all my desktop applications have been swapped out and are horribly slow. Do you know a way to "unswap" (reload from swap space into RAM) my desktop applications?
If you really have enough RAM available again you can use this sequence (as root):
$ swapoff -a
$ swapon -a
(to force the explicit swap-in of all your applications) (assuming that you are using Linux)
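Before running swapoff it is worth checking that everything currently swapped out will fit in available RAM; a sketch using sample /proc/meminfo numbers (drop the here-document and read /proc/meminfo directly on a live system):

```shell
# Compare swap in use against available RAM before attempting swapoff.
awk '/^MemAvailable:/ { avail = $2 }
     /^SwapTotal:/    { total = $2 }
     /^SwapFree:/     { freekb = $2 }
     END {
         used = total - freekb
         printf "swap used: %d kB, RAM available: %d kB\n", used, avail
         if (avail > used) print "enough free RAM to swapoff"
         else              print "NOT enough free RAM"
     }' <<'EOF'
MemAvailable:    6291456 kB
SwapTotal:       2097152 kB
SwapFree:        1048576 kB
EOF
```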
How to re-load all running applications from swap space into RAM?
1,409,341,045,000
How could I go about creating my own "custom" Linux distro that will run just one program, pretty much exactly the same way as XBMCbuntu?
I would not start messing with LFS, that is a garden path leading to some dark woods. Start with a distro where you have a lot of control over the initial install, such as Arch, or a headless edition such as Ubuntu server. The point of this is not so much to save space as to delimit the complexity of the init configuration; starting from a headless distro, if the application you want to run requires a GUI, you can add what's required for that without having to end up with a GUI login (aka. the display manager or DM) started by init, and a fullblown desktop environment to go with it. You then want to learn how to configure the init system to your purposes -- note that you cannot do without init, and it may be the best means of accomplishing your goal. The init system used on most linux distros now is systemd. The point here is to minimize what init does at boot, and that is how you can create a system that will run a minimal amount of software to support the application you want to focus on -- this is essentially how a server is set up, BTW, so it is a common task (note that you can't literally have "just one" userland process running, at least not usefully). If the application you want to run is a GUI program (a good example of why you can't literally just run one application, since GUI apps require an X server), you can have an ~/.xinitrc that looks like this:
#!/bin/sh
myprogram
When you then startx, your program will be the only thing running, and it will be impossible to change desktops or start anything else, partially because there is no window manager or desktop environment (hence, there will be no window frame or titlebar either).
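The "minimize what init does" step can be sketched as a single systemd unit that starts your program at boot; the unit name and the path /usr/local/bin/myprogram are hypothetical placeholders, not something any particular distro ships:

```ini
# /etc/systemd/system/myprogram.service -- name and paths are examples
[Unit]
Description=Run a single application at boot
After=network.target

[Service]
ExecStart=/usr/local/bin/myprogram
Restart=always

[Install]
WantedBy=multi-user.target
```

You would enable it with systemctl enable myprogram.service. This is an untested configuration sketch, not a drop-in file; for a GUI program the ~/.xinitrc approach above is still needed on top.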
How to create a custom Linux distro that runs just one program and nothing else?
1,409,341,045,000
I think I rather understand how file permissions work in Linux. However, I don't really understand why they are split into three levels and not into two. I'd like the following issues answered: Is this deliberate design or a patch? That is - was the owner/group permissions designed and created together with some rationale or did they come one after another to answer a need? Is there a scenario where the user/group/other scheme is useful but a group/other scheme will not suffice? Answers to the first should quote either textbooks or official discussion boards. Use cases I have considered are: private files - very easily obtainable by making a group per-user, something that is often done as is in many systems. allowing only the owner (e.g. system service) to write to a file, allowing only a certain group to read, and deny all other access - the problem with this example is that once the requirement is for a group to have write access, the user/group/other fails with that. The answer for both is using ACLs, and doesn't justify, IMHO, the existence of owner permissions. NB I have refined this question after having the question closed in superuser.com. EDIT corrected "but a group/owner scheme will not suffice" to "...group/other...".
History
Originally, Unix only had permissions for the owning user, and for other users: there were no groups. See the documentation of Unix version 1, in particular chmod(1). So backward compatibility, if nothing else, requires permissions for the owning user. Groups came later. ACLs, which allow involving more than one group in the permissions of a file, came much later.
Expressive power
Having three permissions for a file allows finer-grained permissions than having just two, at a very low cost (a lot lower than ACLs). For example, a file can have mode rw-r-----: writable only by the owning user, readable by a group. Another use case is setuid executables that are only executable by one group. For example, a program with mode rwsr-x--- owned by root:admin allows only users in the admin group to run that program as root. "There are permissions that this scheme cannot express" is a terrible argument against it. The applicable criterion is, are there enough common expressible cases that justify the cost? In this instance, the cost is minimal, especially given the other reasons for the user/group/other triptych.
Simplicity
Having one group per user has a small but not insignificant management overhead. It is good that the extremely common case of a private file does not depend on this. An application that creates a private file (e.g. an email delivery program) knows that all it needs to do is give the file the mode 600. It doesn't need to traverse the group database looking for the group that only contains the user (and what to do if there is no such group, or more than one?). Coming from another direction, suppose you see a file and you want to audit its permissions (i.e. check that they are what they should be). It's a lot easier when you can go "only accessible to the user, fine, next" than when you need to trace through group definitions. (Such complexity is the bane of systems that make heavy use of advanced features such as ACLs or capabilities.)
Orthogonality
Each process performs filesystem accesses as a particular user and a particular group (with more complicated rules on modern unices, which support supplementary groups). The user is used for a lot of things, including testing for root (uid 0) and signal delivery permission (user-based). There is a natural symmetry between distinguishing users and groups in process permissions and distinguishing users and groups in filesystem permissions.
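The two example modes from "Expressive power" can be set and inspected on scratch files (exercising the setuid case end-to-end would need root and a second group, so only the bits are shown):

```shell
# Apply the rw-r----- and rwsr-x--- modes discussed above and
# inspect the resulting permission strings.
d=$(mktemp -d)
touch "$d/mailbox" "$d/admin-tool"
chmod 640  "$d/mailbox"      # rw-r-----: owner writes, group reads
chmod 4750 "$d/admin-tool"   # rwsr-x---: setuid, group may execute
stat -c '%A %n' "$d/mailbox" "$d/admin-tool"
rm -rf "$d"
```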
Is there a reason why 'owner' permissions exist? Aren't group permissions enough?
1,409,341,045,000
Here is the top output which I gathered: I noticed that top shows VLC's CPU usage at more than 100%. Can anybody please explain why top is showing those numbers? Is this a bug in top or something else?
You are in a multi-core/multi-CPU environment and top is working in Irix mode. That means that your process (vlc) is performing a computation that keeps 1.2 CPUs/cores busy. That could mean 100%+20%, 60%+60%, etc. Press 'I' (capital i, Shift+i) to switch to Solaris mode. You get the same value divided by the number of cores/CPUs.
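The conversion between the two modes is plain division; an illustrative sketch with made-up numbers (substitute your own top reading and the output of nproc):

```shell
# An Irix-mode reading of 120% on a 4-core machine corresponds to
# 120/4 = 30% in Solaris mode.
irix=120
cores=4
awk -v i="$irix" -v c="$cores" 'BEGIN { printf "Solaris mode: %.1f%%\n", i / c }'
```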
top output: CPU usage > 100%
1,409,341,045,000
I have the following script to launch a MySQL process:
if [ "${1:0:1}" = '-' ]; then
    set -- mysqld_safe "$@"
fi

if [ "$1" = 'mysqld_safe' ]; then
    DATADIR="/var/lib/mysql"
...
What does 1:0:1 mean in this context?
It's a test for a - dashed argument option, apparently. It's a little strange, really. It uses a non-standard bash expansion in an attempt to extract the first and only the first character from $1. The 0 is the head character index and the 1 is string length. In a [ test like that it might also be:
[ " -${1#?}" = " $1" ]
Neither comparison is particularly suited to test though, as it interprets - dashed arguments as well - which is why I use the leading space there. The best way to do this kind of thing - and the way it is usually done - is:
case $1 in
    -*) mysqld_safe "$@"
esac
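A small demonstration of both forms (the --verbose argument is just an example; note that ${1:0:1} needs bash, so that part is run under bash explicitly in case your /bin/sh is a different shell such as dash):

```shell
# ${1:0:1} is a bash-specific substring expansion: offset 0, length 1.
bash -c '
set -- --verbose file.txt
if [ "${1:0:1}" = "-" ]; then
    echo "substring test: first argument starts with a dash"
fi
'
# The portable case form works in any POSIX shell:
set -- --verbose file.txt
case $1 in
    -*) echo "case test: first argument starts with a dash" ;;
esac
```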
Meaning of [ "${1:0:1}" = '-' ]
1,409,341,045,000
I am on CentOS 6, trying to enable core dumps for an application I am developing. I have put:
ulimit -H -c unlimited >/dev/null
ulimit -S -c unlimited >/dev/null
into my bash profile, but a core dump still did not generate (in a new terminal). I have also changed my /etc/security/limits.conf so that the soft limit is zero for all users. How do I set the location of the core files to be output? I want to specify the location and append the time the dump was generated as part of the file name.
To set the location of core dumps in CentOS 6 you can edit /etc/sysctl.conf. For example, if you want core dumps in /var/crash:
kernel.core_pattern=/var/crash/core-%e-%s-%u-%g-%p-%t
Where the variables are:
%e is the executable filename
%g is the gid the process was running under
%p is the pid of the process
%s is the signal that caused the dump
%t is the time the dump occurred
%u is the uid the process was running under
Also you have to add to /etc/sysconfig/init:
DAEMON_COREFILE_LIMIT='unlimited'
Now apply the new changes:
$ sysctl -p
But there is a caveat with this way. The kernel parameter kernel.core_pattern is always reset and overwritten at reboot to the following configuration, even when a value is manually specified in /etc/sysctl.conf:
|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e
In short, when abrtd.service starts, kernel.core_pattern is overwritten automatically by the system-installed abrt-addon-ccpp. There are two ways to resolve this:
1. Set the DumpLocation option in the /etc/abrt/abrt.conf configuration file. The destination directory can be specified by setting DumpLocation = /var/crash in /etc/abrt/abrt.conf; sysctl kernel.core_pattern's displayed value stays the same, but the core file will actually be created in the directory under /var/crash. Also, if you have SELinux enabled, you have to run:
$ semanage fcontext -a -t public_content_rw_t "/var/crash(/.*)?"
$ setsebool -P abrt_anon_write 1
And finally restart abrtd.service:
$ service abrtd.service restart
2. Stop the abrtd service; kernel.core_pattern will not be overwritten. (I've never tested this.)
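To see how the %-specifiers expand, a sketch that performs the same substitution in userspace with made-up values (the kernel does this itself when writing the core file):

```shell
# Expand a core_pattern template by hand; exe/sig/uid/etc. are
# illustrative values, not taken from any real crash.
pattern='/var/crash/core-%e-%s-%u-%g-%p-%t'
exe=myapp sig=11 uid=1000 gid=1000 pid=4242 ts=1409341045
echo "$pattern" | sed -e "s/%e/$exe/" -e "s/%s/$sig/" \
    -e "s/%u/$uid/" -e "s/%g/$gid/" -e "s/%p/$pid/" -e "s/%t/$ts/"
# /var/crash/core-myapp-11-1000-1000-4242-1409341045
```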
How to set the core dump file location (and name)?
1,409,341,045,000
Ubuntu 14.04 on a desktop Source Drive: /dev/sda1: 5TB ext4 single drive volume Target Volume: /dev/mapper/archive-lvarchive: raid6 (mdadm) 18TB volume with lvm partition and ext4 There are roughly 15 million files to move, and some may be duplicates (I do not want to overwrite duplicates). Command used (from source directory) was: ls -U |xargs -i -t mv -n {} /mnt/archive/targetDir/{} This has been going on for a few days as expected, but I am getting the error in increasing frequency. When it started the target drive was about 70% full, now its about 90%. It used to be about 1/200 of the moves would state and error, now its about 1/5. None of the files are over 100Mb, most are around 100k Some info: $ df -h Filesystem Size Used Avail Use% Mounted on /dev/sdb3 155G 5.5G 142G 4% / none 4.0K 0 4.0K 0% /sys/fs/cgroup udev 3.9G 4.0K 3.9G 1% /dev tmpfs 797M 2.9M 794M 1% /run none 5.0M 4.0K 5.0M 1% /run/lock none 3.9G 0 3.9G 0% /run/shm none 100M 0 100M 0% /run/user /dev/sdb1 19G 78M 18G 1% /boot /dev/mapper/archive-lvarchive 18T 15T 1.8T 90% /mnt/archive /dev/sda1 4.6T 1.1T 3.3T 25% /mnt/tmp $ df -i Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sdb3 10297344 222248 10075096 3% / none 1019711 4 1019707 1% /sys/fs/cgroup udev 1016768 500 1016268 1% /dev tmpfs 1019711 1022 1018689 1% /run none 1019711 5 1019706 1% /run/lock none 1019711 1 1019710 1% /run/shm none 1019711 2 1019709 1% /run/user /dev/sdb1 4940000 582 4939418 1% /boot /dev/mapper/archive-lvarchive 289966080 44899541 245066539 16% /mnt/archive /dev/sda1 152621056 5391544 147229512 4% /mnt/tmp Here's my output: mv -n 747265521.pdf /mnt/archive/targetDir/747265521.pdf mv -n 61078318.pdf /mnt/archive/targetDir/61078318.pdf mv -n 709099107.pdf /mnt/archive/targetDir/709099107.pdf mv -n 75286077.pdf /mnt/archive/targetDir/75286077.pdf mv: cannot create regular file ‘/mnt/archive/targetDir/75286077.pdf’: No space left on device mv -n 796522548.pdf /mnt/archive/targetDir/796522548.pdf mv: cannot create regular 
file ‘/mnt/archive/targetDir/796522548.pdf’: No space left on device mv -n 685163563.pdf /mnt/archive/targetDir/685163563.pdf mv -n 701433025.pdf /mnt/archive/targetDir/701433025.pd I've found LOTS of postings on this error, but the prognosis doesn't fit. Such issues as "your drive is actually full" or "you've run out of inodes" or even "your /boot volume is full". Mostly, though, they deal with 3rd party software causing an issue because of how it handles the files, and they are all constant, meaning EVERY move fails. Thanks. EDIT: here is a sample failed and succeeded file: FAILED (still on source drive) ls -lhs 702637545.pdf 16K -rw-rw-r-- 1 myUser myUser 16K Jul 24 20:52 702637545.pdf SUCCEEDED (On target volume) ls -lhs /mnt/archive/targetDir/704886680.pdf 104K -rw-rw-r-- 1 myUser myUser 103K Jul 25 01:22 /mnt/archive/targetDir/704886680.pdf Also, while not all files fail, a file which fails will ALWAYS fail. If I retry it over and over it is consistent. EDIT: Some additional commands per request by @mjturner $ ls -ld /mnt/archive/targetDir drwxrwxr-x 2 myUser myUser 1064583168 Aug 10 05:07 /mnt/archive/targetDir $ tune2fs -l /dev/mapper/archive-lvarchive tune2fs 1.42.10 (18-May-2014) Filesystem volume name: <none> Last mounted on: /mnt/archive Filesystem UUID: af7e7b38-f12a-498b-b127-0ccd29459376 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr dir_index filetype needs_recovery extent 64bit flex_bg sparse_super huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 289966080 Block count: 4639456256 Reserved block count: 231972812 Free blocks: 1274786115 Free inodes: 256343444 First block: 0 Block size: 4096 Fragment size: 4096 Group descriptor size: 64 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 2048 Inode blocks per group: 
128 RAID stride: 128 RAID stripe width: 512 Flex block group size: 16 Filesystem created: Thu Jun 25 12:05:12 2015 Last mount time: Mon Aug 3 18:49:29 2015 Last write time: Mon Aug 3 18:49:29 2015 Mount count: 8 Maximum mount count: -1 Last checked: Thu Jun 25 12:05:12 2015 Check interval: 0 (<none>) Lifetime writes: 24 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: 3ea3edc4-7638-45cd-8db8-36ab3669e868 Journal backup: inode blocks $ tune2fs -l /dev/sda1 tune2fs 1.42.10 (18-May-2014) Filesystem volume name: <none> Last mounted on: /mnt/tmp Filesystem UUID: 10df1bea-64fc-468e-8ea0-10f3a4cb9a79 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 152621056 Block count: 1220942336 Reserved block count: 61047116 Free blocks: 367343926 Free inodes: 135953194 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 732 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 4096 Inode blocks per group: 256 Flex block group size: 16 Filesystem created: Thu Jul 23 13:54:13 2015 Last mount time: Tue Aug 4 04:35:06 2015 Last write time: Tue Aug 4 04:35:06 2015 Mount count: 3 Maximum mount count: -1 Last checked: Thu Jul 23 13:54:13 2015 Check interval: 0 (<none>) Lifetime writes: 150 MB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: 
a266fec5-bc86-402b-9fa0-61e2ad9b5b50 Journal backup: inode blocks
Bug in the implementation of the ext4 feature dir_index which you are using on your destination filesystem.
Solution: recreate the filesystem without dir_index, or disable the feature using tune2fs (some caution required; see the related link Novell SuSE 10/11: Disable H-Tree Indexing on an ext3 Filesystem, which although it relates to ext3 may need similar caution):
(get a really good backup made of the filesystem)
(unmount the filesystem)
tune2fs -O ^dir_index /dev/foo
e2fsck -fDvy /dev/foo
(mount the filesystem)
ext4: Mysterious "No space left on device"-errors
ext4 has a feature called dir_index enabled by default, which is quite susceptible to hash collisions. ...... ext4 has the possibility to hash the filenames of its contents. This enhances performance, but has a "small" problem: ext4 does not grow its hashtable when it starts to fill up. Instead it returns -ENOSPC or "no space left on device".
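To check whether a filesystem has dir_index enabled, you can grep its features line; this sketch tests a captured line (the feature list from the tune2fs output in the question), since tune2fs -l itself needs access to the device:

```shell
# On a live system: tune2fs -l /dev/sdXN | grep 'Filesystem features'
features='has_journal ext_attr dir_index filetype needs_recovery extent 64bit flex_bg sparse_super huge_file uninit_bg dir_nlink extra_isize'
case " $features " in
    *" dir_index "*) echo 'dir_index is enabled' ;;
    *)               echo 'dir_index is disabled' ;;
esac
```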
How to fix intermittent "No space left on device" errors during mv when device has plenty of space?
1,409,341,045,000
I have a card reader attached at /dev/sdb. What I do is give all permissions to the owner, the group and the rest of the world, using:
sudo chmod 777 /dev/sdb
Can I just use another combination, allowing only the owner (me) to use the card reader? There is only one user account.
There are multiple ways of accomplishing this. 1. Add your user to the group that owns the device Generally in most distros, block devices are owned by a specific group. All you need to do is add your user to that group. For example, on my system: # ls -l /dev/sdb brw-rw---- 1 root disk 8, 16 2014/07/07-21:32:25 /dev/sdb Thus I need to add my user to the disk group. # usermod -a -G disk patrick   2. Change the permissions of the device The idea is to create a udev rule to run a command when the device is detected. First you need to find a way to identify the device. You use udevadm for this. For example: # udevadm info -a -n /dev/sdb Udevadm info starts with the device specified by the devpath and then walks up the chain of parent devices. It prints for every device found, all possible attributes in the udev rules key format. A rule to match, can be composed by the attributes of the device and the attributes from one single parent device. looking at device '/devices/pci0000:00/0000:00:1d.0/usb1/1-1/1-1.3/1-1.3:1.0/host6/target6:0:0/6:0:0:0/block/sdb': KERNEL=="sdb" SUBSYSTEM=="block" DRIVER=="" ATTR{ro}=="0" ATTR{size}=="31116288" ATTR{stat}==" 279 219 3984 1182 0 0 0 0 0 391 1182" ATTR{range}=="16" ATTR{discard_alignment}=="0" ATTR{events}=="media_change" ATTR{ext_range}=="256" ATTR{events_poll_msecs}=="-1" ATTR{alignment_offset}=="0" ATTR{inflight}==" 0 0" ATTR{removable}=="1" ATTR{capability}=="51" ATTR{events_async}=="" looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb1/1-1/1-1.3/1-1.3:1.0/host6/target6:0:0/6:0:0:0': KERNELS=="6:0:0:0" SUBSYSTEMS=="scsi" DRIVERS=="sd" ATTRS{rev}=="0207" ATTRS{type}=="0" ATTRS{scsi_level}=="0" ATTRS{model}=="STORAGE DEVICE " ATTRS{state}=="running" ATTRS{queue_type}=="none" ATTRS{iodone_cnt}=="0x184" ATTRS{iorequest_cnt}=="0x184" ATTRS{device_busy}=="0" ATTRS{evt_capacity_change_reported}=="0" ATTRS{timeout}=="30" ATTRS{evt_media_change}=="0" ATTRS{max_sectors}=="240" ATTRS{ioerr_cnt}=="0x2" ATTRS{queue_depth}=="1" 
ATTRS{vendor}=="Generic " ATTRS{evt_soft_threshold_reached}=="0" ATTRS{device_blocked}=="0" ATTRS{evt_mode_parameter_change_reported}=="0" ATTRS{evt_lun_change_reported}=="0" ATTRS{evt_inquiry_change_reported}=="0" ATTRS{iocounterbits}=="32" ATTRS{eh_timeout}=="10" looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb1/1-1/1-1.3/1-1.3:1.0/host6/target6:0:0': KERNELS=="target6:0:0" SUBSYSTEMS=="scsi" DRIVERS=="" looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb1/1-1/1-1.3/1-1.3:1.0/host6': KERNELS=="host6" SUBSYSTEMS=="scsi" DRIVERS=="" looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb1/1-1/1-1.3/1-1.3:1.0': KERNELS=="1-1.3:1.0" SUBSYSTEMS=="usb" DRIVERS=="usb-storage" ATTRS{bInterfaceClass}=="08" ATTRS{bInterfaceSubClass}=="06" ATTRS{bInterfaceProtocol}=="50" ATTRS{bNumEndpoints}=="02" ATTRS{supports_autosuspend}=="1" ATTRS{bAlternateSetting}==" 0" ATTRS{bInterfaceNumber}=="00" looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb1/1-1/1-1.3': KERNELS=="1-1.3" SUBSYSTEMS=="usb" DRIVERS=="usb" ATTRS{bDeviceSubClass}=="00" ATTRS{bDeviceProtocol}=="00" ATTRS{devpath}=="1.3" ATTRS{idVendor}=="05e3" ATTRS{speed}=="480" ATTRS{bNumInterfaces}==" 1" ATTRS{bConfigurationValue}=="1" ATTRS{bMaxPacketSize0}=="64" ATTRS{busnum}=="1" ATTRS{devnum}=="5" ATTRS{configuration}=="" ATTRS{bMaxPower}=="500mA" ATTRS{authorized}=="1" ATTRS{bmAttributes}=="80" ATTRS{bNumConfigurations}=="1" ATTRS{maxchild}=="0" ATTRS{bcdDevice}=="0207" ATTRS{avoid_reset_quirk}=="0" ATTRS{quirks}=="0x0" ATTRS{serial}=="000000000207" ATTRS{version}==" 2.00" ATTRS{urbnum}=="1115" ATTRS{ltm_capable}=="no" ATTRS{manufacturer}=="Generic" ATTRS{removable}=="unknown" ATTRS{idProduct}=="0727" ATTRS{bDeviceClass}=="00" ATTRS{product}=="USB Storage" looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb1/1-1': KERNELS=="1-1" SUBSYSTEMS=="usb" DRIVERS=="usb" ATTRS{bDeviceSubClass}=="00" ATTRS{bDeviceProtocol}=="01" ATTRS{devpath}=="1" ATTRS{idVendor}=="8087" 
ATTRS{speed}=="480" ATTRS{bNumInterfaces}==" 1" ATTRS{bConfigurationValue}=="1" ATTRS{bMaxPacketSize0}=="64" ATTRS{busnum}=="1" ATTRS{devnum}=="2" ATTRS{configuration}=="" ATTRS{bMaxPower}=="0mA" ATTRS{authorized}=="1" ATTRS{bmAttributes}=="e0" ATTRS{bNumConfigurations}=="1" ATTRS{maxchild}=="6" ATTRS{bcdDevice}=="0000" ATTRS{avoid_reset_quirk}=="0" ATTRS{quirks}=="0x0" ATTRS{version}==" 2.00" ATTRS{urbnum}=="61" ATTRS{ltm_capable}=="no" ATTRS{removable}=="unknown" ATTRS{idProduct}=="0024" ATTRS{bDeviceClass}=="09" looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb1': KERNELS=="usb1" SUBSYSTEMS=="usb" DRIVERS=="usb" ATTRS{bDeviceSubClass}=="00" ATTRS{bDeviceProtocol}=="00" ATTRS{devpath}=="0" ATTRS{idVendor}=="1d6b" ATTRS{speed}=="480" ATTRS{bNumInterfaces}==" 1" ATTRS{bConfigurationValue}=="1" ATTRS{bMaxPacketSize0}=="64" ATTRS{authorized_default}=="1" ATTRS{busnum}=="1" ATTRS{devnum}=="1" ATTRS{configuration}=="" ATTRS{bMaxPower}=="0mA" ATTRS{authorized}=="1" ATTRS{bmAttributes}=="e0" ATTRS{bNumConfigurations}=="1" ATTRS{maxchild}=="3" ATTRS{bcdDevice}=="0313" ATTRS{avoid_reset_quirk}=="0" ATTRS{quirks}=="0x0" ATTRS{serial}=="0000:00:1d.0" ATTRS{version}==" 2.00" ATTRS{urbnum}=="26" ATTRS{ltm_capable}=="no" ATTRS{manufacturer}=="Linux 3.13.6-gentoo ehci_hcd" ATTRS{removable}=="unknown" ATTRS{idProduct}=="0002" ATTRS{bDeviceClass}=="09" ATTRS{product}=="EHCI Host Controller" looking at parent device '/devices/pci0000:00/0000:00:1d.0': KERNELS=="0000:00:1d.0" SUBSYSTEMS=="pci" DRIVERS=="ehci-pci" ATTRS{irq}=="23" ATTRS{subsystem_vendor}=="0x144d" ATTRS{broken_parity_status}=="0" ATTRS{class}=="0x0c0320" ATTRS{companion}=="" ATTRS{enabled}=="1" ATTRS{consistent_dma_mask_bits}=="32" ATTRS{dma_mask_bits}=="32" ATTRS{local_cpus}=="0f" ATTRS{device}=="0x1e26" ATTRS{uframe_periodic_max}=="100" ATTRS{msi_bus}=="" ATTRS{local_cpulist}=="0-3" ATTRS{vendor}=="0x8086" ATTRS{subsystem_device}=="0xc0d3" ATTRS{numa_node}=="-1" ATTRS{d3cold_allowed}=="1" looking at 
parent device '/devices/pci0000:00': KERNELS=="pci0000:00" SUBSYSTEMS=="" DRIVERS=="" Then create a new file in /etc/udev/rules.d, such as 99-cardreader.rules: SUBSYSTEM=="block", ATTRS{idProduct}=="0727", ATTRS{serial}=="000000000207", ACTION=="add", RUN+="/bin/chmod 777 /dev/$name" Here I used the output from the udevadm info command to find some identifying information for the device. I used the SUBSYSTEM="block" entry for the very first entry, and then the ATTRS values from the 6th entry. This will basically find the USB device with that product & serial number, and then find the block device that results from that USB device. The RUN command will change the permissions on the device to 777. However I don't consider this a very good solution as this opens the device up to the world. Instead a better solution might be: SUBSYSTEM=="block", ATTRS{idProduct}=="0727", ATTRS{serial}=="000000000207", ACTION=="add", RUN+="/bin/setfacl -m u:patrick:rw- /dev/$name" This will grant the user patrick read/write access to the device. Note: It is important to remember that when writing udev rules, you can only use parameters from the top device, and one other device in the chain. Thus I can use the SUBSYSTEM="block" parameter, and the ATTRS parameters. But I could not use any parameters from any other device in the chain, or the rule would fail to match.
Give a specific user permissions to a device without giving access to other users
1,409,341,045,000
Recently, I started to use i3wm and fell in love with it. However, one thing bothers me: controlling more than 10 workspaces. In my config $mod+1 to $mod+9 switches between the workspaces 1 to 9 (and $mod+0 for 10), but sometimes 10 workspaces just aren't enough. At the moment I reach out to workspace 11 to 20 with $mod+mod1+1 to $mod+mod1+0, i.e. hitting mod+alt+number. Of course this works without any problems, but it is quite a hassle to switch workspaces like that, since the keys aren't hit easily. Additionally, moving applications between workspaces 11 to 20 requires to mod+shift+alt+number -> ugly. In my Vim bindings (I have lots of plugins) I started to use double modifier shortcuts, like modkey + r for Plugin 1 and modkey + modkey + r for Plugin 2. This way I can bind every key twice and hitting the mod key twice is easy and fast. Can I do something similar in i3wm? How do you make use of more than 10 workspaces in i3wm? Any other solutions?
i3 does not really support key sequences like vim. Any key binding consists of a single key preceded by an optional list of distinct (so no Shift+Shift) modifiers. And all of the modifiers need to be pressed down at the time the main key is pressed. That being said, there are two main ways to have a lot of workspaces without having to bind them to long lists of modifiers:
1. Dynamically create and access workspaces with external programs
You do not have to define a shortcut for every single workspace; you can just create them on the fly by sending a workspace NEW_WS command to i3, for example with the i3-msg program:
i3-msg workspace NEW_WS
i3-msg move container to workspace NEW_WS
i3 also comes with the i3-input command, which opens a small input field and then runs a command with the given input as parameter:
i3-input -F 'workspace %s' -P 'go to workspace: '
i3-input -F 'move container to workspace %s' -P 'move to workspace: '
Bind these two commands to shortcuts and you can access an arbitrary number of workspaces by just pressing the shortcut and then entering the name (or number) of the workspace you want. (If you only work with numbered workspaces, you might want to use workspace number %s instead of just workspace %s)
2. Statically bind workspaces to simple shortcuts within key binding modes
Alternatively, for a more static approach, you could use modes in your i3 configuration.
You could have separate modes for focusing and moving to workspaces:

```
set $mode_workspace "goto_ws"
mode $mode_workspace {
    bindsym 1 workspace 1; mode "default"
    bindsym 2 workspace 2; mode "default"
    # […]
    bindsym a workspace a; mode "default"
    bindsym b workspace b; mode "default"
    # […]
    bindsym Escape mode "default"
}
bindsym $mod+w mode $mode_workspace

set $mode_move_to_workspace "moveto_ws"
mode $mode_move_to_workspace {
    bindsym 1 move container to workspace 1; mode "default"
    bindsym 2 move container to workspace 2; mode "default"
    # […]
    bindsym a move container to workspace a; mode "default"
    bindsym b move container to workspace b; mode "default"
    # […]
    bindsym Escape mode "default"
}
bindsym $mod+shift+w mode $mode_move_to_workspace
```

Or you could have separate bindings for focusing and moving within a single mode:

```
set $mode_ws "workspaces"
mode $mode_ws {
    bindsym 1 workspace 1; mode "default"
    bindsym Shift+1 move container to workspace 1; mode "default"
    bindsym 2 workspace 2; mode "default"
    bindsym Shift+2 move container to workspace 2; mode "default"
    # […]
    bindsym a workspace a; mode "default"
    bindsym Shift+a move container to workspace a; mode "default"
    bindsym b workspace b; mode "default"
    bindsym Shift+b move container to workspace b; mode "default"
    # […]
    bindsym Escape mode "default"
}
bindsym $mod+w mode $mode_ws
```

In both examples the workspace and move commands are chained with `mode "default"`, so that i3 automatically returns to the default key binding map after each command.
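The mode blocks above are repetitive, so they can also be generated by a small script instead of being typed by hand. The following is a sketch, not part of the original answer: the key list, mode name, and shortcut are arbitrary choices, and each key doubles as the workspace name.

```shell
#!/bin/sh
# Sketch: emit an i3 "workspaces" mode with one pair of bindings per key.
# The keys listed below double as workspace names; adjust the list as needed.
gen_ws_mode() {
    printf 'set $mode_ws "workspaces"\n'
    printf 'mode $mode_ws {\n'
    for key in 1 2 3 4 5 6 7 8 9 0 a b c d; do
        # Plain key focuses the workspace, Shift+key moves the container there.
        printf '    bindsym %s workspace %s; mode "default"\n' "$key" "$key"
        printf '    bindsym Shift+%s move container to workspace %s; mode "default"\n' "$key" "$key"
    done
    printf '    bindsym Escape mode "default"\n'
    printf '}\n'
    printf 'bindsym $mod+w mode $mode_ws\n'
}

gen_ws_mode
```

Redirect the output to a file and paste it into your i3 config (or, on i3 4.20 and later, pull it in with the `include` directive).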
i3wm: more than 10 workspaces with double modifier key?
1,409,341,045,000
I am using nginx as a reverse proxy, and when I log in on my web interface I am redirected to the proxied URL. I would like to avoid that and always keep the server_name as the URL. Is it possible?

This is my /etc/nginx/conf.d/my_app.conf:

```
server {
    listen 443 ssl;
    server_name my-app.net;

    ssl_certificate /etc/pki/tls/certs/my-app.cer;
    ssl_certificate_key /etc/pki/tls/private/my-app.key;
    ssl_protocols TLSv1.1 TLSv1.2;

    access_log /var/log/nginx/my-app.access.log main;

    location / {
        proxy_pass http://ip_of_the_app:7180/;
        proxy_redirect off;
    }
}
```

I connect to http://my-app.net, enter my login information, and am then redirected to http://ip_of_the_app:7180 at the same login page, where I have to log in again. Can this double login be avoided?
Do not set proxy_redirect to off; that is not doing what you think it is doing. proxy_redirect performs something similar to URL rewriting, for example:

```
location /sales/ {
    proxy_pass http://ip_of_the_app:7180/;
    proxy_redirect http://ip_of_the_app:7180/ http://$host/sales/;
}
```

This allows you to host the /sales/ path somewhere else. But even then, the default parameters of proxy_redirect do exactly that for you for free: the default is to rewrite redirects based on whatever is present in proxy_pass, and the defaults apply when you do not set proxy_redirect at all (or use `proxy_redirect default;`). You do not need to set proxy_redirect.

What you are missing are headers that need to be sent to the app. The most important of them is Host:

```
location / {
    proxy_pass http://ip_of_the_app:7180/;
    proxy_set_header Host $host;
}
```

This performs the proxying as desired and keeps the correct URL in the browser. Note that the app at http://ip_of_the_app:7180/ will now receive requests carrying the `Host: my-app.net` header.

You should also consider setting a few more headers:

```
proxy_set_header Referer $http_referer;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
```

These allow for better logging inside the app at http://ip_of_the_app:7180/: X-Forwarded-For gives the IP of the actual client (as opposed to nginx's IP), and X-Forwarded-Proto tells the app whether the client connected to nginx via HTTP or HTTPS.
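Putting the advice together with the configuration from the question, the corrected server block might look as follows. This is a sketch: the certificate paths and upstream address are copied verbatim from the question, and the extra forwarding headers are optional.

```
server {
    listen 443 ssl;
    server_name my-app.net;

    ssl_certificate /etc/pki/tls/certs/my-app.cer;
    ssl_certificate_key /etc/pki/tls/private/my-app.key;
    ssl_protocols TLSv1.1 TLSv1.2;

    access_log /var/log/nginx/my-app.access.log main;

    location / {
        proxy_pass http://ip_of_the_app:7180/;

        # Forward the original host so the app builds its redirects
        # against my-app.net instead of its own address.
        proxy_set_header Host $host;

        # Optional: better logging inside the app.
        proxy_set_header Referer $http_referer;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Note that proxy_redirect is simply left unset, so its default behavior applies.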
Nginx reverse proxy redirection