1,380,795,325,000
I am trying to set up a VPN (using OpenVPN) such that all of the traffic, and only the traffic, to/from specific processes goes through the VPN; other processes should continue to use the physical device directly. It is my understanding that the way to do this in Linux is with network namespaces. If I use OpenVPN normally (i.e. funnelling all traffic from the client through the VPN), it works fine. Specifically, I start OpenVPN like this:

# openvpn --config destination.ovpn --auth-user-pass credentials.txt

(A redacted version of destination.ovpn is at the end of this question.) I'm stuck on the next step, writing scripts that restrict the tunnel device to namespaces. I have tried:

Putting the tunnel device directly in the namespace with

# ip netns add tns0
# ip link set dev tun0 netns tns0
# ip netns exec tns0 ( ... commands to bring up tun0 as usual ... )

These commands execute successfully, but traffic generated inside the namespace (e.g. with ip netns exec tns0 traceroute -n 8.8.8.8) falls into a black hole.

On the assumption that "you can [still] only assign virtual Ethernet (veth) interfaces to a network namespace" (which, if true, takes this year's award for most ridiculously unnecessary API restriction), creating a veth pair and a bridge, and putting one end of the veth pair in the namespace. This doesn't even get as far as dropping traffic on the floor: it won't let me put the tunnel into the bridge! [EDIT: This appears to be because only tap devices can be put into bridges. Unlike the inability to put arbitrary devices into a network namespace, that actually makes sense, what with bridges being an Ethernet-layer concept; unfortunately, my VPN provider does not support OpenVPN in tap mode, so I need a workaround.]

# ip addr add dev tun0 local 0.0.0.0/0 scope link
# ip link set tun0 up
# ip link add name teo0 type veth peer name tei0
# ip link set teo0 up
# brctl addbr tbr0
# brctl addif tbr0 teo0
# brctl addif tbr0 tun0
can't add tun0 to bridge tbr0: Invalid argument

The scripts at the end of this question are for the veth approach. The scripts for the direct approach may be found in the edit history. Variables in the scripts that appear to be used without setting them first are set in the environment by the openvpn program -- yes, it's sloppy and uses lowercase names. Please offer specific advice on how to get this to work. I'm painfully aware that I'm programming by cargo cult here -- has anyone written comprehensive documentation for this stuff? I can't find any -- so general code review of the scripts is also appreciated. In case it matters:

# uname -srvm
Linux 3.14.5-x86_64-linode42 #1 SMP Thu Jun 5 15:22:13 EDT 2014 x86_64
# openvpn --version | head -1
OpenVPN 2.3.2 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [eurephia] [MH] [IPv6] built on Mar 17 2014
# ip -V
ip utility, iproute2-ss140804
# brctl --version
bridge-utils, 1.5

The kernel was built by my virtual hosting provider (Linode) and, although compiled with CONFIG_MODULES=y, has no actual modules -- the only CONFIG_* variable set to m according to /proc/config.gz was CONFIG_XEN_TMEM, and I do not actually have that module (the kernel is stored outside my filesystem; /lib/modules is empty, and /proc/modules indicates that it was not magically loaded somehow). Excerpts from /proc/config.gz provided on request, but I don't want to paste the entire thing here.

netns-up.sh

#! /bin/sh

mask2cidr () {
    local nbits dec
    nbits=0
    for dec in $(echo $1 | sed 's/\./ /g') ; do
        case "$dec" in
            (255) nbits=$(($nbits + 8)) ;;
            (254) nbits=$(($nbits + 7)) ;;
            (252) nbits=$(($nbits + 6)) ;;
            (248) nbits=$(($nbits + 5)) ;;
            (240) nbits=$(($nbits + 4)) ;;
            (224) nbits=$(($nbits + 3)) ;;
            (192) nbits=$(($nbits + 2)) ;;
            (128) nbits=$(($nbits + 1)) ;;
            (0) ;;
            (*) echo "Error: $dec is not a valid netmask component" >&2
                exit 1 ;;
        esac
    done
    echo "$nbits"
}

mask2network () {
    local host mask h m result
    host="$1."
    mask="$2."
    result=""
    while [ -n "$host" ]; do
        h="${host%%.*}"
        m="${mask%%.*}"
        host="${host#*.}"
        mask="${mask#*.}"
        result="$result.$(($h & $m))"
    done
    echo "${result#.}"
}

maybe_config_dns () {
    local n option servers
    n=1
    servers=""
    while [ $n -lt 100 ]; do
        eval option="\$foreign_option_$n"
        [ -n "$option" ] || break
        case "$option" in
            (*DNS*)
                set -- $option
                servers="$servers
nameserver $3"
                ;;
            (*) ;;
        esac
        n=$(($n + 1))
    done
    if [ -n "$servers" ]; then
        cat > /etc/netns/$tun_netns/resolv.conf <<EOF
# name servers for $tun_netns
$servers
EOF
    fi
}

config_inside_netns () {
    local ifconfig_cidr ifconfig_network
    ifconfig_cidr=$(mask2cidr $ifconfig_netmask)
    ifconfig_network=$(mask2network $ifconfig_local $ifconfig_netmask)

    ip link set dev lo up
    ip addr add dev $tun_vethI \
        local $ifconfig_local/$ifconfig_cidr \
        broadcast $ifconfig_broadcast \
        scope link
    ip route add default via $route_vpn_gateway dev $tun_vethI
    ip link set dev $tun_vethI mtu $tun_mtu up
}

PATH=/sbin:/bin:/usr/sbin:/usr/bin
export PATH
set -ex

# For no good reason, we can't just put the tunnel device in the
# subsidiary namespace; we have to create a "virtual Ethernet"
# device pair, put one of its ends in the subsidiary namespace,
# and put the other end in a "bridge" with the tunnel device.

tun_tundv=$dev
tun_netns=tns${dev#tun}
tun_bridg=tbr${dev#tun}
tun_vethI=tei${dev#tun}
tun_vethO=teo${dev#tun}

case "$tun_netns" in
    (tns[0-9] | tns[0-9][0-9] | tns[0-9][0-9][0-9]) ;;
    (*) exit 1;;
esac

if [ $# -eq 1 ] && [ $1 = "INSIDE_NETNS" ]; then
    [ $(ip netns identify $$) = $tun_netns ] || exit 1
    config_inside_netns
else
    trap "rm -rf /etc/netns/$tun_netns ||:
          ip netns del $tun_netns ||:
          ip link del $tun_vethO ||:
          ip link set $tun_tundv down ||:
          brctl delbr $tun_bridg ||:
         " 0

    mkdir /etc/netns/$tun_netns
    maybe_config_dns

    ip addr add dev $tun_tundv local 0.0.0.0/0 scope link
    ip link set $tun_tundv mtu $tun_mtu up

    ip link add name $tun_vethO type veth peer name $tun_vethI
    ip link set $tun_vethO mtu $tun_mtu up

    brctl addbr $tun_bridg
    brctl setfd $tun_bridg 0
    #brctl sethello $tun_bridg 0
    brctl stp $tun_bridg off
    brctl addif $tun_bridg $tun_vethO
    brctl addif $tun_bridg $tun_tundv
    ip link set $tun_bridg up

    ip netns add $tun_netns
    ip link set dev $tun_vethI netns $tun_netns
    ip netns exec $tun_netns $0 INSIDE_NETNS
    trap "" 0
fi

netns-down.sh

#! /bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
export PATH
set -ex

tun_netns=tns${dev#tun}
tun_bridg=tbr${dev#tun}
case "$tun_netns" in
    (tns[0-9] | tns[0-9][0-9] | tns[0-9][0-9][0-9]) ;;
    (*) exit 1;;
esac
[ -d /etc/netns/$tun_netns ] || exit 1

pids=$(ip netns pids $tun_netns)
if [ -n "$pids" ]; then
    kill $pids
    sleep 5
    pids=$(ip netns pids $tun_netns)
    if [ -n "$pids" ]; then
        kill -9 $pids
    fi
fi

# this automatically cleans up the routes and the veth device pair
ip netns delete "$tun_netns"
rm -rf /etc/netns/$tun_netns

# the bridge and the tunnel device must be torn down separately
ip link set $dev down
brctl delbr $tun_bridg

destination.ovpn

client
auth-user-pass
ping 5
dev tun
resolv-retry infinite
nobind
persist-key
persist-tun
ns-cert-type server
verb 3
route-metric 1
proto tcp
ping-exit 90
remote [REDACTED]
<ca>
[REDACTED]
</ca>
<cert>
[REDACTED]
</cert>
<key>
[REDACTED]
</key>
It turns out that you can put a tunnel interface into a network namespace. My entire problem was down to a mistake in bringing up the interface:

ip addr add dev $tun_tundv \
   local $ifconfig_local/$ifconfig_cidr \
   broadcast $ifconfig_broadcast \
   scope link

The problem is "scope link", which I misunderstood as only affecting routing. It causes the kernel to set the source address of all packets sent into the tunnel to 0.0.0.0; presumably the OpenVPN server would then discard them as invalid per RFC 1122; even if it didn't, the destination would obviously be unable to reply. Everything worked correctly in the absence of network namespaces because openvpn's built-in network configuration script did not make this mistake. And without "scope link", my original script works as well. (How did I discover this, you ask? By running strace on the openvpn process, set to hexdump everything it read from the tunnel descriptor, and then manually decoding the packet headers.)
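For anyone adapting the scripts: a sketch of the corrected in-namespace bring-up, with the offending "scope link" dropped. The helper name and positional arguments are hypothetical; the values mirror the environment variables that openvpn supplies ($dev, $ifconfig_local, and so on).

```shell
# Hypothetical helper: identical to the original bring-up except that
# "scope link" is omitted, so the kernel uses the tunnel address (not
# 0.0.0.0) as the source address of packets sent into the tunnel.
bring_up_tun_in_netns() {
    dev=$1 local_addr=$2 cidr=$3 bcast=$4 gw=$5
    ip link set dev lo up
    ip addr add dev "$dev" \
        local "$local_addr/$cidr" \
        broadcast "$bcast"
    ip link set dev "$dev" up
    ip route add default via "$gw" dev "$dev"
}
# e.g. (run via ip netns exec, as root):
#   bring_up_tun_in_netns tun0 10.8.0.6 24 10.8.0.255 10.8.0.5
```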
Feed all traffic through OpenVPN for a specific network namespace only
I tried finding this on here, but couldn't, so sorry if it's a duplicate. Say I have two groups and a user: group1, group2, user1, with the following structure: group1 is a member of group2, and user1 is a member of group1. Now say I have the following files with relevant permissions:

file1 root:group1 660
file2 root:group2 660

Now when I log in as user1, I'm able to edit file1, but not file2. Short of adding user1 to group2, is there any way of doing this? Or is there no way? I'm using Ubuntu, btw.
There is no such thing as a group being a member of a group. A group, by definition, has a set of user members. I've never heard of a feature that would let you specify “subgroups” where members of subgroups are automatically granted membership into the supergroup on login. If /etc/group lists group1 as a member of group2, it designates the user called group1 (if such a user exists, which is possible: user names and group names live in different name spaces). If you want user1 to have access to file2, you have several solutions:

Make file2 world-accessible (you probably don't want this)
Make user1 the owner of file2: chown user1 file2
Add user1 to group2: adduser user1 group2
Add an ACL to file2 that grants access to either user1 or group1:

setfacl -m user:user1:rw file2
setfacl -m group:group1:rw file2

See Make all new files in a directory accessible to a group on enabling ACLs.
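If you end up granting several such exceptions, the two setfacl forms above can be wrapped in a small helper. This is an illustrative sketch (the function name and u:/g: prefixes are my invention), and it needs a filesystem mounted with ACL support:

```shell
# Hypothetical wrapper: grant read/write to a list of users (u:name)
# and groups (g:name) on one file via POSIX ACLs.
grant_rw() {
    target=$1; shift
    for principal in "$@"; do
        case "$principal" in
            u:*) setfacl -m "user:${principal#u:}:rw" "$target" ;;
            g:*) setfacl -m "group:${principal#g:}:rw" "$target" ;;
        esac
    done
}
# Usage (as root): grant_rw file2 u:user1 g:group1
```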
Group within group file permissions
I want to write a Linux shell script which will capture specific multicast traffic. Specific as in, I want to create a pcap file that has all the traffic for one specific multicast group/port. Here is the command line I am using to view traffic:

tcpdump -nnXs 0 -i eth1 udp port 22001 and dst 233.54.12.234

This works fine so long as I have a multicast subscription to that group already established. For example, if I run this in another console:

mdump 233.54.12.234 22001 10.13.252.51

tcpdump will see packets. If mdump is not running, tcpdump sees nothing. Is there a standard Linux-y way to establish these multicast joins before starting the captures? I could use mdump to establish these joins, but that seems wasteful since mdump will process all that data on the group, but I'm just going to throw it away. Note that because of my specific environment, I have been discouraged from putting the interface into promiscuous mode. It may, in fact, be prohibited.
TL;DR - Pick one:

sudo ip addr add 233.54.12.234/32 dev eth1 autojoin
socat STDIO UDP4-RECV:22001,ip-add-membership=233.54.12.234:eth1 > /dev/null

At first I was going to say "just use ip maddress add and be done with it". The problem is ip maddress only affects link-layer multicast addresses, not protocol multicast addresses (man 8 ip-maddress). That being said, using the autojoin flag with the address verb does the trick just nicely. This raises some subsequent questions though. I assume since you'll be running tcpdump or tshark that you have root permission. In the event that you do not, 22001 is a high-numbered port and other utilities like socat will also get things done. Don't take my word for it though. Just to test this out we can generate multicast UDP packets with socat or ncat (generally packaged via nmap/nmap-ncat). On some number of hosts run one of the following two combinations:

Option 1:
sudo ip addr add 233.54.12.234/32 dev eth1 autojoin

Option 2:
socat -u UDP4-RECV:22001,ip-add-membership=233.54.12.234:eth1 /dev/null &

The first option will require either root, or at least the capability CAP_NET_ADMIN. The second option doesn't require root, but also expects to run in the foreground and thus may be less conducive to scripting (though tracking the child process ID and cleaning it up with a trap in BASH may be just what you're looking for). Once that's done (but before we go nuts testing our tcpdump/tshark command) make sure that the kernel recognizes the interface having joined the correct IGMP group. If you're feeling super fancy you can go nuts parsing the hex out of /proc/net/igmp, but I'd suggest just running netstat -gn.
Once you've verified that you see the interface subscribed to the correct group, fire up your tcpdump command:

tcpdump -nnXs 0 -i eth1 udp port 22001 and dst 233.54.12.234

Alternatively, if you don't want to fully go the route of tcpdump (or stumbled upon this answer and are just curious to see multicast in action) you can use the socat command above to join and echo the content to STDOUT by replacing /dev/null with STDOUT:

socat -u UDP4-RECV:22001,ip-add-membership=233.54.12.234:eth1 STDOUT

Then, from another machine, use one of the following two options to send some simple test data:

Option 1: socat STDIO UDP-DATAGRAM:233.54.12.234:22001
Option 2: ncat -u 233.54.12.234 22001

When you run either of those commands it will then interactively wait for input. Just type some things in, hit enter to send, then CTRL+D when you're done to send an EOF message. At this point you should have seen an end-to-end test and, with a few commands, built the worst, most insecure chat system in the world. N.B. If you want to leave the multicast group that was joined using ip addr add ... (option 1), you can do this:

sudo ip addr del 233.54.12.234/32 dev eth1 autojoin
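To make the non-root variant scriptable, the hint above about tracking the child process ID and cleaning it up with a trap might look like the following sketch. The function name is hypothetical; the group, port, and interface are the question's values.

```shell
# Sketch: keep a socat listener alive for the duration of a capture so
# the kernel maintains the IGMP membership, and drop it automatically
# when the script exits.
start_join() {
    # $1 = multicast group, $2 = UDP port, $3 = interface
    socat -u "UDP4-RECV:$2,ip-add-membership=$1:$3" /dev/null &
    JOIN_PID=$!
    trap 'kill "$JOIN_PID" 2>/dev/null' EXIT
}
# start_join 233.54.12.234 22001 eth1
# tcpdump -nnXs 0 -i eth1 udp port 22001 and dst 233.54.12.234 -w out.pcap
```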
Creating multicast join for tcpdump captures
Yesterday I was trying to compile the ROOT package from source. Since I was compiling it on a 6-core monster machine, I decided to go ahead and build using multiple cores with make -j 6. The compiling went smoothly and really fast at first, but at some point make hung, using 100% CPU on just one core. I did some googling and found this post on the ROOT message boards. Since I built this computer myself, I was worried that I hadn't properly applied the heatsink and the CPU was overheating or something. Unfortunately, I don't have a fridge here at work that I can stick it in. ;-) I installed the lm-sensors package and ran make -j 6 again, this time monitoring the CPU temperature. Although it got high (close to 60 C), it never went past the high or critical temperature. I tried running make -j 4, but again make hung sometime during the compile, this time at a different spot. In the end, I compiled just running make and it worked fine. My question is: why was it hanging? Due to the fact that it stopped at two different spots, I would guess it was due to some sort of race condition, but I would think make should be clever enough to get everything in the right order since it offers the -j option.
I don't have an answer to this precise issue, but I can try to give you a hint of what may be happening: missing dependencies in Makefiles. Example:

target: a.bytecode b.bytecode
	link a.bytecode b.bytecode -o target

a.bytecode: a.source
	compile a.source -o a.bytecode

b.bytecode: b.source
	compile b.source a.bytecode -o b.bytecode

If you call make target everything will compile correctly. Compilation of a.source is performed (arbitrarily, but deterministically) first. Then compilation of b.source is performed. But if you run make -j2 target, both compile commands will be run in parallel. And you'll actually notice that your Makefile's dependencies are broken. The second compile assumes a.bytecode is already compiled, but it does not appear in the dependencies. So an error is likely to happen. The correct dependency line for b.bytecode should be:

b.bytecode: b.source a.bytecode

To come back to your problem, if you are not lucky, it's possible that a command hangs in a 100% CPU loop because of a missing dependency. That's probably what is happening here: the missing dependency couldn't be revealed by a sequential build, but it has been revealed by your parallel build.
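The failure mode can be reproduced with a toy Makefile in which the undeclared dependency is made obvious with a sleep. This is an illustrative sketch (the real "compile"/"link" commands above are pseudo-commands; here the recipes are plain shell):

```shell
# Toy reproduction of a missing-dependency race: b's recipe reads a
# but does not declare it, so with -j2 it can run before a exists.
command -v make >/dev/null 2>&1 || { echo "make not available; skipping demo"; exit 0; }
demo=$(mktemp -d)
cd "$demo"
printf 'target: a b\n\tcat a b > target\n\na:\n\tsleep 1; echo A > a\n\nb:\n\tcat a > b\n' > Makefile
make            # serial: a is built before b, so everything works
rm -f a b target
make -j2        # parallel: "cat a" in b's recipe runs while a is still absent
```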
What could be causing make to hang when compiling on multiple cores?
I have created a systemd service file and placed it in /etc/systemd/system/anfragen-3dkonfig-mapper.service. I ran systemctl daemon-reload and systemctl daemon-reexec and rebooted the system.

systemctl enable anfragen-3dkonfig-mapper results in:
Failed to enable unit: Unit file anfragen-3dkonfig-mapper.service does not exist.

systemctl start anfragen-3dkonfig-mapper results in:
Failed to start anfragen-3dkonfig-mapper.service: Unit anfragen-3dkonfig-mapper.service not found.

ls -lh /etc/systemd/system/anfragen-3dkonfig-mapper.service outputs:
-rw-r--r--. 1 root root 440 Mar 19 12:08 /etc/systemd/system/anfragen-3dkonfig-mapper.service

cd /root && systemd-analyze verify anfragen-3dkonfig-mapper.service has an exit code of 0 and prints no output.

mount shows:
/dev/sda2 on / type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

There are no other mounts touching /usr or /etc. The contents of the service file are:

[Unit]
Description=Anfragen 3D Konfigurations Mapper Service
After=network.target

[Service]
Restart=always
ExecStartPre=-/usr/bin/podman stop anfragen-3dkonfig-mapper
ExecStartPre=-/usr/bin/podman rm anfragen-3dkonfig-mapper
ExecStart=/usr/bin/podman run --rm --name anfragen-3dkonfig-mapper-app -p 10010:10000 anfragen-3dkonfig-mapper-app:0.0.1
ExecStop=/usr/bin/podman stop anfragen-3dkonfig-mapper

[Install]
WantedBy=multi-user.target

All above commands were run as the root user.

Operating System: CentOS Linux release 8.0.1905 (Core)
Systemd version: 239
Linux kernel: Linux version 4.18.0-80.11.2.el8_0.x86_64 ([email protected]) (gcc version 8.2.1 20180905 (Red Hat 8.2.1-3) (GCC))

I vaguely remember having a similar problem with another service file some months ago, which just magically started working after a few hours of poking around and renaming the service file back and forth. I'm interested in two things: How does one debug such a problem? What is wrong?
As hinted at by @JdeBP, wrong SELinux file labels are the reason for the behavior. The . character in the output of ls indicates that there is a security context set for the file. So be attentive to the . in the ls output!

cd /etc/systemd/system && ls -lhZ some-other-service.service anfragen-3dkonfig-mapper.service prints:

-rw-r--r--. 1 root root unconfined_u:object_r:admin_home_t:s0 440 Mar 19 12:08 anfragen-3dkonfig-mapper.service
-rw-r--r--. 1 root root unconfined_u:object_r:systemd_unit_file_t:s0 457 Feb 24 11:42 some-other-service.service

It can be seen that the other service file has the systemd_unit_file_t label, while the broken service doesn't. This can be fixed with restorecon anfragen-3dkonfig-mapper.service. After this the labels look as follows:

-rw-r--r--. 1 root root unconfined_u:object_r:systemd_unit_file_t:s0 440 Mar 19 12:08 anfragen-3dkonfig-mapper.service
-rw-r--r--. 1 root root unconfined_u:object_r:systemd_unit_file_t:s0 457 Feb 24 11:42 some-other-service.service

systemd now behaves as expected.
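The same debugging routine can be sketched for any similarly "invisible" unit. The helper names below are hypothetical; the commands (ls -Z, restorecon) are standard, and this assumes an SELinux-enabled system:

```shell
# Sketch: inspect a unit's SELinux label, then reset it to the policy
# default. After relabeling, reload systemd so it re-reads the unit.
check_unit_label() {
    ls -Z "/etc/systemd/system/$1"   # expect systemd_unit_file_t here
}
fix_unit_label() {
    restorecon -v "/etc/systemd/system/$1" && systemctl daemon-reload
}
# check_unit_label anfragen-3dkonfig-mapper.service
# fix_unit_label anfragen-3dkonfig-mapper.service
```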
Service file exists but is not found by systemd
The Armis Lab has discovered a new attack vector affecting all devices with Bluetooth enabled, including Linux and IoT systems.

BlueBorne attack on Linux: Armis has disclosed two vulnerabilities in the Linux operating system which allow attackers to take complete control over infected devices. The first is an information leak vulnerability, which can help the attacker determine the exact version used by the targeted device and adjust his exploit accordingly. The second is a stack overflow which can lead to full control of a device. For instance, all devices with Bluetooth enabled should be marked as malicious. The infected devices will create a malicious network allowing the attacker to take control of all devices outside of its Bluetooth range. Using Bluetooth on a Linux system to connect peripheral devices (keyboards, mice, headphones, etc.) puts Linux at various risks. This attack does not require any user interaction, authentication or pairing, making it also practically invisible. All Linux devices running BlueZ are affected by the information leak vulnerability (CVE-2017-1000250). All my Linux OSes with Bluetooth enabled are marked as vulnerable after a check with the BlueBorne Vulnerability Scanner (an Android application by Armis; discovering vulnerable devices requires device discovery to be enabled, but the attack itself requires only Bluetooth to be enabled). Is there a way to mitigate the BlueBorne attack when using Bluetooth on a Linux system?
The coordinated disclosure date for the BlueBorne vulnerabilities was September 12, 2017; you should see distribution updates with fixes for the issues shortly thereafter. For example: RHEL; Debian CVE-2017-1000250 and CVE-2017-1000251. Until you can update the kernel and BlueZ on affected systems, you can mitigate the issue by disabling Bluetooth (which might have adverse effects of course, especially if you use a Bluetooth keyboard or mouse):

blacklist the core Bluetooth modules:

printf "install %s /bin/true\n" bnep bluetooth btusb >> /etc/modprobe.d/disable-bluetooth.conf

disable and stop the Bluetooth service:

systemctl disable bluetooth.service
systemctl mask bluetooth.service
systemctl stop bluetooth.service

remove the Bluetooth modules:

rmmod bnep
rmmod bluetooth
rmmod btusb

(this will probably fail at first with an error indicating other modules are using these; you’ll need to remove those modules and repeat the above commands). If you want to patch and rebuild BlueZ and the kernel yourself, the appropriate fixes are available here for BlueZ and here for the kernel.
How do I secure Linux systems against the BlueBorne remote attack?
I have a Linux CentOS server; the OS plus packages used around 5GB. Then I transferred 97GB of data from a Windows server into two folders on this Linux server. After calculating the disk usage, I see that the total size of the two folders plus the OS is larger than the disk used size. Running du -sh on each folder, one uses 50GB, the other 47GB. But per df -h, the used space is 96GB. (50GB + 47GB + 5GB) > 96GB. Is there any problem? Those two folders contain lots of files (1 million+). Thanks.
This page gives some insight on why the two report different values, though it seems to suggest that your du size should be the smaller of the two. df uses total allocated blocks, while du only looks at the files themselves, excluding metadata such as inodes, which still require blocks on the disk. Additionally, if a file is deleted while an application has it open, du will report its space as free but df will not until the application exits.
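The deleted-but-open case is easy to demonstrate from a shell (a small sketch; it relies on /proc, so Linux only):

```shell
# A name-deleted file still occupies blocks while a descriptor is open:
# df keeps counting it, but du (which walks file names) no longer sees it.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1024 count=100 2>/dev/null
exec 3<"$f"             # hold the file open on descriptor 3
rm "$f"                 # the name is gone; du can no longer count it
ls -l "/proc/$$/fd/3"   # the link target is shown with "(deleted)"
exec 3<&-               # closing the descriptor finally frees the blocks
```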
Why is there a discrepancy in disk usage reported by df and du? [duplicate]
Why are most Linux distributions not POSIX-compliant? I've seen in lots of places that they're not (e.g. Mostly POSIX-compliant) but there's been no real explanation to back this up. Is there something the C library and/or tools could do to get around this (i.e. no modifications to the kernel itself)? What needs to be done? The supposed duplicate is asking which Linux distribution is POSIX-compliant; this is asking why most Linux distributions aren't POSIX-compliant. I'm asking for specific details (i.e. some function or command isn't compliant), not the reasons the specific distributions don't (try to) get certified. This comment from @PhilipCouling (thanks!) explains it well: Compliance and certification are different subjects. The answers point to cost of (re)certification which is irrelevant to the subject of (non)compliance.
POSIX does not specify a kernel interface, so Linux (the kernel) is largely irrelevant here. It specifies the system interface, various tools, and extensions to the C standard, which could exist on top of any kernel. The kernel itself is therefore neither compliant nor non-compliant in any meaningful sense: POSIX simply doesn't mention it, so you can describe it either way, at your option. There are UNIX®-certified Linux distributions, so it is certainly possible to have fully POSIX-compliant operating systems using Linux. Huawei's EulerOS is one that has been certified and that you can buy if you'd like. Most of the rest haven't paid their money and so don't have access to the test suite to check conformance. Whether they would satisfy it in practice is not clear, but some do try harder than others. I suspect that some of the BSDs are closer than most Linux distributions, but that's a guess: for example, I know that execlp("cd", "/", NULL) fails on most Linux distributions, but works on many BSDs and is required by POSIX.
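The execlp("cd", ...) point is easy to probe from a shell: POSIX requires the standard utilities, cd among them, to be invokable via the exec family, not only as shell builtins, yet most Linux distributions ship no standalone cd executable. A quick sketch (the directory list is illustrative, not exhaustive):

```shell
# Look for an exec()-able cd utility in a few common binary directories.
# On most Linux systems none exists, which is one concrete example of
# non-conformance; many BSDs (and macOS) do install one.
found=no
for d in /bin /usr/bin /usr/sbin /usr/local/bin; do
    [ -x "$d/cd" ] && found=yes
done
echo "standalone cd utility present: $found"
```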
Why are most Linux distributions not POSIX-compliant?
Is it currently possible to set up LXC containers with X11 capabilities? I'm looking for the lightest available X11 container (memory-wise); hardware acceleration is a plus but not essential. If it is not currently possible, or readily available, is it known what functionality still needs to be implemented in order to support it?
Yes, it is possible to run a complete X11 desktop environment inside an LXC container. Right now, I do this on Arch Linux. I won't say it's "light" as I haven't gone as far as trying to strip out stuff from the standard package manager install, but I can confirm that it does work very well. You have to install any kernel drivers on the HOST as well as in the container, such as the graphics driver (I use nvidia). You have to make the device nodes in /dev accessible inside the container by configuring your container.conf to allow it. You then need to make sure that those device nodes are created inside the container (i.e. mknod). So, to answer your question: YES, it does work. If I can help any further or provide more details please do let me know.

--- additional information provided ---

In my container, /etc/inittab starts in run level 5 and launches "slim". Slim is configured to use vt09:

# Path, X server and arguments (if needed)
# Note: -xauth $authfile is automatically appended
default_path /bin:/usr/bin:/usr/local/bin
default_xserver /usr/bin/X
xserver_arguments -nolisten tcp vt09

I am not using a second X display on my current vt, but a completely different one (I can switch between many of these using CTRL+ALT+Fn). If you aren't using slim, you can use a construct like this to start X on another vt:

/usr/bin/startx -- :10 vt10

That will start X on display :10 and put it on vt10 (CTRL+ALT+F10). These don't need to match but I think it's neater if they do.
You do need your container config to make the relevant devices available, like this:

# XOrg Desktop
lxc.cgroup.devices.allow = c 4:10 rwm  # /dev/tty10 X Desktop
lxc.cgroup.devices.allow = c 195:* rwm # /dev/nvidia Graphics card
lxc.cgroup.devices.allow = c 13:* rwm  # /dev/input/* input devices

And you need to make the devices in your container:

# display vt device
mknod -m 666 /dev/tty10 c 4 10
# NVIDIA graphics card devices
mknod -m 666 /dev/nvidia0 c 195 0
mknod -m 666 /dev/nvidiactl c 195 255
# input devices
mkdir /dev/input
chmod 755 /dev/input
mknod -m 666 /dev/input/mice c 13 63 # mice

I also manually configured input devices (since we don't have udev in the container):

Section "ServerFlags"
    Option "AutoAddDevices" "False"
EndSection

Section "ServerLayout"
    Identifier "Desktop"
    InputDevice "Mouse0" "CorePointer"
    InputDevice "Keyboard0" "CoreKeyboard"
EndSection

Section "InputDevice"
    Identifier "Keyboard0"
    Driver "kbd"
    Option "XkbLayout" "gb"
EndSection

Section "InputDevice"
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "auto"
    Option "Device" "/dev/input/mice"
    Option "ZAxisMapping" "4 5 6 7"
EndSection

The above goes in a file /etc/X11/xorg.conf.d/10-input.conf. Not sure if any of that will help, but good luck!
Linux - LXC; deploying images with tiniest possible X11
I have been advised by many senior Unix/Linux administrators to go through "The Linux Documentation Project" at www.tldp.org. It's undoubtedly a very rich site, but I saw that many tutorials (as seen here and here) are more than 3 to 5 years old. I do know and understand that it is definitely worth going through them thoroughly at least once, but I just want to know: for topics which are important to me as a learning Linux administrator, should I also search for the latest articles on the same topics on the internet? Hope I do not offend anyone with this question.
A large amount of TLDP is obsolete. The howtos are usually good, but many of them are seriously out of date and contain advice that is now counterproductive. Check the date of each howto before deciding whether to read and trust it. Even back in the day, howtos were not to be followed blindly. For example, many howtos start with instructions on compiling some software which, by the time most readers read the document, was bundled in distributions. I'm not aware of any similar project that's more up-to-date. The current trend is towards community-edited guides, wiki-style. You get the benefit of more diverse experiences, but you lose the trust that you may (but then again, may not) put in a single author, and information is often spread over more pages and less easy to get at offline. Also remember that information in wikis can get obsolete too. I don't think TLDP is a worthwhile resource for learning Linux administration nowadays. 10 years ago, yes, but not now. I recommend starting with a book, then exploring the wiki for your distribution (most have one), exploring /etc, reading Unix & Linux Stack Exchange, skimming newsgroups... Refer to howtos if you find recent ones on a subject that interests you.
How updated and relevant is "The Linux Documentation Project"?
The rootfs is a squashfs image and my bootloader is loading it into some address in SDRAM. What parameters do I need to pass to the kernel so it can mount the rootfs from there? Squashfs support is built in, and it already works with root=/dev/mtdblock2 rootfstype=squashfs when booting from flash. EDIT: This is a MIPS-based embedded device, using a custom bootloader. Normally, the bootloader extracts the compressed kernel from the flash into the SDRAM, and then the kernel mounts /dev/mtdblock2 as the rootfs. I am trying to improve the bootloader so it can download an image to RAM and boot without writing to the flash. I cannot figure out how to make Linux mount a filesystem image in RAM as the rootfs.
I would use an initramfs. (http://www.kernel.org/doc/Documentation/filesystems/ramfs-rootfs-initramfs.txt) Many Linux distributions use an initramfs (not to be confused with an initrd, they are different) during the boot process, mostly to be able to start userspace programs very early in the boot process. However, you can use it for whatever you want. The benefit of an initramfs over an initrd is that an initramfs uses a tmpfs filesystem while an initrd uses a ram block device. The key difference here is that for an initrd, you must preallocate all the space for the filesystem, even if you're not going to use all that space. So if you don't use the filesystem space, you waste ram, which on an embedded device, is often a scarce resource. Tmpfs is a filesystem which runs out of ram, but only uses as much ram as is currently in use on the filesystem. So if you delete a file from a tmpfs, that ram is immediately freed up. Now normally an initramfs is temporary, only used to run some programs extremely early in the boot process. After those programs run, control is turned over to the real filesystem running on a physical disk. However, you do not have to do that. There is nothing stopping you from running out of the initramfs indefinitely.
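For concreteness, an initramfs is just a (typically gzipped) cpio archive in "newc" format, which the kernel unpacks into tmpfs and then runs /init as PID 1. A minimal build sketch (the helper name, the /init contents, and the busybox path are illustrative assumptions):

```shell
# Sketch: pack a directory tree into a kernel-loadable initramfs image.
build_initramfs() {
    src=$1 out=$2
    ( cd "$src" && find . -print0 | cpio --null -o -H newc ) | gzip -9 > "$out"
}
# Illustrative layout:
#   mkdir -p root/bin && cp /bin/busybox root/bin/
#   printf '#!/bin/busybox sh\nexec /bin/busybox sh\n' > root/init
#   chmod +x root/init
#   build_initramfs root initramfs.cpio.gz
```

The resulting image can be handed to the kernel by the bootloader (or linked in at build time via CONFIG_INITRAMFS_SOURCE), avoiding any write to flash.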
How do I have Linux boot with a rootfs in RAM?
I am trying to run MongoDB on a Debian 8.5 machine. When I installed the package (pre-built from percona.com), I noticed the following files:

/etc/init.d/mongod (1)
/lib/systemd/system/mongod.service (2)

I understand that /etc/init.d/mongod is called at boot, or in other particular system states, as long as it is registered via update-rc.d. This is perfectly fine for me. The script initializes and launches the mongo daemon. It seems to have “triggers” for start, stop, restart, etc., and as far as I understand I can trigger those with sudo service mongod <action>. /lib/systemd/system/mongod.service seems to do the same thing (i.e. run mongo), but with less configuration - just one line in the ExecStart parameter:

[Unit]
Description=MongoDB (High-performance, schema-free document-oriented database)
After=time-sync.target network.target

[Service]
Type=forking
User=mongod
Group=mongod
PermissionsStartOnly=true
EnvironmentFile=/etc/default/mongod
ExecStart=/usr/bin/env bash -c "/usr/bin/mongod $OPTIONS > ${STDOUT} 2> ${STDERR}"
PIDFile=/var/run/mongod.pid

[Install]
WantedBy=multi-user.target

As far as I understand this can be triggered with sudo systemctl start mongod. I don’t understand if it is called at boot or not. I don’t understand why there is a need for two of these ‘service’ files, and how I can get rid of one (possibly the one in /lib/systemd, since it is much simpler). I don’t understand if there’s any relation between the two. I have read that systemctl works on init.d scripts too, and in this case I don’t understand which of the two files will be triggered by systemctl start mongod. I think there’s some redundancy and I should choose just one of the two ways. And I want to be sure that it is called at boot and callable by command (like service or systemctl). Could you help me clear my mind?
When you have both an init.d script, and a systemd .service file with the same name, systemd will use the service file for all operations. I believe the service command will just redirect to systemd. The init.d script will be ignored. Use systemd. It's new in Debian 8, but it's the default. Systemd service files are supposed to look simpler than init.d scripts. You didn't mention any specific feature you need that's not supported by the systemd service. If the service file was not included, systemd would happily use the init.d script. So the mongod package developer is telling you they think this systemd definition is better :). Look at the output of systemctl status mongod. If the service is enabled to be started at boot time, the Loaded: line will show "enabled". Otherwise you can use systemctl enable mongod. You can also include the option --now, and it will start mongod at the same time.
Confused about /etc/init.d vs. /lib/systemd/system services
1,380,795,325,000
From su's man page: For backward compatibility, su defaults to not change the current directory and to only set the environment variables HOME and SHELL (plus USER and LOGNAME if the target user is not root). It is recommended to always use the --login option (instead of its shortcut -) to avoid side effects caused by mixing environments. ... -, -l, --login Start the shell as a login shell with an environment similar to a real login: o clears all the environment variables except TERM o initializes the environment variables HOME, SHELL, USER, LOGNAME, and PATH o changes to the target user's home directory o sets argv[0] of the shell to '-' in order to make the shell a login shell It's hard to tell if there's any difference between - and --login (or supposedly just -l). Namely, the man page says "instead of its shortcut -", but all these options are grouped together, and I don't see an explanation of the difference, if it exists at all. UPD I checked the question, which is supposed to solve my problem. The question is basically about difference between su and su -. And I'm asking about difference between su - and su --login. So no, it doesn't solve it at all.
Debian's manual entry seems to be more enlightening: -, -l, --login Provide an environment similar to what the user would expect had the user logged in directly. When - is used, it must be specified before any username. For portability it is recommended to use it as last option, before any username. The other forms (-l and --login) do not have this restriction.
What's the difference between `su -` and `su --login`?
1,380,795,325,000
I am developing an application and I would like it to print some runtime stats to the console on demand. kill and signals came to my mind immediately. Reading through Unix signals on Wiki, SIGINFO seems like the way to go because: It is intended for these purposes Does not terminate the process if the signal handler is not implemented (contrary to SIGUSRx - see here) However, by inspecting the output of kill -l, it seems my server does not have this signal implemented. My questions are: Why is SIGINFO missing on my system? Is it absent on all GNU Linux systems? Is there an easy (i.e. no kernel/glibc recompilation) way to enable this signal? If none, what would be the hard way? What alternative signal could I use for my purposes that would not cause any side-effects if not handled by the target process? (I already assume none since I could not find any other suitable signal on the glibc's manual) Linux metainfo: Linux whatever 3.18.2-2-ARCH #1 SMP PREEMPT Fri Jan 9 07:37:51 CET 2015 x86_64 GNU/Linux Update: I am still looking for more information as to why this signal is conditionally excluded from other systems than BSD (see comments below). The signal seems to be quite useful for many purposes so it is hard for me to believe it is just a matter of whim - so what's the real showstopper for this signal to be available on Linux?
There was talk (back in the linux 0.x-1.x days) of adding this (because it was useful on BSD systems) but if I recall correctly there were reasons it was harder to do right on Linux than BSD at the time. Note that what you're asking about is only a small part of the feature (namely, you're talking about an stty info entry for control-T causing the kernel to deliver SIGINFO to the tty's process group) - that part is "easy" - but having the kernel report information about the process status when it doesn't handle the signal (because at the time very few things had any support for that, the feature was mainly about "is this process spinning or hung" and "what process is it anyway") is harder - ISTR there even being security/trust issues about displaying that information accurately, and whether it should be associated with the Secure Attention Key path. That said, there might be some value in the "easy" version that only sends the signal... (From personal memory; a quick web search doesn't turn up anything obvious but I think one would have to dig into really old archives to find the discussion.)
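If all you need is on-demand stats, the usual Linux workaround is SIGUSR1: its dangerous default (termination) only applies while the process has no handler installed, so once your application traps it, the signal is safe to use. A shell sketch of the idea (the worker script and its counter are made up for illustration):

```shell
# worker.sh prints its stats when it receives SIGUSR1 and exits on TERM.
cat > worker.sh <<'EOF'
#!/bin/sh
count=0
trap 'printf "items processed: %s\n" "$count"' USR1
trap 'exit 0' TERM
while :; do
    count=$((count + 1))
    sleep 0.2 &    # sleep in the background so signals interrupt the wait
    wait $!
done
EOF
chmod +x worker.sh

# Demo: start it, ask for stats on demand, then stop it.
./worker.sh > stats.log &
pid=$!
sleep 1
kill -USR1 "$pid"    # report without stopping the process
sleep 1
kill -TERM "$pid"
wait "$pid" 2>/dev/null
cat stats.log
```

The same pattern works in any language that lets you install a signal handler; the process keeps running after every SIGUSR1 report.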
SIGINFO on GNU Linux (Arch Linux) missing
1,380,795,325,000
I am wondering about the difference between these two commands (only the order of their options differs):

tar -zxvf foo.tar.gz
tar -zfxv foo.tar.gz

The first one ran perfectly but the second one said:

tar: You must specify one of the `-Acdtrux' or `--test-label' options
Try `tar --help' or `tar --usage' for more information.

And tar with --test-label and -zfxv said:

tar (child): xv: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now

Then I looked at the tar manual and realised that all the examples there use the -f switch at the end! As far as I can tell there is no need for this restriction, or is there? In my view, switches should be order-free.
Looking at your error message, it is obvious that you did not use tar but rather gtar. In general this may help to understand things: tar normally always needs a file argument. If it is missing, it will read/write from/to the system's default real tape device. star changed this in 1982 to use stdin/stdout by default and some other tar implementations (e.g. gtar) followed this example recently. tar does not implement the leading - for options that are called key letters in the case of the tar command. Some implementations later added - as a no-op key letter for users' convenience but you cannot rely on this. The way tar parses its arguments (in particular the archive file argument) is highly risky. I have seen many tar archives that destroyed one of the files that should be in the archive because the related file argument was taken as tar archive file. star for this reason (if called natively as star) does not allow "f" to be concatenated with other options. If star is called tar it implements command line compatibility with tar, but still handles the argument for the "f" key letter differently: the argument is only permitted if it refers to a real device file or when (in write mode) the file does not yet exist. I recommend avoiding the risky original tar command line and rather using the modern safer command line syntax you get with star. Because of the problematic command line syntax of tar, there were the so called tar wars in the early 1990s. As a result, the program pax (Latin for "peace" in the tar wars) was created and standardized. pax however did not gain popularity as its syntax is less risky but also less intuitive than the tar syntax. Another problem may be that gpax is more or less unmaintained.
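The risk is easy to demonstrate with GNU tar (the scratch filenames below are arbitrary). When f is not the last key letter in a cluster, it silently consumes the wrong argument as the archive name:

```shell
printf 'hello\n' > file1

tar -czf ok.tar.gz file1   # f is last in the cluster, so it takes "ok.tar.gz"
tar -cfz file1             # f is NOT last: it takes "z" as the archive name...
ls -l z                    # ...so you end up with an archive literally called "z"
tar -tf z                  # which contains file1
```

This is exactly the class of accident described above: an argument meant as a flag or a member name becomes the archive file, potentially clobbering it.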
tar and its key letters: is it a bug or feature?
1,380,795,325,000
While I understand the greatness of udev and appreciate the developers' effort, I was simply wondering if there is an alternative to it. For instance, I might imagine there should be a way to make a startup script that creates most of the device nodes, which on my system (no changing hardware) are mostly the same anyway. The benefit or reason I would like to skip udev would be the same as for skipping dbus, namely reducing complexity and thereby increasing my chances of setting up the system more safely.
There are various alternatives to udev out there. Seemingly Gentoo can use something called mdev. Another option would be to attempt to use udev's predecessor devfsd. Finally, you can always create all the device files you need with mknod. Note that with the latter there is no need to create everything at boot time since the nodes can be created on disk and not in a temporary file system as with the other options. Of course, you lose the flexibility of having dynamically created device files when new hardware is plugged in (eg a USB stick). I believe the standard approach in this era was to have every device file you could reasonably need already created under /dev (ie a lot of device files). Of course the difficulty in getting any of these approaches to work in a modern distro is probably quite high. The Gentoo wiki mentions difficulties in getting mdev to work with a desktop environment (let alone outside of Gentoo). The last devfsd release was in 2002; I have no idea if it will even work at all with modern kernels. Creating the nodes manually is probably the most viable approach, but even disabling udev could be a challenge, particularly in distros using systemd (udev is now part of systemd, which suggests a strong dependency). My advice is stick with udev ;)
Are there alternatives to using `udev`?
1,380,795,325,000
Running Ubuntu 17.04, I was installing a piece of software from a non-repository distribution. I was supposed to move the software bin folder contents to /usr/bin (which was already iffy advice). It's one of those days, so what I did instead was:

mv /bin/* /usr/bin

So I screwed up: I accidentally moved all the files in /bin to /usr/bin, leaving /bin empty. Since I take it that /bin is system-critical, as a quick remedy I copied the /usr/bin contents back to /bin. Now my /bin and /usr/bin contents are identical, and both contain the files originally split between /bin and /usr/bin.

Is my Ubuntu in a broken state now? (I have not tried to reboot the computer yet; right now everything seems to still work.)
Is there a way to know which files have been moved/copied to /usr/bin most recently, so I could just manually take care of the situation? Are there usually overlapping files in /bin and /usr/bin?
Are there other ways to undo what I did?

I don't have Timeshift installed, so restoring backups is not an option, but there's nothing critical on the computer currently, so I could just admit to the screw-up and reinstall the whole Linux partition.
Is my Ubuntu in a broken state now?

Yes, your Ubuntu is broken. You messed up something important to package management. So in practice, back up your important data (at least /etc and /home), perhaps also the list of installed packages, e.g. the output of dpkg -l, and reinstall Ubuntu. (A non-novice could try to manage - like in other answers - but then he would not have made such a huge and basic mistake.)

I could just admit to screwup reinstall the whole linux partition.

That is probably what would consume less of your time. Keeping your current system with the help of other answers means keeping it in a very messy state (which would give you future headaches). Since you are reformatting your disk, consider putting /home in a separate partition (so future such mistakes won't lose your data). Before doing that, print on paper the output of df -h, df -hi and fdisk -l (they give information about disk space, both used and available). Be wise and allow a large enough system partition (the root file system); if you can afford it, 100 Gbytes is more than enough.

I was supposed to move the software bin -folder contents to /usr/bin

(Terminology: Unix has directories, not "folders".) That (moving to /usr/bin/) is very wrong. Either improve your $PATH (preferably) or at most add symlinks in /usr/bin/, and preferably move (or add symlinks to) executables in /usr/local/bin/. The wise approach is to never change /usr/bin/, /bin, /sbin, /usr/sbin/ outside of package management tools (e.g. dpkg, apt-get, aptitude, etc...). Read the FHS.
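A concrete sketch of the $PATH approach (the tool name "sometool" and the paths are made up for illustration):

```shell
# Hypothetical example: install "sometool" under a private prefix
# instead of /usr/bin. ($HOME/opt is a common choice; any writable
# prefix works - here $PWD is used so the demo is self-contained.)
PREFIX="$PWD/opt/sometool"
mkdir -p "$PREFIX/bin"

# Stand-in for the real binary, just for this demo:
printf '#!/bin/sh\necho "sometool ran"\n' > "$PREFIX/bin/sometool"
chmod +x "$PREFIX/bin/sometool"

# Put its bin directory on PATH (persist by adding this line to ~/.profile):
export PATH="$PREFIX/bin:$PATH"

command -v sometool   # found without touching /usr/bin
sometool
```

The package-managed directories stay untouched, and removing the tool later is just deleting its prefix directory.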
Moved /bin contents to /usr/bin, possible to undo?
1,380,795,325,000
It's a question about user space applications, but hear me out! Three "applications", so to speak, are required to boot a functional distribution of Linux:

- Bootloader - For embedded, typically that's U-Boot, although not a hard requirement.
- Kernel - That's pretty straightforward.
- Root Filesystem - Can't boot to a shell without it. Contains the filesystem the kernel boots to, and where init is called from.

My question is in regard to #3. If someone wanted to build an extremely minimal rootfs (for this question let's say no GUI, shell only), what files/programs are required to boot to a shell?
That entirely depends on what services you want to have on your device.

Programs

You can make Linux boot directly into a shell. It isn't very useful in production — who'd just want to have a shell sitting there — but it's useful as an intervention mechanism when you have an interactive bootloader: pass init=/bin/sh to the kernel command line. All Linux systems (and all unix systems) have a Bourne/POSIX-style shell in /bin/sh.

You'll need a set of shell utilities. BusyBox is a very common choice; it contains a shell and common utilities for file and text manipulation (cp, grep, …), networking setup (ping, ifconfig, …), process manipulation (ps, nice, …), and various other system tools (fdisk, mount, syslogd, …). BusyBox is extremely configurable: you can select which tools you want and even individual features at compile time, to get the right size/functionality compromise for your application. Apart from sh, the bare minimum that you can't really do anything without is mount, umount and halt, but it would be atypical to not have also cat, cp, mv, rm, mkdir, rmdir, ps, sync and a few more. BusyBox installs as a single binary called busybox, with a symbolic link for each utility.

The first process on a normal unix system is called init. Its job is to start other services. BusyBox contains an init system. In addition to the init binary (usually located in /sbin), you'll need its configuration files (usually called /etc/inittab — some modern init replacements do away with that file but you won't find them on a small embedded system) that indicate what services to start and when. For BusyBox, /etc/inittab is optional; if it's missing, you get a root shell on the console and the script /etc/init.d/rcS (default location) is executed at boot time.

That's all you need, beyond of course the programs that make your device do something useful.
For example, on my home router running an OpenWrt variant, the only programs are BusyBox, nvram (to read and change settings in NVRAM), and networking utilities.

Unless all your executables are statically linked, you will need the dynamic loader (ld.so, which may be called by different names depending on the choice of libc and on the processor architecture) and all the dynamic libraries (/lib/lib*.so, perhaps some of these in /usr/lib) required by these executables.

Directory structure

The Filesystem Hierarchy Standard describes the common directory structure of Linux systems. It is geared towards desktop and server installations: a lot of it can be omitted on an embedded system. Here is a typical minimum.

- /bin: executable programs (some may be in /usr/bin instead)
- /dev: device nodes (see below)
- /etc: configuration files
- /lib: shared libraries, including the dynamic loader (unless all executables are statically linked)
- /proc: mount point for the proc filesystem
- /sbin: executable programs. The distinction with /bin is that /sbin is for programs that are only useful to the system administrator, but this distinction isn't meaningful on embedded devices. You can make /sbin a symbolic link to /bin.
- /mnt: handy to have on read-only root filesystems as a scratch mount point during maintenance
- /sys: mount point for the sysfs filesystem
- /tmp: location for temporary files (often a tmpfs mount)
- /usr: contains subdirectories bin, lib and sbin. /usr exists for extra files that are not on the root filesystem. If you don't have that, you can make /usr a symbolic link to the root directory.

Device files

Here are some typical entries in a minimal /dev:

- console
- full (writing to it always reports “no space left on device”)
- log (a socket that programs use to send log entries), if you have a syslogd daemon (such as BusyBox's) reading from it
- null (acts like a file that's always empty)
- ptmx and a pts directory, if you want to use pseudo-terminals (i.e. any terminal other than the console) — e.g. if the device is networked and you want to telnet or ssh in
- random (returns random bytes, risks blocking)
- tty (always designates the program's terminal)
- urandom (returns random bytes, never blocks but may be non-random on a freshly-booted device)
- zero (contains an infinite sequence of null bytes)

Beyond that you'll need entries for your hardware (except network interfaces, these don't get entries in /dev): serial ports, storage, etc. For embedded devices, you would normally create the device entries directly on the root filesystem. High-end systems have a script called MAKEDEV to create /dev entries, but on an embedded system the script is often not bundled into the image. If some hardware can be hotplugged (e.g. if the device has a USB host port), then /dev should be managed by udev (you may still have a minimal set on the root filesystem).

Boot-time actions

Beyond the root filesystem, you need to mount a few more filesystems for normal operation:

- procfs on /proc (pretty much indispensable)
- sysfs on /sys (pretty much indispensable)
- a tmpfs filesystem on /tmp (to allow programs to create temporary files that will be in RAM, rather than on the root filesystem which may be in flash or read-only)
- tmpfs, devfs or devtmpfs on /dev if dynamic (see udev in “Device files” above)
- devpts on /dev/pts if you want to use pseudo-terminals (see the remark about pts above)

You can make an /etc/fstab file and call mount -a, or run mount manually.

Start a syslog daemon (as well as klogd for kernel logs, if the syslogd program doesn't take care of it), if you have any place to write logs to.

After this, the device is ready to start application-specific services.

How to make a root filesystem

This is a long and diverse story, so all I'll do here is give a few pointers.
The root filesystem may be kept in RAM (loaded from a (usually compressed) image in ROM or flash), or on a disk-based filesystem (stored in ROM or flash), or loaded from the network (often over TFTP) if applicable. If the root filesystem is in RAM, make it the initramfs — a RAM filesystem whose content is created at boot time.

Many frameworks exist for assembling root images for embedded systems. There are a few pointers in the BusyBox FAQ. Buildroot is a popular one, allowing you to build a whole root image with a setup similar to the Linux kernel and BusyBox. OpenEmbedded is another such framework. Wikipedia has an (incomplete) list of popular embedded Linux distributions. An example of embedded Linux you may have near you is the OpenWrt family of operating systems for network appliances (popular on tinkerers' home routers). If you want to learn by experience, you can try Linux from Scratch, but it's geared towards desktop systems for hobbyists rather than towards embedded devices.

A note on Linux vs the Linux kernel

The only behavior that's baked into the Linux kernel is the first program that's launched at boot time. (I won't get into initrd and initramfs subtleties here.) This program, traditionally called init, has process ID 1 and has certain privileges (immunity to KILL signals) and responsibilities (reaping orphans). You can run a system with a Linux kernel and start whatever you want as the first process, but then what you have is an operating system based on the Linux kernel, and not what is normally called “Linux” — Linux, in the common sense of the term, is a Unix-like operating system whose kernel is the Linux kernel. For example, Android is an operating system which is not Unix-like but based on the Linux kernel.
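For a first experiment, the skeleton described above can be sketched in shell. This only lays out the directory tree, the BusyBox applet symlinks, and a stub rcS; the busybox binary itself is assumed, not built here:

```shell
# Skeleton for a minimal root filesystem.
mkdir -p rootfs/bin rootfs/sbin rootfs/etc/init.d rootfs/dev \
         rootfs/proc rootfs/sys rootfs/tmp rootfs/mnt

# BusyBox installs as one binary plus a symlink per applet:
for applet in sh mount umount halt cat cp mv rm mkdir rmdir ps sync; do
    ln -sf busybox "rootfs/bin/$applet"
done

# BusyBox init with no /etc/inittab runs /etc/init.d/rcS at boot,
# so mount the essential virtual filesystems there:
cat > rootfs/etc/init.d/rcS <<'EOF'
#!/bin/sh
mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount -t tmpfs tmpfs /tmp
EOF
chmod +x rootfs/etc/init.d/rcS
```

Copying a statically linked busybox into rootfs/bin and adding the device nodes would then give you something close to the minimum described above.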
What are the minimum root filesystem applications that are required to fully boot linux?
1,380,795,325,000
The problem I am getting is: when I enter the command su - root at the beginning of my shell script file, it prompts the user to enter the password and then does NOT continue with the rest of the shell script. I then have to manually locate and run the shell script via the terminal. I want the script to make sure that the user logs in as root and then continue with the rest of the shell script. In other words, I want to run the script as any user, but as soon as the script begins to execute, the user must change to root and then continue on with the rest of the script as root until it is done. Can this be done?
This is very easy to accomplish:

#!/bin/sh
[ "$(whoami)" != "root" ] && exec sudo -- "$0" "$@"

When the current user isn't root, re-exec the script through sudo. Note that I am using sudo here instead of su. This is because it allows you to preserve arguments. If you use su, your command would have to be su -c "$0 $@" which would mangle your arguments if they have spaces or special shell characters.

If your shell is bash, you can avoid the external call to whoami:

(( EUID != 0 )) && exec sudo -- "$0" "$@"
Prompt user to login as root when running a shell script
1,380,795,325,000
How can I set the sender name and email address when using the mail command in a shell script?
Try this: mail -s 'Some Subject' -r 'First Last <[email protected]>' [email protected] This sets both From: and the envelope sender.
Set sender name in mail function
1,380,795,325,000
Possible Duplicate: Is it possible to install the linux kernel alone? I had watched the documentary Revolution OS and there is a basic operating System by GNU and the kernel by Linux. Then there come distributions which are modified versions of the Linux operating system. I want the Operating System which is the default Linux operating system and not any distribution. I have tried to look at the Linux website but there is information about distributions only. Is the default Linux OS not available for users?
Linux by itself is not very useful because there are no applications: it is purely a kernel. In fact, when the kernel finishes booting, the first thing it does is launch an application called init. If that application isn't there, you get a big error message, and you can't do anything with it*. Distributions are so named because they distribute the Linux kernel along with a set of applications. Likewise, the GNU utilities by themselves are not useful without a kernel. You could put them on a storage medium and turn on a computer, but there is nothing there to run those programs. Also, even if there were something that started init, init and all the other programs rely on the kernel for services. For instance, the first thing that the program that is usually called init does is open a file /etc/inittab; to open that file, it calls a function open(); that function is provided by the kernel. Now, you can build a distribution that has no (or few) GNU applications. See Alpine Linux for an example. This is why I do not call Linux GNU/Linux; when I say Linux, I am not referring to the subset of Linux systems that have GNU utilities. *Technically, there are some things you can do with just the kernel.
Linux without any distribution [duplicate]
1,312,416,838,000
I was wondering how to get information about the following things from the command line in Linux: word (i.e. the size that the CPU can process at one time, which may not be the OS bit-depth), address size (i.e. the number of bits in an actual address), address bus size (not sure if it is the same as address size by definition, but I think they are different and may not agree), data bus size, instruction size?
Do a cat /proc/cpuinfo and look at the results:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 23
model name      : Genuine Intel(R) CPU U4100 @ 1.30GHz
stepping        : 10
cpu MHz         : 1200.000
cache size      : 2048 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
apicid          : 1
initial apicid  : 1
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm xsave lahf_lm
bogomips        : 2593.48
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

A lot of the information that you are looking for can be inferred from this.
How to get information about word, address size, address bus size, data bus size and instruction bus size?
1,312,416,838,000
When I set up my Debian 6, I was wondering, which users apart from root whose password I know can log into my system via SSH? When I install Apache 2 a user called www-data gets created. Does this user have the right to log into my system via SSH? But if there was some default password for www-data everyone could log in, seems unlikely to me. Where do I have a list which users are allowed to log into my system via SSH? Can't find anything in the ssh config files.
Paradeepchhetri isn't exactly correct. Debian's unmodified sshd_config has the following:

PubkeyAuthentication yes
PermitEmptyPasswords no
UsePAM yes

Thus, login via ssh would only work for users that have a populated password field in /etc/shadow or an ssh key in ~/.ssh/authorized_keys. Note that the default value for PubkeyAuthentication is yes and for PermitEmptyPasswords is no, so even if you remove them the behavior will be the same. In the question example, www-data by default won't be allowed to log in since Debian's installer neither assigns a password nor creates a key for www-data. pam_access, AllowUsers and AllowGroups in sshd_config can be used for finer control if that's needed. In Debian it's strongly encouraged to UsePAM.
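To see which local accounts have a usable password at all, you can filter /etc/shadow for entries whose password field is neither empty nor locked with "!" or "*". The sketch below runs on a made-up sample file in shadow(5) format (the hashes are fake); on a real system you would point the same awk filter at /etc/shadow, as root:

```shell
# Sample in shadow(5) format (fields: name:password:...):
cat > shadow.sample <<'EOF'
root:$6$salt$fakehash:19000:0:99999:7:::
daemon:*:19000:0:99999:7:::
www-data:*:19000:0:99999:7:::
backup:!:19000:0:99999:7:::
EOF

# Print accounts whose password field is a real hash
# (not "*", not "!"-locked, not empty):
awk -F: '$2 !~ /^[!*]/ && $2 != "" { print $1 }' shadow.sample
```

On this sample only root is printed; www-data and the other system accounts are locked, which matches the behavior described above.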
Which users are allowed to log in via SSH by default?
1,312,416,838,000
I am using convert to create a PDF file from about 2,000 images:

convert 0001.miff 0002.miff ... 2000.miff -compress jpeg -quality 80 out.pdf

The process terminates reproducibly when the output file has reached 2^31−1 bytes (2 GB − 1) with the message convert: unknown `out.pdf'. The PDF file specification allows for ≈10 GB. I tried to pull more information from -debug all, but I didn’t see anything helpful in the logging output. The file system is ext3, which allows for files at least up to 16 GiB (maybe more). As to ulimit, file size is unlimited. /etc/security/limits.conf only contains commented-out lines. What else can cause this and how can I increase the limit?

ImageMagick version: 6.4.3 2016-08-05 Q16 OpenMP
Distribution: SLES 11.4 (i586)
Your limitation does not stem from the filesystem, nor, I think, from package versions. Your 2 GB limit comes from you using a 32-bit version of your OS. The way to get past the limit would be installing a 64-bit version, if the hardware supports it. See Large file support: Traditionally, many operating systems and their underlying file system implementations used 32-bit integers to represent file sizes and positions. Consequently, no file could be larger than 2^32 − 1 bytes (4 GB − 1). In many implementations, the problem was exacerbated by treating the sizes as signed numbers, which further lowered the limit to 2^31 − 1 bytes (2 GB − 1).
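You can confirm the bitness of your userspace from the shell:

```shell
getconf LONG_BIT   # prints 32 on a 32-bit userspace, 64 on a 64-bit one
uname -m           # machine architecture, e.g. i586 vs x86_64
```

On the SLES i586 install from the question, getconf would report 32, which is consistent with the 2 GB ceiling.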
Get over 2 GB limit creating PDFs with ImageMagick
1,312,416,838,000
I use screen as my window manager through PuTTY. Screen has been great, but I need a way to increase my buffer when I run commands. I have no buffer when I scroll up; no stdout is saved beyond my window size on any terminal. How can I increase this? I can't seem to find an option in the commands; Ctrl + a ? doesn't seem to have what I am looking for.
Press Ctrl + a then :, and enter scrollback 1234 to set your buffer to 1234 lines. You enter scrollback mode ("copy mode") with Ctrl + a Esc, then move in vi-style, and leave copy mode with another Esc.
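To make the larger buffer permanent rather than per-session, you can set it in your ~/.screenrc (10000 is just an example size):

```
# ~/.screenrc
defscrollback 10000
```

New windows created after this will start with that scrollback size.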
Increase buffer size while running screen
1,312,416,838,000
I want to print all lines from a file until a match word. Please advise how to do that with awk. For example, I want to print all lines until the word PPP; note that the first line could be different from AAA (any word).

cat file.txt

AAA ( the first line/word could be any word !!!!! )
BBB
JJJ
OOO
345
211
BBB
OOO
OOO
PPP
MMM
(((
&&&

so I need to get this:

AAA
BBB
JJJ
OOO
345
211
BBB
OOO
OOO
PPP

Other example (want to print until KJGFGHJ):

cat file.txt1

HG
KJGFGHJ
KKKK

so I need to get:

HG
KJGFGHJ
Try:

$ awk '1;/PPP/{exit}' file
AAA
BBB
JJJ
OOO
345
211
BBB
OOO
OOO
PPP
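If you prefer sed, an equivalent is the q command, which quits right after printing the line that matched. Demonstrated on the question's second sample file:

```shell
printf 'HG\nKJGFGHJ\nKKKK\n' > file.txt1
sed '/KJGFGHJ/q' file.txt1
```

This prints HG and KJGFGHJ, then stops before KKKK, just like the awk version.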
awk + print lines from the first line until match word
1,312,416,838,000
I'm running a custom built Linux machine, so not all Linux commands are available. I execute network related commands, so I need to set a default gateway right before I run my command, then remove that gateway immediately afterward. To do that I run all my commands in one line:

/sbin/route add default gw 10.10.10.10;my command;/sbin/route del default gw 10.10.10.10;

The problem is, for some reason I once found 2 default gateways on the same machine, which caused all my commands to fail: even if I set my default gateway before running my test, routing is still messed up and the test can't run. So is there a way to remove ALL default gateways in one command? I have a large number of machines, which is increasing, and it won't be practical to plant a script on every machine. I need a command as simple as the following:

/sbin/route del all default;set my default gw;mycommand;/sbin/route del all default;

All I have found so far is a command to remove a single default gateway, but not all of them:

/sbin/route del default

which won't work for me. /sbin/route --help displays the following:

Usage: route [{add|del|delete}]
Edit the kernel's routing tables

Options:
  -n       Don't resolve names
  -e       Display other/more information
  -A inet  Select address family
All the answers are great, but I resolved this problem using a different approach: I use the command that adds only one default gateway (it fails if one already exists), and I remove the gateway again at the end of the command sequence, so the second run starts clean. This should work the second time inshallah.

ip route add default via my-gateway
ip route del default
How to remove all default gateways
1,312,416,838,000
I used to think deleting my bash history was enough to clear my bash history, but yesterday my cat was messing around the right side of my keyboard and when I got back into my computer I saw something I typed a month ago, then I started to press all the keys like crazy looking for what could've triggered it. Turns out UPARROW key shows my bash history even after deleting .bash_history. How can I delete my bash history for real?
In some cases (some bash versions), doing a:

$ history -c; history -w

Or simply

$ history -cw

will clear history in memory (up and down arrow will have no commands to list) and then write that to the $HISTFILE file (if the $HISTFILE gets truncated by the running bash instance). Sometimes bash chooses not to truncate the $HISTFILE file even with the histappend option unset and $HISTFILESIZE set to 0. In such cases, the nuke option always works:

history -c; >$HISTFILE

That clears the history list of commands recorded in memory and all commands previously recorded to file. That will ensure that the running shell has no recorded history either in memory or on disk; however, other running instances of bash (where history is active) may have a full copy of commands read from $HISTFILE when bash was started (or when a history -r is executed). If it is also required that nothing else (no new commands) of the present session should be written to the history file, then unset HISTFILE will prevent any such logging.
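The on-disk half of the nuke is easy to reproduce with a throwaway HISTFILE (the path below is arbitrary; history -c is a bash builtin, so that part is a no-op in other shells):

```shell
# Throwaway demo of truncating the history file.
export HISTFILE=$PWD/demo_history
printf 'ls\npwd\n' > "$HISTFILE"   # pretend these were recorded earlier

history -c 2>/dev/null || true     # wipe the in-memory list (bash builtin)
: > "$HISTFILE"                    # truncate the file itself

wc -c < "$HISTFILE"                # nothing left on disk
```

In a real interactive session you would run the same two steps (history -c and the truncation) in the shell whose history you want gone.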
why deleting bash history is not enough?
1,312,416,838,000
Input:

201103 1 /mnt/hdd/PUB/SOMETHING
201102 7 /mnt/hdd/PUB/SOMETH ING
201103 11 /mnt/hdd/PUB/SO METHING
201104 3 /mnt/hdd/PUB/SOMET HING
201106 1 /mnt/hdd/PUB/SOMETHI NG

Desired output:

201103 01 /mnt/hdd/PUB/SOMETHING
201102 07 /mnt/hdd/PUB/SOMETH ING
201103 11 /mnt/hdd/PUB/SO METHING
201104 03 /mnt/hdd/PUB/SOMET HING
201106 01 /mnt/hdd/PUB/SOMETHI NG

How can I add a 0 if there is only a single digit, e.g. 1, in the "day" part? I need this date format: YYYYMM DD.
$ sed 's/\<[0-9]\>/0&/' ./infile
201103 01 /mnt/hdd/PUB/SOMETHING
201102 07 /mnt/hdd/PUB/SOMETH ING
201103 11 /mnt/hdd/PUB/SO METHING
201104 03 /mnt/hdd/PUB/SOMET HING
201106 01 /mnt/hdd/PUB/SOMETHI NG
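For comparison, the same padding can be done with awk by reformatting the second field with sprintf; note that awk rejoins fields with single spaces, which happens to match the input here. The wrapper name `pad_day` is my own:

```shell
# Zero-pad the second whitespace-separated field to two digits.
pad_day() {
    awk '{ $2 = sprintf("%02d", $2); print }' "$@"
}
```

Unlike the sed word-boundary trick (`\<`/`\>` is a GNU extension), this awk form is portable across awk implementations.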
Zero-fill numbers to 2 digits with sed
1,312,416,838,000
I am trying to extract an SFX file under Linux Mint 15 (64-bit) but it's not working. I've done chmod +x on the file and tried to run it like a script, with no luck (it gives me an error that there's no such file or directory). What's interesting is that this worked for me when I was running Linux Mint 14 (64-bit). I found an article that mentions glibc support and how newer distributions have removed 32-bit glibc binaries, but I'm not quite sure if this is accurate in my case since I'm not running RHEL. EDIT: I forgot to mention that I tried the solution posted in that article but it did not fix my problem. I've also tried using 7z, 7za, unzip, and unzipsfx with no success. unzipsfx gives me the error "unzipsfx: cannot find myself! [unzipsfx]", which I find rather strange. A quick note: the SFX relies on six other archives in the RAR format. I'm not dealing with zip, 7z, or any other format like that. Am I doing something wrong? Something must have changed between distributions, since extracting worked fine for me before...
Use unrar to extract files from RAR SFX archives, like this:

unrar x filename.sfx
Extracting SFX files in Linux
1,312,416,838,000
I am using Manjaro OS (an Arch Linux based distro) on my HP Notebook 15 with a Pentium IV processor. I have tried changing timezones using "Orange Globaltime" built into the distro, but despite being connected to the internet, the time is not updated. Furthermore, I have used

sudo date +%T -s "14:26:00"

to set the time of my laptop, but after I log in again, the time is wrong again. How can I fix my laptop's time?
It seems as though Network Time Protocol (NTP) is either not installed or not working on your laptop. I suggest using the following commands to install and enable it:

Step 1: Install NTP

sudo pacman -S ntp

Step 2: Turn on NTP

sudo timedatectl set-ntp true

Source: https://wiki.manjaro.org/index.php?title=System_Time_Setting
Manjaro OS shows wrong time
1,312,416,838,000
How can I rename all the files in a specific directory where the file names contain blank spaces and special characters ($ and @)? I tried the rename command as follows to replace all the spaces and special characters with a _:

$ ls -lrt
total 464
-rwxr-xr-x. 1 pmautoamtion pmautoamtion 471106 Jul 17 13:14 Bharti Blocked TRX Report [email protected]
$ rename -n 's/ |\$|@/_/g' *
$ ls -lrt
total 464
-rwxr-xr-x. 1 pmautoamtion pmautoamtion 471106 Jul 17 13:14 Bharti Blocked TRX Report [email protected]
$

The command runs, but won't make any changes to the file names and won't return any error either. How can I fix this, and are there other ways as well?
Since the rename command didn't work for me for unknown reasons and I did not get any other answers to my question, I made an effort to make the rename possible myself. This might not be the best approach to rename the files, but it worked for me, and that is why I would like to post it as an answer, so that anyone else who reads this might get some help to change file names the way I did.

In my case, I know that all the files will have a specific text in their names, which is the word "Block". These are the file names before renaming:

anks@anks:~/anks$ ls -lrt
total 4
-rw-r--r-- 1 anks anks 0 Jul 25 14:47 Bharti TRX Block [email protected]
-rw-r--r-- 1 anks anks 0 Jul 25 14:47 Bharti TRX Block [email protected]
-rw-r--r-- 1 anks anks 0 Jul 25 14:47 Bharti TRX Block [email protected]
-rw-r--r-- 1 anks anks 0 Jul 25 14:47 Bharti TRX Block [email protected]
-rw-r--r-- 1 anks anks 0 Jul 25 14:48 Bharti TRX Block [email protected]

Now I have written a small shell script to make this possible:

#!/bin/bash
# Note: do not use the name PATH for this variable -- overwriting PATH
# would hide ls, grep, sed and mv from the script.
DIR="/home/ebfijjk/anks"
cd "$DIR" || exit 1

# Put the old filenames in a file.
ls | grep Block > oldValues

# Put the new names, without " ", "@" or "$", in another file.
sed 's/[ @$]/_/g' oldValues > newValues

# Create a file with old names and new names separated by a #.
paste -d'#' oldValues newValues > oldAndNew

# Read the file with both old and new names and rename the files.
while IFS='#' read -r oldValue newValue
do
    mv "$oldValue" "$newValue"
done < oldAndNew

rm oldValues newValues oldAndNew

And that's it: when I run the script, it renames all the file names containing blank spaces, $ or @, using _ instead of these characters.
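The temporary files can be avoided entirely with a plain glob loop; a sketch of the same renaming (the function name `sanitize_names` is my own, and filenames containing newlines would still need special handling):

```shell
# Rename every file in the current directory, replacing spaces,
# "@" and "$" with underscores. Names needing no change are skipped.
sanitize_names() {
    for f in *; do
        new=$(printf '%s\n' "$f" | sed 's/[ @$]/_/g')
        [ "$f" = "$new" ] || mv -- "$f" "$new"
    done
}
```

Run it from inside the directory whose files you want to rename.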
How to rename all files with special characters and spaces in a directory?
1,312,416,838,000
Are the commands in /etc/rc.local run by su by default? Do I need to put sudo before each command, or will they be run by su regardless?
su is not a user; it's a program used to run subsequent commands/programs under the alternate identity of another user than the one executing the command. It is very similar to sudo in that regard. Unless another user is specified, both commands default to running the command under the alternate identity of the root user, the superuser/administrator.

The main difference between su and sudo is that su requires you to know the password of that alternate user, whereas sudo prompts for the password of the user running the sudo command and requires setup so that the user is allowed to run the requested commands/programs. (When root runs either su or sudo, no password is required.)

Like any init script, the /etc/rc.local script is executed by the root user, and you do not need to prepend either su or sudo to the commands/programs that need to run as root. You may still need to use su or sudo in your init scripts if those commands need to be executed not as root but as another user/service-account:

su - oracle /do/something/as/oracle/user
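For illustration, a hypothetical /etc/rc.local might look like the sketch below (the paths and the oracle account are made-up examples). Nothing in it needs sudo, because the whole script already runs as root; su is used only where a different identity is wanted:

```shell
#!/bin/sh -e
# /etc/rc.local -- executed as root at the end of each multiuser runlevel.

/usr/local/bin/setup-firewall            # runs as root, no sudo needed

# drop privileges only where another identity is required:
su - oracle -c '/opt/oracle/bin/start-db'

exit 0
```

The `-c` form of su runs a single command string through the target user's login shell.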
What user runs the commands defined in /etc/rc.local?
1,312,416,838,000
Given a 2.6.x or newer Linux kernel and an existing userland that is capable of running both ELF32 and ELF64 binaries (i.e. well past How do I know that my CPU supports 64bit operating systems under Linux?), how can I determine if a given process (by PID) is running in 32- or 64-bit mode?

The naive solution would be to run:

file -L /proc/pid/exe | grep -o 'ELF ..-bit [LM]SB'

but is that information exposed directly in /proc without relying on libmagic?
If you want to limit yourself to ELF detection, you can read the ELF header of /proc/$PID/exe yourself. It's quite trivial: if the 5th byte in the file is 1, it's a 32-bit binary; if it's 2, it's 64-bit. For added sanity checking:

If the first 5 bytes are 0x7f, "ELF", 1: it's a 32-bit ELF binary.
If the first 5 bytes are 0x7f, "ELF", 2: it's a 64-bit ELF binary.
Otherwise: it's inconclusive.

You could also use objdump, but that replaces your libmagic dependency with a libelf one.

Another way: you can parse the /proc/$PID/auxv file. According to proc(5):

This contains the contents of the ELF interpreter information passed to the process at exec time. The format is one unsigned long ID plus one unsigned long value for each entry. The last entry contains two zeros.

The meanings of the unsigned long keys are in /usr/include/linux/auxvec.h. You want AT_PLATFORM, which is 0x00000f. Don't quote me on that, but it appears the value should be interpreted as a char * to get the string description of the platform. You may find this StackOverflow question useful.

Yet another way: you can instruct the dynamic linker (man ld) to dump information about the executable. It prints the decoded AUXV structure to standard output. Warning: this is a hack, but it works.

LD_SHOW_AUXV=1 ldd /proc/$SOME_PID/exe | grep AT_PLATFORM | tail -1

This will show something like:

AT_PLATFORM: x86_64

I tried it on a 32-bit binary and got i686 instead.

How this works: LD_SHOW_AUXV=1 instructs the dynamic linker to dump the decoded AUXV structure before running the executable. Unless you really like to make your life interesting, you want to avoid actually running said executable. One way to load and dynamically link it without actually calling its main() function is to run ldd(1) on it. The downside: LD_SHOW_AUXV is enabled by the shell, so you'll get dumps of the AUXV structures for: the subshell, ldd, and your target binary.
So we grep for AT_PLATFORM, but only keep the last line.

Parsing auxv: if you parse the auxv structure yourself (not relying on the dynamic loader), there's a bit of a conundrum: the auxv structure follows the rule of the process it describes, so sizeof(unsigned long) will be 4 for 32-bit processes and 8 for 64-bit processes. We can make this work for us. In order for this to work on 32-bit systems, all key codes must be 0xffffffff or less. On a 64-bit system, the most significant 32 bits will be zero. Intel machines are little-endian, so these 32 bits follow the least significant ones in memory. As such, all you need to do is:

1. Read 16 bytes from the auxv file.
2. Is this the end of the file?
3. Then it's a 64-bit process.
4. Done.
5. Is buf[4], buf[5], buf[6] or buf[7] non-zero?
6. Then it's a 32-bit process.
7. Done.
8. Go to 1.

Parsing the maps file: this was suggested by Gilles, but didn't quite work. Here's a modified version that does. It relies on reading the /proc/$PID/maps file. If the file lists 64-bit addresses, the process is 64-bit; otherwise, it's 32-bit. The problem lies in that the kernel will simplify the output by stripping leading zeroes from hex addresses in groups of 4, so the length hack can't quite work. awk to the rescue:

if ! [ -e /proc/$pid/maps ]; then
  echo "No such process"
else
  case $(awk </proc/$pid/maps -- 'END { print substr($1, 0, 9); }') in
    *-) echo "32 bit process";;
    *[0-9A-Fa-f]) echo "64 bit process";;
    *) echo "Insufficient permissions.";;
  esac
fi

This works by checking the starting address of the last memory map of the process. They're listed like 12345678-deadbeef. So, if the process is a 32-bit one, that address will be eight hex digits long, and the ninth character will be a hyphen. If it's a 64-bit one, the highest address will be longer than that, and the ninth character will be a hex digit.

Be aware: all but the first and last methods need Linux kernel 2.6.0 or newer, since the auxv file wasn't there before.
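The ELF-header check from the first method is easy to script; a sketch (the helper name `elf_bits` is mine) using od to read the EI_CLASS byte at offset 4:

```shell
# Print 32, 64 or unknown for an ELF file, based on the EI_CLASS
# byte at offset 4 (the 5th byte): 1 = ELF32, 2 = ELF64.
elf_bits() {
    case $(od -An -j4 -N1 -tu1 "$1" | tr -d ' ') in
        1) echo 32 ;;
        2) echo 64 ;;
        *) echo unknown ;;
    esac
}
# e.g.: elf_bits /proc/$SOME_PID/exe
```

For full sanity checking you would also compare the first four bytes against the 0x7f "ELF" magic, as described above.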
Determine if a specific process is 32- or 64-Bit
1,312,416,838,000
I followed these instructions to build Shadow, which provides the groupadd command. I am now getting an error when trying this:

$ groupadd automake1.10
groupadd: 'automake1.10' is not a valid group name

I checked alphanumeric names, and they work okay.
See the source code, specifically libmisc/chkname.c. Shadow is pretty conservative: names must match the regexp [_a-z][-0-9_a-z]*\$? and may be at most GROUP_NAME_MAX_LENGTH characters long (a configure option, default 16; user names can usually go up to 32 characters, subject to compile-time determination). Debian relaxes the check a lot: as of squeeze, anything but whitespace and : is allowed. See bug #264879 and bug #377844.

POSIX requires allowing letters of either case, digits and ._- (like in file names). POSIX doesn't set any restriction if you don't care about portability. A number of recommended restrictions come from usage:

Colons, newlines and nulls are right out; you just can't use them in /etc/passwd or /etc/group.

A name consisting solely of digits is a bad idea: chown and chgrp are supposed to treat a digit sequence as a name if it's in the user/group database, but other applications may treat any number as a numerical id.

An initial - or a . in a user name is strongly not recommended, because many applications expect to be able to pass $user.$group to an external utility (e.g. chown $user.$group /path/to/file)¹. A . in a group name should cause less trouble, but I'd still recommend against it.

/ is likely to cause trouble too, because some programs expect to be able to use user names in file names.

Any character that the shell would expand is probably risky.

Non-ASCII characters should be ok if you don't care about sharing with systems that may use different encodings.

¹ All modern implementations expect chown $user:$group, but support chown $user.$group for backward compatibility, and there are too many applications out there that pass a dot to remove that compatibility support.
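Shadow's default rule can be checked from the shell before calling groupadd; a sketch (the helper name is hypothetical) using the regexp quoted above, minus the length limit:

```shell
# Succeed iff NAME matches shadow's default pattern
# [_a-z][-0-9_a-z]*\$?  (the length limit is not enforced here).
valid_group_name() {
    printf '%s\n' "$1" | grep -Eq '^[_a-z][-0-9_a-z]*\$?$'
}
```

With this, `valid_group_name automake1.10` fails because of the dot, which is exactly the rejection the question ran into.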
What are the allowed group names for groupadd?
1,312,416,838,000
The kernel contains a filesystem, nsfs. snapd creates an nsfs mount under /run/snapd/ns/<snapname>.mnt for each installed snap. ls shows it as a 0-byte file. The kernel source code does not seem to contain any documentation or comments about it. The main implementation seems to be here and the header file here. From that, it seems to be namespace related. A search of the repo does not even find Kconfig entries to enable or disable it... What is the purpose of this filesystem and what is it used for?
As described in the kernel commit log linked to by jiliagre above, the nsfs filesystem is a virtual filesystem making Linux-kernel namespaces available. It is separate from the /proc "proc" filesystem, where some process directory entries reference inodes in the nsfs filesystem in order to show which namespaces a certain process (or thread) is currently using.

nsfs doesn't get listed in /proc/filesystems (while proc does), so it cannot be explicitly mounted. mount -t nsfs ./namespaces fails with "unknown filesystem type". This is because nsfs is tightly interwoven with the proc filesystem. The filesystem type nsfs only becomes visible via /proc/$PID/mountinfo when bind-mounting an existing(!) namespace filesystem link to another target. As Stephen Kitt rightly suggests above, this is to keep namespaces existing even if no process is using them anymore.

For example, create a new user namespace with a new network namespace, then bind-mount it, then exit: the namespace still exists, but lsns won't find it, since it's not listed in /proc/$PID/ns anymore, but exists as a (bind) mount point.

# bind mount only needs an inode, not necessarily a directory ;)
touch mynetns

# create new network namespace, show its id and then bind-mount it, so it
# is kept existing after the unshare'd bash has terminated.
# output: net:[##########]
NS=$(sudo unshare -n bash -c "readlink /proc/self/ns/net && mount --bind /proc/self/ns/net mynetns") && echo $NS

# notice how lsns cannot see this namespace anymore: no match!
lsns -t net | grep ${NS:5:-1} || echo "lsns: no match for net:[${NS:5:-1}]"

# however, findmnt does locate it on the nsfs...
findmnt -t nsfs | grep ${NS:5:-1} || echo "no match for net:[${NS:5:-1}]"
# output: /home/.../mynetns nsfs[net:[##########]] nsfs rw

# let the namespace go...
echo "unbinding + releasing network namespace"
sudo umount mynetns
findmnt -t nsfs | grep ${NS:5:-1} || echo "findmnt: no match for net:[${NS:5:-1}]"

# clean up
rm mynetns

Output should be similar to this one:

net:[4026532992]
lsns: no match for net:[4026532992]
/home/.../mynetns nsfs[net:[4026532992]] nsfs rw
unbinding + releasing network namespace
findmnt: no match for net:[4026532992]

Please note that it is not possible to create namespaces via the nsfs filesystem, only via the syscalls clone() (CLONE_NEW...) and unshare(). nsfs only reflects the current kernel status w.r.t. namespaces; it cannot create or destroy them. Namespaces automatically get destroyed whenever there isn't any reference to them left: no processes (so no /proc/$PID/ns/...) AND no bind mounts either, as we've explored in the above example.
What is the NSFS filesystem?
1,312,416,838,000
I set up some iptables rules to log and drop the packets that are INVALID (--state INVALID). Reading the logs, how can I understand why a packet was considered invalid? For example, the following:

Nov 29 22:59:13 htpc-router kernel: [6550193.790402] ::IPT::DROP:: IN=ppp0 OUT= MAC= SRC=31.13.72.7 DST=136.169.151.82 LEN=40 TOS=0x00 PREC=0x00 TTL=242 ID=5104 DF PROTO=TCP SPT=80 DPT=61597 WINDOW=0 RES=0x00 ACK RST URGP=0
Packets can be in various states when using stateful packet inspection:

New: The packet is not part of any known flow or socket, and the TCP flags have the SYN bit on.

Established: The packet matches a flow or socket tracked by conntrack and has any TCP flags. After the initial TCP handshake is completed, the SYN bit must be off for a packet to be in state established.

Related: The packet does not match any known flow or socket, but the packet is expected because there is an existing socket that predicates it (examples of this are data on port 20 when there is an existing FTP session on port 21, or UDP data for an existing SIP connection on TCP port 5060). This requires an associated ALG.

Invalid: If none of the previous states apply, the packet is in state INVALID. This could be caused by various types of stealth network probes, or it could mean that you're running out of conntrack entries (which you should also see stated in your logs). Or it may simply be entirely benign.

In your case, the packet that you cite shows the TCP flags ACK and RST, and the source port is 80. What that means is that the web server at 31.13.72.7 (which happens to be Facebook) sent a reset packet to you. It's impossible to say why without seeing the packets that came before it (if any), but most likely it is sending you a reset for the same reason your computer thinks it's invalid.
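For reference, a ruleset producing logs like the one quoted typically uses these states explicitly. A sketch assuming iptables with the conntrack match, for the INPUT chain only (the log prefix is copied from the question; run as root):

```shell
# Accept traffic belonging to known connections; log, then drop INVALID.
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m conntrack --ctstate INVALID -j LOG --log-prefix "::IPT::DROP:: "
iptables -A INPUT -m conntrack --ctstate INVALID -j DROP
```

The `-m conntrack --ctstate` match is the modern spelling of the older `-m state --state` used in the question.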
How to understand why the packet was considered INVALID by the `iptables`?
1,312,416,838,000
I wonder how to find the processes currently holding semaphores via /proc. I guess it's possible through the SysVIPC subdirectory, but I don't know how to use it. Ubuntu 12.10.
My only experience in dealing with semaphores and shared memory is through the use of the command ipcs. Take a look at the ipcs man page for more details. This command shows you which processes have semaphores:

$ ipcs -s

------ Semaphore Arrays --------
key        semid  owner  perms  nsems
0x4d114854 65536  saml   600    8

With the semid known, we can query for additional info about the PIDs that hold semaphores (note there are 8, the nsems column):

$ ipcs -s -i 65536

Semaphore Array semid=65536
uid=500  gid=501  cuid=500  cgid=501
mode=0600, access_perms=0600
nsems = 8
otime = Sun May 12 14:44:53 2013
ctime = Wed May  8 22:12:15 2013
semnum  value  ncount  zcount  pid
0       1      0       0       0
1       1      0       0       0
2       1      0       0       2265
3       1      0       0       2265
4       1      0       0       0
5       1      0       0       0
6       1      0       0       4390
7       1      0       0       4390

The pid column lists those processes. You can either look them up using ps or look through the /proc filesystem, /proc/<pid>. For example:

$ more /proc/2265/cmdline
mono

POSIX & SystemV

Building off of a comment left by @lgeorget, I dug into my PID 2265's /proc/2265/maps contents and did find the following /dev/shm references:

$ grep shm /proc/2265/maps
7fa38e7f6000-7fa38ebdf000 rw-s 00000000 00:11 18517 /dev/shm/mono-shared-500-shared_fileshare-grinchy-Linux-x86_64-40-12-0
7fa38f0ca000-7fa38f0cb000 rw-s 00000000 00:11 18137 /dev/shm/mono.2265
7fa3967be000-7fa3967d3000 rw-s 00000000 00:11 18516 /dev/shm/mono-shared-500-shared_data-grinchy-Linux-x86_64-328-12-0
How to get proccesses currently running semaphores by /proc?
1,312,416,838,000
When I install sendmail from the Debian repos, I get the following output:

Disabling HOST statistics file (/var/lib/sendmail/host_status).
Creating /etc/mail/sendmail.cf...
Creating /etc/mail/submit.cf...
Informational: confCR_FILE file empty: /etc/mail/relay-domains
Informational: confCT_FILE file empty: /etc/mail/trusted-users
Updating /etc/mail/access...
Updating /etc/mail/aliases...
WARNING: local host name (ixtmixilix) is not qualified; see cf/README: WHO AM I?

Can someone please tell me what this means, and what I need to do to qualify my hostname?
It's referring to this page from the readme, which tells you how to specify your hostname. It's warning you that your hostname won't work outside your local network; sendmail attaches your hostname as the sender of the message, but it's going to be useless on the other end because people outside your local network can't find the machine ixtmixilix. You should specify a hostname that can be resolved from anywhere, like ixtmixilix.example.com
What is sendmail referring to here?
1,312,416,838,000
I am new to system administration and I have a permission related query. I have a group called administration. Inside the administration group, I have the users user1, user2, user3, superuser. All the users are in the administration group. Now, I need to give permissions to the user superuser to be able to view the /home directory of the other users. However, I do not want user1, user2, user3 to see the home of any other user other than himself. (That is, user1 should be able to see only user1's home and so on). I have created the users and groups and assigned all the users to the group. How should I specify the permissions for the superuser now? In other words, I'm thinking of having two groups (say NormalUsers and Superuser). The NormalUsers group will have the users user1, user2 and user3. The Superuser group will only have the user Superuser. Now, I need the Superuser to have full access on the files of users in the group NormalUsers. Is this possible in Linux?
If the users are cooperative, you can use access control lists (ACL). Set an ACL on the home directory of user1 (and friends) that grants read access to superuser. Set the default ACL as well, for newly created files, and also the ACL on existing files:

setfacl -R -m user:superuser:rx ~user1
setfacl -d -R -m user:superuser:rx ~user1

user1 can change the ACL on his files if he wishes. Even if user1 is cooperating, new files may accidentally be unreadable, e.g. when untarring an archive with restrictive permissions, or because some applications deliberately create files that are only readable by the user (e.g. mail clients tend to do this).

If you want to always give superuser read access to user1's files, you can create another view of the users' home directories with different permissions, with bindfs:

mkdir -p ~superuser/spyglass/user1
chown superuser ~superuser/spyglass
chmod 700 ~superuser/spyglass
bindfs -p a+rD-w ~user1 ~superuser/spyglass/user1

Files accessed through ~superuser/spyglass/user1 are world-readable. Other than the permissions, ~superuser/spyglass/user1 is a view of user1's home directory. Since superuser is the only user who can access ~superuser/spyglass, only superuser can benefit from this. This method is automatic and user1 cannot opt out.
Allow a user to read some other users' home directories
1,312,416,838,000
I am trying to remove some extremely large directories, with no success so far. Here are some observations:

# cwd contains the two large directories
$ ls -lhF
drwxrwxr-x 2 hongxu hongxu 471M Oct 16 18:52 J/
drwxr-xr-x 2 hongxu hongxu 5.8M Oct 16 17:21 u/
# Note that this is the `ls` output of the directories themselves, so they should be *huge*
# J/ seems much larger than u/ (containing more files), so take u/ as an example

$ rm -rf u/
# hangs for a very long time, and finally reports
rm: traversal failed: u: Bad message

$ cd u/
# can cd into u/ without problems

$ ls -lhF
# hangs for a long time; cancelling succeeds when I press Ctrl-C

$ rm *
# hangs for a long time; cancelling fails when I press Ctrl-C
# however there are no processes associated with `rm` as reported by `ps aux`

These two directories mostly contain lots of small files (each not exceeding 10k, I suppose). I have to remove these two directories to free more disk space. What should I do?

UPDATE1: Please see the output of rm -rf u/, which reports rm: traversal failed: u: Bad message after quite a long time (> 2 hours). Therefore, the problem seems not to be about efficiency.

UPDATE2: When applying fsck, it reports as follows (seems fine):

$ sudo fsck -A -y /dev/sda2
fsck from util-linux 2.31.1
fsck.fat 4.1 (2017-01-24)
/dev/sda1: 13 files, 1884/130812 clusters

$ df /dev/sda2
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sda2      244568380 189896000  43628648  82% /

UPDATE3: In case it may be relevant (but probably not), these two directories (J/ and u/) contain terminfo generated by the tic command; different from regular compiled terminfo files (e.g., those inside /lib/terminfo), these were generated with some fuzzing techniques so may not be "legal" terminfo files. irrelevant!
UPDATE4: Some more observations: $ find u/ -type f | while read f; do echo $f; rm -f $f; done # hang for a long time, IUsed (`df -i /dev/sda2`) not decreased $ mkdir emptyfolder && rsync -r --delete emptyfolder/ u/ # hang for a long time, IUsed (`df -i /dev/sda2`) not decreased $ strace rm -rf u/ execve("/bin/rm", ["rm", "-rf", "u"], 0x7fffffffc550 /* 121 vars */) = 0 brk(NULL) = 0x555555764000 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=125128, ...}) = 0 mmap(NULL, 125128, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7ffff7fd8000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260\34\2\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=2030544, ...}) = 0 mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ffff7fd6000 mmap(NULL, 4131552, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ffff79e4000 mprotect(0x7ffff7bcb000, 2097152, PROT_NONE) = 0 mmap(0x7ffff7dcb000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1e7000) = 0x7ffff7dcb000 mmap(0x7ffff7dd1000, 15072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7ffff7dd1000 close(3) = 0 arch_prctl(ARCH_SET_FS, 0x7ffff7fd7540) = 0 mprotect(0x7ffff7dcb000, 16384, PROT_READ) = 0 mprotect(0x555555762000, 4096, PROT_READ) = 0 mprotect(0x7ffff7ffc000, 4096, PROT_READ) = 0 munmap(0x7ffff7fd8000, 125128) = 0 brk(NULL) = 0x555555764000 brk(0x555555785000) = 0x555555785000 openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=1683056, ...}) = 0 mmap(NULL, 1683056, PROT_READ, MAP_PRIVATE, 3, 0) = 
0x7ffff7e3b000 close(3) = 0 ioctl(0, TCGETS, {B38400 opost isig icanon echo ...}) = 0 lstat("/", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0 newfstatat(AT_FDCWD, "u", {st_mode=S_IFDIR|0755, st_size=6045696, ...}, AT_SYMLINK_NOFOLLOW) = 0 openat(AT_FDCWD, "u", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_NOFOLLOW|O_DIRECTORY) = 3 fstat(3, {st_mode=S_IFDIR|0755, st_size=6045696, ...}) = 0 fcntl(3, F_GETFL) = 0x38800 (flags O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_NOFOLLOW|O_DIRECTORY) fcntl(3, F_SETFD, FD_CLOEXEC) = 0 getdents(3, /* 2 entries */, 32768) = 48 getdents(3, /* 1 entries */, 32768) = 24 ... (repeated lines) getdents(3, /* 1 entries */, 32768) = 24 getdents(3strace: Process 5307 detached <detached ...> # (manually killed) $ ls -f1 u/ ./ ../ ../ ../ ../ ... (repeated lines) ../ $ sudo journalctl -ex Oct 17 16:00:16 CSLRF03AU kernel: JBD2: Spotted dirty metadata buffer (dev = sda2, blocknr = 0). There's a risk of filesystem corruption in case of system crash. Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error: 6971 callbacks suppressed Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm find: Directory index failed checksum Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm zsh: Directory index failed checksum Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm rm: Directory index failed checksum Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm find: Directory index failed checksum Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm rsync: Directory index failed checksum Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm zsh: Directory index failed 
checksum Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm zsh: Directory index failed checksum Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm rm: Directory index failed checksum Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm find: Directory index failed checksum Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm find: Directory index failed checksum # #9789534 is the inode of `u/` as reported by `ls -i` So should be a filesystem corruption. But rebooting does not work :(
Okay, I finally solved the issue. It was due to filesystem errors that caused ls to display wrongly and other utilities to malfunction. I'm sorry that the question title is misleading (despite there being many files inside u/, the directory is not extremely large). I solved the problem by using a live USB, since the corrupted filesystem is /. The fix was simply:

sudo fsck -cfk /dev/sda2

where /dev/sda2 is the corrupted disk.
"traversal failed: u: Bad message" when deleting an extremely large directory in Linux
1,312,416,838,000
A vanilla ss -l lists (on my current machine) lots of open sockets, with various Netid types, many of which are only listening on localhost. How do I get a list of all and only those sockets through which a remote machine can conceivably exchange data with the machine? This would include TCP, UDP, any other transport-layer protocols, RAW sockets, and any others I may not be aware of. (Is ss complete in this sense?) I believe this would exclude UNIX sockets (they're over the local filesystem only, right? or, if UNIX sockets can act remotely, they should be included). Localhost-restricted listeners can be ignored, but I don't know if there are any caveats in terms of how localhost can be represented/mapped. The essential criterion is "any socket I could be remotely hacked through, if the listening process allowed it". (I recall ss commands from a few years ago showed a lot fewer results than what I get now. This makes me wonder if some distributions configure ss to hide stuff by default. I'm looking for an ss or similar utility command which is as portable as possible, insofar as it won't hide anything just because it was run in a different environment. Also, from a security-theoretic point of view, we can assume for the threat model that the machine is fully under our control and is running ordinary, non-malicious software.) So how do I list all and only the relevant sockets?
In most environments, you would only expect to find tcp, udp, raw and packet sockets. Happily, ss knows about all of these. Assuming ss knows all the protocols you need it to, I might use the following command. This will exclude the "unix" sockets. It also excludes "netlink" sockets, which are only used to communicate with the local kernel.

sudo ss -l -p | grep -vE '^(u_|nl )'

Often you do not have a lot of listening sockets, so you can look through them all and manually ignore any that listen on loopback IP addresses. Alternatively, you can ask ss to do all the filtering:

sudo ss -l -p -A 'all,!unix,!netlink' 'not src 127.0.0.1 not src [::1]'

In both cases, the output can also include "client" udp sockets, so it might also show DNS clients, HTTP/3 clients, ... If you do not need to see information about the program which opened each socket, then you can remove the -p option, and you do not need to run ss (or netstat) with root privileges (sudo).

How comprehensive is the above command?

Despite being advertised as a replacement for netstat, ss lacks support for showing "udplite" sockets. Also, the answer depends on your version of ss (and I guess the kernel as well). When this answer was originally written, before 2017, ss did not support "sctp" (netstat has supported it since February 2014). sctp is expected specifically inside phone companies; outside of phone companies, VOIP typically uses udp.

Unfortunately, if you look for a comprehensive list in man netstat, it gets quite confusing. Options for sctp and udplite are shown in the first line, along with tcp, udp and raw. Further down there's what looks like a comprehensive list of protocol families: [-4|--inet] [-6|--inet6] [--unix|-x] [--inet|--ip|--tcpip] [--ax25] [--x25] [--rose] [--ash] [--bluetooth] [--ipx] [--netrom] [--ddp|--appletalk] [--econet|--ec]. Although netstat supports udplite and sctp, it does not support "dccp".
Also, netstat doesn't support packet sockets (like raw sockets but including link-level headers), as selected by ss -l -0. In conclusion, I hate everything, and I could probably stand to be less pedantic. Also, ss does not support bluetooth sockets. Bluetooth sockets are not a traditional concern; this could be relevant if you were doing a full audit. Bluetooth security is quite a specific question though; I am not answering it here. Omitting localhost in netstat? netstat does not have a specific way to omit sockets bound to localhost. You could use | grep -v on the end. Take care if you use the -p option to netstat / ss: you might accidentally exclude some of your processes, if there is a match in the process name. I would include the colon in your pattern, like grep -v 'localhost:'. Except the default in ss is to show numeric addresses, so in that case you would use | grep -vE '(127\.0\.0\.1|\[::1\]):'. I suppose you could try to check for processes which would be accidentally excluded, e.g. ps ax | grep -E '(127\.0\.0\.1|\[::1\]):'. Is there a simpler command? It's unfortunate about the packet sockets. Otherwise, I might suggest a plain netstat -l command. netstat helpfully anticipates your request and splits the output into "Internet connections", "UNIX domain sockets", and "Bluetooth connections"; you would just look at the first section. There is no section for netlink sockets. Suppose you're only concerned with tcp, udp, raw, and packet sockets. For the first three types of socket you could use netstat -l -46. Packet sockets are in common use, so you would also need to train yourself to run ss -l -0 (or ss -l --packet). Unfortunately this leaves you with a big pitfall: it is now tempting to try and combine the two commands... Traps to avoid with ss: ss -l -046 looks appealing as a single-command answer, but it does not do what it appears to. ss -46 only shows IPv6 sockets, and ss -64 only shows IPv4 sockets: the last option wins. I suggest always sanity-checking your results.
Learn what to expect; go through each protocol and see if there's anything missing that should be there. If you have no IPv4 addresses, or no IPv6 addresses, that's very suspicious. You can expect most servers to have an SSH service listening on both. Most non-servers should also show packet or raw sockets, due to using DHCP. If you don't want to interpret the output of two different commands, one alternative might be to replace the netstat command with ss -l -A inet. This is slightly unfortunate because when you run netstat, the exact same options would exclude ipv6 sockets. So for a single command, you could use ss -l -A inet,packet .... IMO you might as well use ss -l | grep ... as I suggested in the first section. It is easy to remember this command, and it avoids any and all confusing behaviour in the selection options of ss. Although if you write scripts that use this output to automate something, then you should probably prefer to filter on a positive list of socket types instead; otherwise the script could break when ss starts supporting a new type of local-only socket. Did I mention that ss -a -A raw -f link shows a combination of sockets from ss -a -A raw and ss -a -f link? Whereas ss -a -A inet -f inet6 shows fewer sockets than ss -a -A inet? I think -f inet6 and -f inet are special cases, which are not documented properly. (-0, -4 and -6 are aliases for -f link, -f inet, and -f inet6.) Did I mention that ss -A packet will show headings, but will never show any sockets? strace shows that it literally does not read anything. It seems to be because it treats packet sockets as always being "listening"; ss does not bother to provide a warning about this. And this is different from raw sockets, which ss treats as being simultaneously "listening" and "non-listening". (man 7 raw says that raw sockets which are not bound to a specific IP protocol are transmit-only.
I have not checked if these are treated as listening sockets only)
How do I list all sockets which are open to remote machines?
1,312,416,838,000
So, I understand the difference between the three ideas in the title. atime -- access time = last time file opened mtime -- modified time = last time file contents was modified ctime -- changed time = last time file inode was modified So, presumably when I type something like find ~/Documents -name '*.py' -type f -mtime 14 it will match all files ending with .py which were modified in the last 2 weeks. Nothing shows up... So I try find ~/Documents -name '*.py' -type f -atime 1400 which should match anything opened within the last 1400 days (ending with .py and having type file) and still nothing. Am I misunderstanding the documentation? Does it mean exactly 1400 days, for example? A relevant post: find's mtime and ctime options
Yes, -mtime 14 means exactly 14. See the top of that section in the GNU find manual (labelled "TESTS") where it says "Numeric arguments can be specified as [...]": Numeric arguments can be specified as +n for greater than n, -n for less than n, n for exactly n. Note that "less than" means "strictly less than", so -mtime -14 means "last modified at the current time of day, 13 days ago or less" and -mtime +14 means "last modified at the current time of day, 15 days ago or more".
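To see the three forms side by side, here is a quick sketch you can run in a scratch directory (the file names and ages are made up for the demo):

```shell
d=$(mktemp -d)
touch -d "14 days ago" "$d/exactly14.py"   # age rounds down to exactly 14 days
touch -d "3 days ago"  "$d/recent.py"
touch -d "30 days ago" "$d/old.py"

find "$d" -name '*.py' -mtime 14    # exactly 14: only exactly14.py
find "$d" -name '*.py' -mtime -14   # strictly less than 14: only recent.py
find "$d" -name '*.py' -mtime +14   # strictly more than 14: only old.py
```

This also shows why the question's -mtime 14 found nothing: none of the files happened to be exactly 14 (rounded-down) days old.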
Understanding find with atime, ctime, and mtime
1,312,416,838,000
I got into a little debate with someone yesterday regarding the logic and/or veracity of my answer here, viz., that logging and maintaining fs meta-data on a decent (GB+) sized SD card could never be significant enough to wear the card out in a reasonable amount of time (years and years). The gist of the counter-argument seemed to be that I must be wrong since there are so many stories online of people wearing out SD cards. Since I do have devices with SD cards in them containing rw root filesystems that are left on 24/7, I had tested the premise before to my own satisfaction. I've tweaked this test a bit, repeated it (using the same card, in fact) and am presenting it here. The two central questions I have are: Is the method I used to attempt to wreck the card viable, keeping in mind it's intended to reproduce the effects of continuously re-writing small amounts of data? Is the method I used to verify the card was still okay viable? I'm putting the question here rather than S.O. or SuperUser because an objection to the first part would probably have to assert that my test didn't really write to the card the way I'm sure it does, and asserting that would require some special knowledge of linux. [It could also be that SD cards use some kind of smart buffering or cache, such that repeated writes to the same place would be buffered/cached somewhere less prone to wear. I haven't found any indication of this anywhere, but I am asking about that on S.U.] The idea behind the test is to write to the same small block on the card millions of times. This is well beyond any claim of how many write cycles such devices can sustain, but presuming wear leveling is effective, if the card is of a decent size, millions of such writes still shouldn't matter much, as "the same block" would not literally be the same physical block. To do this, I needed to make sure every write was truly flushed to the hardware, and to the same apparent place.
For flushing to hardware, I relied on the POSIX library call fdatasync():

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <errno.h>
    #include <unistd.h>
    #include <stdlib.h>

    // Compile with -std=gnu99

    #define BLOCK (1 << 16)

    int main (void) {
        int in = open("/dev/urandom", O_RDONLY);
        if (in < 0) {
            fprintf(stderr, "open in %s", strerror(errno));
            exit(0);
        }
        int out = open("/dev/sdb1", O_WRONLY);
        if (out < 0) {
            fprintf(stderr, "open out %s", strerror(errno));
            exit(0);
        }
        fprintf(stderr, "BEGIN\n");
        char buffer[BLOCK];
        unsigned int count = 0;
        int thousands = 0;
        for (unsigned int i = 1; i != 0; i++) {
            ssize_t r = read(in, buffer, BLOCK);
            ssize_t w = write(out, buffer, BLOCK);
            if (r != w) {
                fprintf(stderr, "r %zd w %zd\n", r, w);
                if (errno) {
                    fprintf(stderr, "%s\n", strerror(errno));
                    break;
                }
            }
            if (fdatasync(out) != 0) {
                fprintf(stderr, "Sync failed: %s\n", strerror(errno));
                break;
            }
            count++;
            if (!(count % 1000)) {
                thousands++;
                fprintf(stderr, "%d000...\n", thousands);
            }
            lseek(out, 0, SEEK_SET);
        }
        fprintf(stderr, "TOTAL %u\n", count);
        close(in);
        close(out);
        return 0;
    }

I ran this for ~8 hours, until I had accumulated 2 million+ writes to the beginning of the /dev/sdb1 partition.1 I could just as easily have used /dev/sdb (the raw device and not the partition) but I cannot see what difference this would make. I then checked the card by trying to create and mount a filesystem on /dev/sdb1. This worked, indicating the specific block I had been writing to all night was still usable. However, it does not mean that some regions of the card had not been worn out and displaced by wear levelling, but left accessible. To test that, I used badblocks -v -w on the partition. This is a destructive read-write test, but wear levelling or not, it should be a strong indication of the viability of the card, since it must still provide space for each rolling write. In other words, it is the literal equivalent of filling the card completely, then checking that all of that was okay.
Several times, since I let badblocks work through a few patterns. [Contra Jason C's comments below, there is nothing wrong or false about using badblocks this way. While it would not be useful for actually identifying bad blocks due to the nature of SD cards, it is fine for doing destructive read-write tests of an arbitrary size using the -b and -c switches, which is where the revised test went (see my own answer). No amount of magic or caching by the card's controller can fool a test whereby several megabytes of data can be written to hardware and read back again correctly. Jason's other comments seem based on a misreading -- IMO an intentional one, which is why I have not bothered to argue. With that heads-up, I leave it to the reader to decide what makes sense and what does not.] 1 The card was an old 4 GB Sandisk card (it has no "class" number on it) which I've barely used. Once again, keep in mind that this is not 2 million writes to literally the same physical place; due to wear leveling the "first block" will have been moved constantly by the controller during the test to, as the term states, level out the wear.
Peterph's answer did make me consider the issue of possible caching further. After digging around, I still can't say for sure whether any, some, or all SD cards do this, but I do think it is possible. However, I don't believe that the caching would involve data larger than the erase block. To be really sure, I repeated the test using a 16 MB chunk instead of 64 kB. This is 1/250th the total volume of the 4 GB card. It took ~8 hours to do this 10,000 times. If wear leveling does its best to spread the load around, this means every physical block would have been used 40 times. That's not much, but the original point of the test was to demonstrate the efficacy of wear leveling by showing that I could not easily damage the card through repeated writes of modest amounts of data to the same (apparent) location. IMO the previous 64 kB test was probably real -- but the 16 MB one must be. The system has flushed the data to the hardware and the hardware has reported the write without an error. If this were a deception, the card would not be good for anything, and it can't be caching 16 MB anywhere but in primary storage, which is what the test is intended to stress. Hopefully, 10,000 writes of 16 MB each is enough to demonstrate that even on a bottom end name brand card (value: $5 CDN), running a rw root filesystem 24/7 that writes modest amounts of data daily will not wear the card out in a reasonable period of time. 10,000 days is 27 years...and the card is still fine... If I were getting paid to develop systems that did heavier work than that, I would want to do at least a few tests to determine how long a card can last. My hunch is that with one like this, which has a low write speed, it could take weeks, months, or years of continuous writing at the max speed (the fact there aren't oodles of comparative tests of this sort online speaks to the fact that it would be a very prolonged affair). 
With regard to confirming the card is still okay, I no longer think using badblocks in its default configuration is appropriate. Instead, I did it this way: badblocks -v -w -b 524288 -c 8 Which means to test using a 512 kB block repeated 8 times (= 4 MB). Since this is a destructive rw test, it would probably be as good as my homespun one with regard to stressing the device if used in a continuous loop. I've also created a filesystem on it, copied in a 2 GB file, diff'd the file against the original and then -- since the file was an .iso -- mounted it as an image and browsed the filesystem inside that. The card is still fine. Which is probably to be expected, after all... ;) ;)
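For anyone who wants the shape of the original test without the C program, the same write-then-flush pattern can be sketched with GNU dd's conv=fsync. The sketch below writes to a scratch file for safety; pointing of= at the raw device (and running millions of iterations rather than five) reproduces the idea:

```shell
target=$(mktemp)                 # stand-in for /dev/sdb1 in the real test
for i in $(seq 1 5); do          # the real test ran 2 million+ iterations
    dd if=/dev/urandom of="$target" bs=65536 count=1 \
       conv=fsync,notrunc status=none   # fsync forces each write out of the page cache
done
stat -c 'block size on disk: %s bytes' "$target"
```

conv=notrunc keeps dd rewriting the same 64 kB region at offset 0, matching the lseek(out, 0, SEEK_SET) in the C version.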
Stress testing SD cards using linux
1,312,416,838,000
I've been trying to understand the booting process, but there's just one thing that is going over my head... As soon as the Linux kernel has been booted and the root file system (/) mounted, programs can be run and further kernel modules can be integrated to provide additional functions. To mount the root file system, certain conditions must be met. The kernel needs the corresponding drivers to access the device on which the root file system is located (especially SCSI drivers). The kernel must also contain the code needed to read the file system (ext2, reiserfs, romfs, etc.). It is also conceivable that the root file system is already encrypted. In this case, a password is needed to mount the file system. The initial ramdisk (also called initdisk or initrd) solves precisely the problems described above. The Linux kernel provides an option of having a small file system loaded to a RAM disk and running programs there before the actual root file system is mounted. The loading of initrd is handled by the boot loader (GRUB, LILO, etc.). Boot loaders only need BIOS routines to load data from the boot medium. If the boot loader is able to load the kernel, it can also load the initial ramdisk. Special drivers are not required. If /boot is not a different partition, but is present in the / partition, then shouldn't the boot loader require the SCSI drivers to access the 'initrd' image and the kernel image? If you can access the images directly, then why exactly do we need the SCSI drivers?
Nighpher, I'll try to answer your question, but for a more comprehensive description of the boot process, try this article at IBM. OK, I assume that you are using GRUB or GRUB2 as your bootloader for the explanation. First off, when the BIOS accesses your disk to load the bootloader, it makes use of its built-in routines for disk access, which are exposed through the famous interrupt 13h. The bootloader (and the kernel during its setup phase) make use of those routines when they access the disk. Note that the BIOS runs in real mode (16-bit mode) of the processor, so it cannot address more than 2^20 bytes of RAM (2^20, not 2^16, because each address in real mode is composed of segment_address*16 + offset, where both the segment address and the offset are 16-bit; see "x86 memory segmentation" at Wikipedia). Thus, these routines can't access more than 1 MiB of RAM, which is a strict limitation and a major inconvenience. The BIOS loads the bootloader code right from the MBR – the first 512 bytes of your disk – and executes it. If you're using GRUB, that code is GRUB stage 1. That code loads GRUB stage 1.5, which is located either in the first 32 KiB of disk space, called the DOS compatibility region, or at a fixed address in the file system. It doesn't need to understand the file system structure to do this, because even if stage 1.5 is in the file system, it is "raw" code and can be directly loaded into RAM and executed: see "Details of GRUB on the PC" at pixelbeat.org. Loading stage 1.5 from disk to RAM makes use of the BIOS disk access routines. Stage 1.5 contains the filesystem utilities, so that it can read stage 2 from the filesystem (well, it still uses BIOS 13h to read from disk to RAM, but now it can decipher filesystem info about inodes, etc., and get raw code out of the disk).
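The segment_address*16 + offset arithmetic also makes the 1 MiB ceiling easy to check for yourself; a tiny sketch using shell arithmetic:

```shell
# Highest address reachable in real mode: segment 0xFFFF, offset 0xFFFF
printf 'max real-mode address: 0x%X\n' $(( 0xFFFF * 16 + 0xFFFF ))  # 0x10FFEF
printf '1 MiB boundary:        0x%X\n' $(( 1 << 20 ))               # 0x100000
```

The result overshoots 1 MiB by 65519 bytes (the "high memory area" quirk), but the point stands: 16-bit segment:offset pairs simply cannot reach the rest of RAM, which is why the kernel has to be shuffled around in the steps described below.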
Older BIOSes might not be able to access the whole HD due to limitations in their disk addressing mode – they might use the Cylinder-Head-Sector system, unable to address more than the first 8 GiB of disk space: http://en.wikipedia.org/wiki/Cylinder-head-sector. Stage 2 loads the kernel into RAM (again, using BIOS disk utilities). If it's a 2.6+ kernel, it also has the initramfs compiled within, so there is no need to load it separately. If it's an older kernel, the bootloader also loads a standalone initrd image into memory, so that the kernel can mount it and get drivers for mounting the real file system from disk. The problem is that the kernel (and ramdisk) weigh more than 1 MiB; thus, to load them into RAM you have to load the kernel into the first 1 MiB, then jump to protected mode (32-bit), move the loaded kernel to high memory (freeing the first 1 MiB for real mode), then return to real (16-bit) mode again, get the ramdisk from disk into the first 1 MiB (if it's a separate initrd and an older kernel), possibly switch to protected (32-bit) mode again, put it where it belongs, possibly get back to real mode (or not: https://stackoverflow.com/questions/4821911/does-grub-switch-to-protected-mode) and execute the kernel code. Warning: I'm not entirely sure about the thoroughness and accuracy of this part of the description. Now, when you finally run the kernel, you already have it and the ramdisk loaded into RAM by the bootloader, so the kernel can use the disk utilities from the ramdisk to mount your real root file system and pivot root to it. ramfs drivers are present in the kernel, so it can understand the contents of the initramfs, of course.
How does Linux load the 'initrd' image?
1,312,416,838,000
Imagine there's a company A that releases a new graphics adapter. Who manages the process that results in this new graphics adapter being supported by the Linux kernel in the future? How does that proceed? I'm curious how kernel support for any new hardware is handled; on Windows companies develop drivers on their own, but how does Linux get specific hardware support?
Driver support works the same way as with all of open source: someone decides to scratch their own itch. Sometimes the driver is supplied by the company providing the hardware, just as on Windows. Intel does this for their network chips, 3ware does this for their RAID controllers, etc. These companies have decided that it is in their best interest to provide the driver: their "itch" is to sell product to Linux users, and that means ensuring that there is a driver. In the best case, the company works hard to get their driver into the appropriate source base that ships with Linux distros. For most drivers, that means the Linux kernel. For graphics drivers, it means X.org. There's also CUPS for printer drivers, NUT for UPS drivers, SANE for scanner drivers, etc. The obvious benefit of doing this is that Linux distros made after the driver gets accepted will have support for the hardware out of the box. The biggest downside is that it's more work for the company to coordinate with the open source project to get their driver in, for the same basic reasons it's difficult for two separate groups to coordinate anything. Then there are those companies that choose to offer their driver source code directly, only. You typically have to download the driver source code from their web site, build it on your system, and install it by hand. Such companies are usually smaller or specialty manufacturers without enough employees that they can spare the effort to coordinate with the appropriate open source project to get their driver into that project's source base. A rare few companies provide binary-only drivers instead of source code. An example are the more advanced 3D drivers from companies like NVIDIA. Typically the reason for this is that the company doesn't want to give away information they feel proprietary about. 
Such drivers often don't work with as many Linux distros as those in the previous cases, because the company providing the hardware doesn't bother to rebuild their driver to track API and ABI changes. It's possible for the end user or the Linux distro provider to tweak a driver provided as source code to track such changes, so in the previous two cases, the driver can usually be made to work with more systems than a binary driver will. When the company doesn't provide Linux drivers, someone in the community simply decides to do it. There are some large classes of hardware where this is common, like with UPSes and printers. It takes a rare user who a) has the hardware; b) has the time; c) has the skill; and d) has the inclination to spend the time to develop the driver. For popular hardware, this usually isn't a problem because with millions of Linux users, these few people do exist. You get into trouble with uncommon hardware.
How is new hardware support added to the linux kernel?
1,312,416,838,000
I have found by coincidence that on my Debian Jessie there is no LD_LIBRARY_PATH variable (to be exact, printenv | grep LD shows nothing related to the linker, and echo "$LD_LIBRARY_PATH" also shows nothing). This is the case in an X terminal emulator (which might clear it due to setgid) as well as in a basic terminal (Ctrl+Alt+F1). I know that LD_LIBRARY_PATH may be considered bad, so Debian may block it somehow, but on the other hand there are a few files in /etc/ld.so.conf.d/ that contain some directories to be added to the library search path. None of my rc files (that I know of) mess with LD_LIBRARY_PATH either. Why don't I see an LD_LIBRARY_PATH variable?
Yes, it is normal that you don't have any explicit LD_LIBRARY_PATH. Read also ldconfig(8) and ld-linux(8) and about the rpath. Notice that ldconfig updates /etc/ld.so.cache, not the LD_LIBRARY_PATH. Sometimes you'll set the rpath of an executable explicitly with -Wl,-rpath,directory passed to gcc at link time. If you need a LD_LIBRARY_PATH (but you probably should not), set it yourself (e.g. in ~/.bashrc). If you need system wide settings, you could e.g. consider adding /usr/local/lib/ in /etc/ld.so.conf and run ldconfig after installation of every library there. AFAIK $LD_LIBRARY_PATH is used only by the dynamic linker ld-linux.so (and by dlopen(3) which uses it) after execve(2). See also ldd(1). Read Drepper's How To Write Shared Libraries for more.
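If you do set it, a per-command setting is usually safest; a small sketch (/opt/myapp/lib is a made-up path here, and true stands in for a real application binary):

```shell
echo "${LD_LIBRARY_PATH:-<unset>}"              # typically prints <unset>

# Set the variable for a single command only:
LD_LIBRARY_PATH=/opt/myapp/lib true

# The child process sees it, while your own shell stays clean:
LD_LIBRARY_PATH=/opt/myapp/lib sh -c 'echo "$LD_LIBRARY_PATH"'
```

This avoids polluting the environment of everything else you run, which is one of the reasons the variable is frowned upon as a global setting.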
Is it normal that LD_LIBRARY_PATH variable is missing from an environment?
1,312,416,838,000
According to the man page, and Wikipedia, nice ranges from -20 to 20. Yet when I run the following command, I find some processes have a non-numerical value such as (-). See the sixth column from the left, with title 'NI'. What does a niceness of (-) indicate?

    ps axl
    F   UID   PID  PPID  PRI  NI    VSZ  RSS WCHAN  STAT TTY   TIME COMMAND
    4     0     1     0   20   0  19356 1548 poll_s Ss   ?     0:00 /sbin/init
    1     0     2     0   20   0      0    0 kthrea S    ?     0:00 [kthreadd]
    1     0     3     2 -100   -      0    0 migrat S    ?     0:03 [migration/0]
    1     0     4     2   20   0      0    0 ksofti S    ?     0:51 [ksoftirqd/0]
    1     0     5     2 -100   -      0    0 cpu_st S    ?     0:00 [migration/0]
    5     0     6     2 -100   -      0    0 watchd S    ?     0:09 [watchdog/0]
    1     0     7     2 -100   -      0    0 migrat S    ?     0:08 [migration/1]
    1     0     8     2 -100   -      0    0 cpu_st S    ?     0:00 [migration/1]
    1     0     9     2   20   0      0    0 ksofti S    ?     1:03 [ksoftirqd/1]
    5     0    10     2 -100   -      0    0 watchd S    ?     0:09 [watchdog/1]
    1     0    11     2 -100   -      0    0 migrat S    ?     0:05 [migration/2]

I've checked 3 servers running Ubuntu 12.04, CentOS 6.5 and Mac OS X 10.9. Only the Ubuntu and CentOS machines have non-digit niceness values.
What does a niceness of (-) indicate? Notice those also have a PRI score of -100; this indicates the process is scheduled as a realtime process. Realtime processes do not use nice scores and always have a higher priority than normal ones, but still differ with respect to one another. You can view details per process with the chrt command (e.g. chrt -p 3). One of your -100 ones will likely report a "current scheduling priority" of 99 -- unlike nice, here higher values mean higher priority, which is probably where ps derived the -100 number from. Non-realtime processes will always show a "current scheduling priority" of 0 in chrt regardless of nice value, and under linux a "current scheduling policy" of SCHED_OTHER. Only the Ubuntu and CentOS machines have non-digit niceness values. Some versions of top seem to report realtime processes with rt under PRI and then 0 under NI.
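For the ordinary (non-realtime) processes, you can also watch nice values directly from the shell: nice with no arguments prints the current niceness, and -n adjustments are relative (a quick sketch):

```shell
nice                      # the shell's own niceness, normally 0
nice -n 5 nice            # runs `nice` at niceness +5 relative to the shell
nice -n 5 nice -n 3 nice  # nested adjustments accumulate (+8 relative)
```

Realtime processes like the (-) ones above are outside this mechanism entirely, which is why chrt, not nice, is the tool for inspecting them.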
What does a niceness value of (-) mean?
1,312,416,838,000
CentOS 5.x I apologize if this is a repeat question. I've seen a lot of similar questions (regarding deleting files) but not exactly the same scenario. I have a directory containing hundreds of thousands of files (possibly over a million) and as a short-term fix to a different issue, I need to move these files to another location. For the purpose of discussion, let's say these files originally reside in /home/foo/bulk/ and I want to move them to /home/foo2/bulk2/ If I try mv /home/foo/bulk/* /home/foo2/bulk2/ I get a "too many arguments" error. Mr. Google tells me that an alternative for deleting files in bulk would be to run find. Something like: find . -name "*.pdf" -maxdepth 1 -print0 | xargs -0 rm That would be fine if I was deleting stuff, but in this case I want to move the files... If I type something like find . -name "*" -maxdepth 1 -print0 | xargs -0 mv /home/foo2/bulk2/ bash complains about the file not being a directory. What's the best command to use here for moving the files in bulk from one directory to another?
Taking advantage of GNU mv's -t option to specify the target directory, instead of relying on the last argument: find . -name "*" -maxdepth 1 -exec mv -t /home/foo2/bulk2 {} + If you were on a system without the option, you could use an intermediate shell to get the arguments in the right order (find … -exec … + doesn't support putting extra arguments after the list of files). find . -name "*" -maxdepth 1 -exec sh -c 'mv "$@" "$0"' /home/foo2/bulk2 {} +
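A quick way to convince yourself the -t form works, using throwaway directories instead of the paths from the question:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
for i in 1 2 3 4 5; do touch "$src/file$i.pdf"; done

# GNU mv: -t names the destination up front, so find can append the file list
find "$src" -maxdepth 1 -type f -exec mv -t "$dst" {} +

ls "$dst" | wc -l   # 5
```

With -exec … {} +, find batches as many file names as fit into each mv invocation, so this scales to hundreds of thousands of files without hitting the argument-list limit.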
What is the most efficient way to move a large number of files that reside in a single directory?
1,312,416,838,000
I have a device file that appears in /dev when a specific board is plugged in. The read and write operations on it work just fine, but in order to open the device file the program needs to be executed with root privileges. Is there any way I can allow a non-root user to open this one specific device file without having to use sudo?
Yes, you may write a udev rule. In /etc/udev/rules.d make a file 30-mydevice.rules (the number has to be from 0 to 99 and only determines the order in which the rules are run; the name doesn't really matter, it just has to be descriptive; the .rules extension is required, though). In this example I'm assuming your device is USB based and you know its vendor and product ID (can be checked using lsusb -v), and you're using a mydevice group your user has to be in to use the device. This should be the file contents in that case: SUBSYSTEM=="usb", SYSFS{idVendor}=="0123", SYSFS{idProduct}=="4567", ACTION=="add", GROUP="mydevice", MODE="0664" MODE equal to 0664 allows the device to be written to by its owner (probably root) and the defined group.
How to grant non-root user access to device files
1,312,416,838,000
Whenever I run file on an ELF binary I get this output: [jonescb@localhost ~]$ file a.out a.out: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), for GNU/Linux 2.6.9, not stripped I'm just wondering what changed in Linux 2.6.9 that this binary couldn't run on 2.6.8? Wasn't ELF support added in Linux 2.0?
glibc has a configure option called --enable-kernel that lets you specify the minimum supported kernel version. When object files are linked with that glibc build, the linker adds a SHT_NOTE section to the resulting executable named .note.ABI-tag that includes that minimum kernel version. The exact format is defined in the LSB, and file knows to look for that section and how to interpret it. The reason your particular glibc was built to require 2.6.9 depends on who built it. It's the same on my system (Gentoo); a comment in the glibc ebuild says that it specifies 2.6.9 because it's the minimum required for the NPTL, so that's likely a common choice. Another one that seems to come up is 2.4.1, because it was the minimum required for LinuxThreads, the package used before NPTL.
Why does the file command say that ELF binaries are for Linux 2.6.9?
1,312,416,838,000
What does this command do? grep "\bi\b" linux.txt What is it searching for?
\b in a regular expression means "word boundary". With this grep command, you are searching for all occurrences of the word i standing on its own in the file linux.txt. i can be at the beginning of a line or at the end, or anywhere it is separated from neighbouring words by non-word characters such as spaces or punctuation.
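A quick demonstration with some made-up sample lines:

```shell
printf 'i am here\nfind it\nhi there\nan i in the middle\n' | grep '\bi\b'
```

Only "i am here" and "an i in the middle" are printed: in "find", "it" and "hi", the i touches another word character on at least one side, so the pattern's boundary requirement fails.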
What does \b mean in a grep pattern?
1,312,416,838,000
I got a USB gamepad and I would like to see and inspect the signals and commands that this peripheral is actually sending to my PC/kernel: how can I do that? I was assuming that something like cat /dev/bus/usb/006/003 was enough, but apparently this command returns immediately and prints some unreadable encoded chars. Is there a way to "debug" a USB device like that?
You can capture USB traffic with Wireshark. From its wiki: To dump USB traffic on Linux, you need the usbmon module, which has existed since Linux 2.6.11. Information on that module is available in /usr/src/linux/Documentation/usb/usbmon.txt in the Linux source tree. Depending on the distribution you're using, and the version of that distribution, that module might be built into the kernel, or might be a loadable module; if it's a loadable module, depending on the distribution you're using, and the version of that distribution, it might or might not be loaded for you. If it's a loadable module, and not loaded, you will have to load it with the command modprobe usbmon which must be run as root. libpcap releases prior to 1.0 do not include USB support, so you will need at least libpcap 1.0.0. For versions of the kernel prior to 2.6.21, the only USB traffic capture mechanism available is a text-based mechanism that limits the total amount of data captured for each raw USB block to about 30 bytes. There is no way to change this without patching the kernel. If debugfs is not already mounted on /sys/kernel/debug, ensure that it is mounted there by issuing the following command as root: mount -t debugfs none /sys/kernel/debug For kernel version 2.6.21 and later, there is a binary protocol for tracing USB packets which doesn't have that size limitation. For that kernel version, you will need libpcap 1.1.0 or newer, because the libpcap 1.0.x USB support uses, but does not correctly handle, the memory-mapped mechanism for USB traffic, which libpcap will use if available - it cannot be made unavailable, so libpcap will always use it. In libpcap 1.0.x, the devices for capturing on USB have the name usbn, where n is the number of the bus. In libpcap 1.1.0 and later, they have the name usbmonn. You will also need Wireshark 1.2.x or newer.
How to dump USB traffic?
1,312,416,838,000
What filesystem would be best for backups? I'm interested primarily in stability (especially that files cannot be corrupted during hard reboots, etc.), but how efficiently it handles large (>5GB) files is also important. Also, which mount parameters should I use? Kernel is Linux >= 2.6.34. EDIT: I do not want backup methods. I need the filesystem to store them.
You can use ext4, but I would recommend mounting with data=journal mode, which will turn off delalloc (delayed allocation), which caused some earlier problems. Disabling delalloc will make new data writes slower, but makes writes in the event of power failure less likely to suffer loss. I should also mention that you can disable delalloc without using data=journal, which itself has some other benefits (or at least it did in ext3), such as slightly improved reads, and I believe better recovery. Extents will still help with fragmentation. Extents make deletes of large files much faster than ext3: a delete of any size of data (a single file) should be near instantaneous on ext4 but can take a long time on ext3 (any extent-based FS has this advantage). ext4 also fscks faster than ext3. One last note: there were bugfixes in ext4 up to around 2.6.31, so I would basically make sure you aren't running a kernel pre-2.6.32, which is an LTS kernel.
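Concretely, the mount options being described are data=journal (full data journalling) and nodelalloc (delayed allocation off). A sketch of an /etc/fstab entry, with the device and mount point made up (nodelalloc is implied by data=journal on recent kernels, but spelling it out is harmless):

```
# /dev/sdb1 and /backup below are placeholders for your backup volume
/dev/sdb1  /backup  ext4  data=journal,nodelalloc  0  2
```

The same options can be tried out non-persistently with mount -o data=journal,nodelalloc before committing them to fstab.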
Rock-stable filesystem for large files (backups) for linux
1,312,416,838,000
I'd like to install linux, but I don't want to risk damaging my current windows installation as I have heard a lot of horror stories. Fortunately, I have an extra hard drive. Can I install linux onto that and then dual boot windows without having to modify the windows drive? Also, I have a UEFI "BIOS" and the windows drive is in GPT format.
I'm going to use the term BIOS below when referring to concepts that are the same for both newer UEFI systems and traditional BIOS systems, since while this is a UEFI oriented question, talking about the "BIOS" jibes better with, e.g., GRUB documentation, and "BIOS/UEFI" is too clunky. GRUB (actually, GRUB 2 — this is often used ambiguously) is the bootloader installed by linux and used to dual boot Windows.

First, a word about drive order and boot order. Drive order refers to the order in which the drives are physically connected to the bus on the motherboard (first drive, second drive, etc.); this information is reported by the BIOS. Boot order refers to the sequence in which the BIOS checks for a bootable drive. This is not necessarily the same as the drive order, and is usually configurable via the BIOS set-up screen. Drive order should not be configurable or affected by boot order, since that would be a very OS-unfriendly thing to do (but in theory an obtuse BIOS could). Also, if you unplug the first drive, the second drive will likely become the first one. We are going to use UUIDs in configuring the boot loader to try and avoid issues such as this (contemporary linux installers also do this).

The ideal way to get what you want is to install linux onto the second drive in terms of drive order and then select it first in terms of boot order using the UEFI set-up. An added advantage of this is that you can then use the BIOS/UEFI boot order to select the windows drive and bypass grub if you want. The reason I recommend linux on the second drive is because GRUB must "chainload" the Windows native bootloader, and the windows bootloader always assumes it is on the first drive. There is a way to trick it, however, if you prefer or need it the other way around.

Hopefully, you can just go ahead and use a live CD or whatever and get this done using the GUI installer.
Not all installers are created equal, however, and if this gets screwed up and you are left with problems such as: I installed linux onto the first disk and now I can't boot windows, or I installed linux onto the second disk, but using the first disk for the bootloader, and now I can't boot anything! Then keep reading. In the second case, you should first try and re-install linux onto the second disk, and this time make sure that's where the bootloader goes. The easiest and most foolproof way to do that would be to temporarily remove the Windows drive from the machine, since we are going to assume there is nothing extra installed on it, regardless of drive order. Once you have linux installed and you've made sure it can boot, plug the Windows drive back in (if you removed it — and remember, we ideally want it first in terms of drive order, and the second drive first in terms of boot order) and proceed to the next step. Accessing the GRUB configuration Boot linux, open a terminal, and > su root You will be asked for root's password. From this point forward, you are the superuser in that terminal (to check, try whoami), so do not do anything stupid. However, you are still a normal user in the GUI, and since we will be editing a text file, if you prefer a GUI editor we will have to temporarily change the ownership of that file and the directory it is in: > chown -R yourusername /etc/grub.d/ If you get "Operation not permitted", you did not su properly. If you get chown: invalid user: ‘yourusername’, you took the last command too literally. You can now navigate to /etc/grub.d in your filebrowser and look for a file called 40_custom. It should look like this: #!/bin/sh exec tail -n +3 $0 # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. 
If you can't find it, in the root terminal enter the following commands: > touch /etc/grub.d/40_custom > chmod 755 /etc/grub.d/40_custom > chown yourusername /etc/grub.d/40_custom Open it in your text editor, copy-paste the part above (starting w/ #!/bin/sh) and move on to the next step.

Adding a Windows boot option Copy-paste this in with the text editor at the end of the file: menuentry "MS Windows" { insmod part_gpt insmod search_fs_uuid insmod ntfs insmod chain } This is the list of modules GRUB will need to get things done (ntfs may be superfluous, but shouldn't hurt anything either). Note that this is an incomplete entry — we need to add some crucial commands.

Finding the Windows Second Stage Bootloader Your linux install has probably automounted your Windows partition and you should be able to find it in a file browser. If not, figure out a way to make it so (if you are not sure how, ask a question on this site). Once that's done, we need to know the mount point -- this should be obvious in the file browser, e.g. /media/ASDF23SF23/. To save some typing, we're going to put that into a shell variable: win="/whatever/the/path/is" There should be no spaces on either side of the equals sign. Do not include any elements of a Windows path here. This should point to the top level folder on the Windows partition. Now: cd $win find . -name bootmgfw.efi This could take a few minutes if you have a big partition, but most likely the first thing it spits out is what we are looking for; there may be further references in the filesystem containing long gobbledygook strings — those aren't it. Use Ctrl-c to stop the find once you see something short and simple like ./Windows/Boot/EFI/bootmgfw.efi or ./EFI/HP/boot/bootmgfw.efi. Except for the . at the beginning, remember this path for later; you can copy it into your text editor on a blank line at the bottom, since we will be using it there.
If you want to go back to your previous directory now, use cd -, although it does not matter where you are in the shell from here on forward. Setting the Right Parameters GRUB needs to be able to find and hand off the boot process to the second stage Windows bootloader. We already have the path on the Windows partition, but we also need some parameters to tell GRUB where that partition is. There should be a tool installed on your system called grub-probe or (on, e.g., Fedora) grub2-probe. Type grub and then hit Tab two or three times; you should see a list including one or the other. > grub-probe --target=hints_string $win You should see a string such as: --hint-bios=hd1,msdos1 --hint-efi=hd1,msdos1 --hint-baremetal=ahci1,msdos1 Go back to the text editor with the GRUB configuration in it and add a line after all the insmod commands (but before the closing curly brace) so it looks like: insmod chain search --fs-uuid --set=root [the complete "hint bios" string] } Don't break that line or allow your text editor to do so. It may wrap around in the display — an easy way to tell the difference is to set line numbering on. Next: > grub-probe --target=fs_uuid $win This should return a shorter string of letters, numbers, and possible dashes such as "123A456B789X6X" or "b942fb5c-2573-4222-acc8-bbb883f19043". Add that to the end of the search --fs-uuid line after the hint bios string, separated with a space. Next, if (and only if) Windows is on the second drive in terms of drive order, add a line after the search --fs-uuid line: drivemap -s hd0 hd1 This is "the trick" mentioned earlier. Note it is not guaranteed to work but it does not hurt to try. Finally, the last line should be: chainloader (${root})[the Windows path to the bootloader] } Just to be clear, for example: chainloader (${root})/Windows/Boot/EFI/bootmgfw.efi That's it. Save the file and check in a file browser to make sure it really has been saved and looks the way it should. 
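Assembled from the pieces above, the finished 40_custom entry should look something like this; the hint string, UUID, and bootloader path below are the examples used in this answer, so substitute the values grub-probe and find gave you, and add the drivemap line only in the second-drive case:

```
menuentry "MS Windows" {
	insmod part_gpt
	insmod search_fs_uuid
	insmod ntfs
	insmod chain
	search --fs-uuid --set=root --hint-bios=hd1,msdos1 --hint-efi=hd1,msdos1 --hint-baremetal=ahci1,msdos1 b942fb5c-2573-4222-acc8-bbb883f19043
	chainloader (${root})/Windows/Boot/EFI/bootmgfw.efi
}
```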
Add the New Menu Option to GRUB This is done with a tool called grub-mkconfig or grub2-mkconfig; it will have been in that list you found with Tab earlier. You may also have a command called update-grub. To check for that, just type it in the root terminal. If you get "command not found", you need to use grub-mkconfig directly. If not (including getting a longer error), you've just set the configuration and can skim down a bit. To use grub-mkconfig directly, we first need to find grub.cfg: > find /boot -name grub.cfg This will probably be /boot/grub/grub.cfg or /boot/grub2/grub.cfg. > grub-mkconfig -o /boot/grub/grub.cfg update-grub will automatically scan the configuration for errors. grub-mkconfig will not, but it is important to do so because it's much easier to deal with them now than when you try to boot the machine. For this, use grub-script-check (or grub2-script-check): > grub-script-check /boot/grub/grub.cfg If this (or update-grub) produces an error indicating a line number, that's the line number in grub.cfg, but you need to fix the corresponding part in /etc/grub.d/40_custom (the file in your text editor). You may need to be root just to look at the former file though, so try less /boot/grub/grub.cfg in the terminal, hit :, and enter the line number. You should see your menu entry. Find the typo, correct it in the text editor, and run update-grub or grub-mkconfig again. When you are done you can close the text editor and type exit in the terminal to leave superuser mode.

Reboot! When you get to the grub menu, scroll down quickly (before the timeout expires, usually 5 seconds) to the "Windows" option and test it. If you get a text message error from grub, something is wrong with the configuration. If you get an error message from Windows, that problem is between you and Microsoft. Don't worry, however, your Windows drive has not been modified and you will be able to boot directly into it by putting it first (in terms of boot order) via the BIOS set-up.
When you return to linux again, return the ownership of the /etc/grub.d directory and its contents to their original state: sudo chown -R root:root /etc/grub.d/ References GRUB 2 Manual Arch Linux Wiki GRUB page Arch has some of the best documentation going, and much of it (including that page) is mostly applicable to any GNU/Linux distro.
Dual boot windows on second harddrive, UEFI/GPT system
1,312,416,838,000
What is exactly "Arch Fallback" in the Arch boot menu?
The Arch Wiki mkinitcpio page explains the difference between the two: The fallback image utilizes the same configuration file as the default image, except the autodetect hook is skipped during creation, thus including a full range of modules. The autodetect hook detects required modules and tailors the image for specific hardware, shrinking the initramfs. You can create your own image by using the -c and -g options to mkinitcpio - this is helpful if you want to test your own images (to, for example, remove unneeded hooks), like so: sudo mkinitcpio -c /etc/mkinitcpio.conf.new -g /boot/linux-new.img
What is Arch Fallback in Arch boot menu?
1,312,416,838,000
I was wondering if there is a feature in linux like OSX "shake to locate cursor", which temporarily makes the user's mouse or trackpad cursor much larger when shaken back and forth, making it easier to locate if the user loses track of it.
In Linux Mint (18.1) you can go to Preferences > Mouse and, under Locate Pointer you can check a box that will tell the system to "Show position of pointer when the Control key is pressed". I'm not sure if something similar is available on other distros. Not quite what you asked for. Possibly useful?
"Shake to locate cursor" feature
1,312,416,838,000
Searching for what one can monitor with perf_events on Linux, I cannot find what Kernel PMU event are? Namely, with perf version 3.13.11-ckt39 the perf list shows events like: branch-instructions OR cpu/branch-instructions/ [Kernel PMU event] Overall there are: Tracepoint event Software event Hardware event Hardware cache event Raw hardware event descriptor Hardware breakpoint Kernel PMU event and I would like to understand what they are, where they come from. I have some kind of explanation for all, but Kernel PMU event item. From perf wiki tutorial and Brendan Gregg's page I get that: Tracepoints are the clearest -- these are macros on the kernel source, which make a probe point for monitoring, they were introduced with ftrace project and now are used by everybody Software are kernel's low level counters and some internal data-structures (hence, they are different from tracepoints) Hardware event are some very basic CPU events, found on all architectures and somehow easily accessed by kernel Hardware cache event are nicknames to Raw hardware event descriptor -- it works as follows as I got it, Raw hardware event descriptor are more (micro?)architecture-specific events than Hardware event, the events come from Processor Monitoring Unit (PMU) or other specific features of a given processor, thus they are available only on some micro-architectures (let's say "architecture" means "x86_64" and all the rest of the implementation details are "micro-architecture"); and they are accessible for instrumentation via these strange descriptors rNNN [Raw hardware event descriptor] cpu/t1=v1[,t2=v2,t3 ...]/modifier [Raw hardware event descriptor] (see 'man perf-list' on how to encode it) -- these descriptors, which events they point to and so on is to be found in processor's manuals (PMU events in perf wiki); but then, when people know that there is some useful event on a given processor they give it a nickname and plug it into linux as Hardware cache event for ease of access -- 
correct me if I'm wrong (strangely all Hardware cache event are about something-loads or something-misses -- very like the actual processor's cache..) now, the Hardware breakpoint mem:<addr>[:access] [Hardware breakpoint] is a hardware feature, which is probably common to most modern architectures, and works as a breakpoint in a debugger? (probably it is googlable anyway) finally, Kernel PMU event I don't manage to google on; it also doesn't show up in the listing of Events in Brendan's perf page, so it's new? Maybe it's just nicknames to hardware events specifically from PMU? (For ease of access it got a separate section in the list of events in addition to the nickname.) In fact, maybe Hardware cache events are nicknames to hardware events from CPU's cache and Kernel PMU event are nicknames to PMU events? (Why not call it Hardware PMU event then?..) It could be just new naming scheme -- the nicknames to hardware events got sectionized? And these events refer to things like cpu/mem-stores/, plus since some linux version events got descriptions in /sys/devices/ and: # find /sys/ -type d -name events /sys/devices/cpu/events /sys/devices/uncore_cbox_0/events /sys/devices/uncore_cbox_1/events /sys/kernel/debug/tracing/events -- debug/tracing is for ftrace and tracepoints, other directories match exactly what perf list shows as Kernel PMU event. Could someone point me to a good explanation/documentation of what Kernel PMU events or /sys/..events/ systems are? Also, is /sys/..events/ some new effort to systemize hardware events or something alike? (Then, Kernel PMU is like "the Performance Monitoring Unit of Kernel".) PS To give better context, not-privileged run of perf list (tracepoints are not shown, but all 1374 of them are there) with full listings of Kernel PMU events and Hardware cache events and others skipped: $ perf list List of pre-defined events (to be used in -e): cpu-cycles OR cycles [Hardware event] instructions [Hardware event] ... 
cpu-clock [Software event] task-clock [Software event] ... L1-dcache-load-misses [Hardware cache event] L1-dcache-store-misses [Hardware cache event] L1-dcache-prefetch-misses [Hardware cache event] L1-icache-load-misses [Hardware cache event] LLC-loads [Hardware cache event] LLC-stores [Hardware cache event] LLC-prefetches [Hardware cache event] dTLB-load-misses [Hardware cache event] dTLB-store-misses [Hardware cache event] iTLB-loads [Hardware cache event] iTLB-load-misses [Hardware cache event] branch-loads [Hardware cache event] branch-load-misses [Hardware cache event] branch-instructions OR cpu/branch-instructions/ [Kernel PMU event] branch-misses OR cpu/branch-misses/ [Kernel PMU event] bus-cycles OR cpu/bus-cycles/ [Kernel PMU event] cache-misses OR cpu/cache-misses/ [Kernel PMU event] cache-references OR cpu/cache-references/ [Kernel PMU event] cpu-cycles OR cpu/cpu-cycles/ [Kernel PMU event] instructions OR cpu/instructions/ [Kernel PMU event] mem-loads OR cpu/mem-loads/ [Kernel PMU event] mem-stores OR cpu/mem-stores/ [Kernel PMU event] ref-cycles OR cpu/ref-cycles/ [Kernel PMU event] stalled-cycles-frontend OR cpu/stalled-cycles-frontend/ [Kernel PMU event] uncore_cbox_0/clockticks/ [Kernel PMU event] uncore_cbox_1/clockticks/ [Kernel PMU event] rNNN [Raw hardware event descriptor] cpu/t1=v1[,t2=v2,t3 ...]/modifier [Raw hardware event descriptor] (see 'man perf-list' on how to encode it) mem:<addr>[:access] [Hardware breakpoint] [ Tracepoints not available: Permission denied ]
Googling and ack-ing is over! I've got some answer. But firstly let me clarify the aim of the question a little more: I want to clearly distinguish independent processes in the system and their performance counters. For instance, a core of a processor, an uncore device (learned about it recently), kernel or user application on the processor, a bus (= bus controller), a hard drive are all independent processes, they are not synchronized by a clock. And nowadays probably all of them have some Performance Monitoring Counter (PMC). I'd like to understand which processes the counters come from. (It is also helpful in googling: knowing the "vendor" of a thing zeroes in on it better.) Also, the gear used for the search: Ubuntu 14.04, linux 3.13.0-103-generic, processor Intel(R) Core(TM) i5-3317U CPU @ 1.70GHz (from /proc/cpuinfo, it has 2 physical cores and 4 virtual -- the physical ones matter here). Terminology, things the question involves From Intel: processor is a core device (it's 1 device/process) and a bunch of uncore devices, core is what runs the program (clock, ALU, registers etc), uncore are devices put on die, close to the processor for speed and low latency (the real reason is "because the manufacturer can do it"); as I understood it is basically the Northbridge, like on a PC motherboard, plus caches; and AMD actually calls these devices NorthBridge instead of uncore; ubox which shows up in my sysfs $ find /sys/devices/ -type d -name events /sys/devices/cpu/events /sys/devices/uncore_cbox_0/events /sys/devices/uncore_cbox_1/events -- is an uncore device, which manages Last Level Cache (LLC, the last one before hitting RAM); I have 2 cores, thus 2 LLC and 2 ubox; Performance Monitoring Unit (PMU) is a separate device which monitors operations of a processor and records them in Performance Monitoring Counters (PMC) (counts cache misses, processor cycles etc); they exist on core and uncore devices; the core ones are accessed with rdpmc (read PMC) instruction; the uncore, since these devices
depend on the actual processor at hand, are accessed via Model Specific Registers (MSR) with rdmsr (naturally); apparently, the workflow with them is done via pairs of registers -- 1 register sets which events the counter counts, 2 register is the value in the counter; the counter can be configured to increment after a bunch of events, not just 1; + there are some interrupts/tech noticing overflows in these counters; more one can find in Intel's "IA-32 Software Developer's Manual Vol 3B" chapter 18 "PERFORMANCE MONITORING"; also, the MSR's format concretely for these uncore PMCs for version "Architectural Performance Monitoring Version 1" (there are versions 1-4 in the manual, I don't know which one is my processor) is described in "Figure 18-1. Layout of IA32_PERFEVTSELx MSRs" (page 18-3 in mine), and section "18.2.1.2 Pre-defined Architectural Performance Events" with "Table 18-1. UMask and Event Select Encodings for Pre-Defined Architectural Performance Events", which shows the events which show up as Hardware event in perf list. From linux kernel: kernel has a system (abstraction/layer) for managing performance counters of different origin, both software (kernel's) and hardware, it is described in linux-source-3.13.0/tools/perf/design.txt; an event in this system is defined as struct perf_event_attr (file linux-source-3.13.0/include/uapi/linux/perf_event.h), the main part of which is probably the __u64 config field -- it can hold both a CPU-specific event definition (the 64bit word in the format described on those Intel's figures) or a kernel's event. The MSB of the config word signifies if the rest contains [raw CPU's or kernel's event]; a kernel's event is defined with 7 bits for type and 56 for the event's identifier, which are enum-s in the code, which in my case are: $ ak PERF_TYPE linux-source-3.13.0/include/ ...
linux-source-3.13.0/include/uapi/linux/perf_event.h 29: PERF_TYPE_HARDWARE = 0, 30: PERF_TYPE_SOFTWARE = 1, 31: PERF_TYPE_TRACEPOINT = 2, 32: PERF_TYPE_HW_CACHE = 3, 33: PERF_TYPE_RAW = 4, 34: PERF_TYPE_BREAKPOINT = 5, 36: PERF_TYPE_MAX, /* non-ABI */ (ak is my alias to ack-grep, which is the name for ack on Debian; and ack is awesome); in the source code of kernel one can see operations like "register all PMUs discovered on the system" and structure types struct pmu, which are passed to something like int perf_pmu_register(struct pmu *pmu, const char *name, int type) -- thus, one could just call this system "kernel's PMU", which would be an aggregation of all PMUs on the system; but this name could be interpreted as monitoring system of kernel's operations, which would be misleading; let's call this subsystem perf_events for clarity; as any kernel subsystem, this subsystem can be exported into sysfs (which is made to export kernel subsystems for people to use); and that's what are those events directories in my /sys/ -- the exported (parts of?) perf_events subsystem; also, the user-space utility perf (built into linux) is still a separate program and has its own abstractions; it represents an event requested for monitoring by user as perf_evsel (files linux-source-3.13.0/tools/perf/util/evsel.{h,c}) -- this structure has a field struct perf_event_attr attr;, but also a field like struct cpu_map *cpus; that's how the perf utility assigns an event to all or particular CPUs. Answer Indeed, Hardware cache event are "shortcuts" to the events of the cache devices (ubox of Intel's uncore devices), which are processor-specific, and can be accessed via the protocol Raw hardware event descriptor. And Hardware event are more stable within architecture, which, as I understand, name the events from the core device. There are no other "shortcuts" in my kernel 3.13 to some other uncore events and counters. All the rest -- Software and Tracepoints -- are kernel's events.
I wonder if the core's Hardware events are accessed via the same Raw hardware event descriptor protocol. They might not -- since the counter/PMU sits on core, maybe it is accessed differently. For instance, with that rdpmc instruction, instead of rdmsr, which accesses uncore. But it is not that important. Kernel PMU event are just the events, which are exported into sysfs. I don't know how this is done (automatically by kernel all discovered PMCs on the system, or just something hard-coded, and if I add a kprobe -- is it exported? etc). But the main point is that these are the same events as Hardware event or any other in the internal perf_event system. And I don't know what those $ ls /sys/devices/uncore_cbox_0/events clockticks are. Details on Kernel PMU event Searching through the code leads to: $ ak "Kernel PMU" linux-source-3.13.0/tools/perf/ linux-source-3.13.0/tools/perf/util/pmu.c 629: printf(" %-50s [Kernel PMU event]\n", aliases[j]); -- which happens in the function void print_pmu_events(const char *event_glob, bool name_only) { ... while ((pmu = perf_pmu__scan(pmu)) != NULL) list_for_each_entry(alias, &pmu->aliases, list) {...} ... /* b.t.w. list_for_each_entry is an iterator * apparently, it takes a block of {code} and runs over some list * built in kernel! */ // then there is a loop over these aliases and loop{ ... printf(" %-50s [Kernel PMU event]\n", aliases[j]); ... } } and perf_pmu__scan is in the same file: struct perf_pmu *perf_pmu__scan(struct perf_pmu *pmu) { ... pmu_read_sysfs(); // that's what it calls } -- which is also in the same file: /* Add all pmus in sysfs to pmu list: */ static void pmu_read_sysfs(void) {...} That's it. Details on Hardware event and Hardware cache event Apparently, the Hardware event come from what Intel calls "Pre-defined Architectural Performance Events", 18.2.1.2 in IA-32 Software Developer's Manual Vol 3B.
And "18.1 PERFORMANCE MONITORING OVERVIEW" of the manual describes them as: The second class of performance monitoring capabilities is referred to as architectural performance monitoring. This class supports the same counting and Interrupt-based event sampling usages, with a smaller set of available events. The visible behavior of architectural performance events is consistent across processor implementations. Availability of architectural performance monitoring capabilities is enumerated using the CPUID.0AH. These events are discussed in Section 18.2. -- the other type is: Starting with Intel Core Solo and Intel Core Duo processors, there are two classes of performance monitoring capabilities. The first class supports events for monitoring performance using counting or interrupt-based event sampling usage. These events are non-architectural and vary from one processor model to another... And these events are indeed just links to underlying "raw" hardware events, which can be accessed via the perf utility as Raw hardware event descriptor. To check this one looks at linux-source-3.13.0/arch/x86/kernel/cpu/perf_event_intel.c: /* * Intel PerfMon, used on Core and later. */ static u64 intel_perfmon_event_map[PERF_COUNT_HW_MAX] __read_mostly = { [PERF_COUNT_HW_CPU_CYCLES] = 0x003c, [PERF_COUNT_HW_INSTRUCTIONS] = 0x00c0, [PERF_COUNT_HW_CACHE_REFERENCES] = 0x4f2e, [PERF_COUNT_HW_CACHE_MISSES] = 0x412e, ... } -- and exactly 0x412e is found in "Table 18-1. UMask and Event Select Encodings for Pre-Defined Architectural Performance Events" for "LLC Misses": Bit Position CPUID.AH.EBX | Event Name | UMask | Event Select ... 4 | LLC Misses | 41H | 2EH -- H is for hex. All 7 are in the structure, plus [PERF_COUNT_HW_REF_CPU_CYCLES] = 0x0300, /* pseudo-encoding */. (The naming is a bit different, addresses are the same.)
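The encoding in that table can be checked in a shell one-liner: the low bits of IA32_PERFEVTSELx hold the event select in bits 0-7 and the umask in bits 8-15, so (umask << 8) | event reproduces the values in intel_perfmon_event_map, and the result is exactly what perf accepts as a Raw hardware event descriptor (rNNN). A small sketch (the arithmetic is mine, the 41H/2EH and 00H/3CH values are from Table 18-1 above):

```shell
# "LLC Misses" from Table 18-1: UMask 41H, Event Select 2EH.
# (umask << 8) | event_select matches
# intel_perfmon_event_map[PERF_COUNT_HW_CACHE_MISSES] = 0x412e above.
llc_misses=$(( (0x41 << 8) | 0x2e ))
printf 'LLC Misses raw descriptor: r%x\n' "$llc_misses"    # r412e

# Same check for "UnHalted Core Cycles" (UMask 00H, Event Select 3CH),
# which matches [PERF_COUNT_HW_CPU_CYCLES] = 0x003c:
printf 'CPU Cycles raw descriptor: r%x\n' $(( (0x00 << 8) | 0x3c ))
```

So, on a processor where this encoding applies, perf stat -e r412e should count the same thing as the cache-misses Hardware event shortcut.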
Then the Hardware cache events are in structures like (in the same file): static __initconst const u64 snb_hw_cache_extra_regs [PERF_COUNT_HW_CACHE_MAX] [PERF_COUNT_HW_CACHE_OP_MAX] [PERF_COUNT_HW_CACHE_RESULT_MAX] = {...} -- which should be for sandy bridge? One of these -- snb_hw_cache_extra_regs[LL][OP_WRITE][RESULT_ACCESS] is filled with SNB_DMND_WRITE|SNB_L3_ACCESS, where from the def-s above: #define SNB_L3_ACCESS SNB_RESP_ANY #define SNB_RESP_ANY (1ULL << 16) #define SNB_DMND_WRITE (SNB_DMND_RFO|SNB_LLC_RFO) #define SNB_DMND_RFO (1ULL << 1) #define SNB_LLC_RFO (1ULL << 8) which should equal to 0x00010102, but I don't know how to check it with some table. And this gives an idea how it is used in perf_events: $ ak hw_cache_extra_regs linux-source-3.13.0/arch/x86/kernel/cpu/ linux-source-3.13.0/arch/x86/kernel/cpu/perf_event.c 50:u64 __read_mostly hw_cache_extra_regs 292: attr->config1 = hw_cache_extra_regs[cache_type][cache_op][cache_result]; linux-source-3.13.0/arch/x86/kernel/cpu/perf_event.h 521:extern u64 __read_mostly hw_cache_extra_regs linux-source-3.13.0/arch/x86/kernel/cpu/perf_event_intel.c 272:static __initconst const u64 snb_hw_cache_extra_regs 567:static __initconst const u64 nehalem_hw_cache_extra_regs 915:static __initconst const u64 slm_hw_cache_extra_regs 2364: memcpy(hw_cache_extra_regs, nehalem_hw_cache_extra_regs, 2365: sizeof(hw_cache_extra_regs)); 2407: memcpy(hw_cache_extra_regs, slm_hw_cache_extra_regs, 2408: sizeof(hw_cache_extra_regs)); 2424: memcpy(hw_cache_extra_regs, nehalem_hw_cache_extra_regs, 2425: sizeof(hw_cache_extra_regs)); 2452: memcpy(hw_cache_extra_regs, snb_hw_cache_extra_regs, 2453: sizeof(hw_cache_extra_regs)); 2483: memcpy(hw_cache_extra_regs, snb_hw_cache_extra_regs, 2484: sizeof(hw_cache_extra_regs)); 2516: memcpy(hw_cache_extra_regs, snb_hw_cache_extra_regs, sizeof(hw_cache_extra_regs)); $ The memcpys are done in __init int intel_pmu_init(void) {... case:...}. Only attr->config1 is a bit odd. 
But it is there, in perf_event_attr (same linux-source-3.13.0/include/uapi/linux/perf_event.h file): ... union { __u64 bp_addr; __u64 config1; /* extension of config */ }; union { __u64 bp_len; __u64 config2; /* extension of config1 */ }; ... They are registered in kernel's perf_events system with calls to int perf_pmu_register(struct pmu *pmu, const char *name, int type) (defined in linux-source-3.13.0/kernel/events/core.c: ): static int __init init_hw_perf_events(void) (file arch/x86/kernel/cpu/perf_event.c) with call perf_pmu_register(&pmu, "cpu", PERF_TYPE_RAW); static int __init uncore_pmu_register(struct intel_uncore_pmu *pmu) (file arch/x86/kernel/cpu/perf_event_intel_uncore.c, there are also arch/x86/kernel/cpu/perf_event_amd_uncore.c) with call ret = perf_pmu_register(&pmu->pmu, pmu->name, -1); So finally, all events come from hardware and everything is ok. But here one could notice: why do we have LLC-loads in perf list and not ubox1 LLC-loads, since these are HW events and they actually come from uboxes? That's a thing of the perf utility and its perf_evsel structure: when you request a HW event from perf you define the event and which processors you want it from (default is all), and it sets up the perf_evsel with the requested event and processors, then at aggregation it sums the counters from all processors in perf_evsel (or does some other statistics with them).
One can see it in tools/perf/builtin-stat.c: /* * Read out the results of a single counter: * aggregate counts across CPUs in system-wide mode */ static int read_counter_aggr(struct perf_evsel *counter) { struct perf_stat *ps = counter->priv; u64 *count = counter->counts->aggr.values; int i; if (__perf_evsel__read(counter, perf_evsel__nr_cpus(counter), thread_map__nr(evsel_list->threads), scale) < 0) return -1; for (i = 0; i < 3; i++) update_stats(&ps->res_stats[i], count[i]); if (verbose) { fprintf(output, "%s: %" PRIu64 " %" PRIu64 " %" PRIu64 "\n", perf_evsel__name(counter), count[0], count[1], count[2]); } /* * Save the full runtime - to allow normalization during printout: */ update_shadow_stats(counter, count); return 0; } (So, for the utility perf a "single counter" is not even a perf_event_attr, which is a general form, fitting both SW and HW events, it is an event of your query -- the same events may come from different devices and they are aggregated.) Also a notice: struct perf_evsel contains only 1 struct perf_event_attr, but it also has a field struct perf_evsel *leader; -- it is nested. There is a feature of "(hierarchical) groups of events" in perf_events, when you can dispatch a bunch of counters together, so that they can be compared to each other and so on. Not sure how it works with independent events from kernel, core, ubox. But this nesting of perf_evsel is it. And, most likely, that's how perf manages a query of several events together.
What are Kernel PMU event-s in perf_events list?
1,312,416,838,000
I want to use English language with German locale settings. Right now my system runs with the following setup (configured during installation procedure in Debian Expert Installer): Language: English - English (Default) Country, territory or area: other -> Europe -> Austria Country to base default locale settings on: United States - en_US.UTF-8 Keyboard: German My question now is: How can I preserve English language but switch the current locale (United States - en_US.UTF-8) to desired German locale (de_DE.UTF-8)? During installation procedure this was not possible because an error occurred ("Invalid language/locale settings combination detected").
en_DE doesn’t exist as a default locale, so you can’t select English localised for German-speaking countries as a locale during installation. (Why should one use update-locale instead of directly setting LANGUAGE? describes the checks involved in choosing a locale.) There are two approaches to achieve what you’re after. One is to create a new locale with your settings; see How to (easily) be able to use a new en_** locale? for details. The other is to set up your locale settings in a finer-grained fashion, using the various LC_ variables; for example:

export LANG=en_US.UTF-8
export LC_MONETARY=de_DE.UTF-8
export LC_TIME=de_DE.UTF-8

or, if you want German to be the default except for messages:

export LANG=de_DE.UTF-8
export LC_MESSAGES=en_US.UTF-8

(and unset any other conflicting LC_ variables, in particular LC_ALL, which overrides all other settings). You can check your settings using the locale program; see How does the "locale" program work? for details.
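The lookup order among these variables (LC_ALL beats each LC_* category, which beats LANG) can be sketched as a small shell function. The effective_locale function below is a hypothetical illustration of the precedence rules, not part of any standard tool:

```shell
#!/bin/sh
# Sketch of POSIX locale precedence: LC_ALL > LC_<category> > LANG > "C".
effective_locale() {
    eval "cat_value=\${$1}"              # value of the named LC_* variable
    if [ -n "${LC_ALL}" ]; then
        printf '%s\n' "$LC_ALL"          # LC_ALL overrides everything
    elif [ -n "$cat_value" ]; then
        printf '%s\n' "$cat_value"       # then the category-specific setting
    else
        printf '%s\n' "${LANG:-C}"       # then LANG, with C as the last resort
    fi
}

# English messages, everything else German:
unset LC_ALL
LANG=de_DE.UTF-8
LC_MESSAGES=en_US.UTF-8
effective_locale LC_MESSAGES   # → en_US.UTF-8
effective_locale LC_TIME       # → de_DE.UTF-8 (falls back to LANG)
```

This is why the answer warns about LC_ALL: with it set, the per-category settings never get a chance to apply.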
Debian 9: How to set English language with German Locale?
1,482,905,948,000
I am using Debian GNU/Linux 7.8 (wheezy). While running my MATLAB program today, I got this message in terminal.

Message from syslogd@sas21 at Jul 18 16:40:49 ...
 kernel:[1747708.091929] Uhhuh. NMI received for unknown reason 20 on CPU 4.

Message from syslogd@sas21 at Jul 18 16:40:49 ...
 kernel:[1747708.091932] Do you have a strange power saving mode enabled?

Message from syslogd@sas21 at Jul 18 16:40:49 ...
 kernel:[1747708.091932] Dazed and confused, but trying to continue

I also remember hearing some beep sound in between. What does this mean? And what should I do further?
The problem seems to be that the End of Interrupt isn't communicated properly. For libvirt, make sure eoi is enabled:

<domain>
  …
  <features>
    <apic eoi='on'/>
  …

On the command line for KVM that translates to

-cpu …,+kvm_pv_eoi

This seems to work for us with -M q35, host cpu passthrough and default config otherwise (RTC interrupts queued, PIT interrupts dropped, HPET unavailable).
NMI received for unknown reason 20 — Do you have a strange power saving mode enabled?
1,482,905,948,000
What is the difference between the where and which shell commands? Here are some examples:

~ where cc
/usr/bin/cc
/usr/bin/cc

~ which cc
/usr/bin/cc

and

~ which which
which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'
/usr/bin/which

~ which where
/usr/bin/which: no where in (/usr/local/bin:/bin:/usr/bin:/home/bnikhil/bin:/bin)

also

~ where which
which: aliased to alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde
which: shell built-in command
/usr/bin/which
/usr/bin/which

~ where where
where: shell built-in command

To me it seems that they do the same thing, one being a shell builtin; I'm not quite sure how that is different from a command.
zsh is one of the few shells where which does something sensible, since there it is a shell builtin (the other one being tcsh, which originated as a csh script for csh users; that script had its limitations, and tcsh made it a builtin as an improvement). But somehow you or your OS (via some rc file) broke it by replacing it with a call to the system which command, which can't do anything sensible reliably since it doesn't have access to the internals of the shell, so it can't know how that shell interprets a command name.

In zsh, all of which, type, whence and where are builtin commands that are all used to find out what commands are, but with different outputs. They're all there for historical reasons; you can get all of their behaviours with different flags to the whence command.

You can get the details of what each does by running:

info zsh which
info zsh whence
...

Or type info zsh, then bring up the index with i, and enter the builtin name (completion is available).

And avoid using /usr/bin/which. There's no shell nowadays where that which is needed. As Timothy says, use the builtin that your shell provides for that. Most POSIX shells will have the type command, and you can use command -v to only get the path of a command (though both type and command -v are optional in POSIX (but not Unix, and not any longer in LSB), they are available in most if not all the Bourne-like shells you're likely to ever come across).

(BTW, it looks like /usr/bin appears twice in your $PATH; you could add a typeset -U path to your ~/.zshrc)
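A quick illustration of the portable alternatives mentioned above, the shell's own command -v and type, which ask the shell itself rather than an external which binary (exact paths in the output will vary by system):

```shell
#!/bin/sh
# Ask the shell how it would interpret a name.
command -v ls    # prints the path (or alias/function/builtin name) it would run
type ls          # a human-readable description, e.g. "ls is /bin/ls"

# Builtins resolve too, with no external binary involved:
command -v cd    # → cd
```

Because these run inside the shell doing the interpreting, they see aliases, functions and builtins that an external /usr/bin/which cannot.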
What is the difference between which and where
1,482,905,948,000
I have a mouse with a weird problem. The left button sometimes fires a double click even if I click just one time. I would like to know if there's a way to avoid fast double clicks, ignoring clicks with an interval lower than a defined value. I'm using Fedora 15. Thanks in advance.
I found this: https://aur.archlinux.org/packages/xf86-input-evdev-debounce/ after googling for "linux xinput mouse debounce" I'm not gonna test it. It's been a long time since you asked, but maybe someone has the same problem with a favorite mouse, so here it is. Also, I'm not an Arch user; but they really rock!
Avoid very fast double clicks
1,482,905,948,000
I wanted to ask is there any reason not to use rsync for everything and abandon cp? I wasn't aware of rsync and now I don't know why cp is ever needed.
Strictly speaking yes, you can always use rsync. From man rsync (emphasis mine): Rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon. It offers a large number of options that control every aspect of its behavior and permit very flexible specification of the set of files to be copied. It is famous for its delta-transfer algo‐ rithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destina‐ tion. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use. Now, sometimes it is just not worth typing those few extra characters just to use a tank to kill a fly. Also, rsync is often not installed by default so cp is nice to have.
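If the few extra characters are the only objection, one hedge is a tiny wrapper that prefers rsync when present and falls back to cp otherwise. The copy_tree function here is a hypothetical sketch (note cp's src/. idiom, which copies the directory's contents much like rsync's trailing slash does):

```shell
#!/bin/sh
# copy_tree SRC DST: copy the contents of SRC into DST, preserving metadata.
copy_tree() {
    src=$1 dst=$2
    mkdir -p -- "$dst"
    if command -v rsync >/dev/null 2>&1; then
        rsync -a "$src"/ "$dst"/     # -a: recursive, preserve permissions/times
    else
        cp -a -- "$src"/. "$dst"/    # -a: archive mode (GNU cp)
    fi
}

# Example:
# copy_tree /etc/apache2 /tmp/apache2-backup
```

Run a second time on the same tree, the rsync branch also only transfers what changed, which is where it pulls ahead of cp.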
Why not always use rsync?
1,482,905,948,000
I accidentally moved all folders from root to a subfolder. (/bin, /etc, /home, /lib, /usr... all moved) The only ones that were not moved, since they were in use, are /bak, /boot, /dev, /proc, /sys. Now, any command that I try to execute will simply not happen. I constantly get "No such file or directory". I am connected through ssh and through ftp, but I cannot move files through ftp, as direct SU login is disabled. I also have access to the actual server if I need to do something directly from there. I'm assuming I would need to edit a configuration file in order to tell it where to find the /bin folder and that would help me get access again, but I don't know which file that would be or how to do it (since I can't even run chmod to change permissions). Is there any way out of this other than re-installing? I am working on an old version of CentOS. I'm extremely new to the world of Linux, hence this action and the question...
If you still have a root shell, you may have a chance to repair your system. Let's say that you moved all the common directories (/bin, /etc, /lib, /sbin, /usr — these are the ones that could make recovery difficult) under /oops. You won't be able to issue the mv command directly, even if you specify the full path /oops/bin/mv. That's because mv is dynamically linked; because you've moved the /lib directory, mv can't run because it can't find the libraries that constitute part of its code. In fact, it's even worse than that: mv can't find the dynamic loader /lib/ld-linux.so.2 (the name may vary depending on your architecture and unix variant, and the directory could be a different name such as /lib32 or /lib64). Therefore, until you've moved the /lib directory back, you need to invoke the linker explicitly, and you need to specify the path to the moved libraries. Here's the command tested on Debian squeeze i386:

export LD_LIBRARY_PATH=/oops/lib:/oops/lib/i386-linux-gnu
/oops/lib/ld-linux.so.2 /oops/bin/mv /oops/* /

You may need to adjust this a little for other distributions or architectures. For example, for CentOS on x86_64:

export LD_LIBRARY_PATH=/oops/lib:/oops/lib64
/oops/lib64/ld-linux-x86-64.so.2 /oops/bin/mv /oops/* /

When you've screwed up something in /lib, it helps to have a statically linked toolbox lying around. Some distributions (I don't know about CentOS) provide a statically-linked copy of Busybox. There's also sash, a standalone shell with many commands built in. If you have one of these, you can do your recovery from there. If you haven't installed them before the fact, it's too late.

# mkdir /oops
# mv /lib /bin /oops
# sash
Stand-alone shell (version 3.7)
> -mv /oops/* /
> exit

If you don't have a root shell anymore, but you still have an SSH daemon listening and you can log in directly as root over ssh, and you have one of these statically-linked toolboxes, you might be able to ssh in. This can work if you've moved /lib and /bin, but not /etc.
ssh root@remotehost /oops/bin/sash
root@remotehost's password:
Stand-alone shell (version 3.7)
> -mv /oops/* /

Some administrators set up an alternate account with a statically-linked shell, or make the root account use a statically-linked shell, just for this kind of trouble. If you don't have a root shell and haven't taken precautions, you'll need to boot from a Linux live CD/USB (any will do as long as it's recent enough to be able to access your disks and filesystems) and move the files back.
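Since the recovery step leaves no room for typos, it can be worth rehearsing the mv /oops/* / glob on a throwaway directory tree first. Everything below happens in a scratch directory created by mktemp; nothing touches the real /:

```shell
#!/bin/sh
# Rehearse the accident and the recovery inside a scratch "fake root".
root=$(mktemp -d)
mkdir -p "$root/bin" "$root/etc" "$root/oops"
touch "$root/bin/mv" "$root/etc/passwd"

mv "$root/bin" "$root/etc" "$root/oops"/   # simulate the accident
ls "$root"                                 # → oops

mv "$root/oops"/* "$root"/                 # the recovery step rehearsed
ls "$root/bin/mv" "$root/etc/passwd"       # both back where they belong
```

The glob deliberately expands inside /oops, so the directories land back at the top level rather than nested under it.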
Moved bin and other folders! How to get them back?
1,482,905,948,000
After reading some pretty nice answers from this question, I am still fuzzy on why you would want to pretend that you are root without getting any of the benefits of actually being root. So far, what I can gather is that fakeroot is used to give ownership to a file that needs to be root when it is unzip/tar'ed. My question, is why can't you just do that with chown? A Google Groups discussion here points out that you need fakeroot to compile a Debian kernel (if you want to do it from an unprivileged user). My comment is that, the reason you need to be root in order to compile is probably because read permissions were not set for other users. If so isn't it a security violation that fakeroot allows for compilation(which means gcc can now read a file that was for root)? This answer here describes that the actual system calls are made with real uid/gid of the user, so again where does fakeroot help? How does fakeroot stop unwanted privilege escalations on Linux? If fakeroot can trick tar into making a file that was owned by root, why not do something similar with SUID? From what I have gathered, fakeroot is just useful when you want to change the owner of any package files that you built to root. But you can do that with chown, so where am I lacking in my understanding of how this component is suppose to be used?
So far, what I can gather is that fakeroot is used to give ownership to a file that needs to be root when it is unzip/tar'ed. My question, is why can't you just do that with chown?

Because you can’t just do that with chown, at least not as a non-root user. (And if you’re running as root, you don’t need fakeroot.) That’s the whole point of fakeroot: to allow programs which expect to be run as root to run as a normal user, while pretending that the root-requiring operations succeed. This is used typically when building a package, so that the installation process of the package being installed can proceed without error (even if it runs chown root:root, or install -o root, etc.). fakeroot remembers the fake ownership which it pretended to give files, so subsequent operations looking at the ownership see this instead of the real one; this allows subsequent tar runs for example to store files as owned by root.

How does fakeroot stop unwanted privilege escalations on Linux? If fakeroot can trick tar into making a file that was owned by root, why not do something similar with SUID?

fakeroot doesn’t trick tar into doing anything, it preserves changes the build wants to make without letting those changes take effect on the system hosting the build. You don’t need fakeroot to produce a tarball containing a file owned by root and suid; if you have a binary evilbinary, running tar cf evil.tar --mode=4755 --owner=root --group=root evilbinary, as a regular user, will create a tarball containing evilbinary, owned by root, and suid. However, you won’t be able to extract that tarball and preserve those permissions unless you do so as root: there is no privilege escalation here. fakeroot is a privilege de-escalation tool: it allows you to run a build as a regular user, while preserving the effects the build would have had if it had been run as root, allowing those effects to be replayed later.
Applying the effects “for real” always requires root privileges; fakeroot doesn’t provide any method of acquiring them. To understand the use of fakeroot in more detail, consider that a typical distribution build involves the following operations (among many others):

1. install files, owned by root
2. ...
3. archive those files, still owned by root, so that when they’re extracted, they’ll be owned by root

The first part obviously fails if you’re not root. However, when running under fakeroot, as a normal user, the process becomes:

1. install files, owned by root — this fails, but fakeroot pretends it succeeds, and remembers the changed ownership
2. ...
3. archive those files, still owned by root — when tar (or whatever archiver is being used) asks the system what the file ownership is, fakeroot changes the answer to match the ownership it recorded earlier

Thus you can run a package build without being root, while obtaining the same results you’d get if you were really running as root. Using fakeroot is safer: the system still can’t do anything your user can’t do, so a rogue installation process can’t damage your system (beyond touching your files).

In Debian, the build tools have been improved so as not to require this any more, and you can build packages without fakeroot. This is supported by dpkg directly with the Rules-Requires-Root directive (see rootless-builds.txt).

To understand the purpose of fakeroot, and the security aspects of running as root or not, it might help to consider the purpose of packaging. When you install a piece of software from source, for use system-wide, you proceed as follows:

1. build the software (which can be done without privileges)
2. install the software (which needs to be done as root, or at least as a user allowed to write to the appropriate system locations)

When you package a piece of software, you’re delaying the second part; but to do so successfully, you still need to “install” the software, into the package rather than onto the system.
So when you package software, the process becomes:

1. build the software (with no special privileges)
2. pretend to install the software (again with no special privileges)
3. capture the software installation as a package (ditto)
4. make the package available (ditto)

Now a user completes the process by installing the package, which needs to be done as root (or again, a user with the appropriate privileges to write to the appropriate locations). This is where the delayed privileged process is realised, and is the only part of the process which needs special privileges.

fakeroot helps with steps 2 and 3 above by allowing us to run software installation processes, and capture their behaviour, without running as root.
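The tar half of the story is easy to see end to end without fakeroot. With GNU tar (an assumption — other tar implementations may lack these options), --owner/--group/--mode only change what gets recorded in the archive, while the file on disk keeps your ownership:

```shell
#!/bin/sh
# Record root ownership and a setuid mode in an archive, as a normal user.
dir=$(mktemp -d)
echo demo > "$dir/binary"

# GNU tar: the metadata flags affect only the archive entry, not the file.
tar -cf "$dir/demo.tar" --mode=4755 --owner=root --group=root -C "$dir" binary

tar -tvf "$dir/demo.tar"   # listing shows -rwsr-xr-x root/root ... binary
ls -l "$dir/binary"        # the real file is untouched: still yours
```

Extracting that archive with the recorded root/suid metadata intact still requires root, which is exactly the "no privilege escalation" point above.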
How fakeroot is not a security breach in Linux?
1,482,905,948,000
Coming from the Windows world, where I'm in the habit of putting every new EXE or Installation file through something like Virustotal, or searching Stack Exchange/Reddit for reviews on the safety (no malware, no spyware, etc) of a particular piece of software before installing it. With Linux, is it mostly completely safe to install any utility or software so long as I'm using the default repositories that come with new installs of the OS from vendor images? If not, what is a general process for validating the safety of a particular Linux utility/program/application?
Short answer

Yes, it is 'mostly safe' to install any utility or software so long as you are using the default repositories that come with new installs of the OS. The default repositories contain software that is tested by the developers and/or maintainers of the Linux distro.

Example

There are levels of security. Take Ubuntu as an example:

The Ubuntu developers/maintainers working at Canonical are fully responsible for the central program packages (the repository main etc) that are used in the server version and all flavours of Ubuntu desktop. In some cases they are developing these programs, but in many cases these programs are developed and packaged 'upstream' by other persons/groups, for example Debian. Regardless of the packages' origin, all packages in main benefit from full security support provided by Ubuntu itself.

The functionality of the software in the repositories universe and multiverse is tested, but the software is developed and packaged by other people or groups of people, and Ubuntu cannot guarantee the security.

Software from a PPA is not tested by the Ubuntu developers and/or maintainers. The quality and security depend on the developer/maintainer. (I'm responsible for one PPA, and I am using some other PPAs, but I know many people who stay away from them because of the security risk.)

All the software above is kept updated automatically.

Software downloaded separately (like the typical case of Windows applications) is less secure (for example, you must check that it is up to date).

Software that you compile yourself or even develop yourself may or may not be safe depending on your skill and what the software is dealing with.

These links describe the Ubuntu case in more detail:

Which Ubuntu repositories are totally safe and free from malware?
https://ubuntu.com/security

General conclusion

In similar ways other Linux distros have repositories that are more or less tested for function and security.
You should check carefully the origin, reputation, and maintenance of more 'peripheral' software before you install it. Before installing, it is a good idea to test software in a separate 'test' system, for example in a virtual machine, a live system or a second computer.
Is it mostly safe to install any software from default repos? ( "yum install" "apt-get install" , etc)
1,482,905,948,000
I want to stop internet on my system using iptables so what should I do?

iptables -A INPUT -p tcp --sport 80 -j DROP

or

iptables -A INPUT -p tcp --dport 80 -j DROP

?
Reality is you're asking 2 different questions:

--sport is short for --source-port
--dport is short for --destination-port

Also, the internet is not simply the HTTP protocol, which is what typically runs on port 80. I suspect you're asking how to block HTTP requests. To do this you need to block port 80 on the outbound chain:

iptables -A OUTPUT -p tcp --dport 80 -j DROP

This will block all outbound HTTP requests going to port 80, so it won't block SSL, 8080 (alt http) or any other weird ports; to do those kinds of things you need L7 filtering with much deeper packet inspection.
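Because experimenting with live iptables rules needs root and can cut off your own session, a print-only wrapper is a safe way to stage rules and see the --sport/--dport distinction side by side. The ipt function here is just an illustration:

```shell
#!/bin/sh
# Print rules instead of applying them; remove the 'echo' (and run as root)
# to apply for real.
ipt() { echo "iptables $*"; }

# Block our outbound HTTP requests: packets TO a server's port 80.
ipt -A OUTPUT -p tcp --dport 80 -j DROP

# Block inbound traffic FROM a server's port 80 (replies) -- a different thing.
ipt -A INPUT -p tcp --sport 80 -j DROP
```

Reviewing the printed rules before applying them is a common pattern for firewall scripts, since one wrong rule on a remote box can lock you out.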
What is sport and dport?
1,482,905,948,000
I'm confused. Running Fedora Linux, lscpu yields: Architecture: i686 CPU op-mode(s): 32-bit, 64-bit ... But when I try to install a 64-bit program (Chrome) I get error like: Package /....x86_64.rpm has incompatible architecture x86_64. Valid architectures are ['i686', 'i586', 'i486', i386'] I'm less interested in being able to install Chrome and more interested in why lscpu says that my CPU can run in 64-bit mode; clearly this can't mean I can run 64-bit programs. Can anyone clarify?
lscpu is telling you that your architecture is i686 (an Intel 32-bit CPU), and that your CPU supports both 32-bit and 64-bit operating modes. You won't be able to install x64-built applications since they're built specifically for x64 architectures. Your particular CPU can handle either the i386 or i686 built packages.

There are a number of ways to verify your architecture & OS preferences.

lscpu

As you're already aware, you can use the command lscpu. It works well at giving you a rough idea of what your CPU is capable of.

$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
CPU(s):                4
Thread(s) per core:    2
Core(s) per socket:    2
CPU socket(s):         1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 37
Stepping:              5
CPU MHz:               1199.000
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              3072K
NUMA node0 CPU(s):     0-3

/proc/cpuinfo

This is actually the data provided by the kernel that most tools such as lscpu use for display. I find this output nice in that it shows you some model number info about your particular CPU. It will also show a section for each core that your CPU may have.
Here's the output for a single core:

$ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 37
model name      : Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz
stepping        : 5
cpu MHz         : 1466.000
cache size      : 3072 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips        : 5319.74
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

Here's what the first 3 lines of each section for a core look like:

$ grep processor -A 3 /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 37
--
processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 37
--
processor       : 2
vendor_id       : GenuineIntel
cpu family      : 6
model           : 37
--
processor       : 3
vendor_id       : GenuineIntel
cpu family      : 6
model           : 37

The output from /proc/cpuinfo can also tell you the type of architecture your CPU is providing through the various flags that it shows. Notice this line from the above output:

$ grep flags /proc/cpuinfo | head -1
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid

The lm and lahf_lm flags tell you that your processor supports "long mode". Long mode is another name for 64-bit.
uname

This command can be used to determine what platform your kernel was built to support. For example:

64-bit kernel

$ uname -a
Linux grinchy 2.6.35.14-106.fc14.x86_64 #1 SMP Wed Nov 23 13:07:52 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

32-bit kernel

$ uname -a
Linux skinner.bubba.net 2.6.18-238.19.1.el5.centos.plus #1 SMP Mon Jul 18 10:07:01 EDT 2011 i686 i686 i386 GNU/Linux

This output can be refined a bit further using the switches [-m|--machine], [-p|--processor], and [-i|--hardware-platform]. Here's that output for the same systems as above.

64-bit

$ uname -m; uname -p; uname -i
x86_64
x86_64
x86_64

32-bit

$ uname -m; uname -p; uname -i
i686
i686
i386

NOTE: There's also a short-form version of uname -m that you can run as a standalone command, arch. It returns exactly the same thing as uname -m. You can read more about the arch command in the coreutils documentation.

excerpt

arch prints the machine hardware name, and is equivalent to ‘uname -m’.

hwinfo

Probably the best tool for analyzing your hardware has got to be hwinfo. This package can show you pretty much anything that you'd want/need to know about any of your hardware, right from the terminal. It's saved me dozens of times when I'd need some info off of a chip on a system's motherboard or needed to know the revision of a board in a PCI slot. You can query it against the different subsystems of a computer. In our case we'll be looking at the cpu subsystem.
$ hwinfo --cpu
01: None 00.0: 10103 CPU
  [Created at cpu.301]
  Unique ID: rdCR.a2KaNXABdY4
  Hardware Class: cpu
  Arch: X86-64
  Vendor: "GenuineIntel"
  Model: 6.37.5 "Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz"
  Features: fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,sse4_2,popcnt,aes,lahf_lm,ida,arat,tpr_shadow,vnmi,flexpriority,ept,vpid
  Clock: 2666 MHz
  BogoMips: 5319.74
  Cache: 3072 kb
  Units/Processor: 16
  Config Status: cfg=new, avail=yes, need=no, active=unknown

Again, similar to /proc/cpuinfo this command shows you the makeup of each individual core in a multi-core system. Here's the first line from each section of a core, just to give you an idea:

$ hwinfo --cpu | grep CPU
01: None 00.0: 10103 CPU
  Model: 6.37.5 "Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz"
02: None 01.0: 10103 CPU
  Model: 6.37.5 "Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz"
03: None 02.0: 10103 CPU
  Model: 6.37.5 "Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz"
04: None 03.0: 10103 CPU
  Model: 6.37.5 "Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz"

getconf

This is probably the most obvious way to tell what architecture your CPU is presenting to the OS. Making use of getconf, you're querying the system variable LONG_BIT. This isn't an environment variable.

# 64-bit system
$ getconf LONG_BIT
64

# 32-bit system
$ getconf LONG_BIT
32

lshw

Yet another tool, similar in capabilities to hwinfo. You can query pretty much anything you want to know about the underlying hardware. For example:

# 64-bit Kernel
$ lshw -class cpu
  *-cpu
       description: CPU
       product: Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz
       vendor: Intel Corp.
       physical id: 6
       bus info: cpu@0
       version: Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz
       slot: None
       size: 1199MHz
       capacity: 1199MHz
       width: 64 bits
       clock: 133MHz
       capabilities: fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp x86-64 constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid cpufreq
       configuration: cores=2 enabledcores=2 threads=4

# 32-bit Kernel
$ lshw -class cpu
  *-cpu:0
       description: CPU
       product: Intel(R) Core(TM)2 CPU 4300 @ 1.80GHz
       vendor: Intel Corp.
       physical id: 400
       bus info: cpu@0
       version: 6.15.2
       serial: 0000-06F2-0000-0000-0000-0000
       slot: Microprocessor
       size: 1800MHz
       width: 64 bits
       clock: 800MHz
       capabilities: boot fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe x86-64 constant_tsc pni monitor ds_cpl est tm2 ssse3 cx16 xtpr lahf_lm
       configuration: id=1
     *-logicalcpu:0
          description: Logical CPU
          physical id: 1.1
          width: 64 bits
          capabilities: logical
     *-logicalcpu:1
          description: Logical CPU
          physical id: 1.2
          width: 64 bits
          capabilities: logical

CPU op-mode(s)?

Several of the commands report what looks to be a 32-bit CPU as supporting 32-bit & 64-bit modes. This can be a little confusing and misleading, but if you understand the history of CPUs, Intel's specifically, you'll know that they have a history of playing games with their products, where a CPU might have an instruction set that supports 16 bits but can address more RAM than 2^16. The same thing is going on with these CPUs. Most people know that a 32-bit CPU can address only 2^32 = 4GB of RAM. But there are versions of CPUs that can address more. These CPUs would often make use of a Linux kernel with the suffix PAE - Physical Address Extension.
Using a PAE-enabled kernel along with this hardware would allow you to address up to 64GB on a 32-bit system. You might think, well then why do I need a 64-bit architecture? The problem with these CPUs is that a single process's address space is limited to 2^32, so if you have a large simulation or computational program that needs more than 2^32 of addressable space in RAM, this wouldn't have helped you. Take a look at the wikipedia page on the P6 microarchitecture (i686) for more info.

TL;DR - So what the heck is my CPU's architecture?

In general it can get confusing because a number of the commands and methodologies above use the term "architecture" loosely. If you're interested in whether the underlying OS is 32-bit or 64-bit, use these commands:

lscpu
getconf LONG_BIT
uname

If on the other hand you want to know the CPU's architecture, use these commands:

/proc/cpuinfo
hwinfo
lshw

Specifically you want to look for fields where it says things like "width: 64" or "width: 32" if you're using a tool like lshw, or look for the flags:

lm: Long Mode (x86-64: amd64, also known as Intel 64, i.e. 64-bit capable)
lahf_lm: LAHF/SAHF in long mode

The presence of these 2 flags tells you that the CPU is 64-bit. Their absence tells you that it's 32-bit. See these URLs for additional information on the CPU flags.

What do the flags in /proc/cpuinfo mean?
CPU feature flags and their meanings

References

man pages

lscpu man page
/proc/cpuinfo reference page
uname man page
hwinfo man page
getconf man page

articles:

Check if a machine runs on 64 bit or 32 bit Processor/Linux OS?
Find out if processor is 32bit or 64 (Linux)
Need Help : 32 bit / 64 bit check for Linux
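The TL;DR can be condensed into a few lines of shell: userspace word size from getconf, the machine name from uname, and the CPU's own 64-bit capability from the lm flag (the last check is Linux-specific, since it reads /proc/cpuinfo):

```shell
#!/bin/sh
echo "userspace bits: $(getconf LONG_BIT)"   # 32 or 64: what the OS runs
echo "machine:        $(uname -m)"           # e.g. i686 or x86_64

# 'lm' (long mode) in the CPU flags means the silicon is 64-bit capable,
# even when a 32-bit kernel is installed.
if grep -qw lm /proc/cpuinfo 2>/dev/null; then
    echo "cpu: 64-bit capable"
else
    echo "cpu: 32-bit only (or /proc/cpuinfo not available)"
fi
```

On the asker's system this would print 32 for the userspace bits, i686 for the machine, yet report the CPU as 64-bit capable, which is exactly the combination lscpu was describing.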
32-bit, 64-bit CPU op-mode on Linux
1,482,905,948,000
You log in to an unfamiliar UNIX or Linux system (as root). Which commands do you run to orient yourself and figure out what kind of system you are on? How do you figure out what type of hardware is in use, which type of operating system is running and what the current situation is when it comes to permissions and security? What is the first and second command you type?
a dual-use question! Either a Software Archaeologist or an Evil Hacker could use the answers to this question! Now, which am I?

I always used to use ps -ef versus ps -augxww to find out what I was on. Linux and System V boxes tended to like "-ef" and error on "-augxww", vice versa for BSD and old SunOS machines. The output of ps can let you know a lot as well.

If you can log in as root, and it's a Linux machine, you should do lsusb and lspci - that will get you 80% of the way towards knowing what the hardware situation is. dmesg | more can help you understand any current problems on just about anything.

It's beginning to be phased out, but doing ifconfig -a can usually tell you a lot about the network interfaces, and the networking. Running mii-tool and/or ethtool on the interfaces you see in ifconfig output that look like cabled ethernet can give you some info too. Running ip route or netstat -r can be informative about Internet Protocol routing, and maybe something about in-use network interfaces.

A mount invocation can tell you about the disk(s) and how they're mounted.

Running uptime, and then last | more, can tell you something about the current state of maintenance. Uptimes of 100+ days probably mean "it's time to change the oil and fluids", metaphorically speaking. Running who is also informative.

Looking at /etc/resolv.conf and /etc/hosts can tell you about the DNS setup of that machine. Maybe do nslookup google.com or dig bing.com to see if DNS is mostly functional.

It's always worth watching which errors ("command not found") and which variants of commands ("ps -ef" vs "ps augxww") work, to determine what variant of Unix or Linux or BSD you just ended up on. The presence or absence of a C compiler, and where it lives, is important. Do which cc or, better, which -a cc to find them.
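Many of the commands above can be strung into a small read-only orientation script. Each one is guarded with command -v since availability differs between Unix variants (this is a sketch, not an exhaustive survey):

```shell
#!/bin/sh
# First look at an unfamiliar box: every command here only reads state.
have() { command -v "$1" >/dev/null 2>&1; }

uname -a                                      # kernel, version, architecture
have uptime  && uptime                        # load averages, time since boot
have mount   && mount | head -n 5             # filesystems and how they're mounted
have ip      && ip route 2>/dev/null          # routing (Linux iproute2)
have netstat && netstat -rn 2>/dev/null | head -n 10   # routing (older systems)
have last    && last 2>/dev/null | head -n 5  # recent logins
exit 0
```

Which of the guards fire is itself informative: a box with ip but no ifconfig is almost certainly a modern Linux, while the reverse suggests something older or a BSD.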
Commands to learn about an unfamiliar system [closed]
1,482,905,948,000
How can I set a file to be executable only by other users but not readable/writable? The reason for this is that I'm executing something with my username, but I don't want to give out the password. I tried: chmod 777 testfile chmod a=x chmod ugo+x I still get permission denied when executing as another user.
You need both read and execute permissions on a script to be able to execute it. If you can't read the contents of the script, you aren't able to execute it either. tony@matrix:~$ ./hello.world hello world tony@matrix:~$ ls -l hello.world -rwxr-xr-x 1 tony tony 17 Jul 13 22:22 hello.world tony@matrix:~$ chmod 100 hello.world tony@matrix:~$ ls -l hello.world ---x------ 1 tony tony 17 Jul 13 22:22 hello.world tony@matrix:~$ ./hello.world bash: ./hello.world: Permission denied
File permission execute only
1,482,905,948,000
The output of the top command shows that 29GB of memory is used by "buff/cache". What does it mean and how can I free it? It is nearly 90% of memory.
You don't need to free "buff/cache". "buff/cache" is memory that Linux uses for disk caching, and that will be freed whenever applications require it. So you don't have to worry if a large amount is being shown in this field, as it doesn't count as "used" memory. Quoted from http://www.linuxatemyram.com (emphasis mine): Both you and Linux agree that memory taken by applications is "used", while memory that isn't used for anything is "free". But how do you count memory that is currently used for something, but can still be made available to applications? You might count that memory as "free" and/or "available". Linux instead counts it as "used", but also "available". (...) This "something" is (roughly) what top and free calls "buffers" and "cached". Since your and Linux's terminology differs, you might think you are low on ram when you're not.
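A quick way to see this on your own system is to compare free's columns with the kernel's own "MemAvailable" estimate in /proc/meminfo, which already counts reclaimable cache as available. A minimal sketch (the drop_caches step is shown only as a comment because it needs root and is only ever useful for benchmarking):

```shell
# "buff/cache" is reclaimable page cache; "available" already accounts
# for what the kernel could hand back to applications on demand.
free -h
grep -E '^(MemFree|MemAvailable|Buffers|Cached):' /proc/meminfo

# Only for benchmarking, never needed in normal operation (root required):
#   sync && echo 3 > /proc/sys/vm/drop_caches
```

If MemAvailable is healthy, a large buff/cache figure is working as intended.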
"buff/cache" is very high, how I can free it? [duplicate]
1,482,905,948,000
Using vim I keep getting a message saying "Swap file xxx already exists" when I'm editing an apache config. However, I don't see it in the working directory or in /tmp. How do I delete this?
Vim swap files are normally hidden (Unix hidden files begin with a .). In order to view hidden files as well as regular ones, you need to ls -A (mnemonic: A for All). That should show you whether a swap file is there or not.
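For example, editing httpd.conf leaves a hidden .httpd.conf.swp next to the file by default. A sketch using a throwaway directory (the recovery step is commented out because it is interactive):

```shell
dir=$(mktemp -d)
touch "$dir/.httpd.conf.swp"      # simulate a leftover vim swap file

ls "$dir"                         # hidden: nothing is listed
ls -A "$dir"                      # .httpd.conf.swp is revealed

# Recover the interrupted session, then delete the swap file when done:
#   vim -r "$dir/httpd.conf" && rm "$dir/.httpd.conf.swp"
rm -r "$dir"
```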
"Swap file xxx already exists" when editing apache configuration file in vim?
1,482,905,948,000
I'm using Linux CentOS 7 Server and I already installed OpenVPN and NordVPN servers which I use to connect my Linux to. After establishing the VPN Connection, immediately my SSH access got disconnected. How to allow SSH access to the server while it's connected to VPN Server? And how to make it work whenever the server is rebooted? I used this tutorial on my setup: https://nordvpn.com/tutorials/linux/openvpn/
I was able to find a solution to my issue: when you connect to the server by its public IP address, the return packets get routed over the VPN. You need to force these packets to be routed over the public eth0 interface. These route commands should do the trick: ip rule add from x.x.x.x table 128 ip route add table 128 to y.y.y.y/y dev eth0 ip route add table 128 default via z.z.z.z Where x.x.x.x is your server's public IP, y.y.y.y/y the subnet of your server's public IP address, eth0 your server's public Ethernet interface, and z.z.z.z the default gateway.
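To survive a reboot, one option is to let OpenVPN itself reinstall these rules on every (re)connect via a route-up script. A sketch; the addresses, interface name, and install path are placeholders you must adapt:

```shell
# Write the route-up script (placeholder addresses; adapt before use).
f=$(mktemp)
cat > "$f" <<'EOF'
#!/bin/sh
# Route replies to inbound SSH out the public interface, not the tunnel.
PUB_IP=x.x.x.x        # server public IP (placeholder)
PUB_NET=y.y.y.y/24    # subnet of the public IP (placeholder)
GW=z.z.z.z            # default gateway on eth0 (placeholder)

ip rule add from "$PUB_IP" table 128
ip route add table 128 to "$PUB_NET" dev eth0
ip route add table 128 default via "$GW"
EOF
sh -n "$f"            # syntax check only; running it needs root + real addresses

# Install as e.g. /etc/openvpn/ssh-route-up.sh (chmod +x) and reference it
# from the client .ovpn so it runs after OpenVPN has set its routes:
#   script-security 2
#   route-up /etc/openvpn/ssh-route-up.sh
rm "$f"
```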
How to allow SSH into Terminal after connecting to VPN server using NordVPN servers through OpenVPN?
1,482,905,948,000
I need to create a filesystem with just one partition from nothing (/dev/zero). I tried this sequence of commands: dd if=/dev/zero of=mountedImage.img bs=512 count=131072 fdisk mountedImage.img n p 2048 131072 Basically, I need to create a 64MB image file filled with zeroes. Then I use fdisk to add a new partition for the new filesystem (which should finally be FAT32), starting at sector 2048 and using all remaining sectors. losetup /dev/loop1 mountedImage.img mkfs -t vfat /dev/loop1 But here I'm hitting problems. If I set up a loop device and format it using mkfs -t vfat, the partition table is overwritten and the filesystem (FAT32) is placed onto the disk. I don't need the whole disk formatted with FAT32, I just need my primary partition to be. Does anybody know how I can format only one partition of a raw disk image, not the whole image?
If on Linux, when loading the loop module, make sure you pass a max_part option to the module so that the loop devices are partitionable. Check the current value: cat /sys/module/loop/parameters/max_part If it's 0: modprobe -r loop # unload the module modprobe loop max_part=31 To make this setting persistent, add the following line to /etc/modprobe.conf or to a file in /etc/modprobe.d if that directory exists on your system: options loop max_part=31 If modprobe -r loop fails because “Module loop is builtin”, you'll need to add loop.max_part=31 to your kernel command line and reboot. If your bootloader is Grub2, add to it to the value of GRUB_CMDLINE_LINUX in /etc/default/grub and update GRUB using update-grub. Now, you can create a partitionable loop device: truncate -s 64M file # no need to fill it with zeros, just make it sparse fdisk file # create partitions losetup /dev/loop0 file mkfs.vfat /dev/loop0p1 # for the first partition. mount /dev/loop0p1 /mnt/ Unmount after using and detach loop device umount /mnt losetup -d /dev/loop0 (note that you need a relatively recent version of Linux).
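On current util-linux you can also script the partitioning with sfdisk and let losetup scan partitions itself with -P (--partscan), which avoids the max_part module dance entirely. A sketch; the mkfs/mount steps need root and a loop device, so they are shown as comments:

```shell
truncate -s 64M disk.img            # sparse 64 MiB image, no dd needed

# One primary FAT32 (LBA) partition starting at sector 2048:
printf 'label: dos\nstart=2048, type=c\n' | sfdisk disk.img

sfdisk -l disk.img                  # verify the partition table

# Root-only steps: attach with partition scanning, format, mount:
#   losetup -fP --show disk.img     # prints e.g. /dev/loop0
#   mkfs.vfat /dev/loop0p1
#   mount /dev/loop0p1 /mnt
#   umount /mnt && losetup -d /dev/loop0
rm disk.img
```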
How to create a formatted partition image file from scratch?
1,482,905,948,000
Let's say I have a makefile with a recipe called hour_long_recipe, which as its name suggests takes an hour to run. At random points throughout the recipe it asks yes/no questions. Let's say it asks 10 questions in total. One possible (and often recommended) way to run it is: yes | make hour_long_recipe which answers all questions with y. However, from my understanding, yes keeps outputting to its stdout at up to 10.2 GiB per second regardless of whether make is actually using that data from its stdin. Even if it was just 10 MiB/s (much slower than any implementation of yes if that reddit thread is to be believed), during the course of an hour it would add up to over 35 GiB, of which just 20 bytes will be read. Where does the data go? It's possible to save it to disk but that's wasteful, and if the disk fills up fast enough it can even cause make to fail. Presumably the operating system will stop it from getting to that, but how? What is the limit, and what happens when that limit is reached?
tl;dr: at some point, yes will be blocked from writing if the data isn't being read on the other side. It will not be able to continue executing until either that data is read, or it receives a signal, so you typically don't need to worry about yes writing gigabytes and gigabytes of data. The important thing to remember is that a pipe is a FIFO data structure, not simply a pure stream which drops data if not immediately read by the receiver. That is, while it may appear in most cases to be a seamless stream of data from the writing application to the reading application, it does require intermediate storage to perform that, and that intermediate storage is of finite size.* If we look at the pipe(7) man page, we can read the following about the size of that internal buffer (emphasis added): In Linux versions before 2.6.11, the capacity of a pipe was the same as the system page size (e.g., 4096 bytes on i386). Since Linux 2.6.11, the pipe capacity is 16 pages (i.e., 65,536 bytes in a system with a page size of 4096 bytes). Since Linux 2.6.35, the default pipe capacity is 16 pages, but the capacity can be queried and set using the fcntl(2) F_GETPIPE_SZ and F_SETPIPE_SZ operations. Assuming you're using a standard x86_64 system, it's very likely that you use 4KiB pages, so the 2^16 upper limit on pipe capacity likely is correct unless either side of the pipeline at some point used fcntl(F_SETPIPE_SZ). Either way, the principle stands: the intermediate storage between two sides of a pipe is finite, and is stored in memory. In an abstract pipeline a | b, this storage is used in the period between a writing some data, and b actually reading it. Assuming, then, that your make invocation (and any children also connected to this pipe by inheritance) don't actually try to read stdin, or only do so sparingly, the write syscall from yes will eventually block once that buffer space is exhausted.
yes will then wait to be woken up, either when buffer space is available again, or a signal is received.** All of this is handled by the kernel's process scheduler. You can see this in pipe_write(), which is the write() handler for pipes: static ssize_t pipe_write(struct kiocb *iocb, struct iov_iter *from) { /* ... */ if (pipe_full(pipe->head, pipe->tail, pipe->max_usage)) wake_next_writer = false; if (wake_next_writer) wake_up_interruptible_sync_poll(&pipe->wr_wait, EPOLLOUT | EPOLLWRNORM); /* ... */ } When the make side eventually terminates, yes will be sent SIGPIPE as a result of writing to a pipe with nothing remaining on the other end. This will then — depending on yes implementation — invoke either its own signal handler or the default kernel signal handler, and it will terminate.*** * In simple circumstances, where the receiver is processing the data at roughly the same rate that it's being written, this transfer can also be zero-copy with no intermediate buffer by using virtual memory to map and make available the physical page from the writing process available to the receiver. However, the case you're describing will certainly eventually need to use the pipe buffer to store the unread data. ** It's also possible that the writing is done with the O_NONBLOCK flag set on the file descriptor, which enables non-blocking mode. In this case, you'll probably get one incomplete write, and then write will return EAGAIN and the application will need to deal with that itself. It will likely either do that by suspending or running some other code of its choosing to handle the pipe being full. In the case of every modern yes version I can find and most other applications, though, the description above is what happens, since they don't use O_NONBLOCK. *** An application can do whatever it likes upon receiving SIGPIPE -- it may even theoretically decide not to terminate. 
However, all common yes implementations use the default SIGPIPE handler, which just terminates the process without executing any more userspace instructions.
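You can measure that buffer yourself: make the write end non-blocking and count how many bytes the kernel accepts before refusing more. A sketch (64 KiB is the usual default, but F_SETPIPE_SZ can change it, so the exact figure may vary):

```python
import os

r, w = os.pipe()
os.set_blocking(w, False)       # so a full pipe raises instead of sleeping

total = 0
chunk = b"y\n" * 2048           # 4 KiB of what `yes` would produce
try:
    while True:
        total += os.write(w, chunk)
except BlockingIOError:
    pass                        # pipe buffer full; `yes` would block here

print("pipe capacity:", total, "bytes")   # typically 65536 on Linux
os.close(r)
os.close(w)
```

With a blocking descriptor, the same write would instead put the process to sleep, which is exactly what happens to yes in the pipeline.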
What happens when writing gigabytes of data to a pipe?
1,482,905,948,000
I have been looking into the iowait property shown in top utility output as shown below. top - 07:30:58 up 3:37, 1 user, load average: 0.00, 0.01, 0.05 Tasks: 86 total, 1 running, 85 sleeping, 0 stopped, 0 zombie %Cpu(s): 0.0 us, 0.3 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st iowait is generally defined as follows: "It is the time during which CPU is idle and there is some IO pending." It is my understanding that a process is run on a single CPU. After it gets de-scheduled either because it has used up its time slot or after it gets blocked, it can eventually be scheduled again on any one CPU again. In case of IO request, a CPU that puts a process in uninterruptible sleep is responsible for tracking the iowait time. The other CPUs would be reporting the same time as idle time on their end as they really are idle. Is this assumption correct? Furthermore, assuming there is a long IO request (meaning the process had several opportunities to get scheduled but didn't get scheduled because the IO wasn't complete), how does a CPU know there is "pending IO"? Where is that kind of information fetched from? How can a CPU simply find out that some process was put to sleep some time for an IO to complete as any of the CPUs could have put that process to sleep. How is this status of "pending IO" confirmed?
The CPU doesn’t know any of this, the task scheduler does. The definition you quote is somewhat misleading; the current procfs(5) manpage has a more accurate definition, with caveats: iowait (since Linux 2.5.41) (5) Time waiting for I/O to complete. This value is not reliable, for the following reasons: The CPU will not wait for I/O to complete; iowait is the time that a task is waiting for I/O to complete. When a CPU goes into idle state for outstanding task I/O, another task will be scheduled on this CPU. On a multi-core CPU, the task waiting for I/O to complete is not running on any CPU, so the iowait of each CPU is difficult to calculate. The value in this field may decrease in certain conditions. iowait tries to measure time spent waiting for I/O, in general. It’s not tracked by a specific CPU, nor can it be (point 2 above — which also matches what you’re wondering about). It is measured per CPU though, as far as possible. The task scheduler “knows” there is pending I/O, because it knows that it suspended a given task because it’s waiting for I/O. This is tracked per task in the in_iowait field of the task_struct; you can look for in_iowait in the scheduler core to see how it is set, tracked and cleared. Brendan Gregg’s recent article on Linux load averages includes useful background information. The iowait entry in /proc/stat, which is what ends up in top, is incremented whenever a timer tick is accounted for, and the current process “on” the CPU is idle; you can see this by looking for account_idle_time in the scheduler’s CPU time-tracking code. So a more accurate definition would be “time spent on this CPU waiting for I/O, when there was nothing better to do”...
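The per-CPU iowait figures that top displays come straight from /proc/stat; a small parser makes the accounting concrete. A sketch (field units are jiffies, i.e. USER_HZ ticks, and the field names follow proc(5)):

```python
def cpu_times(statline):
    """Parse one 'cpu ...' line from /proc/stat into named jiffy counters."""
    names = ["user", "nice", "system", "idle", "iowait",
             "irq", "softirq", "steal"]
    fields = statline.split()
    return dict(zip(names, (int(v) for v in fields[1:1 + len(names)])))

with open("/proc/stat") as f:
    for line in f:
        if line.startswith("cpu"):   # "cpu" = aggregate, "cpuN" = per CPU
            t = cpu_times(line)
            print(line.split()[0], "iowait jiffies:", t["iowait"])
```

Sampling this twice and diffing the counters is essentially what top does per refresh interval.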
How does a CPU know there is IO pending?
1,482,905,948,000
This is probably an easy one, but I can't figure it out and it's pretty much not searchable. In a folder hierarchy I have exactly one file of type xyz. I want to find that file and open it with a terminal command. find . -name *.xyz This will return the file I'm looking for. Now how do I open it automatically, without typing the name? find . -name *xyz | open This doesn't work. It says it doesn't found the open command.
@retracile is correct. You need to open it with 'something'. However, I prefer to use exec over xargs. find . -name '*.xyz' -exec cat {} \; This will run cat fileFound.xyz; cat fileFound2.xyz; etc. However, you are only expecting to find one file. Note that changing \; to + would run cat fileFound.xyz fileFound2.xyz instead; depending on the case, the latter may be the preferred choice. For more on this I would direct you to this question
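The difference between \; and + is easy to see with echo standing in for the opener. A sketch in a throwaway directory (xdg-open in the comment is the usual Linux counterpart of macOS's open):

```shell
dir=$(mktemp -d)
touch "$dir/a.xyz" "$dir/b.xyz"

find "$dir" -name '*.xyz' -exec echo opening {} \;   # one command per file
find "$dir" -name '*.xyz' -exec echo opening {} +    # all files, one command

# With exactly one expected match you could open it directly, e.g.:
#   xdg-open "$(find . -name '*.xyz')"
rm -r "$dir"
```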
Open file found with 'find' command
1,482,905,948,000
I read through this popular IBM doc (I see it referred quite often on the web) explaining the function of the initial RAM disk. I hit a wall in conceptualizing how this works though. In the doc it says The boot loader, such as GRUB, identifies the kernel that is to be loaded and copies this kernel image and any associated initrd into memory I'm already confused: Does it copy the entire kernel into memory or just part of it? If the entire kernel is in memory then why do we even need the initial RAM disk? I thought the purpose of initrd was to be able to have a small generalized kernel image and initrd will install the correct modules in it before the kernel image is loaded. But if the entire kernel is already in memory why do we need initrd? That also brings up another thing that confuses me - where are the modules that get loaded into the kernel located? Are all the kernel modules stored inside initrd?
The entire kernel is loaded into memory at boot, typically along with an initramfs nowadays. (It is still possible to set up a system to boot without an initramfs but that's unusual on desktops and servers.) The initramfs's role is to provide the functionality needed to mount the "real" filesystems and continue booting the system. That involves kernel modules, and also various binaries: you need at least udev, perhaps some networking, and kmod which loads modules. Modules can be loaded into the kernel later than just boot, so there's no special preparation of the kernel by the initramfs. They can be stored anywhere: the initramfs, /lib/modules on the real filesystem, in a development tree if you're developing a module... The initramfs only needs to contain the modules which are necessary to mount the root filesystem (which contains the rest).
Is the entire kernel loaded into memory on boot?
1,482,905,948,000
I currently run Angstrom Linux 2.6.32. I intend to upgrade the Linux kernel from 2.6.32 to 3.0.7. For this reason, I had to configure kernel 3.0.7 by running make menuconfig. Now, I want to compare the new kernel configuration with the previous one, but I can't find kernel 3.0.7's configuration file. Any ideas?
Your new one is .config at the top level of your kernel source tree. It may also get installed to /boot/config-3.0.7 or similar, depending.
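To actually compare the two, plain diff works; newer kernel trees also ship scripts/diffconfig for a condensed view. A sketch with toy stand-in files; substitute your real /boot/config-2.6.32 (or zcat /proc/config.gz) and the new tree's .config:

```shell
# Stand-ins for the old and new config files (use your real paths):
cat > old.config <<'EOF'
CONFIG_EXT4_FS=y
CONFIG_XEN_TMEM=m
EOF
cat > new.config <<'EOF'
CONFIG_EXT4_FS=y
CONFIG_XEN_TMEM=y
EOF

diff old.config new.config || true   # diff exits 1 when the files differ

# If the running kernel exposes its config, use that as the baseline:
#   zcat /proc/config.gz > old.config
# Kernel trees that ship it offer a nicer summary:
#   linux-3.0.7/scripts/diffconfig old.config new.config
rm old.config new.config
```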
Where kernel configuration file is stored?
1,482,905,948,000
I set up my environment to create a core dump of everything that crashes, however when I run a program with SUID set on a different user than the executing user it doesn't create a core dump. Any ideas why this might be? I couldn't find it anywhere on the web; I think it's some sort of security feature, but I would like to have it disabled... Problem: $ cd /tmp $ cat /etc/security/limits.conf | grep core * - core unlimited root - core unlimited $ ls -l ohai -rwsr-sr-x 1 root root 578988 2011-06-23 23:29 ohai $ ./ohai ... Floating point exception $ sudo -i # ./ohai ... Floating point exception (core dumped) # chmod -s ohai # exit $ ./ohai ... Floating point exception (core dumped) Edit: To make it work as securely as possible I now have the following script to set up the environment: mkdir -p /var/coredumps/ chown root:adm /var/coredumps/ chmod 772 /var/coredumps/ echo "kernel.core_pattern = /var/coredumps/core.%u.%e.%p" >> /etc/sysctl.conf echo "fs.suid_dumpable = 2" >> /etc/sysctl.conf echo -e "*\t-\tcore\tunlimited" >> /etc/security/limits.conf echo -e "root\t-\tcore\tunlimited" >> /etc/security/limits.conf Now all that's left to do is add an ACL to /var/coredumps so users can only add files and never modify or read them again. The only downside is that I would still have a problem with chroot'ed applications, which would need a bind mount or something like that.
The memory of a setuid program might (is likely to, even) contain confidential data. So the core dump would have to be readable by root only. If the core dump is owned by root, I don't see an obvious security hole, though the kernel would have to be careful not to overwrite an existing file. Linux disables core dumps for setxid programs. To enable them, you need to do at least the following (I haven't checked that this is sufficient): Enable setuid core dumps in general by setting the fs.suid_dumpable sysctl to 2, e.g. with echo 2 >/proc/sys/fs/suid_dumpable. (Note: 2, not 1; 1 means “I'm debugging the system as a whole and want to remove all security”.) Call prctl(PR_SET_DUMPABLE, 1) from the program.
How come no core dump is create when an application has SUID set?
1,482,905,948,000
Is there a site someplace that lists the contents of /proc and what each entry means?
The documentation for Linux's implementation of /proc is in Documentation/filesystems/proc.txt in the kernel documentation. Beware that /proc is one of the areas where *ixes differ most. It started out as a System V specific feature, was then greatly extended by Linux, and is now in the process of being deprecated by things like /sys. The BSDs — including OS X — haven't adopted it at all. Therefore, if you write a program or script that accesses things in /proc, there is a good chance it won't work on other *ixes.
Where are the contents of /proc of the Linux kernel documented?
1,482,905,948,000
I checked my /var/log/messages log file; every 2 seconds an entry like this gets added: Mar 20 11:42:30 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844 Mar 20 11:42:32 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844 Mar 20 11:42:34 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844 Mar 20 11:42:36 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844 Mar 20 11:42:38 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844 Mar 20 11:42:40 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844 Mar 20 11:42:42 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844 Mar 20 11:42:44 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844 I didn't perform any kind of operation on the system, but the error still gets logged. I suppose the FS is corrupted. What should I do?
I am sharing how I resolved this issue. I edited /etc/fstab and set the sixth field (fs_passno) of the root FS entry to 1, so fsck checks it at boot: /dev/mapper/vg_vipin-lv_root / ext4 defaults 0 1 Then I rebooted. fsck was performed and now everything is back to normal.
"ext4_lookup: deleted inode referenced" error in /var/log/messages
1,482,905,948,000
I want to try out Google public DNS. For this I need to change the nameserver address. I know it's in the file /etc/resolv.conf, but whenever I start network-manager, it overwrites the values in that file with what it obtains by using DHCP. How do I tell it not to do it? I looked through the GUI, but I could only find an option to add more IP addresses.
Method #1 Find the NetworkManager configuration file and add/modify the following entry (in CentOS5 it is in /etc/NetworkManager/nm-system-settings.conf or /etc/NetworkManager/system-connections/) and edit your DSL connection file: [ipv4] method=auto dns=8.8.8.8;4.2.2.2; ignore-auto-dns=true Note: if [ipv4] does not work, then try with [ppp] Method #2 You can change the permissions of /etc/resolv.conf so that it can't be written to by other services, or you can use chattr +i to make it immutable. Method #3 Create a script as mentioned below in /etc/NetworkManager/dispatcher.d/ and don't forget to make it executable: #!/bin/bash # # Override /etc/resolv.conf and tell # NetworkManagerDispatcher to go pluck itself. # # scripts in the /etc/NetworkManager/dispatcher.d/ directory # are called alphabetically and are passed two parameters: # $1 is the interface name, and $2 is "up" or "down" as the # case may be. # Here, no matter what interface or state, override the # created resolver config with my config. cp -f /etc/resolv.conf.myDNSoverride /etc/resolv.conf Contents of /etc/resolv.conf.myDNSoverride: nameserver 8.8.8.8
How to set DNS resolver in Fedora using network-manager?
1,482,905,948,000
What's the purpose of the /proc/pid/mountinfo file (with pid being the numerical process ID)? As far as I can see it reflects the contents of the /proc/mounts file but with added information. Also the file seems to stay the same for all processes: diff for two randomly chosen processes returns no output (diff /proc/3833/mountinfo /proc/2349/mountinfo) Please note that I'm not asking what it contains. From the definitions on the internet I see that 'This file contains information about mount points.'. I'm asking why it is present in every process directory. What is its purpose there?
Check the kernel documentation for information about files in /proc. There is one such file per process because not all processes see the same mount points. Chroot is a traditional Unix feature that makes it possible to restrict processes to a subtree of the filesystem tree. A chrooted process would not see mount points outside its root. Linux takes this further with namespaces: a process can compose its own view of the filesystem by grafting subtrees around. For more information on mount namespaces, see per process private file system mount points and Michael Kerrisk's articles on namespaces on LWN.
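You can see which mount namespace a process belongs to from /proc as well; two processes have identical mountinfo files exactly when their ns/mnt links match. A sketch (inspecting another process's namespace link, and creating a new namespace, may require root):

```shell
readlink /proc/self/ns/mnt          # e.g. mnt:[4026531840]

# Same namespace as PID 1?  Then the two mountinfo views are identical:
#   [ "$(readlink /proc/self/ns/mnt)" = "$(readlink /proc/1/ns/mnt)" ] \
#       && cmp -s /proc/self/mountinfo /proc/1/mountinfo && echo "same view"

# A new mount namespace gets its own private copy (root required):
#   unshare --mount sh -c 'readlink /proc/self/ns/mnt'
```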
What's the purpose of the /proc/pid/mountinfo file?
1,482,905,948,000
We are assembling some lightweight machines with the express purpose of displaying a single web page over a large screen. I need the machine to essentially boot up as lightweight and as quickly as possible and essentially have it run a browser (WebKit?) in full screen, loading one page which will be controlled dynamically by JavaScript. I'll be using an Intel D525 dual-core processor with integrated GPU, so I shouldn't need to set up any proprietary graphics drivers. Once I get one of these machines set up properly, I should just be able to dd the hard drive onto my computer and then dump it onto each new machine. I have the following questions: How can I create a "distribution" which includes only what I need? I suppose I'll need the kernel (;]), X, and a web browser of some sort, but not really too much else. Could I take something like Ubuntu Server and simply install X Server and find a way to have the machine automatically log in, start X, and start the web browser, no questions asked? Is there a book I can read or an article or something? What can I use for a nice, stripped-down web browser that essentially runs a "chromeless Chromium?" These machines won't be accepting user input at all. If I need to manage them, I'll use SSH.
Many distributions have some facility for a minimal install; essentially where you manually select only those packages that you explicitly wish to install. Debian has this ability and would be a better choice, in your situation, than the other obvious minimal contender, Arch Linux. Arch's rolling release status may provide a level of ongoing complexity that you wish to eschew. Debian would provide the simple, minimal base you are looking for plus offer stability. There is a blog post on using Debian as a kiosk that may offer some helpful tips. For a browser, as beav_35 suggests, Uzbl is a good choice. My recommendation would be Vimprobable, a WebKit browser that is scriptable, keyboard driven and can be controlled effectively over SSH. As a window manager, I would recommend dwm: at less than 2000 SLOC, it is extremely lightweight and can be easily configured for a kiosk-type setup.
How can I build a custom distribution for running a simple web browser?
1,482,905,948,000
I am new to the Linux environment, and I have started researching fonts. I have read that fontconfig is the library which actually deals with font management in Linux. So I have downloaded the fontconfig source code, compiled it, and it's ready to use. When I went into the main source directory I saw many submodules like fc-cache, fc-list, fc-query etc. I tried to search for them but couldn't find any great detail of how they actually work. So I have decided to understand the source myself, but I'm facing lots of trouble as I don't know what the actual starting point is, like when we run a command in the terminal like the one below, what is actually happening? $ fc-query /usr/share/fonts/truetype/fonts-japanese-gothic.ttf Let's suppose I want to modify a fontconfig file, like Fcquery.c, to make it call some other function which resides in some other shared library. What do I have to do? Will just compiling work, or do I have to register something in the Makefile? I am new so please elaborate.
TL;DR: Understanding fontconfig requires understanding why it was created and what problems it is trying to solve. That requires a lot of understanding of Xorg. Font configuration on UNIX machines went through different phases and fontconfig is simply one of the ways you can use fonts through Xorg. Reading the source of fontconfig without a good understanding of the source of Xorg is probably very difficult. But, I believe that an understanding of the concepts behind the evolution of fonts may prove a decent starting point. Disclaimer: I deal a lot with fonts on Linux, but I never really needed to change Xorg code relating to fonts. The Arch Linux wiki has a lot of info on this too. A bit of history Original UNIX fonts were simply bitmap fonts. Today these can be found in /usr/share/fonts/misc; the PCF (portable compiled format) is used for pretty much all of them today. It is a binary format. There have been other formats of binary fonts but I need to admit that I never needed to use any other format than PCF for binary fonts. Using xfontsel you can configure a Xorg string to define the points, spacing, pixel size, terminal weight (bold, slant), encoding, among others of the font. The bitmap fonts have different files for different pixel sizes of the font. The bitmap fonts already introduce the concept of font family. Postscript (and TeX to some extent) created Type 1 fonts which are vector based fonts. These are in /usr/share/fonts/Type1. Vector fonts are configured with several configuration values, e.g. antialias, embolden, dpi, or size (not necessarily point based this time). Vector based fonts are scaled and do not require several files. Xorg used both bitmap and Type1 fonts. And it created XFT (well, X FreeType is an interface to FreeType which is a GPL/BSD library that mimics and extends Type1). XFT not only allows the usage of Type1 and FreeType fonts but also other formats: OTF by Adobe and Microsoft, TTF by Apple. Moreover XFT allows scaling of the old bitmap fonts to look like Type1 fonts. Several other attributes, like hinting or hintstyle, were added to define attributes of these fonts. All that can be found in subfolders of /usr/share/fonts. And XFT parameters can be configured in your Xresources. FontConfig And fontconfig needs to deal with all the discrepancies of the above. In other words fontconfig is an attempt at configuring all the font types above in a manner that can exploit the attributes that the distinct fonts have with a common syntax. The bitmap fonts have their problems: several different files for a single font limited sizes by points and pixel sizes. But so do the vector-based fonts: scaling takes time, especially if several parameters are used not all font attributes affect different font types in the same way And both have the problem that there are many font formats, and that a user may wish to install fonts of his own in his home. Fontconfig tries to solve these problems. fc-query tells you what fontconfig understands about the font file. Notably what attributes the file is for (for bitmap fonts for example) and what attributes can be used (for vector fonts). fc-list is a way of telling you what fonts can be found in the directories fontconfig is looking at, and can therefore be used by applications. Finally fc-cache performs an indexing of these fonts to make them easier to find and to scale them (among other things) for application use. The fontconfig shared library on the other hand is the most interesting part. It uses the configuration files (/etc/fonts, ~/.config/fontconfig) and the font cache to give preprepared fonts directly to applications that are linked against it. Since most applications used XFT (and therefore FreeType) and the FreeType library uses calls from the fontconfig library, the use of these fonts became ubiquitous. But note that you can compile a program that will ask Xorg for a bitmap font in the old style (e.g. -*-terminus-medium-r-normal-*-*-200-*-*-c-*-*-u) and the call will not go through the fontconfig shared lib.
How does fontconfig actually work? [closed]
1,482,905,948,000
Possible Duplicate: ext4: How to account for the filesystem space? I have a ~2TB ext4 USB external disk which is about half full:

$ df
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sdc      1922860848 927384456 897800668  51% /media/big

I'm wondering why the total size (1922860848) isn't the same as Used+Available (1825185124)? From this answer I see that 5% of the disk might be reserved for root, but that would still only take the total used to 1921328166, which is still off. Is it related to some other filesystem overhead? In case it's relevant, lsof -n | grep deleted shows no deleted files on this disk, and there are no other filesystems mounted inside this one.

Edit: As requested, here's the output of tune2fs -l /dev/sdc

tune2fs 1.41.14 (22-Dec-2010)
Filesystem volume name:   big
Last mounted on:          /media/big
Filesystem UUID:          5d9b9f5d-dae7-4221-9096-cbe7dd78924d
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              122101760
Block count:              488378624
Reserved block count:     24418931
Free blocks:              480665205
Free inodes:              122101749
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      907
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Wed Nov 23 14:13:57 2011
Last mount time:          Wed Nov 23 14:14:24 2011
Last write time:          Wed Nov 23 14:14:24 2011
Mount count:              2
Maximum mount count:      20
Last checked:             Wed Nov 23 14:13:57 2011
Check interval:           15552000 (6 months)
Next check after:         Mon May 21 13:13:57 2012
Lifetime writes:          144 MB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      68e954e4-59b1-4f59-9434-6c636402c3db
Journal backup:           inode blocks
There's no missing space. The "5% reserved" figure is a rounded approximation; the exact reserved block count accounts for the difference:

1K blocks:          1922860848
Reserved 1K blocks: 24418931 * 4 = 97675724
Total:              927384456 (used) + 897800668 (available) + 97675724 (reserved) = 1922860848

Edit: Regarding your comment on the difference between df's block count and tune2fs's "Block count": the difference in 4K blocks is (1953514496 - 1922860848) / 4 = 7663412. The majority of that difference is made up of the "Inode blocks per group" parameter, which is 512. Since there are 32768 blocks per group, the number of groups is 488378624 / 32768, which is 14904 rounded down. Multiplied by the 512 blocks of inode table each group takes up, that gives 7630848 blocks. That leaves 7663412 - 7630848 = 32564 blocks unaccounted for. I assume that those blocks make up your journal, but I'm not too sure on that one!
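The accounting above can be checked directly in the shell; all values are hard-coded from the df and tune2fs output in the question:

```shell
# Reserved-space accounting for the filesystem in the question.
used=927384456          # df "Used" column (1K blocks)
available=897800668     # df "Available" column (1K blocks)
reserved_4k=24418931    # tune2fs "Reserved block count" (4K fs blocks)

reserved_1k=$((reserved_4k * 4))           # convert 4K fs blocks to df's 1K units
total=$((used + available + reserved_1k))
echo "$total"                              # 1922860848, matching df's "1K-blocks"
```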
Why is (free_space + used_space) != total_size in df? [duplicate]
1,482,905,948,000
I've always run chmod/chown commands as a sudo user. But today I wondered: if I don't use sudo, what permissions do I need to actually execute chmod/chown on a folder or file? I tried googling the question, but nothing came up that answers this specific question.
On Linux: chown: "Only a privileged process (Linux: one with the CAP_CHOWN capability) may change the owner of a file." (Source: chown(2)) The easy way to be such a process is to be run by root. See explain_chown for help finding out why a particular chown failed. See capabilities for ways to give processes that capability other than running as root. chmod: The file's owner or root can change permissions, plus other processes with the CAP_FOWNER capability. (Source) chgrp: "The owner of a file may change the group of the file to any group of which that owner is a member. A privileged process (Linux: with CAP_CHOWN) may change the group arbitrarily." (chown(2))
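A quick sanity check of the chmod rule above (the file's owner may change the mode, no sudo needed):

```shell
# As an unprivileged user, chmod on a file you own succeeds without sudo.
f=$(mktemp)                 # temp file, created owned by the current user
chmod 640 "$f"              # allowed: we are the owner
mode=$(stat -c '%a' "$f")   # GNU stat: print the octal mode
echo "$mode"                # 640
rm -f "$f"
```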
What permissions one needs to run chmod, chown command on a folder/item?
1,439,375,679,000
I would like to give a user permissions to create and read files in a particular directory, but not to modify or delete files. If the user can append to files that is ok, but I'd rather not. This is on Ubuntu Linux. I think this is impossible with standard Unix file permissions, but perhaps this is possible using ACLs? The user will always be connecting using SFTP, so if there was some way to control this within SFTP (as opposed to OS permissions) that would be fine. To be absolutely clear, I want the following:

echo hello > test    # succeeds, because test doesn't exist, and creation is allowed
echo hello >> test   # can succeed or fail, depending on whether appending is allowed
echo hello2 > test   # fails, because test already exists, and modification is not allowed
cat test             # succeeds, because reads are allowed
rm test              # fails, because delete is not allowed

If you're wondering why I want to do this, it's to make a Duplicati backup system resistant to Ransomware.
You could use bindfs like:

$ ls -ld dir
drwxr-xr-t 2 stephane stephane 4096 Aug 12 12:28 dir/

That directory is owned by stephane, with group stephane (stephane being its only member). Also note the t that prevents users from renaming or removing entries that they don't own.

$ sudo bindfs -u root -p u=rwD,g=r,dg=rwx,o=rD dir dir

We bindfs dir over itself with fixed ownership and permissions for files and directories. All files appear owned by root (though underneath in the real directory they're still owned by stephane). Directories get drwxrwxr-x root stephane permissions while other types of files get -rw-r--r-- root stephane ones.

$ ls -ld dir
drwxrwxr-t 2 root stephane 4096 Aug 12 12:28 dir

Now creating a file works because the directory is writeable:

$ echo test > dir/file
$ ls -ld dir/file
-rw-r--r-- 1 root stephane 5 Aug 12 12:29 dir/file

However it's not possible to do a second write open() on that file as we don't have permission on it:

$ echo test > dir/file
zsh: permission denied: dir/file

(note that appending is not allowed there (as not part of your initial requirements)).

A limitation: while you can't remove or rename entries in dir because of the t bit, new directories that you create in there won't have that t bit, so you'll be able to rename or delete entries there.
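The restricted-deletion "t" bit that the answer relies on can be tried without bindfs; this is only the sticky-bit half of the setup, not the ownership remapping:

```shell
# Sticky ("t") bit on a group-writable directory: members may create
# entries, but may not delete or rename entries owned by someone else.
d=$(mktemp -d)
chmod 1775 "$d"             # rwxrwxr-t
perms=$(stat -c '%A' "$d")  # GNU stat: symbolic mode string
echo "$perms"               # drwxrwxr-t
rmdir "$d"
```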
Allow owner to create & read files, but not modify or delete
1,439,375,679,000
Is it possible to make ls -l output the size field with digits grouped by thousands? If so, how? For instance:

$ ls -l
-rw-rw---- 1 dahl dahl 43,210,052 2012-01-01 21:52 test.py

(Note the commas in the size). Maybe by modifying the LC_NUMERIC setting of the locale I'm using (en_US.utf8)? I'm on Kubuntu 12.04 LTS.
Block size - GNU Coreutils says:

A block size specification preceded by ' causes output sizes to be displayed with thousands separators.

(Note well that just specifying a block size is not enough). So depending on what you want, you could try

BLOCK_SIZE="'1" ls -l
BLOCK_SIZE="'1kB" ls -l

or

ls -l --block-size="'1"
ls -l --block-size="'1kB"

You can make it permanent using

export BLOCK_SIZE="'1"
export BLOCK_SIZE="'1kB"

or

alias ls="ls --block-size=\"'1\""
alias ls="ls --block-size=\"'1kB\""
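A quick way to see the ' flag in action (GNU coreutils only; note that whether a separator is actually printed depends on the locale's LC_NUMERIC grouping rules):

```shell
# Create a sparse file of a known size and list it with grouped digits.
f=$(mktemp)
truncate -s 43210052 "$f"                 # GNU truncate: sparse, so instant
listing=$(ls -l --block-size="'1" "$f")
echo "$listing"   # size column shows 43,210,052 in a locale with grouping
rm -f "$f"
```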
Output ls -l size field with digits grouped by thousands?
1,439,375,679,000
I have a large number of zip files that were compressed using the zip command. I would like to recompress them with the -9 flag to improve the compression ratio. Does anyone know if that can be done without manually decompressing and then compressing? P.S. I need to keep them as zip files since they are served to Windows users (and as such have white spaces in their names).
You cannot improve the compression ratio, without decompressing the data. You don't have to extract all of the zip files before compressing them, but I would recommend uncompressing one whole zip file before re-compressing. It is possible to recompress the files in a zip file one at a time and re-adding them before going to the next file contained in the zip file. This requires N rewrites of the zip file for a zip file containing N files. It is much more efficient to extract the N files and generate the new zipfile in one go, compressing all files with -9.
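A sketch of the extract-and-rebuild loop the answer recommends, assuming zip and unzip are installed. It is demonstrated here on a throwaway archive; the same loop applies to real *.zip files:

```shell
# Rebuild each zip at maximum compression (-9) by extracting and re-zipping.
command -v zip >/dev/null 2>&1 && command -v unzip >/dev/null 2>&1 || exit 0

workdir=$(mktemp -d)
cd "$workdir" || exit 1
printf 'hello zip\n' > sample.txt
zip -q -1 original.zip sample.txt          # a weakly-compressed input archive

for z in ./*.zip; do
    tmp=$(mktemp -d)
    # Only replace the original after a successful rebuild.
    unzip -q "$z" -d "$tmp" &&
        (cd "$tmp" && zip -q -9 -r repacked.zip .) &&
        mv "$tmp/repacked.zip" "$z"
    rm -rf "$tmp"
done
```

Note the quoting around "$z": it keeps the loop safe for the whitespace-laden filenames mentioned in the question.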
Compress zip files with higher compression
1,439,375,679,000
I'm trying to debug an init script on a Linux system; I'm trying to pass init=/bin/sh to the kernel to make it start sh without starting init so I can run through the init sequence manually. What I've found is that the kernel is starting init anyway. During bootup, one of the printk messages is the command line, and that is showing that the line is being set properly; in addition, I can affect other things using the kernel command line. I have checked to make sure the path exists; it does. This is a busybox system, and init is a symlink to busybox; so to make sure busybox doesn't do strange magic when its PID is 1, I also tried running a non-busybox program as init; that didn't work, either. It seems that no matter what I do, init is run. What could be causing this behavior?
Looking at Linux kernel source, I see that if the file /init exists, the kernel will always attempt to run it on the assumption that it's doing a ramdisk boot. Check your system to see if /init exists, if it does, then that's probably your problem.
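Checking for that condition is a one-liner:

```shell
# If /init exists, the kernel's ramdisk-boot path will run it
# and your init= parameter will be ignored.
if [ -e /init ]; then
    verdict="/init exists - kernel will run it instead of init="
else
    verdict="no /init - init= should be honoured"
fi
echo "$verdict"
```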
What can make passing init=/path/to/program to the kernel not start program as init?
1,439,375,679,000
I have a CSV file users.csv with a list of userNames, userIDs, and other data:

username, userid, sidebar_side, sidebar_colour
"John Lennon", 90123412, "left", "blue"
"Paul McCartny", 30923833, "left", "black"
"Ringo Starr", 77392318, "right", "blue"
"George Harrison", 72349482, "left", "green"

In another file toremove.txt I have a list of userIDs:

30923833
77392318

Is there a clever, efficient way to remove all the rows from the users.csv file which contain the IDs in toremove.txt? I have written a simple Python app to parse the two files and write to a new file only those lines that are not found in toremove.txt, but it is extraordinarily slow. Perhaps some sed or awk magic can help here? This is the desired result, considering the examples above:

username, userid, sidebar_side, sidebar_colour
"John Lennon", 90123412, "left", "blue"
"George Harrison", 72349482, "left", "green"
With grep, you can do:

$ grep -vwF -f toremove.txt users.csv
username, userid, sidebar_side, sidebar_colour
"John Lennon", 90123412, "left", "blue"
"George Harrison", 72349482, "left", "green"

With awk:

$ awk -F'[ ,]' 'FNR==NR{a[$1];next} !($4 in a)' toremove.txt users.csv
username, userid, sidebar_side, sidebar_colour
"John Lennon", 90123412, "left", "blue"
"George Harrison", 72349482, "left", "green"
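The grep variant can be reproduced end-to-end with the sample data from the question:

```shell
# Recreate the sample files and filter users.csv by the IDs in toremove.txt.
dir=$(mktemp -d)
cd "$dir" || exit 1
cat > users.csv <<'EOF'
username, userid, sidebar_side, sidebar_colour
"John Lennon", 90123412, "left", "blue"
"Paul McCartny", 30923833, "left", "black"
"Ringo Starr", 77392318, "right", "blue"
"George Harrison", 72349482, "left", "green"
EOF
printf '%s\n' 30923833 77392318 > toremove.txt

# -F: fixed strings, -w: whole-word match (so 123 doesn't hit 91234),
# -v: invert, -f: read patterns from file.
result=$(grep -vwF -f toremove.txt users.csv)
echo "$result"       # header plus the John Lennon and George Harrison rows
```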
Remove all lines in file A which contain the strings in file B
1,439,375,679,000
As answered in Highlight the current date in cal, the current date in output from cal is automatically highlighted (reverse colors) if the output goes to a terminal. That's what I had always been getting. However, with my current Debian GNU/Linux it is not the case any more, and I'm wondering what the fix is.

$ echo $TERM
xterm
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux bullseye/sid
Release:        testing
Codename:       bullseye
I just wanted to add onto @ShiB's great answer with the following:

function cal() {
    if [ -t 1 ]; then
        ncal -b "${@}"
    else
        command cal "${@}"
    fi
}

Changes from @ShiB's answer:

Use a function instead of an alias to facilitate passing arguments to the command(s)
Replace /usr/bin/cal with command cal in case the user has cal installed in a non-standard location
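The function hinges on [ -t 1 ], which tests whether stdout is a terminal; the switch can be observed even without cal or ncal installed by using a stand-in function:

```shell
# Stand-in for the cal wrapper: report which branch would be taken.
which_branch() {
    if [ -t 1 ]; then
        echo "tty: would run ncal -b"
    else
        echo "not a tty: would run command cal"
    fi
}
# In a command substitution stdout is a pipe, not a terminal,
# so the non-tty branch is taken.
branch=$(which_branch)
echo "$branch"           # not a tty: would run command cal
```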
Current date in cal is not highlighted in recent Debian
1,439,375,679,000
I know how to redirect output and how to suppress it in bash. Now, suppose I accidentally forgot to append the output redirection part to the command (e.g. 2>&1 or > /tmp/mystdout) and my background process has already been running for a while. Can I still change where stdout and stderr are being written to? I really would like not to kill and restart the application. To be more specific, as asked by Gilles in his comment, I would like to fiddle with it in these scenarios in particular:

wrong output file
forgot to redirect stderr to stdout
or a combination of both

E.g. I have Apache running and I can see the file descriptors:

/proc/8019/fd/0 -> /dev/null
/proc/8019/fd/1 -> /dev/null
/proc/8019/fd/2 -> /var/log/apache2/error.log
You can do it using reredirect (https://github.com/jerome-pouiller/reredirect/):

reredirect -m /dev/null <PID>

You can restore the initial output of your process later using something like:

reredirect -N -O <M> -E <N> <PID>

(<M> and <N> are provided by the previous launch of reredirect.) The reredirect README also explains how to redirect to another command or to redirect only stdout or stderr.
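Before reaching for reredirect, it is worth confirming where the descriptors currently point; on Linux they are visible under /proc, exactly as in the Apache example in the question:

```shell
# Show where this shell's own stdout and stderr point (Linux /proc only).
pid=$$
ls -l "/proc/$pid/fd/1" "/proc/$pid/fd/2"
```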
How to change the output redirection of a running process?
1,439,375,679,000
Is it possible to set the umask for a system-user (created with useradd --system username)?
There are three normal ways to set a user's umask:

Set UMASK in /etc/login.defs
Add pam_umask.so to your PAM configuration in /etc/pam.d
Set it in the shell startup files, e.g. /etc/profile

There is no difference between system users and normal users in this regard. But I'm assuming you're trying to start a daemon with a custom umask? The problem is: all of the above options happen when a user logs in. If you're running a daemon, it never logs in. It's started by init, then either runs as root, or calls setuid to run as the system user you specify. Your main options are:

put umask in your init script (do a grep umask /etc/init.d/* for an example)
configure init to start the program with a custom umask (systemd.exec, upstart umask)
if using start-stop-daemon, pass the umask option
modify the program itself to call the umask function or system call
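The first option from the final list, in miniature; a subshell stands in for the daemon launched by an init script:

```shell
# Files the "daemon" creates inherit the umask set before it starts.
f=/tmp/umask-demo.$$
(
    umask 027        # as you would put near the top of the init script
    touch "$f"       # created with 666 & ~027 = 640
)
mode=$(stat -c '%a' "$f")   # GNU stat: octal mode
echo "$mode"                # 640
rm -f "$f"
```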
How to set umask for a system-user?
1,439,375,679,000
My question is similar to this question but with a couple of different constraints: I have a large \n delimited wordlist -- one word per line. Size of files range from 2GB to as large as 10GB. I need to remove any duplicate lines. The process may sort the list during the course of removing the duplicates but not required. There is enough space on the partition to hold the new unique wordlist outputted. I have tried both of these methods but they both fail with out of memory errors.

sort -u wordlist.lst > wordlist_unique.lst

awk '!seen[$0]++' wordlist.lst > wordlist_unique.lst
awk: (FILENAME=wordlist.lst FNR=43601815) fatal: assoc_lookup: bucket-ahname_str: can't allocate 10 bytes of memory (Cannot allocate memory)

What other approaches can I try?
Try using sort with the -o/--output=FILE option instead of redirecting the output. You might also try setting the buffer size with -S/--buffer-size=SIZE. Also, try -s/--stable. And read the man page, it offers all of the info I gave. The full command you can use that might work for what you're doing:

sort -us -o wordlist_unique.lst wordlist.lst

You might also want to read the following URL: http://www.gnu.org/s/coreutils/manual/html_node/sort-invocation.html That more thoroughly explains sort than the man page.
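A miniature version of the suggested invocation. The -S and -T flags (added here beyond the answer's command) cap GNU sort's in-memory buffer and choose where it spills to temporary files, which is what lets it handle inputs larger than RAM:

```shell
# Deduplicate via sort with an explicit output file and a bounded buffer.
dir=$(mktemp -d)
cd "$dir" || exit 1
printf '%s\n' banana apple banana cherry apple > wordlist.lst

# -u: unique, -S: memory cap, -T: temp dir for spills, -o: output file.
sort -u -S 10M -T "$dir" -o wordlist_unique.lst wordlist.lst

result=$(cat wordlist_unique.lst)
echo "$result"     # apple, banana, cherry - one per line
```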
How to remove duplicate lines in a large multi-GB textfile?
1,439,375,679,000
After going through the bash documentation, this question and this one it's still not clear to me how can I perform atomic write (append) operations to a file in bash. I have a script that is run in multiple instances and at some point must write data to a file: echo "$RESULT" >> `pwd`/$TEMP_DIR/$OUT_FILE How is it possible to make all write operations from all concurrently running scripts to that file atomic (so that data from one instance doesn't overlap data from another)?
It seems you need to use flock as in the example from man (http://linux.die.net/man/1/flock):

(
  flock -x 200
  # Put here your commands that must do some writes atomically
) 200>/var/lock/mylockfile

And put all your commands that must be atomic in ().
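A small demonstration of the pattern, with two background writers serialised on the same lock file (flock is part of util-linux):

```shell
# Two writers append under an exclusive flock; the appends never interleave.
out=$(mktemp)
lock=$(mktemp)
for i in 1 2; do
    (
        flock -x 200              # blocks until the other writer releases fd 200
        echo "writer $i" >> "$out"
    ) 200>"$lock" &
done
wait
count=$(wc -l < "$out")
echo "$count"                     # 2
rm -f "$out" "$lock"
```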
Performing atomic write operations in a file in bash
1,439,375,679,000
I'm wondering if there is a way to watch films or images without running X server. I'm not using login manager - I log in to tty and start X server manually. Hypothetical situation: I log in, but decide to only watch film, or maybe view few photos. I don't want to run X server and all the GUI stuff just for this purpose. How can I watch films/images without X?
For Images: You can watch images with fbi:

NAME
  fbi - linux framebuffer imageviewer
SYNOPSIS
  fbi [ options ] file ...
DESCRIPTION
  fbi displays the specified file(s) on the linux console using the framebuffer device. PhotoCD, jpeg, ppm, gif, tiff, xwd, bmp and png are supported directly. For other formats fbi tries to use ImageMagick's convert.

Example command:

$ fbi path/to/file.jpg

For videos: You can use vlc from tty/console. Example command:

$ vlc /path/to/file.mp4

You can also use mplayer:

$ mplayer /path/to/file.mp4

Note: Video output drivers can be set by the -vo option, e.g. caca, fbdev. (This external article may help)
How to watch films/images without X?
1,439,375,679,000
I was reading a blog post about filesystem repair and the author posted a good question… fsck -p is supposed to fix minor errors automatically without human intervention. But what exactly will it fix when it's told to preen the filesystem? What errors will it fix, and what will cause it to stop and tell the user he or she must run fsck interactively? Is there a list of some kind? I've been Googling around and all I find is the man page, which doesn't really tell what -p will fix or what triggers the hands-on flag. I'm specifically interested in the ext4 filesystem.
The answer to your question lies in the e2fsck/problems.c file of the e2fsprogs source code. Looking for the PR_PREEN_OK flag should get you started. As the complete error handling is a bit more involved, due to the multitude of different error conditions that may occur, you are advised to have a closer look at the code if you are concerned about a specific case. However, the lists below were extracted from the comments to the error conditions and should give you a rough overview of the effects of the preen mode.

The following errors/warnings are currently handled automatically when the -p flag is specified:

Relocate hint
Journal inode is invalid
Journal superblock is corrupt
Superblock has_journal flag is clear but has a journal
Superblock needs_recovery flag is set but no journal is present
Filesystem revision is 0, but feature flags are set
Superblock hint for external superblock
group descriptor N marked uninitialized without feature set.
group N block bitmap uninitialized but inode bitmap in use.
Group descriptor N has invalid unused inodes count.
Last group block bitmap uninitialized.
The test_fs flag is set (and ext4 is available)
Last mount time is in the future (fudged)
Last write time is in the future (fudged)
Block group checksum (latch question) is invalid.
Root directory has dtime set
Reserved inode has bad mode
Deleted inode has zero dtime
Inode in use, but dtime set
Zero-length directory
Inode has incorrect i_size
Inode has incorrect i_blocks
Bad superblock in group
Bad block group descriptors in group
Block claimed for no reason
Error allocating blocks for relocating metadata
Error allocating block buffer during relocation process
Relocating metadata group information from X to Y
Relocating metadata group information to X
Block read error during relocation process
Block write error during relocation process
Immutable flag set on a device or socket inode
Non-zero size for device, fifo or socket inode
Filesystem revision is 0, but feature flags are set
Journal inode is not in use, but contains data
Journal has bad mode
INDEX_FL flag set on a non-HTREE filesystem
INDEX_FL flag set on a non-directory
Invalid root node in HTREE directory
Unsupported hash version in HTREE directory
Incompatible flag in HTREE root node
HTREE too deep
invalid inode->i_extra_isize
invalid ea entry->e_name_len
invalid ea entry->e_value_offs
invalid ea entry->e_value_block
invalid ea entry->e_value_size
invalid ea entry->e_hash
inode missing EXTENTS_FL, but is an extent inode
Inode should not have EOFBLOCKS_FL set
Directory entry has deleted or unused inode
Directory filetype not set
Directory filetype set on filesystem
Invalid HTREE root node
Invalid HTREE limit
Invalid HTREE count
HTREE interior node has out-of-order hashes in table
Inode found in group where _INODE_UNINIT is set
Inode found in group unused inodes area
i_blocks_hi should be zero
/lost+found not found
Unattached zero-length inode
Inode ref count wrong
Padding at end of inode bitmap is not set.
Padding at end of block bitmap is not set.
Block bitmap differences header
Block not used, but marked in bitmap
Block used, but not marked used in bitmap
Block bitmap differences end
Inode bitmap differences header
Inode not used, but marked in bitmap
Inode used, but not marked used in bitmap
Inode bitmap differences end
Free inodes count for group wrong
Directories count for group wrong
Free inodes count wrong
Free blocks count for group wrong
Free blocks count wrong
Block range not used, but marked in bitmap
Block range used, but not marked used in bitmap
Inode range not used, but marked in bitmap
Inode range used, but not marked used in bitmap
Group N block(s) in use but group is marked BLOCK_UNINIT
Group N inode(s) in use but group is marked INODE_UNINIT
Recreate journal if E2F_FLAG_JOURNAL_INODE flag is set

The following error conditions cause the non-interactive fsck process to abort, even if the -p flag is set:

Block bitmap not in group
Inode bitmap not in group
Inode table not in group
Filesystem size is wrong
Inode count in superblock is incorrect
The Hurd does not support the filetype feature
Journal has an unknown superblock type
Ask if we should clear the journal
Journal superblock has an unknown read-only feature flag set
Journal superblock has an unknown incompatible feature flag set
Journal has unsupported version number
Ask if we should run the journal anyway
Reserved blocks w/o resize_inode
Resize_inode not enabled, but resize inode is non-zero
Resize inode invalid
Last mount time is in the future
Last write time is in the future
group descriptor N checksum is invalid.
Root directory is not an inode
Block bitmap conflicts with some other fs block
Inode bitmap conflicts with some other fs block
Inode table conflicts with some other fs block
Block bitmap is on a bad block
Inode bitmap is on a bad block
Illegal blocknumber in inode
Block number overlaps fs metadata
Inode has illegal blocks (latch question)
Too many bad blocks in inode
Illegal block number in bad block inode
Bad block inode has illegal blocks (latch question)
Bad block used as bad block indirect block
Inconsistency can't be fixed prompt
Bad primary block prompt
Suppress messages prompt
Imagic flag set on an inode when filesystem doesn't support it
Compression flag set on an inode when filesystem doesn't support it
Deal with inodes that were part of orphan linked list
Deal with inodes that were part of corrupted orphan linked list (latch question)
Error reading extended attribute block
Invalid extended attribute block
Extended attribute reference count incorrect
Multiple EA blocks not supported
Error EA allocation collision
Bad extended attribute name
Bad extended attribute value
Inode too big (latch question)
Directory too big
Regular file too big
Symlink too big
Bad block has indirect block that conflicts with filesystem block
Resize inode failed
inode appears to be a directory
Error while reading extent tree
Failure to iterate extents
Bad starting block in extent
Extent ends beyond filesystem
EXTENTS_FL flag set on a non-extents filesystem
inode has extents, superblock missing INCOMPAT_EXTENTS feature
Fast symlink has EXTENTS_FL set
Extents are out of order
Inode has an invalid extent node
Clone duplicate/bad blocks?
Bad inode number for '.'
Directory entry has bad inode number
Directory entry is link to '.'
Directory entry points to inode now located in a bad block
Directory entry contains a link to a directory
Directory entry contains a link to the root directory
Directory entry has illegal characters in its name
Missing '.' in directory inode
Missing '..' in directory inode
First entry in directory inode doesn't contain '.'
Second entry in directory inode doesn't contain '..'
i_faddr should be zero
i_file_acl should be zero
i_dir_acl should be zero
i_frag should be zero
i_fsize should be zero
inode has bad mode
directory corrupted
filename too long
Directory inode has a missing block (hole)
'.' is not NULL terminated
'..' is not NULL terminated
Illegal character device inode
Illegal block device inode
Duplicate '.' entry
Duplicate '..' entry
Final rec_len is wrong
Error reading directory block
Error writing directory block
Directory entry for '.' is big. Split?
Illegal FIFO inode
Illegal socket inode
Directory filetype incorrect
Directory filename is null
Invalid symlink
i_file_acl (extended attribute block) is bad
Filesystem contains large files, but has no such flag in sb
Clear invalid HTREE directory
Bad block in htree interior node
Duplicate directory entry found
Non-unique filename found
i_blocks_hi should be zero
Unexpected HTREE block
Root inode not allocated
No room in lost+found
Unconnected directory inode
'..' entry is incorrect
Lost+found not a directory
Unattached inode
Superblock corrupt
Fragments not supported
Error determining physical device size of filesystem
The external journal has (unsupported) multiple filesystems
Can't find external journal
External journal has bad superblock
Superblock has a bad journal UUID
Error allocating inode bitmap
Error allocating block bitmap
Error allocating icount link information
Error allocating directory block array
Error while scanning inodes
Error while iterating over blocks
Error while storing inode count information
Error while storing directory block information
Error while reading inode (for clearing)
Error allocating refcount structure
Error reading Extended Attribute block while fixing refcount
Error writing Extended Attribute block while fixing refcount
Error allocating EA region allocation structure
Error while scanning inodes
Error allocating inode bitmap
Internal error: couldn't find dir_info
Error allocating icount structure
Error iterating over directory blocks
Error deallocating inode
Error adjusting EA refcount
Error allocating inode bitmap
Error creating root directory
Root inode is not directory; aborting
Cannot proceed without a root inode.
Internal error: couldn't find dir_info
Programming error: bitmap endpoints don't match
Internal error: fudging end of bitmap
Error copying in replacement inode bitmap
Error copying in replacement block bitmap
What does fsck -p (preen) do on ext4?