I have a fairly standard disk encryption setup in Debian 5.0.5: an unencrypted /boot partition, and an encrypted sdaX_crypt that contains all the other partitions.
Now, this is a headless server installation and I want to be able to boot it without a keyboard (right now I can boot it only with a keyboard and a monitor attached).
So far my idea is to move the /boot partition to a USB drive and make slight modifications to auto-enter the key (I think there is just a call to askpass somewhere in the boot script). This way I can boot headless; I just need to have the flash drive in at boot time.
As I see it, the problems with this approach are:
1. I need to invest time into figuring out all the bits and pieces to make it work.
2. If an update regenerates the initrd, I need to regenerate the boot partition on the USB drive, which seems tedious.
The question: is there a standard low-upkeep solution available for what I want to do? Or should I be looking elsewhere altogether?
|
You can set up your system to require a key instead of a password and change some scripts to search for this key on a USB stick. I found a detailed explanation of this process for Debian Lenny. There are some notes at the end that describe the necessary changes for newer versions of Debian.
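For reference, a low-upkeep variant of this on Debian is the passdev keyscript shipped with cryptsetup, which reads a key file from a removable device at boot. A minimal sketch of the /etc/crypttab line (the KEYSTICK label and /keyfile path are assumptions; adapt them to your setup and verify against your cryptsetup version):

```shell
# /etc/crypttab -- read the LUKS key from a file on a USB stick at boot
# (device label and key path are hypothetical; after editing, regenerate
# the initramfs with: update-initramfs -u)
sdaX_crypt UUID=<uuid-of-luks-partition> /dev/disk/by-label/KEYSTICK:/keyfile luks,keyscript=passdev
```

Because the keyscript lives in the initramfs, package updates regenerate everything for you; there is no separate boot partition to keep in sync.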
| Full disk encryption with password-less authentication in Linux |
Linux as router: I have 3 Internet providers, each with its own modem.
Provider1, gateway address 192.168.1.1, connected to the Linux router on eth1/192.168.1.2
Provider2, gateway address 192.168.2.1, connected to the Linux router on eth2/192.168.2.2
Provider3, gateway address 192.168.3.1, connected to the Linux router on eth3/192.168.3.2
________
+------------+ /
| | |
+----------------------+ Provider 1 +--------|
__ |192.168.1.2 |192.168.1.1 | /
___/ \_ +------+-------+ +------------+ |
_/ \__ | eth1 | +------------+ /
/ \ eth0| |192.168.2.2 | | |
|Client network -----+ ROUTER eth2|--------------+ Provider 2 +------| Internet
\10.0.0.0/24 __/ | | |192.168.2.1 | |
\__ __/ | eth3 | +------------+ \
\___/ +------+-------+ +------------+ |
|192.168.3.2 | | \
+----------------------+ Provider 3 +-------|
|192.168.3.1 | |
+------------+ \________
I would like to route the clients in network 10.0.0.0/24 by source IP to different gateways.
The interface to the client network is eth0/10.0.0.1, which is the default gateway for all clients.
For example:
10.0.0.11 should be routed to Provider1 @ eth1
10.0.0.12 should be routed to Provider2 @ eth2
...and so on...
I think I need to use ip route and iptables for SNAT, but I have not figured out exactly how.
Here is the script I have so far.
IPv4 forwarding is enabled.
#!/bin/bash
# flush tables
ip route flush table connection1
ip route flush table connection2
ip route flush table connection3
# add the default gateways for each table
ip route add table connection1 default via 192.168.1.1
ip route add table connection2 default via 192.168.2.1
ip route add table connection3 default via 192.168.3.1
# add some IP addresses for marking
iptables -t mangle -A PREROUTING -s 10.0.0.11 -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -s 10.0.0.12 -j MARK --set-mark 2
iptables -t mangle -A PREROUTING -s 10.0.0.13 -j MARK --set-mark 3
# add the source nat rules for each outgoing interface
iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 192.168.1.2
iptables -t nat -A POSTROUTING -o eth2 -j SNAT --to-source 192.168.2.2
iptables -t nat -A POSTROUTING -o eth3 -j SNAT --to-source 192.168.3.2
# link routing tables to connections (?)
ip rule add fwmark 1 table connection1
ip rule add fwmark 2 table connection2
ip rule add fwmark 3 table connection3
#default route for anything not configured above should be eth2
|
Here is a similar setup from one of our routers (with some irrelevant stuff snipped). Note that this handles incoming connections as well.
Note the use of variables instead of hard-coded mark numbers. So much easier to maintain! They're stored in a separate script, and sourced in. Table names are configured in /etc/iproute2/rt_tables. Interface names are set in /etc/udev/rules.d/70-persistent-net.rules.
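For illustration, the sourced variables file and the table definitions might look like this (all names and numbers here are assumptions; only the structure matters):

```shell
# marks.sh -- sourced by the firewall script; one mark per uplink
MARK_CAVTEL=1
MARK_COMCAST=2
MARK_VZDSL=3

# /etc/iproute2/rt_tables -- append one line per custom routing table,
# e.g.:
#   100 cavtel
#   101 comcast
#   102 vzdsl
```

Keeping the marks and table names in one place means the mangle rules, the ip rules, and the interfaces file all stay consistent when you add or renumber an uplink.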
##### fwmark ######
iptables -t mangle -F
iptables -t mangle -X
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
iptables -t mangle -A PREROUTING -m mark ! --mark 0 -j RETURN # if already set, we're done
iptables -t mangle -A PREROUTING -i wan -j MARK --set-mark $MARK_CAVTEL
iptables -t mangle -A PREROUTING -i comcast -j MARK --set-mark $MARK_COMCAST
iptables -t mangle -A PREROUTING -i vz-dsl -j MARK --set-mark $MARK_VZDSL
iptables -t mangle -A POSTROUTING -o wan -j MARK --set-mark $MARK_CAVTEL
iptables -t mangle -A POSTROUTING -o comcast -j MARK --set-mark $MARK_COMCAST
iptables -t mangle -A POSTROUTING -o vz-dsl -j MARK --set-mark $MARK_VZDSL
iptables -t mangle -A POSTROUTING -j CONNMARK --save-mark
##### NAT ######
iptables -t nat -F
iptables -t nat -X
for local in «list of internal IP/netmask combos»; do
iptables -t nat -A POSTROUTING -s $local -o wan -j SNAT --to-source «IP»
iptables -t nat -A POSTROUTING -s $local -o comcast -j SNAT --to-source «IP»
iptables -t nat -A POSTROUTING -s $local -o vz-dsl -j SNAT --to-source «IP»
done
# this is an example of what the incoming traffic rules look like
for extip in «list of external IPs»; do
iptables -t nat -A PREROUTING -p tcp -d $extip --dport «port» -j DNAT --to-destination «internal-IP»:443
done
And the rules:
ip rule flush
ip rule add from all pref 1000 lookup main
ip rule add from A.B.C.D/29 pref 1500 lookup comcast # these IPs are the external ranges (we have multiple IPs on each connection)
ip rule add from E.F.G.H/29 pref 1501 lookup cavtel
ip rule add from I.J.K.L/31 pref 1502 lookup vzdsl
ip rule add from M.N.O.P/31 pref 1502 lookup vzdsl # yes, you can have multiple ranges
ip rule add fwmark $MARK_COMCAST pref 2000 lookup comcast
ip rule add fwmark $MARK_CAVTEL pref 2001 lookup cavtel
ip rule add fwmark $MARK_VZDSL pref 2002 lookup vzdsl
ip rule add pref 2500 lookup comcast # the pref order here determines the default—we default to Comcast.
ip rule add pref 2501 lookup cavtel
ip rule add pref 2502 lookup vzdsl
ip rule add pref 32767 lookup default
The routing tables get set up in /etc/network/interfaces, so that taking down an interface makes it switch to using a different one:
iface comcast inet static
address A.B.C.Q
netmask 255.255.255.248
up ip route add table comcast default via A.B.C.R dev comcast
down ip route flush table comcast
Note: If you're doing filtering as well (which you probably are), you'll also need to add the appropriate rules to FORWARD to ACCEPT the traffic, especially for any incoming traffic.
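A minimal sketch of such FORWARD rules, assuming the internal interface is named lan and the DNAT target port from the earlier example:

```shell
# let internal clients out on any uplink
iptables -A FORWARD -i lan -s 10.0.0.0/24 -j ACCEPT
# allow return traffic for established connections
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# for the DNAT'ed incoming connections, also accept the new forwarded flows
iptables -A FORWARD -d 10.0.0.0/24 -p tcp --dport 443 -m state --state NEW -j ACCEPT
```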
| Linux as router with multiple internet providers |
While looking through the /proc/[PID]/fd/ directories of various processes, I found a curious entry for dbus:
lrwx------ 1 root root 64 Aug 20 05:46 4 -> anon_inode:[eventpoll]
Hence the question: what are anon_inodes? Are they similar to anonymous pipes?
|
Everything under /proc is covered in man proc. The following section covers anon_inode entries:
For file descriptors for pipes and sockets, the entries will be symbolic links whose content is the file type with the inode. A readlink(2) call on this file returns a string in the format:
type:[inode]
For example, socket:[2248868] will be a socket and its inode is 2248868. For sockets, that inode can be used to find more information in one of the files under /proc/net/.
For file descriptors that have no corresponding inode (e.g., file descriptors produced by epoll_create(2), eventfd(2), inotify_init(2), signalfd(2), and timerfd(2)), the entry will be a symbolic link with contents of the form
anon_inode:<file-type>
In some cases, the file-type is surrounded by square brackets.
For example, an epoll file descriptor will have a symbolic link whose content is the string anon_inode:[eventpoll].
For more on epoll, see my discussion here: What information can I find out about an eventpoll on a running thread?.
For additional information on anon_inodes, see What is an anonymous inode in Linux?. Basically, an anon_inode entry shows that there's a file descriptor which has no referencing inode in any filesystem.
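You can see the distinction for yourself by walking a process's fd directory from the shell: pipe and socket fds show type:[inode], while epoll and friends show anon_inode:<type>. A minimal sketch (Linux only; note the use of $$ rather than /proc/self, because readlink runs as a separate process and would otherwise inspect its own fds):

```shell
# list what every open file descriptor of this shell points at;
# anon_inode entries carry no path, only a type such as [eventpoll]
for fd in /proc/$$/fd/*; do
  printf '%s -> %s\n' "$fd" "$(readlink "$fd" 2>/dev/null)"
done
```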
| What is anon_inode in the output of "ls -l /proc/[PID]/fd"? |
Some time ago I had a RAID5 system at home. One of the 4 disks failed, but after removing and re-inserting it, it seemed to be OK, so I started a resync. When it finished I realized, to my horror, that 3 out of 4 disks had failed. However, I don't believe that's possible. There are multiple partitions on the disks, each part of a different RAID array.
md0 is a RAID1 array comprised of sda1, sdb1, sdc1 and sdd1.
md1 is a RAID5 array comprised of sda2, sdb2, sdc2 and sdd2.
md2 is a RAID0 array comprised of sda3, sdb3, sdc3 and sdd3.
md0 and md2 report all disks up, while md1 reports 3 failed (sdb2, sdc2, sdd2). It's my understanding that when a hard drive fails, all of its partitions should be lost, not just the middle one.
At that point I turned the computer off and unplugged the drives. Since then I have been using that computer with a new, smaller disk.
Is there any hope of recovering the data? Can I somehow convince mdadm that my disks are in fact working? The only disk that may really have a problem is sdc but that one too is reported up by the other arrays.
Update
I finally got a chance to connect the old disks and boot this machine from SystemRescueCd. Everything above was written from memory. Now I have some hard data. Here is the output of mdadm --examine /dev/sd*2
/dev/sda2:
Magic : a92b4efc
Version : 0.90.00
UUID : 53eb7711:5b290125:db4a62ac:7770c5ea
Creation Time : Sun May 30 21:48:55 2010
Raid Level : raid5
Used Dev Size : 625064960 (596.11 GiB 640.07 GB)
Array Size : 1875194880 (1788.33 GiB 1920.20 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 1
Update Time : Mon Aug 23 11:40:48 2010
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 1
Spare Devices : 1
Checksum : 68b48835 - correct
Events : 53204
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 2 0 active sync /dev/sda2
0 0 8 2 0 active sync /dev/sda2
1 1 8 18 1 active sync /dev/sdb2
2 2 8 34 2 active sync /dev/sdc2
3 3 0 0 3 faulty removed
4 4 8 50 4 spare /dev/sdd2
/dev/sdb2:
Magic : a92b4efc
Version : 0.90.00
UUID : 53eb7711:5b290125:db4a62ac:7770c5ea
Creation Time : Sun May 30 21:48:55 2010
Raid Level : raid5
Used Dev Size : 625064960 (596.11 GiB 640.07 GB)
Array Size : 1875194880 (1788.33 GiB 1920.20 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 1
Update Time : Mon Aug 23 11:44:54 2010
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 1
Spare Devices : 1
Checksum : 68b4894a - correct
Events : 53205
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 1 8 18 1 active sync /dev/sdb2
0 0 0 0 0 removed
1 1 8 18 1 active sync /dev/sdb2
2 2 8 34 2 active sync /dev/sdc2
3 3 0 0 3 faulty removed
4 4 8 50 4 spare /dev/sdd2
/dev/sdc2:
Magic : a92b4efc
Version : 0.90.00
UUID : 53eb7711:5b290125:db4a62ac:7770c5ea
Creation Time : Sun May 30 21:48:55 2010
Raid Level : raid5
Used Dev Size : 625064960 (596.11 GiB 640.07 GB)
Array Size : 1875194880 (1788.33 GiB 1920.20 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 1
Update Time : Mon Aug 23 11:44:54 2010
State : clean
Active Devices : 1
Working Devices : 2
Failed Devices : 2
Spare Devices : 1
Checksum : 68b48975 - correct
Events : 53210
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 2 8 34 2 active sync /dev/sdc2
0 0 0 0 0 removed
1 1 0 0 1 faulty removed
2 2 8 34 2 active sync /dev/sdc2
3 3 0 0 3 faulty removed
4 4 8 50 4 spare /dev/sdd2
/dev/sdd2:
Magic : a92b4efc
Version : 0.90.00
UUID : 53eb7711:5b290125:db4a62ac:7770c5ea
Creation Time : Sun May 30 21:48:55 2010
Raid Level : raid5
Used Dev Size : 625064960 (596.11 GiB 640.07 GB)
Array Size : 1875194880 (1788.33 GiB 1920.20 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 1
Update Time : Mon Aug 23 11:44:54 2010
State : clean
Active Devices : 1
Working Devices : 2
Failed Devices : 2
Spare Devices : 1
Checksum : 68b48983 - correct
Events : 53210
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 4 8 50 4 spare /dev/sdd2
0 0 0 0 0 removed
1 1 0 0 1 faulty removed
2 2 8 34 2 active sync /dev/sdc2
3 3 0 0 3 faulty removed
4 4 8 50 4 spare /dev/sdd2
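As a quick sanity check, the sizes reported in these superblocks are self-consistent: a 4-device RAID5 exposes (n - 1) devices' worth of capacity:

```shell
# Used Dev Size is 625064960 KiB per member; RAID5 keeps one device's
# worth of parity, so usable size = (4 - 1) * 625064960
echo $(( (4 - 1) * 625064960 ))   # 1875194880, the reported "Array Size"
```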
It appears that things have changed since the last boot. If I'm reading this correctly, sda2, sdb2 and sdc2 are working and contain synchronized data, and sdd2 is a spare. I distinctly remember seeing 3 failed disks, but this is good news. Yet the array still isn't working:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md125 : inactive sda2[0](S) sdb2[1](S) sdc2[2](S)
1875194880 blocks
md126 : inactive sdd2[4](S)
625064960 blocks
md127 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
64128 blocks [4/4] [UUUU]
unused devices: <none>
md0 appears to have been renamed to md127. md125 and md126 are very strange. They should be one array, not two; that used to be called md1. md2 is completely gone, but that was my swap, so I don't care.
I can understand the different names and it doesn't really matter. But why is an array with 3 "active sync" disks unreadable? And what's up with sdd2 being in a separate array?
Update
I tried the following after backing up the superblocks:
root@sysresccd /root % mdadm --stop /dev/md125
mdadm: stopped /dev/md125
root@sysresccd /root % mdadm --stop /dev/md126
mdadm: stopped /dev/md126
So far so good. Since sdd2 is a spare, I don't want to add it yet.
root@sysresccd /root % mdadm --assemble /dev/md1 /dev/sd{a,b,c}2 missing
mdadm: cannot open device missing: No such file or directory
mdadm: missing has no superblock - assembly aborted
Apparently I can't do that.
root@sysresccd /root % mdadm --assemble /dev/md1 /dev/sd{a,b,c}2
mdadm: /dev/md1 assembled from 1 drive - not enough to start the array.
root@sysresccd /root % cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : inactive sdc2[2](S) sdb2[1](S) sda2[0](S)
1875194880 blocks
md127 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
64128 blocks [4/4] [UUUU]
unused devices: <none>
That didn't work either. Let's try with all the disks.
mdadm --stop /dev/md1
mdadm: stopped /dev/md1
root@sysresccd /root % mdadm --assemble /dev/md1 /dev/sd{a,b,c,d}2
mdadm: /dev/md1 assembled from 1 drive and 1 spare - not enough to start the array.
root@sysresccd /root % cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : inactive sdc2[2](S) sdd2[4](S) sdb2[1](S) sda2[0](S)
2500259840 blocks
md127 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
64128 blocks [4/4] [UUUU]
unused devices: <none>
No luck. Based on this answer I'm planning to try:
mdadm --create /dev/md1 --assume-clean --metadata=0.90 --bitmap=/root/bitmapfile --level=5 --raid-devices=4 /dev/sd{a,b,c}2 missing
mdadm --add /dev/md1 /dev/sdd2
Is it safe?
Update
I have published the superblock parser script I used to make the table in my comment. Maybe someone will find it useful. Thanks for all your help.
|
First check the disks by running the SMART self-test:
for i in a b c d; do
smartctl -s on -t long /dev/sd$i
done
It might take a few hours to finish, but you can check each drive's test status every few minutes, e.g.:
smartctl -l selftest /dev/sda
If the status of a disk reports "not completed" because of read errors, then this disk should be considered unsafe for md1 reassembly. After the self-test finishes, you can start trying to reassemble your array. Optionally, if you want to be extra cautious, move the disks to another machine before continuing (just in case of bad RAM/controller/etc.).
Recently, I had a case exactly like this one. One drive failed; I re-added it to the array, but during the rebuild 3 of the 4 drives failed altogether. The contents of /proc/mdstat were the same as yours (maybe not in the same order):
md1 : inactive sdc2[2](S) sdd2[4](S) sdb2[1](S) sda2[0](S)
But I was lucky and reassembled the array with this
mdadm --assemble /dev/md1 --scan --force
By looking at the --examine output you provided, I can tell the following scenario happened: sdd2 failed, you removed it and re-added it, so it became a spare drive trying to rebuild. But while rebuilding, sda2 failed and then sdb2 failed. So the event counter is bigger in sdc2 and sdd2, which were the last active drives in the array (although sdd didn't have the chance to rebuild, and so it is the most outdated of all). Because of the differences in the event counters, --force will be necessary. So you could also try this:
mdadm --assemble /dev/md1 /dev/sd[abc]2 --force
To conclude, I think that if the above command fails, you should try to recreate the array like this:
mdadm --create /dev/md1 --assume-clean -l5 -n4 -c64 /dev/sd[abc]2 missing
If you do the --create, the missing part is important: don't try to add a fourth drive to the array, because then reconstruction will begin and you will lose your data. Creating the array with a missing drive will not change its contents, and you'll have the chance to copy the data elsewhere (RAID5 doesn't work the same way as RAID1).
If that fails to bring the array up, try this solution (a Perl script): Recreating an array.
If you finally manage to bring the array up, the filesystem will be unclean and probably corrupted. If one disk fails during a rebuild, the array is expected to stop and freeze, not doing any writes to the other disks. In this case two disks failed; maybe the system was performing write requests that it wasn't able to complete, so there is some small chance you lost some data, but also a chance that you will never notice it :-)
edit: some clarification added.
| How to recover a crashed Linux md RAID5 array? |
I have been using Wake-on-LAN successfully for many years now for a number of my Linux devices. It works well enough.
However, I also have a Mac Mini at home. I have noticed that it goes to sleep and has two distinct properties separate from any Linux machine I have while asleep:
It still responds to ping on the network.
It will wake up automatically upon incoming ssh connection, no Wake-on-LAN required.
This 2nd property ends up being really nice: it automatically goes to sleep and saves power when not in use and doesn't require any extra thought to power on when I want to ssh into it. It just wakes up automatically. And after I've logged out, 15 minutes later it will go to sleep again.
My assumption is this is because Apple controls the hardware and software stack. So while industry-wide Wake-on-LAN is a network device feature based on a magic packet (that requires no OS interaction), Mac's magic "wake-on-LAN and also still respond to pings" is because they haven't actually put the whole OS to sleep and/or have a separate network stack still running in sleep mode. But that's just a guess.
I'm curious if anyone has ever seen or implemented this sort of "Wake-on-incoming-SSH" on a Linux machine? Or is this special magic that can be found only on Apple devices where they control hardware-through-software and can do this in a way the rest of the industry can't?
|
ethtool will help you, but your hardware must support what you need.
# ethtool interface | grep Wake-on
# ethtool eth0 | grep Wake-on
Supports Wake-on: pumbag
Wake-on: d
According to the Arch Linux wiki:
The Wake-on values define what activity triggers wake up:
d (disabled)
p (PHY activity)
u (unicast activity)
m (multicast activity)
b (broadcast activity)
a (ARP activity)
g (magic packet activity)
If you need some sort of "Wake-on-incoming-SSH", try
# ethtool -s interface wol u
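Note that this setting does not survive a reboot or driver reload. One common way to make it persistent is a udev rule (the file name and interface name here are assumptions; adjust for your system):

```shell
# /etc/udev/rules.d/81-wol.rules -- reapply the wake-on mode whenever
# the interface appears (ethtool path may differ on your distribution)
ACTION=="add", SUBSYSTEM=="net", KERNEL=="eth0", RUN+="/usr/sbin/ethtool -s eth0 wol u"
```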
| Wake-on-LAN via SSH |
Here's what I would like:
Start with a virtual system with no installed packages. Then I invoke a tool similar to apt-get and ask it to compute the dependencies and mark all the packages that would be installed as installed.
Let me be clear: It says the packages are installed, but there are no files actually installed.
Then, if I ask for more packages to be "installed", it may propose to add or remove other packages. It wouldn't actually remove packages, but obviously just mark them removed.
This would be useful because I would be able to test the installation of packages on a bare Debian or Ubuntu system. It would allow me to know whether a package is installable in a given scenario.
Doing this to an actual installation would take a lot of disk space and time.
apt has a "simulate" option, but it does not mark packages as installed.
|
You're probably best off hooking into one of the scripting interfaces that Debian has for their various package tools and writing your own simulator.
(Edit: I can't find dpkg-perl and dpkg-python anymore. dpkg-awk and dpkg-ruby exist, but they don't look like they'll do the job.)
However: Debian has a tool "equivs" that lets you build "empty" packages that just satisfy dependencies, but install no files beyond the control files. http://packages.debian.org/search?keywords=equivs
dpkg and apt-get both have options to run with different administration and root directories. The dpkg man page has them, but the apt-get one is buried in apt.conf.
DIRECTORIES
The configuration item RootDir has a special meaning. ...
aptitude lets you pick and choose what to install, and then "commits" it by running dpkg and/or apt with the right settings. Playing around with it might be sufficient for some of your needs, though you'll want to save the settings beforehand and restore them afterward.
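A sketch of the redirected-root approach: the directory layout below is the minimal skeleton dpkg/apt bookkeeping expects, and the commented apt-get invocation is an assumption to verify against your apt version:

```shell
#!/bin/sh
# build a scratch "root" so package bookkeeping never touches the real system
ROOT=$(mktemp -d)
mkdir -p "$ROOT/var/lib/dpkg" \
         "$ROOT/var/lib/apt/lists/partial" \
         "$ROOT/var/cache/apt/archives/partial" \
         "$ROOT/etc/apt"
touch "$ROOT/var/lib/dpkg/status"   # empty status file = "nothing installed"
echo "scratch root: $ROOT"
# then point apt at it, e.g. (check option names for your apt release):
#   apt-get -o Dir="$ROOT" \
#           -o Dir::State::status="$ROOT/var/lib/dpkg/status" \
#           install some-package
```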
| Is it possible to simulate installation of Debian packages, and still marking them as installed? |
I'm working on software which connects to a real-time data server (using TCP), and some connections are dropping. My guess is that the clients do not read the data coming from the server fast enough. Therefore I would like to monitor my TCP sockets. For this I found the ss tool.
This tool lets me see the state of every socket; here's an example line of the output of the command ss -inm 'src *:50000':
ESTAB 0 0 184.7.60.2:50000 184.92.35.104:1105
mem:(r0,w0,f0,t0) sack rto:204 rtt:1.875/0.75 ato:40
My question is: what does the memory part mean?
Looking at the source code of the tool, I found that the data comes from a kernel structure (sock in sock.h). More precisely, it comes from the fields:
r = sk->sk_rmem_alloc
w = sk->sk_wmem_queued;
f = sk->sk_forward_alloc;
t = sk->sk_wmem_alloc;
Does somebody know what they mean? My guesses are:
rmem_alloc : size of the inbound buffer
wmem_alloc : size of the outbound buffer
sk_forward_alloc : ???
sk->sk_wmem_queued : ???
Here are my buffer sizes:
net.ipv4.tcp_rmem = 4096 87380 174760
net.ipv4.tcp_wmem = 4096 16384 131072
net.ipv4.tcp_mem = 786432 1048576 1572864
net.core.rmem_default = 110592
net.core.wmem_default = 110592
net.core.rmem_max = 1048576
net.core.wmem_max = 131071
|
sk_forward_alloc is the forward-allocated memory, which is the total memory currently available in the socket's quota.
sk_wmem_queued is the amount of memory used by data in the socket send buffer that is queued in the transmit queue and is either not yet sent out or not yet acknowledged.
You can learn more about TCP memory management in chapter 9 of TCP/IP Architecture, Design and Implementation in Linux by Sameer Seth and M. Ajaykumar Venkatesulu.
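Relating this back to the question's sysctl listing: if the receive queue (the r value) keeps growing because clients read too slowly, raising the buffer ceilings only buys time, but for completeness this is the shape of such a change (the values are purely illustrative, not recommendations):

```shell
# /etc/sysctl.d/90-tcp-buffers.conf -- illustrative values only;
# apply with: sysctl --system
net.core.rmem_max = 4194304
net.ipv4.tcp_rmem = 4096 87380 4194304
```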
| Kernel socket structure and TCP_DIAG |
From this blog.
Intermediate CAs are certificates signed by a root CA that can sign arbitrary certificates for any websites.
They are just as powerful as root CAs, but there's no full list of the ones your system trusts, because root CAs can make new ones at will, and your system will trust them at first sight. There are THOUSANDS logged in CT.
This month an interesting one popped up, generated apparently in September 2015: "Blue Coat Public Services Intermediate CA", signed by Symantec. (No certificates signed by this CA have reached the CT logs or Censys so far.)
I thought it would be a good occasion to write up how to explicitly untrust an intermediate CA that would otherwise be trusted in OS X. It won't stop the root CA from handing a new intermediate to the same organization, but better than nothing.
When I tried the steps from the blog on Ubuntu, I downloaded this certificate: https://crt.sh/?id=19538258. Opening the .crt imports it into the GNOME Keyring, but I couldn't find a way to "untrust" the certificate after importing it.
|
Just to make things difficult, Linux has more than one library for working with certificates.
If you're using Mozilla's NSS, you can Actively Distrust (their terminology) a certificate using certutil's -t trustargs option:
$ certutil -d <path to directory containing database> -M -t p -n "Blue Coat Public Services Intermediate CA"
For Firefox, <path to directory containing database> is usually ~/.mozilla/firefox/<???>.profile, where <???> are some random-looking characters. (certutil is in Ubuntu's libnss3-tools package, for example.)
The breakdown is as follows:
-M to modify the database
-t p to set the trust to Prohibited
-n to carry out the operation on the named certificate
Even within NSS, not all applications share the same database; so you may have to repeat this process. For example, to do the same for Chrome, change the -d <path> to -d sql:.pki/nssdb/.
$ certutil -d sql:.pki/nssdb/ -M -t p -n "Blue Coat Public Services Intermediate CA"
However, not all applications use NSS, so this isn't a complete solution. For example, I don't believe it's possible to do this with the OpenSSL library.
As a consequence, any application that uses OpenSSL for its certificate chain building (TLS, IPsec, etc.) would trust a chain containing a Blue Coat certificate, and there is nothing you can do about it short of removing the root CA that signed it from your trust anchor store (which would be silly, considering it's a Symantec root CA: you'd end up distrusting half the Internet). Applications that rely on NSS, on the other hand, can be configured more granularly to distrust any chain that has the Blue Coat certificate in it.
For example, I believe OpenVPN uses OpenSSL as its certificate library, so big brother could be listening to your OpenVPN traffic without your knowledge if you are connecting to a commercial VPN provider that uses OpenVPN. If you are really concerned about that, check who your commercial VPN provider's root CA is; if it's Symantec/Verisign, then maybe it's time to ditch them for someone else.
Note that SSH doesn't use X.509 certificates, so you can connect and tunnel using SSH without worrying about Blue Coat MITM attacks.
| Untrusting an intermediate CA in Linux? |
The /dev filesystem is full:
SERVER:/dev # df -mP /dev
Filesystem 1048576-blocks Used Available Capacity Mounted on
udev 12042 12042 0 100% /dev
There are no files that consume space!
SERVER:/dev # find . -ls | sort -r | head -2
2790517 0 -rw-r--r-- 1 root root 0 Dec 16 10:04 ./devnull
1490005831 0 -rw------- 1 root root 0 Dec 16 07:54 ./nul
120387 0 lrwxrwxrwx 1 root root 12 Dec 03 05:42 ./disk/by-uuid/xx..foo..xx -> ../../dm-13
SERVER:/dev # du -sm * 2>/dev/null | sort -nr | head -4
1 shm
0 zero
0 xconsole
0 watchdog
swap is used heavily:
SERVER:/dev # free -m
total used free shared buffers cached
Mem: 24083 23959 124 0 327 21175
-/+ buffers/cache: 2455 21627
Swap: 10245 10245 0
deleted but still used files (?):
SERVER:/dev # lsof /dev | grep deleted
su 4510 bar 14u REG 0,14 6269616128 2689827477 /dev/shm/kdfoo.a4o (deleted)
grep 4512 root 1u REG 0,14 6269616128 2689827477 /dev/shm/kdfoo.a4o (deleted)
bash 4517 bar 14u REG 0,14 6269616128 2689827477 /dev/shm/kdfoo.a4o (deleted)
sh 4606 bar 14u REG 0,14 6269616128 2689827477 /dev/shm/kdfoo.a4o (deleted)
ksh 24134 root 1u REG 0,14 6329864192 2685851781 /dev/shm/foo5.44m (deleted)
ksh 29209 root 1u REG 0,14 6269616128 2689827477 /dev/shm/kdfoo.a4o (deleted)
su 29571 bar 14u REG 0,14 6329864192 2685851781 /dev/shm/foo5.44m (deleted)
grep 29573 root 1u REG 0,14 6329864192 2685851781 /dev/shm/foo5.44m (deleted)
bash 29578 bar 14u REG 0,14 6329864192 2685851781 /dev/shm/foo5.44m (deleted)
sh 29694 bar 14u REG 0,14 6329864192 2685851781 /dev/shm/foo5.44m (deleted)
SERVER:/dev #
My question: what is using up all the 12 GByte space of "udev on /dev type tmpfs (rw)"?
|
Shared memory is using the 12 GB.
On your Linux release, /dev/shm is part of the /dev filesystem (on some releases it has its own dedicated file system mounted there).
As shown by lsof, the sum is roughly 12 GB:
/dev/shm/foo5.44m is 6329864192 bytes
/dev/shm/kdfoo.a4o is 6269616128 bytes
Neither find nor ls can display these files because they are unlinked (= their names have been deleted).
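As a cross-check, the two distinct deleted files from the lsof output account for the space df reports (each file is open in several processes but occupies its space only once):

```shell
# sum the two distinct deleted /dev/shm files from the lsof output
echo $(( 6329864192 + 6269616128 ))              # total bytes
echo $(( (6329864192 + 6269616128) / 1048576 ))  # in MiB, close to df's 12042 used
```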
| Why is /dev full? |
On my Debian GNU/Linux 9 system, when a binary is executed,
the stack is uninitialized but
the heap is zero-initialized.
Why?
I assume that zero-initialization promotes security but, if for the heap, then why not also for the stack? Does the stack, too, not need security?
My question is not specific to Debian as far as I know.
Sample C code:
#include <stddef.h>
#include <stdlib.h>
#include <stdio.h>
const size_t n = 8;
// --------------------------------------------------------------------
// UNINTERESTING CODE
// --------------------------------------------------------------------
static void print_array(
const int *const p, const size_t size, const char *const name
)
{
printf("%s at %p: ", name, p);
for (size_t i = 0; i < size; ++i) printf("%d ", p[i]);
printf("\n");
}
// --------------------------------------------------------------------
// INTERESTING CODE
// --------------------------------------------------------------------
int main()
{
int a[n];
int *const b = malloc(n*sizeof(int));
print_array(a, n, "a");
print_array(b, n, "b");
free(b);
return 0;
}
Output:
a at 0x7ffe118997e0: 194 0 294230047 32766 294230046 32766 -550453275 32713
b at 0x561d4bbfe010: 0 0 0 0 0 0 0 0
The C standard does not ask malloc() to clear memory before allocating it, of course, but my C program is merely for illustration. The question is not a question about C or about C's standard library. Rather, the question is a question about why the kernel and/or run-time loader are zeroing the heap but not the stack.
ANOTHER EXPERIMENT
My question regards observable GNU/Linux behavior rather than the requirements of standards documents. If unsure what I mean, then try this code, which invokes further undefined behavior (undefined, that is, as far as the C standard is concerned) to illustrate the point:
#include <stddef.h>
#include <stdlib.h>
#include <stdio.h>
const size_t n = 4;
int main()
{
for (size_t i = n; i; --i) {
int *const p = malloc(sizeof(int));
printf("%p %d ", p, *p);
++*p;
printf("%d\n", *p);
free(p);
}
return 0;
}
Output from my machine:
0x555e86696010 0 1
0x555e86696010 0 1
0x555e86696010 0 1
0x555e86696010 0 1
As far as the C standard is concerned, behavior is undefined, so my question does not regard the C standard. A call to malloc() need not return the same address each time but, since this call to malloc() does indeed happen to return the same address each time, it is interesting to notice that the memory, which is on the heap, is zeroed each time.
The stack, by contrast, did not seem to be zeroed.
I do not know what the latter code will do on your machine, since I do not know which layer of the GNU/Linux system is causing the observed behavior. You can but try it.
UPDATE
@Kusalananda has observed in comments:
For what it's worth, your most recent code returns different addresses and (occasional) uninitialised (non-zero) data when run on OpenBSD. This obviously does not say anything about the behaviour that you are witnessing on Linux.
That my result differs from the result on OpenBSD is indeed interesting. Apparently, my experiments were discovering not a kernel (or linker) security protocol, as I had thought, but a mere implementational artifact.
In this light, I believe that, together, the answers below of @mosvy, @StephenKitt and @AndreasGrapentin settle my question.
See also on Stack Overflow: Why does malloc initialize the values to 0 in gcc? (credit: @bta).
|
The storage returned by malloc() is not zero-initialized. Do not ever assume it is.
In your test program, it's just a fluke: I guess the malloc() just got a fresh block off mmap(), but don't rely on that, either.
For an example, if I run your program on my machine this way:
$ echo 'void __attribute__((constructor)) p(void){
void *b = malloc(4444); memset(b, 4, 4444); free(b);
}' | cc -include stdlib.h -include string.h -xc - -shared -o pollute.so
$ LD_PRELOAD=./pollute.so ./your_program
a at 0x7ffd40d3aa60: 1256994848 21891 1256994464 21891 1087613792 32765 0 0
b at 0x55834c75d010: 67372036 67372036 67372036 67372036 67372036 67372036 67372036 67372036
Your second example is simply exposing an artifact of the malloc implementation in glibc; if you do that repeated malloc/free with a buffer larger than 8 bytes, you will clearly see that only the first 8 bytes are zeroed, as in the following sample code.
#include <stddef.h>
#include <stdlib.h>
#include <stdio.h>
const size_t n = 4;
const size_t m = 0x10;
int main()
{
for (size_t i = n; i; --i) {
int *const p = malloc(m*sizeof(int));
printf("%p ", p);
for (size_t j = 0; j < m; ++j) {
printf("%d:", p[j]);
++p[j];
printf("%d ", p[j]);
}
free(p);
printf("\n");
}
return 0;
}
Output:
0x55be12864010 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1
0x55be12864010 0:1 0:1 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2
0x55be12864010 0:1 0:1 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3
0x55be12864010 0:1 0:1 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4
| If the heap is zero-initialized for security, then why is the stack merely uninitialized? |
1,439,375,679,000 |
How can I recursively cleanup all empty files and directories in a parent directory?
Let’s say I have this directory structure:
Parent/
|____Child1/
|______ file11.txt (empty)
|______ Dir1/ (empty)
|____Child2/
|_______ file21.txt
|_______ file22.txt (empty)
|____ file1.txt
I should end up with this:
Parent/
|____Child2/
|_______ file21.txt
|____ file1.txt
|
This is a really simple one liner:
find Parent -empty -delete
It's fairly self-explanatory. Although when I checked, I was surprised that it successfully deleted Parent/Child1. Usually you would expect it to process the parent before the child unless you specify -depth.
This works because -delete implies -depth. See the GNU find manual:
-delete Delete files; true if removal succeeded. If the removal failed, an error message is issued. If -delete fails, find's exit status will be nonzero (when it eventually exits). Use of -delete automatically turns on the -depth option.
Note these features are not part of the POSIX standard, but they will most likely be there on many Linux distributions. You may have a specific problem with smaller ones such as Alpine Linux, as they are based on BusyBox, which doesn't support -empty.
Other systems that do include non-standard -empty and -delete include BSD and OSX but apparently not AIX.
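To see that in action, here is a reproducible sketch that rebuilds the example tree from the question under a mktemp -d scratch directory and runs the one-liner:

```shell
#!/bin/sh
# Rebuild the question's example tree under a scratch directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/Parent/Child1/Dir1" "$tmp/Parent/Child2"
: > "$tmp/Parent/Child1/file11.txt"           # empty
: > "$tmp/Parent/Child2/file22.txt"           # empty
echo data > "$tmp/Parent/Child2/file21.txt"   # non-empty
echo data > "$tmp/Parent/file1.txt"           # non-empty

# One-liner from the answer; -delete implies -depth, so children go first.
find "$tmp/Parent" -empty -delete

# What survived:
find "$tmp/Parent" | sort
```

Afterwards only Parent, Parent/Child2, Child2/file21.txt and Parent/file1.txt remain, matching the desired result in the question.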
| How can I recursively delete all empty files and directories in Linux? |
1,439,375,679,000 |
if grep -q "�" out.txt
then
echo "working"
else
cat out.txt
fi
Basically, if the file "out.txt" contains "�" anywhere in the file I would like it to echo "working" AND if the file "out.txt" does NOT contain "�" anywhere in the file then I would like it to cat out.txt
EDIT: So here's what I'm doing. I'm trying to brute-force an OpenSSL decryption.
openssl enc returns 0 on success, non-zero otherwise. Note: you will get false positives, because AES/CBC can only determine whether "decryption works" based on getting the padding right. So the file decrypts, but the password is not the correct one, so the output contains gibberish. A common character in the gibberish is "�". So I want the loop to keep going if the output contains "�".
Here's my Git link: https://github.com/Raphaeangelo/OpenSSLCracker
Here's the script:
while read line
do
openssl aes-256-cbc -d -a -in $1 -pass pass:$line -out out.txt 2>out.txt >/dev/null && printf "==================================================\n"
if grep -q "�" out.txt
then
:
else
cat out.txt &&
printf "\n==================================================" &&
printf "\npassword is $line\n" &&
read -p "press return key to continue..." < /dev/tty;
fi
done < ./password.txt
It's still showing me output with the � character in it.
|
grep is the wrong tool for the job.
You see the � U+FFFD REPLACEMENT CHARACTER not because it’s literally in the file content, but because you looked at a binary file with a tool that is supposed to handle only text-based input. The standard way to handle invalid input (i.e., random binary data) is to replace everything that is not valid in the current locale (most probably UTF-8) with U+FFFD before it hits the screen.
That means it is very likely that a literal \xEF\xBF\xBD (the UTF-8 byte sequence for the U+FFFD character) never occurs in the file. grep is completely right in telling you, there is none.
One way to detect whether a file contains some unknown binary is with the file(1) command:
$ head -c 100 /dev/urandom > rubbish.bin
$ file rubbish.bin
rubbish.bin: data
For any unknown file type it will simply say data. Try
$ file out.txt | grep '^out.txt: data$'
to check whether the file really contains any arbitrary binary and thus most likely rubbish.
If you want to make sure that out.txt is a UTF-8 encoded text file only, you can alternatively use iconv:
$ iconv -f utf-8 -t utf-16 out.txt >/dev/null
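To wire that check into a script, something like the following sketch works; is_clean_utf8 is a made-up helper name, and iconv exits non-zero as soon as it hits an invalid byte sequence:

```shell
#!/bin/sh
# Hypothetical helper: succeeds only if the whole file is valid UTF-8.
is_clean_utf8() {
    iconv -f utf-8 -t utf-16 "$1" >/dev/null 2>&1
}

tmp=$(mktemp -d)
printf 'hello world\n'  > "$tmp/text.txt"    # valid UTF-8
printf 'ab\200\377cd'   > "$tmp/rubbish.bin" # invalid UTF-8 bytes

if is_clean_utf8 "$tmp/text.txt"; then
    echo "text.txt looks like clean UTF-8"
fi
if ! is_clean_utf8 "$tmp/rubbish.bin"; then
    echo "rubbish.bin contains invalid bytes"
fi
```

In the question's loop you would cat out.txt only when is_clean_utf8 out.txt succeeds.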
| How to grep for unicode � in a bash script |
1,439,375,679,000 |
I had a directory containing some 2000 files.
I ran the following command to move those 2000 files into a target directory:
find /opt/alfresco \
-type f \( -iname \*.pdf -o -iname \*.xml \) \
-exec mv {} /opt/alfresco/archived/2020-01-07 \; > /opt/alfresco/scripts/move.log
But, I forgot to append a / at the end of the destination path. So what the above command did was create a file named 2020-01-07 and write some binary contents to it, which are now unreadable. And my 2000 files are gone. This 2020-01-07 file's size is 220 KB, but those 2000 files' combined size was approx 1 GB.
Is there any way I can recover those 2000 files? Or any way by which I can convert this file 2020-01-07 to a directory 2020-01-07 with my data coming back?
|
Adding a slash at the end of the destination path /opt/alfresco/archived/2020-01-07 would have made the mv command error out, as the 2020-01-07 directory evidently does not exist. This would have saved your files.
They would also have been saved if /opt/alfresco/archived/2020-01-07 had been an existing directory (regardless of whether the destination path had a slash at the end or not), and your files would have been moved into that directory (filename collisions may still have been an issue though, as you move files from several directories into a single directory). This is what you wanted to do. What you forgot to do was to create that directory first.
Now, since the directory did not exist, what the find command did was to take each individual XML and PDF file, rename it to /opt/alfresco/archived/2020-01-07, and then continue doing the same with the next file, overwriting the previous.
The file /opt/alfresco/archived/2020-01-07 is now the last XML or PDF file found by find.
Also note that since you ran your find command across /opt/alfresco, any PDF or XML file below that path, for example in any directory beneath /opt/alfresco/archived, would have met the same fate.
This is such an easy error to make.
There is no convenient way to recover the lost files other than restoring them from your backups.
If you do not take hourly backups of your data, this may be a good point in time to start looking into doing that. I would recommend restic or borgbackup for doing backups of personal files, preferably against some sort of off-site or at least external storage.
The following questions and answers may be of some help:
Unix/Linux undelete/recover deleted files
Recovering accidentally deleted files
And similar questions
In your next rewrite of this script, you may want to ignore the archived subdirectory, and use mv -n -t. You also need to explicitly -print the found files (or use mv -v) as find will otherwise not output their location:
find /opt/alfresco \
-path /opt/alfresco/archived -prune -o \
-type f \( -iname '*.pdf' -o -iname '*.xml' \) \
-exec mv -n -t /opt/alfresco/archived/2020-01-07 {} + \
-print >/opt/alfresco/scripts/move.log
A few things from the comments (below) that may be useful to know:
If GNU mv is used with -t target, it will fail if target is not a directory. You would use -exec mv -t /opt/alfresco/archived/2020-01-07 {} + to move multiple files at once with find (which would also speed up the operation).
If GNU mv is used with -n, it will refuse to overwrite existing files.
Neither -t nor -n are standard (macOS and FreeBSD have -n too though), but that shouldn't stop you from using them in scripts that don't need to be portable between systems.
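Here is a scratch-directory sketch of that rewritten command (GNU find and mv assumed), showing that files already under archived/ are pruned while everything else is moved:

```shell
#!/bin/sh
tmp=$(mktemp -d)
mkdir -p "$tmp/alfresco/docs" "$tmp/alfresco/archived/2020-01-07"
echo a   > "$tmp/alfresco/docs/a.pdf"
echo b   > "$tmp/alfresco/b.xml"
echo old > "$tmp/alfresco/archived/old.pdf"   # must be left alone

find "$tmp/alfresco" \
    -path "$tmp/alfresco/archived" -prune -o \
    -type f \( -iname '*.pdf' -o -iname '*.xml' \) \
    -exec mv -n -t "$tmp/alfresco/archived/2020-01-07" {} + \
    -print
```

The -print lines name the files that were moved; archived/old.pdf never appears, because the whole archived branch is pruned.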
| How can I change a file to a directory? Files lost after find+mv |
1,439,375,679,000 |
I'm facing a disk-full issue on Linux. When I checked with the df command, I found that '/' is at 100% usage. To see which folders consume the most space I ran cd / followed by du -sh, but it takes forever to finish. Ultimately I want to know which immediate subfolders of '/' are consuming the most disk space. Can anyone tell me the command for that?
|
This command will list the 15 largest directories, in order:
du -xhS | sort -h | tail -n15
We use the -x flag to skip directories on separate file systems.
The -h on the du gives the output in human readable format, sort -h can then arrange this in order.
The -S on the du command means the size of subdirectories is excluded.
You can change the number of the tail to see less or more. Super handy command.
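As a quick sanity check, this sketch builds a scratch tree where one directory is obviously the largest and shows that it sorts to the bottom (GNU du and sort assumed for the -h flags):

```shell
#!/bin/sh
tmp=$(mktemp -d)
mkdir -p "$tmp/big" "$tmp/small"
head -c 2097152 /dev/urandom > "$tmp/big/blob"   # ~2 MiB
head -c 4096    /dev/urandom > "$tmp/small/blob" # ~4 KiB

cd "$tmp"
du -xhS | sort -h | tail -n15   # the largest entry, ./big, comes last
```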
| How to get top immediate sub-folders of '/' folder consuming huge disk space in Linux |
1,439,375,679,000 |
How to add more /dev/loop* devices on Fedora 19? I do:
# uname -r
3.11.2-201.fc19.x86_64
# lsmod |grep loop
# ls /dev/loop*
/dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4 /dev/loop5 /dev/loop6 /dev/loop7 /dev/loop-control
# modprobe loop max_loop=128
# ls /dev/loop*
/dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4 /dev/loop5 /dev/loop6 /dev/loop7 /dev/loop-control
So nothing changes.
|
You have to create device nodes in /dev with mknod. The device nodes in /dev have a type (block, character and so on), a major number and a minor number. You can find out the type and the major number by running ls -l /dev/loop0:
user@foo:/sys# ls -l /dev/loop0
brw-rw---- 1 root disk 7, 0 Oct 8 08:12 /dev/loop0
This means loop device nodes should have the block type and major number of 7. The minor numbers increment by one for each device node, starting from 0, so loop0 is simply 0 and loop7 is 7.
To create loop8 you run, as root, the command mknod -m 0660 /dev/loop8 b 7 8. This will create the device node /dev/loop8 with the permissions specified with the -m switch (that's not strictly necessary as you're probably running a desktop system, but it's a good idea not to let everyone read and write your device nodes).
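Since each node differs only in its minor number, a small generator can print the required commands; review the output and then feed it to a root shell (the range 8..63 here is an arbitrary choice):

```shell
#!/bin/sh
# Print (not run) the mknod commands for extra loop devices:
# block type, major 7, minor == loop index, mode 0660 as described above.
gen_loop_nodes() {   # usage: gen_loop_nodes FIRST LAST
    n=$1
    while [ "$n" -le "$2" ]; do
        echo "mknod -m 0660 /dev/loop$n b 7 $n"
        n=$((n + 1))
    done
}

gen_loop_nodes 8 63
```

For example: gen_loop_nodes 8 63 | sudo sh, after checking the output.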
| How to add more /dev/loop* devices on Fedora 19 |
1,439,375,679,000 |
What is the simplest way to disable or temporarily suspend reboot/shutdown when an important process is running? The process takes too long to finish and cannot be paused/resumed so I like to avoid shutting down the pc while it is running. It is run from cron so unless I manually check for running processes, I wouldn't know that it is running. Thanks.
|
Run which shutdown to see where the path to the shutdown program is.
You can rename the file, although I recommend against it.
Another (safer) method. Use an alias: alias shutdown=' '
Something like this is more reversible. If you're trying to prevent shutdown from all users, add the alias globally.
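A somewhat safer variant of the same idea is a wrapper that refuses to act while the important process is alive; guarded_shutdown and the job name are placeholders, and pgrep is assumed to be available:

```shell
#!/bin/sh
# Hypothetical wrapper: refuse to power off while the named job runs.
guarded_shutdown() {
    if pgrep -x "$1" >/dev/null; then
        echo "refusing to shut down: $1 is still running" >&2
        return 1
    fi
    echo "would run: shutdown -h now"   # replace the echo with the real call
}

sleep 30 &                               # stand-in for the important job
jobpid=$!
guarded_shutdown sleep || echo "blocked, as intended"
kill "$jobpid"
```

Point your users (or your own habits) at the wrapper instead of shutdown itself; since the job here runs from cron, checking with pgrep saves you from having to look for it manually.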
| How to disable shutdown so that an important process cannot be interrupted? |
1,439,375,679,000 |
Suppose I boot a Linux machine without GUI. When it displays a tty login prompt, can I shutdown the machine with a keyboard sequence?
Of course I could type in my username and password and then sudo shutdown -h now; however, is it possible to shut it down before the login using a keyboard shortcut?
|
I've done this before with a user named "s" and no password.
IIRC you set the user's shell to /sbin/shutdown. You probably need to add it to /etc/shells.
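On a current distribution the recipe looks roughly like the commands below. They are printed rather than executed, since they need root; the user name s is the answer's, and the path to shutdown is looked up because it varies between systems:

```shell
#!/bin/sh
# Print the setup steps from the answer; run them as root after review.
sd=$(command -v shutdown || echo /sbin/shutdown)
cat <<EOF
useradd -m -s $sd s       # user "s" whose login shell is shutdown itself
passwd -d s               # no password, as described in the answer
grep -qx $sd /etc/shells || echo $sd >> /etc/shells
EOF
```

Logging in as s at the tty prompt then runs shutdown instead of a shell.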
| Shutdown from login prompt in tty |
1,439,375,679,000 |
How do I create a list of modified files programmatically using Linux command-line tools? I'm not interested in the difference within any particular file (delta, patch). I just want to have a list of new or modified files compared to the previous product release, so that I can publish a new product update.
Update: diff -qr doesn't produce very convenient output, and that output also needs to be processed. Is there a better way?
|
I've got a simple approach for this:
Use the rsync-preview mode:
rsync -aHSvn --delete old_dir/ new-dir/
The files that are shown as "to be deleted" by that command will be the "new" files. The others, which are to be transferred, have changed in some way. See the rsync man page for further details.
| linux diff tools: create list of modified files |
1,439,375,679,000 |
I'm interested in theoretical limits, perhaps with examples of systems having huge numbers of CPUs.
|
At least 2048 in practice. As a concrete example, SGI sells its UV system, which can use 256 sockets (2,048 cores) and 16TB of shared memory, all running under a single kernel. I know that there are at least a few systems that have been sold in this configuration.
According to SGI:
Altix UV runs completely unmodified Linux, including standard distributions from both Novell and Red Hat.
| How many cores can Linux kernel handle? |
1,329,949,954,000 |
I have a CentOS 5.7 VPS using bash as its shell that displays a branded greeting immediately after logging in via SSH. I've been trying to modify it, but can't seem to find where it is in the usual places. So far I've looked in the motd file and checked sshd_config for banner file settings. A banner file is not set.
Where else can I look for where the login message might be?
|
Traditional unix systems display /etc/motd after the user is successfully authenticated and before the user's shell is invoked. On modern systems, this is done by the pam_motd PAM module, which may be configured in /etc/pam.conf or /etc/pam.d/* to display a different file.
The ssh server itself may be configured to print /etc/motd if the PrintMotd option is not turned off in /etc/sshd_config. It may also print the time of the previous login if PrintLastLog is not turned off.
Another traditional message might tell you whether that You have new mail or You have mail. On systems with PAM, this is done by the pam_mail module. Some shells might print a message about available mail.
After the user's shell is launched, the user's startup files may print additional messages. For an interactive login, if the user's login shell is a Bourne-style shell, look in /etc/profile, ~/.profile, plus ~/.bash_profile and ~/.bash_login for bash. For an interactive login to zsh, look in /etc/zprofile, /etc/zlogin, /etc/zshrc, ~/.zprofile, ~/.zlogin and ~/.zshrc. For an interactive login to csh, look in /etc/csh.login and ~/.login.
If the user's login shell is bash and this is a non-interactive login, then bash executes ~/.bashrc (which is really odd, since ~/.bashrc is executed for interactive shells only if the shell is not a login shell). This can be a source for trouble; I recommend including the following snippet at the top of ~/.bashrc to bail out if the shell is not interactive:
if [[ $- != *i* ]]; then return; fi
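A quick way to find out which of these candidate files actually exist on a given box is a small loop like the sketch below; extend the list for your own shell:

```shell
#!/bin/sh
# Print the candidate message-printing files that exist on this system.
existing_of() {
    for f in "$@"; do
        [ -e "$f" ] && echo "$f"
    done
    return 0
}

existing_of /etc/motd /etc/profile "$HOME/.profile" \
            "$HOME/.bash_profile" "$HOME/.bashrc" /etc/zprofile
```

Whatever it prints is the set of files worth checking for the greeting.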
| What are the different ways that a message can be displayed to a bash shell after a user logs in? |
1,329,949,954,000 |
I need to learn about AIX, and I only have a laptop with Fedora 14/VirtualBox on it. Is there any chance that I could run an AIX guest in my VirtualBox?
My laptop has an Intel(R) Core(TM)2 Duo CPU T7100 @ 1.80GHz, and I read that AIX only runs on RISC architectures. So is there no way I can run it on my laptop?
|
The best way to learn AIX would be to obtain an account on a machine that's running it. Really, part of what sets AIX apart from other unices is that it's designed for high-end systems (with lots of processors, fancy virtualization capabilities and so on). You won't learn as much by running it in a virtual machine.
If you really want to run an x86 version of AIX on your laptop, you'll have to get an old PS/2 version that runs on an x86 CPU. I don't know if AIX will run on VirtualBox's emulated hardware (PS/2 is peculiar, it's the same problem as running OSX in a VM), but there are hints that it might (user claiming to run an AIX guest). It seems that AIX can run in Virtual PC.
Qemu can emulate PowerPC processors, and it is apparently possible to run a recent, PowerPC version of AIX: see these guides on running AIX 4.3.3, AIX 5.1, and AIX 7.2 on Qemu.
In summary, getting AIX in a VM would be costly (it's not free software), difficult, and not very useful. Try and get an account on some big iron, or get a second-hand system (if you can afford it).
| How to run a fresh version of AIX in a Virtual Machine with a Linux host? |
1,329,949,954,000 |
Is it possible to allow non-root users to install packages system-wide using apt or rpm?
The place where I work currently has an out-of-date setup on the Linux boxes, and the admins are sick of having to do all the installations for users on request, so they are thinking of giving full sudo rights to all users. This has obvious security disadvantages. So I'm wondering if there's a way to allow normal users to install software - and to upgrade and remove it?
|
You can specify the allowed commands with sudo;
you don't have to allow unlimited access, e.g.,
username ALL = NOPASSWD : /usr/bin/apt-get , /usr/bin/aptitude
This would allow username to run sudo apt-get and sudo aptitude
without any password, but would not allow any other commands.
You can also use PackageKit combined with polkit
for some finer level of control than sudo.
Allowing users to install/remove packages can be a risk.
They can pretty easily render a system nonfunctional
just by uninstalling necessary software, like libc6, dpkg, rpm etc.
Installing arbitrary software from the defined archives may allow attackers
to install outdated or exploitable software and gain root access.
The main question in my opinion is how much do you trust your employees?
Of course your admin team could also start using a configuration management system like puppet, chef or look into spacewalk to manage your system. This would allow them to configure and manage the system from a central system.
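If full apt-get access feels too broad, sudoers rules can name subcommands too. A fragment along these lines (a hypothetical %pkginstall group; untested, so verify with visudo -c and sudo -l) would allow installing and updating, but not removing:

```
# /etc/sudoers.d/pkg-install -- hypothetical sketch, review carefully
%pkginstall ALL = NOPASSWD: /usr/bin/apt-get update, \
                            /usr/bin/apt-get install *
```

Be aware that a trailing * still lets users pass extra options to apt-get, which can be abused, so treat this as a starting point rather than a hardened policy.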
| Allow non-admin users to install packages via apt or rpm? |
1,329,949,954,000 |
fdisk(8) says:
The device is usually /dev/sda, /dev/sdb or so. A device name refers to the entire disk. Old systems without libata (a library used inside the Linux kernel to support ATA host controllers and devices) make a difference between IDE and SCSI disks. In such cases the device name will be /dev/hd* (IDE) or /dev/sd* (SCSI).
The partition is a device name followed by a partition number. For example, /dev/sda1 is the first partition on the first hard disk in the system. See also Linux kernel documentation (the Documentation/devices.txt file).
Based on this, I understand that in the context of Linux, a string like /dev/hda or /dev/sda is a "device name". Otherwise, the second sentence I have emphasised above does not make sense: it would instead say, "For example, sda1 is the first partition on the first hard disk in the system."
This view is corroborated by the Linux Partition HOWTO:
By convention, IDE drives will be given device names /dev/hda to /dev/hdd.
Is there a technically correct (and, preferably, unambiguous and concise) English term for the substring hda or sda of such a device name? For example, would it be correct in this case to call sda the drive's:
"short name"; or
"unqualified device name"; or
something else?
(I am not asking for colloquialisms that are technically incorrect, even if they are in common use.)
|
There seem to be at least two valid answers
sda can correctly be called the "basename" for the drive.
sda can also correctly be called the "kernel disk name" for the drive.
How did you reach this conclusion?
By process of elimination on each of the plausible candidates:
"device name"
This cannot be the correct term. As noted in the original question, it refers to the fully qualified name (e.g. /dev/sda), not to the final fragment (e.g. sda).
Corroborating evidence exists in additional sources, such as p.68 of The Definitive Guide to SUSE Linux Enterprise Server 12:
You can also choose to use … a mount that is based on device name (such as /dev/sdb1) …
and p.94 of The Linux Bible 2008 Edition:
Click the Device tab and type the device name (such as /dev/cdrom) …
"filename" or "file name"
This cannot be the correct term either, as it is used in technical documentation as a synonym for the fully qualified name (e.g. /dev/sda), not just the final fragment (e.g. sda):
BASENAME(1):
basename - strip directory and suffix from filenames
DIRNAME(1):
dirname - strip last component from file name
"name"
This cannot be the correct term either, as it is used in technical documentation as a synonym for the fully qualified name (e.g. /dev/sda), not just the final fragment (e.g. sda):
GNU Coreutils: basename invocation:
basename removes any leading directory components from name.
GNU Coreutils: dirname invocation:
dirname prints all but the final slash-delimited component of each name.
"shortname" or "short name"
This cannot be the correct term either. I cannot find any technical documentation that refers to the last part of a device name as a "shortname" or a "short name". Those terms seem to be used, in Linux or GNU, only in the context of either VFAT mount options, or host names on networks.
"basename"
This term appears to be a valid answer, based upon p.149 of Installing Red Hat Linux 7:
Make absolutely sure that the basename of the disk you are planning to partition is not listed (this is hdb, in the case of the drive I added).
and the course notes for CST8207 (GNU/Linux Operating Systems) at Algonquin College:
Definition of basename: The basename of any pathname is its right-most name component, to the right of its right-most slash.
and p.1456 of A Practical Guide to Red Hat Linux 8:
basename: The name of a file that, in contrast to a pathname, does not mention any of the directories containing the file (and therefore does not contain any slashes [/]). For example, hosts is the basename of /etc/hosts.
Happily, GNU/Linux also has a basename command, which can be used to obtain the basename:
$ basename '/dev/sda'
sda
"kernel disk name"
This term also appears to be a valid answer, due to p.100 of Linux Kernel in a Nutshell:
/dev/<diskname>
Use the kernel disk name specified by <diskname> as the root disk.
Incidentally, "kernel disk name" also appears to be valid terminology in the context of Solaris:
For this version of the iostat command, the output shows extended statistics for only those disk devices with nonzero activity, by physical device path instead of the logical kernel disk name (that is, c0t0d0 instead of sd0).
| Drive name? What is the correct term for the "sda" part of "/dev/sda"? |
1,329,949,954,000 |
How can I get statistics about how much (in percent) the Linux kernel source code changes in one year?
|
What you are looking for can be found on the Ohloh website, which by the way indexes the Linux GIT repository.
There you will see a graph showing you how much the kernel has changed over 1 yr, 3 yrs, 5 yrs, 10 yrs or All. By default it will show you the statistics for the source code but you can also get statistics about Languages, Committers, Commits. You can then manually calculate the change %. The change in source code between 2010 and 2011 is up 11.4%.
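If you prefer to compute the number yourself from a clone, git can report lines added and removed between two points in history; the sketch below shows the arithmetic on a throwaway repository (on a real kernel clone you would bound the range with release tags or git log --since/--until instead of the scratch commits):

```shell
#!/bin/sh
tmp=$(mktemp -d)
cd "$tmp" && git init -q
git config user.email demo@example.invalid
git config user.name  demo

seq 1 100 > code.txt                      # "last year": 100 lines
git add code.txt && git commit -qm v1

{ seq 1 90; seq 200 219; } > code.txt     # "this year": -10 lines, +20 lines
git commit -qam v2

# Sum added and deleted lines, then express them against the old size.
git diff --numstat HEAD~1 HEAD | awk '{ a += $1; d += $2 }
    END { printf "changed lines: +%d -%d, i.e. %d%% of 100\n", a, d, a + d }'
```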
| How much does the Linux kernel change in one year? |
1,329,949,954,000 |
The following Perl script can convert a CSV file to an XLS file.
The problem is that I would need to install many Perl modules on the customer's Linux machine in order to run this Perl script, and I can't, because it is the customer's machine (installing modules is not allowed).
So I need to find some other alternative to this Perl script.
The customer has a Red Hat Linux machine, version 5.X,
and I want to find a bash/ksh/sh/awk script that does the same job as the Perl script,
i.e. some other way to convert a CSV file to an XLS file.
Please advise how to find such a script, or suggest another way to convert CSV to XLS on a Linux machine.
#!/usr/bin/perl -w
###############################################################################
#
# Example of how to use the WriteExcel module
#
# Simple program to convert a CSV comma-separated value file to an Excel file.
# This is more or less an non-op since Excel can read CSV files.
# The program uses Text::CSV_XS to parse the CSV.
#
# Usage: csv2xls.pl file.csv newfile.xls
#
#
# NOTE: This is only a simple conversion utility for illustrative purposes.
# For converting a CSV or Tab separated or any other type of delimited
# text file to Excel I recommend the more rigorous csv2xls program that is
# part of H.Merijn Brand's Text::CSV_XS module distro.
#
# See the examples/csv2xls link here:
# L<http://search.cpan.org/~hmbrand/Text-CSV_XS/MANIFEST>
#
# reverse('©'), March 2001, John McNamara, [email protected]
#
use strict;
use Spreadsheet::WriteExcel;
use Text::CSV_XS;
# Check for valid number of arguments
if ( ( $#ARGV < 1 ) || ( $#ARGV > 2 ) ) {
die("Usage: csv2xls csvfile.txt newfile.xls\n");
}
# Open the Comma Separated Variable file
open( CSVFILE, $ARGV[0] ) or die "$ARGV[0]: $!";
# Create a new Excel workbook
my $workbook = Spreadsheet::WriteExcel->new( $ARGV[1] );
my $worksheet = $workbook->add_worksheet();
# Create a new CSV parsing object
my $csv = Text::CSV_XS->new;
# Row and column are zero indexed
my $row = 0;
while (<CSVFILE>) {
if ( $csv->parse($_) ) {
my @Fld = $csv->fields;
my $col = 0;
foreach my $token (@Fld) {
$worksheet->write( $row, $col, $token );
$col++;
}
$row++;
} else {
my $err = $csv->error_input;
print "Text::CSV_XS parse() failed on argument: ", $err, "\n";
}
}
|
For automatically converting CSV files to XLS/XLSX ones you can also use ssconvert (which comes with Gnumeric) or unoconv (which uses LibreOffice).
SSConvert Example
$ echo -e 'surname,name,age\nCarlo,Smith,23\nJohn,Doe,46\nJane,Doe,69\nSarah,Meyer,23\n' \
> example.csv
$ unix2dos example.csv
$ ssconvert example.csv example.xlsx
$ ssconvert example.csv example.xls
Where the first ssconvert call creates an MS Excel 2007/2010 file and the second an old-school Excel 97/2000/XP one.
You can check the files via file:
$ file example.csv
example.csv: ASCII text, with CRLF line terminators
$ file example.xls
example.xls: Composite Document File V2 Document, Little Endian, Os: Windows, Version 4.10,
Code page: 1252, Create Time/Date: Tue Sep 30 20:23:18 2014
$ file example.xlsx
example.xlsx: Microsoft Excel 2007+
You can list all supported output file formats via:
$ ssconvert --list-exporters
ID | Description
[..]
Gnumeric_Excel:xlsx2 | ISO/IEC 29500:2008 & ECMA 376 2nd edition (2008);
[MS Excel™ 2010]
Gnumeric_Excel:xlsx | ECMA 376 1st edition (2006); [MS Excel™ 2007]
Gnumeric_Excel:excel_dsf | MS Excel™ 97/2000/XP & 5.0/95
Gnumeric_Excel:excel_biff7 | MS Excel™ 5.0/95
Gnumeric_Excel:excel_biff8 | MS Excel™ 97/2000/XP
[..]
Unoconv Example
$ unoconv --format xls example.csv
which creates example.xls, which is an Excel 97/2000/XP file.
Check via file:
$ file example.xls
example.xls: Composite Document File V2 Document, Little Endian, Os: Windows, Version 1.0,
Code page: -535, Revision Number: 0
You can list all supported file formats via:
$ unoconv --show
[..]
The following list of spreadsheet formats are currently available:
csv - Text CSV [.csv]
dbf - dBASE [.dbf]
[..]
ooxml - Microsoft Excel 2003 XML [.xml]
[..]
xls - Microsoft Excel 97/2000/XP [.xls]
xls5 - Microsoft Excel 5.0 [.xls]
xls95 - Microsoft Excel 95 [.xls]
[..]
| convert CSV to XLS file on linux |
1,329,949,954,000 |
Does anyone know common reasons for such a large difference in the number of files transferred when backing up my LARGE home directory using rsync on an Ubuntu 10.04 LTS setup? The machine is stable and all volumes are clean ext4 -- no errors from fsck.ext4.
Number of files: 4857743
Number of files transferred: 4203266
That's a difference of 654,477 files!!!
I want to backup my FULL home folder to an external disk so I can fully WIPE and reformat my system and then restore my home from this rsync'd backup, but I am concerned I am missing significant data files.
I was logged in as root and used rsync to backup my /home/hholtmann/* directory to a spare backup drive in /mnt/wd750/c51/home/
Here is the command line I used as root
root@c-00000051:~# pwd
/root
root@c-00000051:~# rsync -ah --progress --stats /home/hholtmann /mnt/wd750/c51/home/ -v
Captured summary output from rsync
Number of files: 4857743
Number of files transferred: 4203266
Total file size: 487.41G bytes
Total transferred file size: 487.41G bytes
Literal data: 487.41G bytes
Matched data: 0 bytes
File list size: 102.48M
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 487.75G
Total bytes received: 82.42M
Just to compare an important project sub-dir in my home after rsync:
Byte difference between a source and destination sub-dir using du
root@c-00000051:~# du -cs /home/hholtmann/proj/
18992676 /home/hholtmann/proj/
18992676 total
root@c-00000051:~# du -cs /mnt/wd750/c51/home/hholtmann/proj/
19006768 /mnt/wd750/c51/home/hholtmann/proj/
19006768 total
HOWEVER: NO FILE COUNT difference between the same source and destination sub-dirs
root@c-00000051:~# find /home/hholtmann/proj/ -type f -follow | wc -l
945937
root@c-00000051:~# find /mnt/wd750/c51/home/hholtmann/proj/ -type f -follow | wc -l
945937
why such unexpected results? A file is a file... especially in a user's home dir!
What am I missing? Or is this a sign I'm ready for management!?!
SOLUTION and ANSWERED:
The selected answer below explains the byte count difference and my incorrect expectation of the rsync summary data. I was just surprised by this byte difference, given that both volumes are ext4 with default block sizes. I just assumed every file would take the same space in terms of du numbers.
I DID find some files that were NOT rsync'd, by adding -vv to rsync for more verbose output and running it again.
What I saw was errors from rsync stating that it could NOT write any of my DROPBOX dir files to the destination due to the "extended attributes" on the files. rsync was skipping all my dropbox path files.
Ends up my /home volume was mounted with the user_xattr ext4 mount option in the /etc/fstab file:
/dev/mapper/vg1-lv_home /home ext4 nobarrier,noatime,user_xattr 0 2
# I HAD to add the ,user_xattr option to match my home volume
/dev/sda1 /mnt/wd750 ext4 nobarrier,noatime,user_xattr 0 2
After performing another full rsync for the 3rd time, I decided to let a file count run all night on my full home folder and rsync'd backup:
root@c-00000051:~# find /home/hholtmann/ -type f | wc -l
4203266
root@c-00000051:~# find /mnt/wd750/c51/home/hholtmann/ -type f | wc -l
4203266
** A PERFECT MATCH OF FILES **
CONCLUSION:
** Always ensure your backup volumes are mounted with the exact same file system mount options as the source AND turn on full logging with rsync for later grep analysis to search for any errors in long file listings! **
|
There are 2 parts to this question. First, why is there a difference between "Number of files" and "Number of files transferred". This is explained in the rsync manpage:
Number of files: is the count of all "files" (in the generic sense), which includes directories, symlinks, etc.
Number of files transferred: is the count of normal files that were updated via rsync’s delta-transfer algorithm, which does not include created dirs, symlinks, etc.
The difference here should be equal to the total amount of directories, symnlinks, other special files. Those were not "transferred" but just re-created.
Now for the second part: why is there a size difference with du? du shows the amount of disk space used by a file, not the size of the file. The same file can take up a different amount of disk space if, for example, the filesystems' block sizes differ.
If you are still worried about data integrity, an easy way to be sure is to create hashes for all your files and compare:
( cd /home/hholtmann && find . -type f -exec md5sum {} \; ) > /tmp/hholtmann.md5sum
( cd /media/wd750/c51/home/ && md5sum -c /tmp/hholtmann.md5sum )
| Reasons for rsync NOT transferring all files? |
1,329,949,954,000 |
I am working on an Ubuntu 14.04 server with 48 CPU cores. sar shows high CPU usage on one core, so I want to know which processes are running on that core. How do I get all processes running on each CPU core in Ubuntu?
|
You can do that with ps -aeF, see the C column
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 2015 ? 00:08:07 /sbin/init
Or with htop, configure it to show the PROCESSOR column,
To set CPU affinity, you can use taskset command
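If the goal is to see which processes last ran on one particular core, the PSR field of ps gives that directly. A small sketch (core number 0 is just an example):

```shell
# Show PID, the core each process last ran on (PSR), and the
# command name; keep only processes whose last core was core 0.
ps -eo pid,psr,comm --no-headers | awk '$2 == 0 {print $1, $3}'
```

Note that PSR is only the core a process was last scheduled on; without taskset pinning, the scheduler may move it at any time.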
| How to get all processes running on each CPU core in Ubuntu? |
1,329,949,954,000 |
How can I show the CPU usage side by side rather than a list?
I have this :
but I want to show it like this:
|
Go to settings (F2), under Meters, you select what is in the left column and what in the right column. Instead of CPUs (1/1) in the left column, select CPUs (1/2) for the left column and CPUs (2/2) for the right column. F10 to save the changes and it's done.
| htop, show cpu side by side |
1,329,949,954,000 |
I have a file, with a "KEYWORD" on line number n. How can I print all lines starting from line n+1 until the end?
For example, here I would like to print only the lines DDD and EEE:
AAA
BBB
CCC
KEYWORD
DDD
EEE
|
You can do this with sed:
sed '1,/^KEYWORD$/d'
This will delete (omit) all lines from the beginning of the stream until "KEYWORD", inclusive.
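Applying it to the sample from the question:

```shell
# Feed the example lines through the sed expression; only the
# lines after KEYWORD survive.
printf 'AAA\nBBB\nCCC\nKEYWORD\nDDD\nEEE\n' | sed '1,/^KEYWORD$/d'
# prints:
# DDD
# EEE
```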
| cat all lines from file, which are after "KEYWORD" [duplicate] |
1,329,949,954,000 |
You can put "panic=N" on the kernel command line to make the system reboot N seconds after a panic.
But is there a config option to specify this (other than the kernel command line option) before even the boot loader comes into play? Some kernel config option, maybe?
|
There does not seem to be such a config option. The default timeout is 0 which according to http://www.mjmwired.net/kernel/Documentation/kernel-parameters.txt#1898 is "wait forever".
The option is defined in kernel/panic.c, you can write a patch that sets the initial value to something different.
To hardcode a reboot after 3 seconds, change:
int panic_timeout;
to:
int panic_timeout = 3;
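For completeness, on an already-booted system the same value can be inspected and changed at runtime through /proc (this does not help for early boot, which is what the question is about):

```shell
# Read the current panic timeout (0 means wait forever).
cat /proc/sys/kernel/panic
# As root, set a 3-second reboot-on-panic timeout:
# echo 3 > /proc/sys/kernel/panic
```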
| How to early configure Linux kernel to reboot on panic? |
1,329,949,954,000 |
As far as I know, device driver is a part of SW that is able to communicate with a particular type of device that is attached to a computer.
In the case of a USB webcam, the responsible driver is UVC, which supports any UVC-compliant device. This enables the OS or other programs to access hardware functions without needing to know the precise details of the hardware being used.
For this reason, I installed UVC Linux device driver by running:
opkg install kernel-module-uvcvideo
The webcam was recognised by the Linux kernel as /dev/video0. However, I still wasn't able to perform video streaming with FFmpeg, as I was missing the V4L2 API. I installed V4L2 by configuring the kernel.
My queries are:
How are the UVC driver and V4L2 linked together?
What is the purpose of the V4L2 API?
If I hadn't installed UVC first, would it have been installed with V4L2?
LinuxTV states: "The uvcvideo driver implementation is adherent only to the V4L2 API." Does this mean that UVC is part of the V4L2 API?
|
The USB video class (UVC) is a specification to which USB webcams, etc., are supposed to conform. This way, they can be used on any system which implements support for UVC compliant devices.
V4L2 is the linux kernel video subsystem upon which the linux UVC implementation depends. In other words, in the kernel UVC support requires V4L2, but not the other way around.
The V4L2 API refers to a userspace programming interface, documented here.
| Understanding webcam 's Linux device drivers |
1,329,949,954,000 |
I get an error when I try to execute this command on Red Hat Linux.
$ ss -s
-bash: ss: command not found
It is supposed to be for checking socket statistics. How do I execute this?
|
As per the comment above: try with the full path:
/usr/sbin/ss
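On Red Hat systems, administrative tools live in /usr/sbin, which is often missing from a non-root user's PATH. A sketch of making the bare command resolve for the current session:

```shell
# Append the sbin directories to PATH for this shell session.
PATH="$PATH:/usr/sbin:/sbin"
export PATH
# command -v shows where ss was found (if it is installed at all).
command -v ss || echo "ss not installed (it ships with iproute/iproute2)"
```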
| 'ss' command for checking sockets not found |
1,329,949,954,000 |
So, I thought this would be a pretty simple thing to locate: a service / kernel module that, when the kernel notices userland memory is running low, triggers some action (e.g. dumping a process list to a file, pinging some network endpoint, whatever) within a process that has its own dedicated memory (so it won't fail to fork() or suffer from any of the other usual OOM issues).
I found the OOM killer, which I understand is useful, but which doesn't really do what I'd need to do.
Ideally, if I'm running out of memory, I want to know why.
I suppose I could write my own program that runs on startup and uses a fixed amount of memory, then only does stuff once it gets informed of low memory by the kernel, but that brings up its own question...
Is there even a syscall to be informed of something like that?
A way of saying to the kernel "hey, wake me up when we've only got 128 MB of memory left"?
I searched around the web and on here but I didn't find anything fitting that description. Seems like most people use polling on a time delay, but the obvious problem with that is it makes it way less likely you'll be able to know which process(es) caused the problem.
|
What you are asking for is, basically, a kernel-based callback on a low-memory condition, right? If so, I strongly believe that the kernel does not provide such a mechanism, and for a good reason: being low on memory, it should immediately run the only thing that can free some memory - the OOM killer. Running any other program could bring the machine to a halt.
Anyway, you can run a simple monitoring solution in userspace. I had the same low-memory debug/action requirement in the past, and I wrote a simple bash script which did the following:
monitor for a soft watermark: if memory usage is above this threshold, collect some statistics (processes, free/used memory, etc.) and send a warning email;
monitor for a hard watermark: if memory usage is above this threshold, collect some statistics, kill the most memory-hungry (or least important) processes, then send an alert email.
Such a script would be very lightweight, and it can poll the machine at a small interval (e.g. every 15 seconds)
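A minimal sketch of such a polling monitor; the threshold and the action taken are illustrative placeholders, not the script described above:

```shell
#!/bin/sh
# Warn when available memory drops below a soft watermark.
SOFT_KB=$((512 * 1024))   # 512 MB, an arbitrary example threshold

avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
if [ "$avail_kb" -lt "$SOFT_KB" ]; then
    echo "low memory: ${avail_kb} kB available"
    # Snapshot the ten biggest memory consumers for later analysis.
    ps -eo pid,rss,comm --sort=-rss | head -n 11
fi
```

Run it from cron or in a loop with sleep; because it is a short-lived shell script, its own memory footprint is negligible.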
| How to trigger action on low-memory condition in Linux? |
1,329,949,954,000 |
The swappiness parameter controls the tendency of the kernel to move processes out of physical memory and onto the swap disk. What is the default setting, and how can it be configured to improve overall performance?
|
The Linux kernel provides a tweakable setting that controls swappiness
$ cat /proc/sys/vm/swappiness
60
open /etc/sysctl.conf as root. Then, change or add this line to the file:
vm.swappiness = 10
To change the swappiness value temporarily (as root), try this command:
$ echo 50 > /proc/sys/vm/swappiness
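Equivalently, the sysctl tool can read and (as root) set the value without touching /proc directly:

```shell
# Read the current value:
sysctl vm.swappiness
# Set it for the running system (root required):
# sysctl vm.swappiness=10
# Reload /etc/sysctl.conf after editing it, to apply the persistent value:
# sysctl -p
```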
| How to configure swappiness in Linux Memory Management? |
1,329,949,954,000 |
I usually install Linux on a single partition since I only use it as a personal desktop.
However, every now and then I reinstall the box. And what I do is to simply move my files around with an external hard disk.
So how can I avoid having to do that when reinstalling my box (e.g. switching to another distro)?
|
Keep your /home on a separate partition. This way, it will not be overwritten when you switch to another distro or upgrade your current one. It's also a good idea to have your swap on its own partition. But that should be done automatically by your distro's installer.
The way my laptop is setup, I have the following partitions:
/
/home
/boot
swap
| What's the best way to partition your drive? |
1,329,949,954,000 |
If you tar a directory recursively, it just uses the order from the OS's readdir.
But in some cases it's nice to tar the files sorted.
What's a good way to tar a directory sorted alphabetically?
Note, for the purpose of this question, gnu-tar on a typical Linux system is fine.
|
For a GNU tar:
--sort=ORDER
Specify the directory sorting order when reading directories.
ORDER may be one of the following:
`none'
No directory sorting is performed. This is the default.
`name'
Sort the directory entries on name. The operating system may
deliver directory entries in a more or less random order, and
sorting them makes archive creation reproducible.
`inode'
Sort the directory entries on inode number. Sorting
directories on inode number may reduce the amount of disk
seek operations when creating an archive for some file
systems.
You'll probably also want to look at --preserve-order.
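A short sketch tying it together; the file and directory names are made up, and GNU tar 1.28 or later is assumed (the version that introduced --sort):

```shell
# Build a small tree, archive it in name order, and list the
# resulting member order to check it is deterministic.
mkdir -p demo
touch demo/b demo/a demo/c
tar --sort=name -cf demo.tar demo
tar -tf demo.tar   # members should appear in name order
```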
| How to tar files with a sorted order? |
1,329,949,954,000 |
I want to run mplayer with higher priority than any other processes, including the IO-processes. How can I do that?
|
To set niceness (CPU bound) use nice. To set IO niceness (IO bound) use ionice. Refer to the respective man pages for more information. You can use them together as follow:
ionice -c 2 -n 0 nice -n -20 mplayer
Note: the lowest level of niceness (lower means more favorable) you can define is determined by limits.conf. On my computer the file is located at /etc/security/limits.conf.
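To verify the result on an already-running process (PID 1 is used purely as an example target; ionice is part of util-linux):

```shell
# The NI column shows the nice value; ionice prints the IO
# scheduling class and priority of the given PID.
ps -o pid,ni,comm -p 1
ionice -p 1
```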
| Run process with higher priority |
1,329,949,954,000 |
I have an Anaconda Python virtual environment set up, and if I run my project while that virtual environment is activated, everything runs great.
But I have a cronjob configured to run it every hour. I piped the output to a log because it wasn't running correctly.
crontab -e:
10 * * * * bash /work/sql_server_etl/src/python/run_parallel_workflow.sh >> /home/etlservice/cronlog.log 2>&1
I get this error in the cronlog.log:
Traceback (most recent call last):
File "__parallel_workflow.py", line 10, in <module>
import yaml
ImportError: No module named yaml
That indicates the cronjob is somehow running the file without the virtual environment activated.
To remedy this I added a line to the /home/user/.bash_profile file:
conda activate ~/anaconda3/envs/sql_server_etl/
Now when I login the environment is activated automatically.
However, the problem persists.
I tried one more thing. I changed the cronjob, (and I also tried this in the bash file the cronjob runs) to explicitly manually activate the environment each time it runs, but to no avail:
10 * * * * conda activate ~/anaconda3/envs/sql_server_etl/ && bash /work/sql_server_etl/src/python/run_parallel_workflow.sh >> /home/etlservice/cronlog.log 2>&1
Of course, nothing I've tried has fixed it. I really know nothing about linux so maybe there's something obvious I need to change.
So, is there any way to specify that the cronjob should run under a virtual environment?
|
Posted a working solution (on Ubuntu 18.04) with detailed reasoning on SO.
The short form is:
1. Copy snippet appended by Anaconda in ~/.bashrc (at the end of the file) to a separate file ~/.bashrc_conda
As of Anaconda 2020.02 installation, the snippet reads as follows:
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/home/USERNAME/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/home/USERNAME/anaconda3/etc/profile.d/conda.sh" ]; then
. "/home/USERNAME/anaconda3/etc/profile.d/conda.sh"
else
export PATH="/home/USERNAME/anaconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda initialize <<<
Make sure that:
The path /home/USERNAME/anaconda3/ is correct.
The user running the cronjob has read permissions for ~/.bashrc_conda (and no other user can write to this file).
2. In crontab -e add lines to run cronjobs on bash and to source ~/.bashrc_conda
Run crontab -e and insert the following before the cronjob:
SHELL=/bin/bash
BASH_ENV=~/.bashrc_conda
3. In crontab -e include at beginning of the cronjob conda activate my_env; as in example
Example of entry for a script that would execute at noon 12:30 each day on the Python interpreter within the conda environment:
30 12 * * * conda activate my_env; python /path/to/script.py; conda deactivate
And that's it.
You may want to check from time to time that the snippet in ~/.bashrc_conda is up to date in case conda updates its snippet in ~/.bashrc.
| cron job to run under conda virtual environment |
1,329,949,954,000 |
How can I describe or explain "buffers" in the output of free?
$ free -h
total used free shared buff/cache available
Mem: 501M 146M 19M 9.7M 335M 331M
Swap: 1.0G 85M 938M
$ free -w -h
total used free shared buffers cache available
Mem: 501M 146M 19M 9.7M 155M 180M 331M
Swap: 1.0G 85M 938M
I don't have any (known) problem with this system. I am just surprised and curious to see that "buffers" is almost as high as "cache" (155M vs. 180M). I thought "cache" represented the page cache of file contents, and tended to be the most significant part of "cache/buffers". I'm not sure what "buffers" are though.
For example, I compared this to my laptop which has more RAM. On my laptop, the "buffers" figure is an order of magnitude smaller than "cache" (200M vs. 4G). If I understood what "buffers" were then I could start to look at why the buffers grew to such a larger proportion on the smaller system.
From man proc (I ignore the hilariously outdated definition of "large"):
Buffers %lu
Relatively temporary storage for raw disk blocks that shouldn't get tremendously large (20MB or so).
Cached %lu
In-memory cache for files read from the disk (the page cache). Doesn't include SwapCached.
$ free -V
free from procps-ng 3.3.12
$ uname -r # the Linux kernel version
4.9.0-6-marvell
$ systemd-detect-virt # this is not inside a virtual machine
none
$ cat /proc/meminfo
MemTotal: 513976 kB
MemFree: 20100 kB
MemAvailable: 339304 kB
Buffers: 159220 kB
Cached: 155536 kB
SwapCached: 2420 kB
Active: 215044 kB
Inactive: 216760 kB
Active(anon): 56556 kB
Inactive(anon): 73280 kB
Active(file): 158488 kB
Inactive(file): 143480 kB
Unevictable: 10760 kB
Mlocked: 10760 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 513976 kB
LowFree: 20100 kB
SwapTotal: 1048572 kB
SwapFree: 960532 kB
Dirty: 240 kB
Writeback: 0 kB
AnonPages: 126912 kB
Mapped: 40312 kB
Shmem: 9916 kB
Slab: 37580 kB
SReclaimable: 29036 kB
SUnreclaim: 8544 kB
KernelStack: 1472 kB
PageTables: 3108 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 1305560 kB
Committed_AS: 1155244 kB
VmallocTotal: 507904 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
$ sudo slabtop --once
Active / Total Objects (% used) : 186139 / 212611 (87.5%)
Active / Total Slabs (% used) : 9115 / 9115 (100.0%)
Active / Total Caches (% used) : 66 / 92 (71.7%)
Active / Total Size (% used) : 31838.34K / 35031.49K (90.9%)
Minimum / Average / Maximum Object : 0.02K / 0.16K / 4096.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
59968 57222 0% 0.06K 937 64 3748K buffer_head
29010 21923 0% 0.13K 967 30 3868K dentry
24306 23842 0% 0.58K 4051 6 16204K ext4_inode_cache
22072 20576 0% 0.03K 178 124 712K kmalloc-32
10290 9756 0% 0.09K 245 42 980K kmalloc-96
9152 4582 0% 0.06K 143 64 572K kmalloc-node
9027 8914 0% 0.08K 177 51 708K kernfs_node_cache
7007 3830 0% 0.30K 539 13 2156K radix_tree_node
5952 4466 0% 0.03K 48 124 192K jbd2_revoke_record_s
5889 5870 0% 0.30K 453 13 1812K inode_cache
5705 4479 0% 0.02K 35 163 140K file_lock_ctx
3844 3464 0% 0.03K 31 124 124K anon_vma
3280 3032 0% 0.25K 205 16 820K kmalloc-256
2730 2720 0% 0.10K 70 39 280K btrfs_trans_handle
2025 1749 0% 0.16K 81 25 324K filp
1952 1844 0% 0.12K 61 32 244K kmalloc-128
1826 532 0% 0.05K 22 83 88K trace_event_file
1392 1384 0% 0.33K 116 12 464K proc_inode_cache
1067 1050 0% 0.34K 97 11 388K shmem_inode_cache
987 768 0% 0.19K 47 21 188K kmalloc-192
848 757 0% 0.50K 106 8 424K kmalloc-512
450 448 0% 0.38K 45 10 180K ubifs_inode_slab
297 200 0% 0.04K 3 99 12K eventpoll_pwq
288 288 100% 1.00K 72 4 288K kmalloc-1024
288 288 100% 0.22K 16 18 64K mnt_cache
287 283 0% 1.05K 41 7 328K idr_layer_cache
240 8 0% 0.02K 1 240 4K fscrypt_info
|
What is the difference between "buffers" and the other type of cache?
Why is this distinction so prominent? Why do some people say "buffer cache" when they talk about cached file content?
What are Buffers used for?
Why might we expect Buffers in particular to be larger or smaller?
1. What is the difference between "buffers" and the other type of cache?
Buffers shows the amount of page cache used for block devices. "Block devices" are the most common type of data storage device.
The kernel has to deliberately subtract this amount from the rest of the page cache when it reports Cached. See meminfo_proc_show():
cached = global_node_page_state(NR_FILE_PAGES) -
total_swapcache_pages() - i.bufferram;
...
show_val_kb(m, "MemTotal: ", i.totalram);
show_val_kb(m, "MemFree: ", i.freeram);
show_val_kb(m, "MemAvailable: ", available);
show_val_kb(m, "Buffers: ", i.bufferram);
show_val_kb(m, "Cached: ", cached);
2. Why is this distinction made so prominent? Why do some people say "buffer cache" when they talk about cached file content?
The page cache works in units of the MMU page size, typically a minimum of 4096 bytes. This is essential for mmap(), i.e. memory-mapped file access.[1][2] It is designed to share pages of loaded program / library code between separate processes, and allow loading individual pages on demand. (Also for unloading pages when something else needs the space, and they haven't been used recently).
[1] Memory-mapped I/O - The GNU C Library manual.
[2] mmap - Wikipedia.
Early UNIX had a "buffer cache" of disk blocks, and did not have mmap(). Apparently when mmap() was first added, they added the page cache as a new layer on top. This is as messy as it sounds. Eventually, UNIX-based OS's got rid of the separate buffer cache. So now all file cache is in units of pages. Pages are looked up by (file, offset), not by location on disk. This was called "unified buffer cache", perhaps because people were more familiar with "buffer cache".[3]
[3] UBC: An Efficient Unified I/O and Memory Caching Subsystem for NetBSD
("One interesting twist that Linux adds is that the device block numbers where a page is stored on disk are cached with the page in the form of a list of buffer_head structures. When a modified page is to be written back to disk, the I/O requests can be sent to the device driver right away, without needing to read any indirect blocks to determine where the page's data should be written."[3])
In Linux 2.2 there was a separate "buffer cache" used for writes, but not for reads. "The page cache used the buffer cache to write back its data, needing an extra copy of the data, and doubling memory requirements for some write loads".[4] Let's not worry too much about the details, but this history would be one reason why Linux reports Buffers usage separately.
[4] Page replacement in Linux 2.4 memory management, Rik van Riel.
By contrast, in Linux 2.4 and above, the extra copy does not exist. "The system does disk IO directly to and from the page cache page."[4] Linux 2.4 was released in 2001.
3. What are Buffers used for?
Block devices are treated as files, and so have page cache. This is used "for filesystem metadata and the caching of raw block devices".[4] But in current versions of Linux, filesystems do not copy file contents through it, so there is no "double caching".
I think of the Buffers part of the page cache as being the Linux buffer cache. Some sources might disagree with this terminology.
How much buffer cache the filesystem uses, if any, depends on the type of filesystem. The system in the question uses ext4. ext3/ext4 use the Linux buffer cache for the journal, for directory contents, and some other metadata.
Certain file systems, including ext3, ext4, and ocfs2, use the jbd or
jbd2 layer to handle their physical block journalling, and this layer
fundamentally uses the buffer cache.
-- Email article by Ted Tso, 2013
Prior to Linux kernel version 2.4, Linux had separate page and buffer caches. Since 2.4, the page and buffer cache are unified and Buffers is raw disk blocks not represented in the page cache—i.e., not file data.
...
The buffer cache remains, however, as the kernel still needs to perform block I/O in terms of blocks, not pages. As most blocks represent file data, most of the buffer cache is represented by the page cache. But a small amount of block data isn't file backed—metadata and raw block I/O for example—and thus is solely represented by the buffer cache.
-- A pair of Quora answers by Robert Love, last updated 2013.
Both writers are Linux developers who have worked with Linux kernel memory management. The first source is more specific about technical details. The second source is a more general summary, which might be contradicted and outdated in some specifics.
It is true that filesystems may perform partial-page metadata writes, even though the cache is indexed in pages. Even user processes can perform partial-page writes when they use write() (as opposed to mmap()), at least directly to a block device. This only applies to writes, not reads. When you read through the page cache, the page cache always reads full pages.
Linus liked to rant that the buffer cache is not required in order to do block-sized writes, and that filesystems can do partial-page metadata writes even with page cache attached to their own files instead of the block device. I am sure he is right to say that ext2 does this. ext3/ext4 with its journalling system does not. It is less clear what the issues were that led to this design. The people he was ranting at got tired of explaining.
ext4_readdir() has not been changed to satisfy Linus' rant. I don't see his desired approach used in readdir() of other filesystems either. I think XFS uses the buffer cache for directories as well. bcachefs does not use the page cache for readdir() at all; it uses its own cache for btrees. I'm not sure about btrfs.
4. Why might we expect Buffers in particular to be larger or smaller?
In this case it turns out the ext4 journal size for my filesystem is 128M. So this explains why 1) my buffer cache can stabilize at slightly over 128M; 2) buffer cache does not scale proportionally with the larger amount of RAM on my laptop.
For some other possible causes, see What is the buffers column in the output from free? Note that "buffers" reported by free is actually a combination of Buffers and reclaimable kernel slab memory.
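For reference, the underlying /proc/meminfo fields can be inspected directly on any Linux system:

```shell
# Buffers, Cached and SReclaimable are the raw kernel counters
# that free(1) combines into its buffers/cache columns.
grep -E '^(Buffers|Cached|SReclaimable):' /proc/meminfo
```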
To verify that journal writes use the buffer cache, I simulated a filesystem in nice fast RAM (tmpfs), and compared the maximum buffer usage for different journal sizes.
# dd if=/dev/zero of=/tmp/t bs=1M count=1000
...
# mkfs.ext4 /tmp/t -J size=256
...
# LANG=C dumpe2fs /tmp/t | grep '^Journal size'
dumpe2fs 1.43.5 (04-Aug-2017)
Journal size: 256M
# mount /tmp/t /mnt
# cd /mnt
# free -w -m
total used free shared buffers cache available
Mem: 7855 2521 4321 285 66 947 5105
Swap: 7995 0 7995
# for i in $(seq 40000); do dd if=/dev/zero of=t bs=1k count=1 conv=sync status=none; sync t; sync -f t; done
# free -w -m
total used free shared buffers cache available
Mem: 7855 2523 3872 551 237 1223 4835
Swap: 7995 0 7995
# dd if=/dev/zero of=/tmp/t bs=1M count=1000
...
# mkfs.ext4 /tmp/t -J size=16
...
# LANG=C dumpe2fs /tmp/t | grep '^Journal size'
dumpe2fs 1.43.5 (04-Aug-2017)
Journal size: 16M
# mount /tmp/t /mnt
# cd /mnt
# free -w -m
total used free shared buffers cache available
Mem: 7855 2507 4337 285 66 943 5118
Swap: 7995 0 7995
# for i in $(seq 40000); do dd if=/dev/zero of=t bs=1k count=1 conv=sync status=none; sync t; sync -f t; done
# free -w -m
total used free shared buffers cache available
Mem: 7855 2509 4290 315 77 977 5086
Swap: 7995 0 7995
History of this answer: How I came to look at the journal
I had found Ted Tso's email first, and was intrigued that it emphasized write caching. I would find it surprising if "dirty", unwritten data was able to reach 30% of RAM on my system. sudo atop shows that over a 10 second interval, the system in question consistently writes only 1MB. The filesystem concerned would be able to keep up with over 100 times this rate. (It's on a USB2 hard disk drive, max throughput ~20MB/s).
Using blktrace (btrace -w 10 /dev/sda) confirms that the IOs which are being cached must be writes, because there is almost no data being read. Also that mysqld is the only userspace process doing IO.
I stopped the service responsible for the writes (icinga2 writing to mysql) and re-checked. I saw "buffers" drop to under 20M - I have no explanation for that - and stay there. Restarting the writer again shows "buffers" rising by ~0.1M for each 10 second interval. I observed it maintain this rate consistently, climbing back to 70M and above.
Running echo 3 | sudo tee /proc/sys/vm/drop_caches was sufficient to lower "buffers" again, to 4.5M. This proves that my accumulation of buffers is a "clean" cache, which Linux can drop immediately when required. This system is not accumulating unwritten data. (drop_caches does not perform any writeback and hence cannot drop dirty pages. If you wanted to run a test which cleaned the cache first, you would use the sync command).
The entire mysql directory is only 150M. The accumulating buffers must represent metadata blocks from mysql writes, but it surprised me to think there would be so many metadata blocks for this data.
| 30% of RAM is "buffers". What is it? |
1,329,949,954,000 |
I know that mounting the same disk with an ext4 filesystem from two different servers (it's an iSCSI volume) will likely corrupt data on the disk. My question is: will it make any difference if one of the servers mounts the disk read-only while the other mounts it read-write?
I know OCFS2 or the like could be used for this, and that I could export the disk with NFS to make it accessible to the other server, but I would like to know if the setup I propose will work.
|
No. It won't give consistent results on the read-only client, because of caching. It's definitely not designed for it. You could expect to see IO errors returned to applications. There's probably still some number of oversights in the code, that could cause a kernel crash or corrupt memory used by any process.
But most importantly, ext4 replays the journal even on readonly mounts. So a readonly mount will still write to the underlying block device. It would be unsafe even if both the mounts were readonly :).
| Can the same ext4 disk be mounted from two hosts, one readonly? |
1,329,949,954,000 |
I want an isolated (guest) Linux environment on my computer that I can mess up without worrying about the host. E.g. install a lot of stuff from source without package management, pollute environment variables, etc., then spawn another guest environment when the old guest gets too cluttered.
I've had some fun using Virtualbox with Tinycore linux, but at least the way I use it, I don't think the Virtualbox overhead is entirely necessary. For one thing, if possible, I would like to use the same kernel as my host.
Also, as I've run through the Linux From Scratch tutorial, I learned a little about chroot, which seemed like it might be what I am looking for. To be honest though, there was a lot I didn't really understand in LFS, chroot being one of them. I would try playing around with chroot if I wasn't so afraid it might mess up my current environment.
So I'm looking for a virtualization program that uses the fact that I'm on a linuxbox (I'm using PinguyOS btw), to speed up virtualization, or a reference on how to use chroot as an isolated playground.
|
Chroot is the lightest weight environment that could suit you. It allows you to install another distribution (or another installation of the same distribution), with the same users, with the same network configuration, etc. Chroot only provides some crude isolation at the filesystem level. Browsing this site for chroot might help, if you're still not sure what chroot can and can't do.
If you're looking for the next step up, LXC is now included in the kernel mainline. An LXC guest (called a container) has its own filesystem, process and network space. Root in the container is also root on the host; LXC protects against many accidental actions by a guest root but not against a malicious guest root (this is a planned feature, watch this space).
Other technologies that are somewhat similar to LXC are VServer and OpenVZ. An important feature that OpenVZ provides but not VServer or LXC is checkpoints: you can take a snapshot of a running machine and restore it later. Yet another candidate is User-mode Linux, which runs a complete Linux system inside a process that runs as an ordinary user in the host.
For the purposes of experimenting with another OS installation, chroot is fine. If you want to run services in the experimental installation or play with networking, go for LXC. If you want snapshots, use OpenVZ. If you want a completely separate kernel but little memory overhead, use User-mode Linux. If you want snapshots and a separate kernel, use VirtualBox.
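A hedged sketch of the chroot route on a Debian-family host; debootstrap must be installed, and the target path and suite are illustrative, not prescribed:

```shell
# Install a minimal Debian tree into a directory, then enter it.
# Everything done inside affects only /srv/playground.
sudo debootstrap stable /srv/playground http://deb.debian.org/debian
sudo chroot /srv/playground /bin/bash
# When the environment is too cluttered, just delete the tree:
# sudo rm -rf /srv/playground
```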
| Lightweight isolated linux environment |
1,329,949,954,000 |
There is some development I need to do on some remote box. Fortunately, I have shell access, but I need to go through a gateway that has AllowTcpForwarding set to false.
I took a peek at the docs and it says:
AllowTcpForwarding Specifies whether TCP forwarding is permitted. The
default is ''yes''. Note that disabling TCP forwarding does not
improve security unless users are also denied shell access, as they
can always install their own forwarders.
How would I go about installing (or building) my own forwarder? My goal here is to setup a remote interpreter using Pycharm via SSH and binding it to some local port, that data fed through ssh, that through the gateway, and then to the development box where the code is actually run. I imagine I could somehow utilize nc or some other unix utility that'll help get the job done.
I know I can ssh to my remote box by doing:
ssh -t user1@gateway ssh user2@devbox
But obviously this option isn't available in pycharm. I'll have to be able to open some local port such that
ssh -p 12345 localhost
(or variant)
will connect me to user2@devbox. This will allow me to configure the remote interpreter to use port 12345 on localhost to connect to the remote box.
|
As long as one can execute socat locally and on the gateway (or even just bash and cat on the gateway, see the last example!) and is allowed to not use a pty, to be 8-bit clean, it's possible to establish a tunnel through ssh. Here are 4 examples, each improving upon the previous:
Basic example, working once
(having it fork would require one ssh connection per tunnel, not good). The : has to be escaped for socat to accept the exec command:
term1:
$ socat tcp-listen:12345,reuseaddr exec:'ssh user1@gateway exec socat - tcp\:devbox\:22',nofork
term2:
$ ssh -p 12345 user2@localhost
term1:
user1@gateway's password:
term2:
user2@localhost's password:
Reversing first and second addresses makes the socket immediately available
socat has to stay in charge, so no nofork:
term1:
$ socat exec:'ssh user1@gateway exec socat - tcp\:devbox\:22' tcp-listen:12345,reuseaddr
user1@gateway's password:
term2:
$ ssh -p 12345 user2@localhost
user2@localhost's password:
Using a ControlMaster ssh
Allows forking while using only a single SSH connection to the gateway, thus giving a behaviour similar to the usual port forwarding:
term1:
$ ssh -N -o ControlMaster=yes -o ControlPath=~/mysshcontrolsocket user1@gateway
user1@gateway's password:
term2:
$ socat tcp-listen:12345,reuseaddr,fork exec:'ssh -o ControlPath=~/mysshcontrolsocket user1@gateway exec socat - tcp\:devbox\:22'
term3:
$ ssh -p 12345 user2@localhost
user2@localhost's password:
Having only bash and cat available on gateway
By using bash's built-in tcp redirection, and two half-duplex cat commands (for a full-duplex result) one doesn't even need a remote socat or netcat. Handling of multiple layers of nested and escaped quotes was a bit awkward and can perhaps be done better, or simplified by the use of a remote bash script. Care has to be taken to have the forked cat for output only:
term1 (no change):
$ ssh -N -o ControlMaster=yes -o ControlPath=~/mysshcontrolsocket user1@gateway
user1@gateway's password:
term2:
$ socat tcp-listen:12345,reuseaddr,fork 'exec:ssh -T -o ControlPath=~/mysshcontrolsocket user1@gateway '\''exec bash -c \'\''"exec 2>/dev/null 8<>/dev/tcp/devbox/22; cat <&8 & cat >&8"\'\'\'
term3:
$ ssh -p 12345 user2@localhost
user2@localhost's password:
| How to ssh forwarding with AllowTcpForwarding set to no? |
1,329,949,954,000 |
If I know that a partition is for example /dev/sda1 how can I get the disk name (/dev/sda in this case) that contains the partition ?
The output should be only a path to disk (like /dev/sda).
It shouldn't require string manipulation, because I need it to work for different disk types.
|
You can observe in /sys the block device for a given partition name. For example, /dev/sda1:
$ ls -l /sys/class/block/sda1
lrwxrwxrwx 1 root root /sys/class/block/sda1 -> \
../../devices/pci0000:00/.../ata1/host0/target0:0:0/0:0:0:0/block/sda/sda1
A script to take arg /dev/sda1 and print /dev/sda is:
part=$1
part=${part#/dev/}
disk=$(readlink /sys/class/block/$part)
disk=${disk%/*}
disk=/dev/${disk##*/}
echo $disk
I don't have lvm etc to try out, but there is probably some similar path.
There is also lsblk:
$ lsblk -as /dev/sde1
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sde1 8:65 1 7.4G 0 part
`-sde 8:64 1 7.4G 0 disk
and as @don_crissti said you can get the parent directly by using -o pkname to get just the name column, -n to remove the header, and -d to not include holder devices or slaves:
lsblk -ndo pkname /dev/sda1
| How to get disk name that contains a specific partition |
1,329,949,954,000 |
I am trying to create a script which will start many background commands.
For each background command I need to get the return code.
I have been trying the following script :
#!/bin/bash
set -x
pid=()
return=()
for i in 1 2
do
echo start $i
ssh mysql "/root/test$i.sh" &
pid[$i]=$!
done
for i in ${#pid[@]}
do
echo ${pid[$i]}
wait ${pid[$i]}
return[$i]=$?
if [ ${return[$i]} -ne 0 ]
then
echo mail error
fi
done
echo ${return[1]}
echo ${return[2]}
My issue is during the wait loop, if the second pid finish before the first one, I'll not be able to get the return code.
I know that I can run wait pid1 pid2, but with this command I can't get the return code of all commands.
Any ideas?
|
You can do this by using a temporary directory.
# Create a temporary directory to store the statuses
dir=$(mktemp -d)
# Execute the backgrouded code. Create a file that contains the exit status.
# The filename is the PID of this group's subshell.
for i in 1 2; do
{ ssh mysql "/root/test$i.sh" ; echo "$?" > "$dir/$BASHPID" ; } &
done
# Wait for all jobs to complete
wait
# Get return information for each pid
for file in "$dir"/*; do
printf 'PID %d returned %d\n' "${file##*/}" "$(<"$file")"
done
# Remove the temporary directory
rm -r "$dir"
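An alternative sketch that avoids the temporary directory: bash (and POSIX shells) remember the exit status of a background job after it terminates, so simply waiting on each stored PID in start order works even when a later job finishes first — exactly the scenario the question worries about. The sleeps and exit codes below are invented stand-ins for the ssh commands.

```shell
# Collect per-job exit codes by waiting on each stored PID in order.
# The shell keeps the status of already-reaped jobs, so order doesn't matter.
{ sleep 2; exit 11; } &   # stand-in for: ssh mysql /root/test1.sh
pid1=$!
{ sleep 1; exit 22; } &   # deliberately finishes first
pid2=$!

wait "$pid1"; rc1=$?
wait "$pid2"; rc2=$?      # status is still available after the job exited

echo "rc1=$rc1 rc2=$rc2"
# prints: rc1=11 rc2=22
```

The trade-off versus the temporary-directory approach: this keeps everything in shell variables, but you must save each PID yourself and cannot use a bare wait for all jobs at once.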
| Bash script wait for processes and get return code |
1,329,949,954,000 |
2017 WARNING! The accepted answer appears to work, but with recent kernels I discovered that the system would hang as soon as it started swapping. If you attempt using an encrypted swap file, make sure that it actually swaps properly. It took me a long time to figure out why my system kept locking up for no apparent reason. I've gone back to using an encrypted swap partition, which does work correctly.
How do I set up an encrypted swap file (not partition) in Linux? Is it even possible? All the guides I've found talk about encrypted swap partitions, but I don't have a swap partition, and I'd rather not have to repartition my disk.
I don't need suspend-to-disk support, so I'd like to use a random key on each boot.
I'm already using a TrueCrypt file-hosted volume for my data, but I don't want to put my swap in that volume. I'm not set on using TrueCrypt for the swap file if there's a better solution.
I'm using Arch Linux with the default kernel, if that matters.
|
Indeed, the page describes setting up a partition, but it's similar for a swapfile:
dd if=/dev/urandom of=swapfile.crypt bs=1M count=64
loop=$(losetup -f)
losetup ${loop} swapfile.crypt
cryptsetup open --type plain --key-file /dev/urandom ${loop} swapfile
mkswap /dev/mapper/swapfile
swapon /dev/mapper/swapfile
The result:
# swapon -s
Filename Type Size Used Priority
/dev/mapper/swap0 partition 4000176 0 -1
/dev/mapper/swap1 partition 2000084 0 -2
/dev/mapper/swapfile partition 65528 0 -3
swap0 and swap1 are real partitions.
| How do I set up an encrypted swap file in Linux? |
1,329,949,954,000 |
Possible Duplicate:
Can I create a user-specific hosts file to complement /etc/hosts?
In short: I would like to know if it is possible to get a ~/hosts file that could override the /etc/hosts file, since I don't have any privileged access.
A machine I am working on does not seem to be properly configured with a correct DNS server. When I try to ping the usual machine names I work with, it fails. But when I try to ping them by IP address, it works as expected.
I want to avoid changing any scripts and other muscle-memorized handcrafted command lines ™ that I made, because of a single improperly configured machine. I contacted the sysadmins, but they have other fish to fry.
How can I implement that?
|
Besides the LD_PRELOAD tricks, a simple alternative if you're not using nscd is to copy libnss_files.so to some location of your own, like:
mkdir -p -- ~/lib &&
cp /lib/x86_64-linux-gnu/libnss_files.so.2 ~/lib
Binary-edit the copy to replace /etc/hosts in there to something the same length like /tmp/hosts.
perl -pi -e 's:/etc/hosts:/tmp/hosts:g' ~/lib/libnss_files.so.2
Edit /tmp/hosts to add the entry you want. And use
export LD_LIBRARY_PATH=~/lib
for nss_files to look in /tmp/hosts instead of /etc/hosts.
Instead of /tmp/hosts, you could also make it /dev/fd//3 (the doubled slash is harmless and keeps the string the same length as /etc/hosts, which the binary edit requires), and do
exec 3< ~/hosts
For instance which would allow different commands to use different hosts files.
$ cat hosts
1.2.3.4 asdasd
$ LD_LIBRARY_PATH=~/lib getent hosts asdasd 3< ~/hosts
1.2.3.4 asdasd
If nscd is installed and running, you can bypass it by doing the same trick, but this time for libc.so.6 and replace the path to the nscd socket (something like /var/run/nscd/socket) with some nonexistent path.
| How can I override the /etc/hosts file at user level? [duplicate] |
1,329,949,954,000 |
May you have ever used filesystem defragmentation tools (like Norton SpeedDisk or Piriform Defraggler) on Windows, you have probably seen such a diagram:
It displays a filesystem sectors map, painting (as for this particular example) sectors (sets of sectors actually, to fit the whole partition in the screen) occupied by non-fragmented (contiguous) files in blue, the opposite in red and free sectors in white (and some more colours for some more particular cases which can happen to be of interest). You can click on a "sector" and see what particular files "live" there.
Is there such a visualization tool for Linux?
|
I had the same question, but there was no appropriate software. I tried to build davl, but did not succeed in that. So I ended up writing my own tool. You can find it here: https://github.com/i-rinat/fragview
Use Ctrl + mouse scroll to change map scale.
| Is there a tool to visualize a filesystem allocation map on Linux? |
1,329,949,954,000 |
In the following example:
$ ip a | grep scope.global
inet 147.202.85.48/24 brd 147.202.85.255 scope global dynamic enp0s3
What does the 'brd' mean?
|
brd is short for broadcast.
147.202.85.255 is the broadcast address for whatever interface that line belongs to.
| meaning of "brd" in output of IP commands |
1,329,949,954,000 |
I know how to create and use a swap partition but can I also use a file instead?
How can I create a swap file on a Linux system?
|
Let's start with the basics: you should inspect what vm.swappiness is set to; if it is set to a more suitable value than the default of 60, you should have no problem. This command can be run as a normal user:
sysctl vm.swappiness
For instance, I have 32GB RAM server with 32GB swap file with vm.swappiness = 1. Quoting the Wikipedia:
vm.swappiness = 1: Kernel version 3.5 and over, as well as Red Hat kernel version 2.6.32-303 and over: Minimum amount of swapping without disabling it entirely.
In this example, we create a swap file:
8GB in size,
Located in / (the root directory).
Change these two things accordingly to your needs.
Open a terminal and become root (su); if you have sudo enabled, you may also do, for example, sudo -i (see man sudo for all options):
sudo -i
Allocate space for the swap file:
dd if=/dev/zero of=/swapfile bs=1G count=8
Optionally, if your system supports it, you may add status=progress to that command line.
Note, that the size specified here in G is in GiB (multiples of 1024).
Change permissions of the swap file, so that only root can access it:
chmod 600 /swapfile
Make this file a swap file:
mkswap /swapfile
Enable the swap file:
swapon /swapfile
Verify, whether the swap file is in use:
cat /proc/swaps
Open a text editor you are skilled in with this file, e.g. nano if unsure:
nano /etc/fstab
To make this swap file available after reboot, add the following line:
/swapfile none swap sw 0 0
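As an aside, the dd preallocation in step 2 can often be replaced with fallocate, which completes almost instantly since it allocates blocks without writing zeros. This is a hedged sketch using a small demo file in the current directory rather than /swapfile, so it can run unprivileged; for real use, run as root with -l 8G and the /swapfile path. Note that some filesystem/kernel combinations refuse swapon on files with unwritten extents, in which case dd remains the safe route.

```shell
# fallocate preallocates the requested size near-instantly.
# Demo path and 1M size are placeholders for /swapfile and 8G.
fallocate -l 1M ./demo-swapfile
ls -l ./demo-swapfile
```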
| How can I create a swap file? |
1,329,949,954,000 |
I know how to create a swap file and use it as swap. But I have to configure the size of the file beforehand and the space is used on the disk, if the swap is used or not.
How do I create a swap that has an initial size of 0 and grows on demand?
|
Swapspace is old and unmaintained and could lead, one day, to problems in modern systems. I think that the best solution for dynamic swap is to:
sudo apt install dphys-swapfile
sudo update-rc.d dphys-swapfile enable
then setting CONF_SWAPFACTOR=2 in /etc/dphys-swapfile and finally
sudo service dphys-swapfile start
| Dynamically growing swap file on Debian |
1,329,949,954,000 |
Can anybody explain how ping 0 works and why it translates to 127.0.0.1?
[champu@testsrv ]$ ping 0
PING 0 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.013 ms
--- 0 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.013/0.026/0.039/0.013 ms
|
Special (and, AFAICT, slightly under-documented) behaviour in iputils ping: you ping yourself.
If you ping 0 this is what happens (heavily edited and commented for clarity):
if (inet_aton(target, &whereto.sin_addr) == 1) {
// convert string to binary in_addr
}
// inet_aton returns 1 (success) and leaves the `in_addr` contents all zero.
if (source.sin_addr.s_addr == 0) {
// determine IP address of src interface, via UDP connect(), getsockname()
}
// special case for 0 dst address
if (whereto.sin_addr.s_addr == 0)
whereto.sin_addr.s_addr = source.sin_addr.s_addr;
inet_aton() isn't POSIX, but I'm assuming it copies the behaviour of inet_addr() when less than 4 dotted-decimals are being converted. In the case of a dot-less single number, it's simply stored into the binary network address, and 0x00000000 is equivalent to the dotted form 0.0.0.0.
You can see this if you strace (as root):
# strace -e trace=network ping 0
socket(PF_INET, SOCK_RAW, IPPROTO_ICMP) = 3
socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 4
connect(4, {sa_family=AF_INET, sin_port=htons(1025),
sin_addr=inet_addr("0.0.0.0")}, 16) = 0
getsockname(4, {sa_family=AF_INET, sin_port=htons(58056),
sin_addr=inet_addr("127.0.0.1")}, [16]) = 0
...
PING 0 (127.0.0.1) 56(84) bytes of data.
You can also see the change if you bind to a specific interface instead:
# strace -e trace=network ping -I eth0 0
socket(PF_INET, SOCK_RAW, IPPROTO_ICMP) = 3
socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 4
setsockopt(4, SOL_SOCKET, SO_BINDTODEVICE, "eth0\0", 5) = 0
connect(4, {sa_family=AF_INET, sin_port=htons(1025),
sin_addr=inet_addr("0.0.0.0")}, 16) = 0
getsockname(4, {sa_family=AF_INET, sin_port=htons(58408),
sin_addr=inet_addr("192.168.0.123")}, [16]) = 0
setsockopt(3, SOL_RAW, ICMP_FILTER, ...)
[...]
PING 0 (192.168.0.123) from 192.168.0.123 eth0: 56(84) bytes of data.
While 0 may be treated as 0.0.0.0 and a broadcast address in many cases that's clearly not what ping is doing. It special-cases this to mean "the primary IP of the interface in question" (with some extra handling for multicast/broadcast cases).
RFC 1122 §3.2.1.3 explains the behaviour: both 0.0.0.0 and the IP address with the network masked off (the "host number", e.g. 0.0.0.1 in the case of loopback) mean "this host on this network".
(a) { 0, 0 }
This host on this network. MUST NOT be sent, except as
a source address as part of an initialization procedure
by which the host learns its own IP address.
See also Section 3.3.6 for a non-standard use of {0,0}.
(b) { 0, <Host-number> }
Specified host on this network. It MUST NOT be sent,
except as a source address as part of an initialization
procedure by which the host learns its full IP address.
At least in the case of 0 or 0.0.0.0 that is how iputils ping behaves, other pings and other OSs may behave differently. For example FreeBSD pings 0.0.0.0 via the default route (which I don't think is "correct" behaviour).
ping 1 or 0.0.0.1 don't quite work as hoped though (not for me anyway, iputils-sss20101006).
| How does ping zero work? |
1,329,949,954,000 |
virt-manager uses BIOS as the default option for firmware. There is an option to change this to UEFI just before installation, after the volume is set up.
However, after installation, the dropdown menu for changing the firmware disappears.
The installed system boots only with UEFI and not BIOS. The installation was a cumbersome procedure and I would like to avoid doing it all over again if possible.
Is there a way to convert the firmware to UEFI while keeping the contents of the system (disk) intact?
|
I just found out that you can move the disk (the file with the qcow2 extension) from one VM to another. So simply create another VM and add this disk; you then get the pre-installation options again, including the firmware dropdown.
| virt-manager - Change firmware AFTER installation |
1,329,949,954,000 |
How do you install a self signed cert chain into Alpine Linux?
I've a self signed cert chain that I've been using in Ubuntu, for example:
bacon.crt
-----BEGIN CERTIFICATE-----
328FjQIFJNVBLAHBLAH
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
7CJAMIDDLEBLAH80A
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
328FjOTHERVBLAHBLAH
-----END CERTIFICATE-----
And in Ubuntu, I run the following commands to install this cert chain:
cp /tmp/certs/bacon.crt /usr/local/share/ca-certificates/bacon.crt
update-ca-certificates
Easy!
However, on Alpine Linux:
# cp /tmp/certs/bacon.crt /usr/local/share/ca-certificates/bacon.crt
/usr/local/share/ca-certificates # update-ca-certificates
WARNING: ca-cert-bacon.crt.pem does not contain exactly one certificate or CRL: skipping
And if I try to break my certs into 3 chunks to spoon-feed this distro:
/tmp/certs/1.crt
-----BEGIN CERTIFICATE-----
328FjQIFJNVBLAHBLAH
-----END CERTIFICATE-----
/tmp/certs/2.crt
-----BEGIN CERTIFICATE-----
328FjOTHERVBLAHBLAH
-----END CERTIFICATE-----
/tmp/certs/3.crt
-----BEGIN CERTIFICATE-----
328FjQIFJNVBLAHBLAH
-----END CERTIFICATE-----
Now it doesn't throw an error during cert installation but still can't authenticate against other self-signed endpoints.
|
Figured it out. Gosh.
/etc/ssl/certs/ca-certificates.crt is actually the concatenation of each individual cert from /usr/local/share/ca-certificates.
Get a clean environment (This was my first major issue)
Break your cert chain into separate parts, one for each BEGIN/END pair you have.
company-Root.crt
company-X.crt
company-Y.crt
company-Z.crt
company-Issuing.crt
If you're being extra careful, load one at a time, starting with the company-Root.crt cert, then run update-ca-certificates.
Repeat until all certs have been processed.
Verify that /etc/ssl/certs/ca-certificates.crt contains the updates at the bottom of the file.
| Install Self Signed Certificate to Alpine Linux |
1,329,949,954,000 |
I converted a simple binary file into a text file with:
od -t x1 Check.tar | cut -c8- > Check.txt
Which gives a content similar to:
64 65 76 2f 6e 75 6c 6c 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[...]
What is the opposite way -- to convert Check.txt to Check.tar as the original file?
|
od -An -vtx1 Check.tar > Check.txt
You need -v or od will condense sequences of identical bytes.
For the reverse:
LC_ALL=C tr -cd 0-9a-fA-F < Check.txt | xxd -r -p > Check.tar
Or:
perl -ape '$_=pack "(H2)*", @F' Check.txt > Check.tar
If your purpose is to transfer files over a channel that only supports ASCII text, then there are dedicated tools for that like uuencode:
tar cf - myfiles.* | xz | uuencode myfiles.tar.xz | that-channel
And to recover those files on the other end:
uudecode < file.uu
would recreate myfiles.tar.xz.
Or:
uudecode -o - < file.uu | xz -d | tar xf -
To extract the files.
| Convert binary mode to text mode and the reverse option |
1,329,949,954,000 |
Instead of doing man chmod and then /a+x to jump to the first section in the chmod man page that mentions a+x, I would like to know if there is a way to open the man page to a specific search string, similar to how you can do vi +string filename.txt in vi(m).
|
Try this trick:
man chmod | less +'/a\+x'
or
man chmod | more +'/a\+x'
With a backslash before the + sign because what comes after / is an extended regular expression.
| Open man page and search for string in a single command |
1,329,949,954,000 |
Is there a way to invoke syscalls directly from Java, or is it necessary to first call a native method?
|
You need to use a native method, but you don't need to implement it yourself. Java has a variation on JNI called JNA (Java Native Access), which lets you access shared libraries directly without needing a JNI interface wrapped around them, so you can use that to interface directly with glibc:
import com.sun.jna.Library;
import com.sun.jna.Native;
public class Test {
public interface CStdLib extends Library {
int syscall(int number, Object... args);
}
public static void main(String[] args) {
CStdLib c = (CStdLib)Native.loadLibrary("c", CStdLib.class);
// WARNING: These syscall numbers are for x86 only
System.out.println("PID: " + c.syscall(20));
System.out.println("UID: " + c.syscall(24));
System.out.println("GID: " + c.syscall(47));
c.syscall(39, "/tmp/create-new-directory-here");
}
}
| Invoke Syscalls from Java |
1,329,949,954,000 |
If you have (in Linux) these two routes:
default via 192.168.1.1 dev enp58s0f1
default via 192.168.16.1 dev wlp59s0 proto static metric 600
I would expect that the first one is used, but that's not the case: the second one is used instead.
If I change that to this:
default via 192.168.1.1 dev enp58s0f1 proto static metric 100
default via 192.168.16.1 dev wlp59s0 proto static metric 600
Then it works as expected. It seems that "no metric" is a worse (higher) metric than any number, instead of metric 0.
Why is this happening? Is it specific to Linux, or a networking standard?
Thanks in advance.
|
Are you sure about your first observation? What does ip route show or route -n show then? Does the result change if you add proto static in the first case?
I have found at least two resources that explicitly say that 0 is the default value in Linux:
http://0pointer.de/lennart/projects/ifmetric/ : The default metric for a route in the Linux kernel is 0, meaning the highest priority.
http://www.man7.org/linux/man-pages/man8/route.8.html : If this option is not specified the metric for inet6 (IPv6) address family defaults to '1', for inet (IPv4) it defaults to '0'. (it then hints that the default may be different when using iproute2 but analysis of these sources do not show what it is)
A Linux kernel hacker would surely be needed to sort that out.
Also whatever default is chosen is clearly OS specific.
This article (https://support.microsoft.com/en-us/help/299540/an-explanation-of-the-automatic-metric-feature-for-ipv4-routes) for example shows that Windows choose the default metric based on the bandwidth of the link.
| In Linux, what metric has a route with no metric? |
1,329,949,954,000 |
According to this Wikipedia article:
OS X is a series of Unix-based graphical interface operating systems developed and marketed by Apple Inc.
So I was thinking:
Is there any application similar to Wine, but that runs Mac applications?
Is it possible at all to run Mac OS X applications on a Linux machine?
|
Darling (link) is a project that aims to become analogous to wine. Currently it only runs some command-line OSX programs, though. As of mid-2019, it can run many command-line programs, and according to their homepage appears to be approaching the point where it can run some rudimentary graphical software as well. It probably won't run what you want just yet, unless it's text-based.
As long as the developers of the OS X program released their source code and used cross-platform libraries (such as QT, GTK, X11, GNUStep or WxWidgets) you should be able to re-compile an OS X program for linux. OS X and Linux are much more compatible at the API level than the ABI level.
GNUStep implements the Cocoa APIs of NeXTStep and OS X. It was shockingly complete when I tried it, in terms of how much it seemed capable of doing versus how little seems to use it in the wild. GNUStep only works on the source-code (API) level, so it works if a program is open-source and uses Apple's Cocoa GUI (NOT "Aqua" which is proprietary). It depends on being able to compile and link the code.
Think of the API, or Application Programming Interface, as something like a car's dashboard - everything is visible to the driver of the car, and you can get into someone else's car and find his different dashboard just as easy to figure out.
Think of the ABI, or Application Binary Interface, as the engine of the car - it can vary greatly between makes and models, and you probably won't be able to trade your Chevy engine into a Volvo very easily.
Darling would in this analogy be putting the Chevy engine in a Volvo's chassis, and compiling from source would be like just getting out of your Chevy and getting into the Volvo. One is much simpler to do than the other from a programmers' perspective.
But Apple has some proprietary user interface libraries that no one else has, too. If the developer used one of these (such as Aqua), you'll have to wait and hope that Darling grows up like Wine did, or port it yourself. If there is no source code released, it'd be like if the engine was made so big that it could not fit in the Volvo's engine bay, or designed for connecting to a front wheel drive car where your Volvo was rear wheel drive. Unless someone is an absolutely insane maniac (in the best possible way) who has months of free time and ridiculous amount of dedication, it's not likely to happen.
Additionally, GNUStep is not 100% complete in terms of coverage of the Cocoa API's, so some shoehorning is likely still going to be necessary for complex programs. And GNUStep does not provide an xcode-equivalent build system - that is, if the original developer used the XCode IDE's "build" system exclusively, you may be left writing makefiles for it. This was the most frustrating part for me, since while I have experience with compiling and linking software, it's hard to wrestle useful information out of a format like a .xcodeproj that I have no prior backend experience with.
| run Mac OS X applications on Linux |
1,329,949,954,000 |
How do you set the default color for top? Right now there is a red that I can barely read. You can toggle from mono to color with z or set it up more thoroughly with Z. But none of those settings stick.
How do you set the colors permanently?
|
Use W (capital w) to save the top configuration after you made your changes.
| Set default color for top |
1,429,636,472,000 |
I have done
chmod -R 644 .
inside the directory dir
The directory's permissions are now drw-r--r-- and I'm its owner.
When I try chmod 755 dir, I get an error:
chmod: changing permissions of dir Operation not permitted
The same error appears when running ls, even as root.
How can I change the permissions back to 755 and allow deletion and modification of the directory?
|
from the level above dir:
chmod -R a+x *dir*
to give all users (a) execute permission to all subdirectories and files (+x) or:
chmod -R a+X *dir*
to give all users execute permission on the subdirectories (and on any files that already had an execute bit set for someone) (+X)
| chmod: changing permissions of directory Operation not permitted |
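A small demonstration of the +X behaviour (directory names invented here): after simulating the over-eager chmod -R 644, a+X restores execute on the directory but leaves the plain file alone:

```shell
mkdir -p demo/sub
touch demo/file
chmod 644 demo/sub demo/file   # simulate the damage done by chmod -R 644
chmod -R a+X demo              # +X: only dirs and already-executable files
stat -c '%a %n' demo/sub demo/file
# demo/sub is back to 755; demo/file stays at 644
```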
1,429,636,472,000 |
I use this command for locking screen:
i3lock -i /home/freyja/pics/owl.jpg
The screen is locked, but there is no picture (only white background).
When called from console the command says:
Could not load image /home/freyja/pics/owl.jpg: out of memory.
What can I do about this? Is memory lacking in whole system (does not seem like it) or just i3-lock has some internal restriction? The picture is big (HD), but the resolution exactly matches my screen, so I wouldn't like to use a smaller one.
|
The solution was to convert the image to PNG (though you would think that a photo stored as PNG would actually take more memory, so maybe the error message wasn't very accurate).
I found the solution here: http://archive.rebeccablacktech.com/g/thread/44391920#p44393721
But I thought it would be good if the answer could be also found on a bit more... focused place.
| Setting image for i3-lock: "Could not load image x: out of memory" |
1,429,636,472,000 |
Various places suggest to use ULOG or NFLOG instead of LOG for getting dedicated netfilter logging (see for example here or here).
From looking at man iptables those two look quite alike. Except that NFLOG talks about some "nfnetlink_log backend" while ULOG doesn't talk about any backend.
What's the difference?
Are there typical situations for using one or the other?
|
ULOG was the original user space logging added in Kernel 2.4 for ipv4.
NFLOG is the newer, generic (layer3 independent) logging framework for 2.6 kernels based on the original ULOG but implemented via libnfnetlink
Both will send logs to ulogd which will then log via whatever output plugin you choose.
Use ULOG if you are stuck with ulogd-1.x as 1.x might not play nicely with NFLOG. You really should be using ulogd-2.x as 1.x is considered legacy and is EOL.
Otherwise, just use NFLOG
| What's the difference between ULOG and NFLog? |
1,429,636,472,000 |
This article claims that the -m flag to ulimit does nothing in modern Linux. I can find nothing else to corroborate this claim. Is it accurate?
You may try to limit the memory usage of a process by setting the maximum resident set size (ulimit -m). This has no effect on Linux. man setrlimit says it used to work only in ancient versions. You should limit the maximum amount of virtual memory (ulimit -v) instead.
If it's true that it worked in older versions of Linux, which version stopped supporting this?
|
It says right there in the article:
This has no effect on Linux. man setrlimit says it used to work only
in ancient versions.
The setrlimit man page says:
RLIMIT_RSS
Specifies the limit (in pages) of the process's resident set
(the number of virtual pages resident in RAM). This limit has
effect only in Linux 2.4.x, x < 30, and there affects only
calls to madvise(2) specifying MADV_WILLNEED.
So it stopped working in 2.4.30.
The changelog for 2.4.30 says something about this:
Marcelo Tosatti:
o Ake Sandgren: Fix RLIMIT_RSS madvise calculation bug
o Hugh Dickins: remove rlim_rss and this RLIMIT_RSS code from madvise. Presumably the code crept in by mistake
| Does 'ulimit -m' not work on (modern) Linux? |
1,429,636,472,000 |
I recently installed Fedora 14 on my home PC and have been setting up various server-related features such as Apache, MySQL, FTP, VPN, SSH, etc. I very quickly ran into what felt like a barrier when I discovered SELinux, which I had not previously heard of. After doing some research, it seems most people are of the opinion that you should just disable it and not deal with the hassle. Personally, if it really does add security, I'm not opposed to dealing with the headaches of learning how to set it up appropriately. Eventually I plan on opening my network up so that this PC can be accessed remotely, but I don't want to do that until I'm confident that it's secure (more or less). If you have set it up and gotten it functioning correctly, do you feel it was worth the time and hassle? Is it really more secure? If you have opted out of using it, was that decision founded on any research worth considering in my situation as well?
|
SELinux enhanced local security by improving the isolation between processes and providing more fine-grained security policies.
For multi-user machines, this can be useful because of the more flexible policies, and it raises more barriers between users so it adds protection against malicious local users.
For servers, SELinux can reduce the impact of a security vulnerability in a server. Where the attacker might be able to gain local user or root privileges, SELinux might only allow him to disable one particular service.
For typical home use, where you'll be the only user and you'll want to be able to everything remotely once authenticated, you won't gain any security from SELinux.
| Does SELinux provide enough extra security to be worth the hassle of learning/setting it up? |
1,429,636,472,000 |
I started a hash check of a large file and don't want to restart it using time. How can I get the wall clock time without using time at the beginning or using date right before I invoke a command?
|
For a running process you can do this:
PID=5462
command ps -p "$PID" -o etime
command ps -p "$PID" --no-headers -o etime
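If ps is unavailable, roughly the same number can be derived from /proc: elapsed time is system uptime minus the process start time, where starttime (field 22 of /proc/PID/stat) is measured in clock ticks. This is a rough sketch — the naive field-22 parsing breaks if the process name contains spaces — demonstrated with a throwaway sleep process:

```shell
# Elapsed seconds for a PID, computed from /proc instead of ps.
sleep 60 & PID=$!
hz=$(getconf CLK_TCK)                       # clock ticks per second
start=$(awk '{print $22}' "/proc/$PID/stat") # start time, in ticks since boot
up=$(awk '{print int($1)}' /proc/uptime)     # uptime, in seconds
elapsed=$(( up - start / hz ))
echo "$elapsed"
kill "$PID"
```

For the just-started sleep this prints 0; for a long-running hash check it gives its wall-clock age in seconds.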
As a general feature you can modify your shell prompt. This is my bash prompt definition:
TERM_RED_START=$'\033[1m\033[31m'
TERM_RED_END=$'\033(B\033[m'
PS1='\nec:$(ec=$?; if [ 0 -eq $ec ];
then printf %-3d $ec;
else echo -n "$TERM_RED_START"; printf %-3d $ec; echo "$TERM_RED_END";
fi) \t \u@\h:\w\nstart cmd:> '
PS2="cont. cmd:> "
The relevant part for you is the \t for the time.
This does not solve all problems, though. The new prompt will show when the process has ended but the former prompt may have been quite old when the command to be measured was started. So either you remember to renew the prompt before starting long running commands or you have to remember the current time when you want to know how long the current process will have taken.
For a complete solution you need an audit feature (which logs the start and end time of processes). But that may cause a huge amount of data if it cannot be restricted to the shell.
| How can I get the wall clock time of a running process? |
1,429,636,472,000 |
I'm in the midst of learning about ACL's for CentOS/Red Hat 6; when I run getfacl using an absolute path, I get among the output:
getfacl: Removing leading '/' from absolute path names
Why does it need to do this? In what situations would you need to use the -p or --absolute-names switch?
My books by Wale Soyinka and Michael Jang don't make even a passing mention of this, I'm not seeing any clues in the man page, and I can't seem to find any sites that directly address this warning.
|
From man page of getfacl:
-p, --absolute-names
Do not strip leading slash characters (`/'). The default behavior
is to strip leading slash characters.
A warning message is emitted when you supply absolute path without using -p switch.
Outputs are different when absolute path is given to the getfacl command.
Without -p switch:
$ getfacl /path/foo/bar
getfacl: Removing leading '/' from absolute path names
# file: path/foo/bar
[Output truncated...]
Note the leading slash in file path shows only when -p switch is used.
$ getfacl -p /path/foo/bar
# file: /path/foo/bar
[Output truncated...]
-p is useful to keep the leading slash when you piped the output for further processing.
Outputs are the same when relative path is given to the getfacl command.
$ getfacl bar
# file: bar
[Output truncated...]
No changes:
$ getfacl -p bar
# file: bar
[Output truncated...]
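One common pipeline where keeping the leading slash matters is backing up and restoring ACLs: setfacl --restore replays the paths exactly as recorded, so -p keeps them usable from any working directory (the path below is illustrative):

```shell
# Record ACLs recursively with absolute paths preserved, then restore them:
getfacl -pR /srv/data > acl-backup.txt
setfacl --restore=acl-backup.txt
```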
| Why does getfacl remove the leading / from absolute pathnames? |
1,429,636,472,000 |
In the iostat manpage I have found these two similar columns:
await
The average time (in milliseconds) for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing them.
svctm
The average service time (in milliseconds) for I/O requests that were issued to the device.
Warning! Do not trust this field any more. This field will be removed in a future sysstat
version.
Are these columns meant to represent the same thing? I seem that sometimes they agree, but sometimes not:
avg-cpu: %user %nice %system %iowait %steal %idle
4.44 0.02 1.00 0.36 0.00 94.19
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.07 0.96 0.28 1.28 8.98 47.45 72.13 0.02 11.36 11.49 11.34 5.71 0.89
avg-cpu: %user %nice %system %iowait %steal %idle
8.00 0.00 2.50 2.50 0.00 87.00
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 9.00 2.00 6.00 12.00 68.00 20.00 0.05 6.00 2.00 7.33 6.00 4.80
avg-cpu: %user %nice %system %iowait %steal %idle
4.57 0.00 0.51 0.00 0.00 94.92
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
avg-cpu: %user %nice %system %iowait %steal %idle
13.93 0.00 1.99 1.49 0.00 82.59
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 29.00 0.00 4.00 0.00 132.00 66.00 0.03 7.00 0.00 7.00 7.00 2.80
Other than the obvious warning that svctm is deprecated, what is the difference between these two columns?
|
On linux iostat, the await column (average wait) is showing the average time spent by an I/O request computed from its very beginning toward its end.
The svctm column (service time) should display the average time spent servicing the request, i.e. the time spent "outside" the OS. It should be equal or smaller than the previous one as the request might have lost time waiting in a queue if the device is already busy and doesn't accept more concurrent requests.
Unlike most if not all other Unix / Unix like implementations, the Linux kernel doesn't measure the actual service time so iostat on that platform is trying to derive it from existing statistics but fails as this just cannot be done outside trivial use cases.
See this blog and the interesting discussions that follows for details.
| iostat: await vs. svctm |
1,429,636,472,000 |
I need a program to compile Python source code; as I found out, I first need to make a binary file from my Python script.
I've already checked a lot of links, but still I haven't found something for Linux.
I found py2bin for OS X, but there are no versions for Linux.
|
In my opinion your problem in searching Google stems from calling a compiler capable of producing binaries from Python a "disassembler".
I have not found a true compiler; however, I did find a Python compiler packager, which packs all the necessary files into a directory, obfuscating them, with an executable frontend: PyInstaller at http://www.pyinstaller.org/. It appears to be actively maintained, as its latest version, 3.4, was released on 2018-09-09, contrary to py2bin, which seems to be unmaintained.
Features:
Packaging of Python programs into standard executables, that work on
computers without Python installed.
Multi-platform, works under:
Windows (32-bit and 64-bit),
Linux (32-bit and 64-bit),
Mac OS X
(32-bit and 64-bit),
contributed support for FreeBSD, Solaris, HPUX,
and AIX.
Multi-version:
supports Python 2.7 and Python 3.3—3.6.
To install:
pip install pyinstaller
Then, go to your program’s directory and run:
pyinstaller yourprogram.py
This will generate the bundle in a subdirectory called dist.
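A minimal end-to-end sketch, assuming pyinstaller is installed; the --onefile flag is optional and bundles everything into a single executable instead of a directory:

```shell
printf 'print("hello")\n' > hello.py
pyinstaller --onefile hello.py
./dist/hello          # runs on machines without a system Python
```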
| How can I get a binary from a .py file |
1,429,636,472,000 |
I noticed while answering another question that test and [ are different binaries, but the [ manpage pulls up test's. Besides the requirement for an ending ], is there any difference? If not, why are they separate binaries instead of being symlinked? (They are also bash builtins, and bash doesn't show a difference either.)
|
The source code explains the difference as being how it handles the --help option.
/* Recognize --help or --version, but only when invoked in the
"[" form, when the last argument is not "]". Use direct
parsing, rather than parse_long_options, to avoid accepting
abbreviations. POSIX allows "[ --help" and "[ --version" to
have the usual GNU behavior, but it requires "test --help"
and "test --version" to exit silently with status 0. */
Demonstrating
$ /usr/bin/test --help
$
$ /usr/bin/[ --help
Usage: test EXPRESSION
or: test
or: [ EXPRESSION ]
or: [ ]
or: [ OPTION
Exit with the status determined by EXPRESSION.
[...]
In the bash builtin version, the only difference is that [ requires ] at the end, as you said.
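You can see both layers from within bash: type -a lists every resolution of a name, builtin first (a sketch; the binary's path may differ per distribution):

```shell
type -a [       # e.g. "[ is a shell builtin" then "[ is /usr/bin/["
type -a test
# The builtin [ also insists on the closing bracket:
[ 1 -eq 1 ] && echo ok
```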
| `test` and `[` - different binaries, any difference? |
1,429,636,472,000 |
I am trying to understand how tmpfs works.
I have a machine with 80G storage.
It looks like that:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 7.1G 43G 15% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 1.4M 3.9G 1% /dev/shm
tmpfs 3.9G 409M 3.5G 11% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 494M 125M 370M 26% /boot
/dev/mapper/centos-home 26G 23G 3.5G 87% /home
tmpfs 782M 0 782M 0% /run/user/0
Now, from what I read the tmpfs doesn't take physical storage, but uses the virtual memory of the machine. Is it correct? Does it affect the physical storage in any way?
Is there a scenario where tmpfs contents will be written to physical storage?
Next, do all the mounted directories (/dev/sda1, etc.) share the same tmpfs, or does each of them get its own?
Also, I tried to resize the tmpfs.
I did :
mount -o remount,size=1G /dev/shm
On restart it went back to original size.
I changed /etc/fstab like this:
tmpfs /dev/shm tmpfs defaults,size=1G
And then:
mount -o remount /dev/shm
it did the trick, but on restart it again went back to its original size.
I think I am missing something.
|
Now, from what I read the tmpfs doesn't take physical storage, but uses the virtual memory of the machine. Is it correct?
Correct. tmpfs appears as a mounted file system, but it's stored in volatile memory instead of a persistent storage device. So this could answer your other questions.
In reality you cannot assign physical storage to tmpfs since it only relies on virtual memory. Everything stored in tmpfs is temporary in the sense that no files will be created on the hard drive. Swap space is used as backing store in case of low memory situations. On reboot, everything in tmpfs will be lost.
Many Unix distributions enable and use tmpfs by default for the /tmp branch of the file system or for shared memory.
Depending of your distribution you can use tmpfs for the /tmp. By default, a tmpfs partition has its maximum size set to half of the available RAM, however it is possible to overrule this value and explicitly set a maximum size. In this example, to override the default /tmp mount, use the size mount option:
/etc/fstab
tmpfs /tmp tmpfs nodev,nosuid,size=2G 0 0
source: https://wiki.archlinux.org/index.php/tmpfs
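You can also create an ad-hoc tmpfs mount at runtime to experiment before committing anything to /etc/fstab (needs root; the mount point and size below are illustrative):

```shell
mkdir -p /mnt/scratch
mount -t tmpfs -o size=256M,nodev,nosuid tmpfs /mnt/scratch
df -h /mnt/scratch      # shows a 256M tmpfs filesystem
umount /mnt/scratch
```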
| tmpfs usage and resizing |
1,429,636,472,000 |
What's the best way to find a laptop with hardware that is amenable to installing and running GNU/Linux? I'm looking for something that will ideally work with GPL'd drivers and/or firmware.
I took a quick look at linux-on-laptops.com, but they don't list some of the newer models for Lenovo, for example. Most of the test cases seem to be quite dated. Also, It's not clear where to start looking given so much information.
Unfortunately, the FSF website doesn't list many laptop possibilities. LAC, referenced from the FSF website, doesn't mention wireless connectivity as being a feature of their laptops, probably because of firmware issues.
I've been looking for a laptop that will accept the ath9k driver because those cards don't require firmware, but getting model type from generic specs pages is not always possible. Searching for lspci dumps online can be can be a roll of the dice.
And then there's the issue of what kind of graphics card is ideal from a FSF perspective. From the FSF website:
This page is not an exhaustive list. Also, some video cards may work with free software, but without 3D acceleration.
|
Gluglug and other RYF vendors sell laptops running LibreBoot, a free software, microcode-free BIOS replacement. LibreBoot supports hardware on which it is possible to remove the Intel Management Engine, a small proprietary operating system on modern Intel machines that has been the attack vector of major security exploits.
There is some initial work toward creating a free software replacement for embedded controller firmware, but it is apparently not yet ready for use. SSDs, hard drives and other components unfortunately contain non-free software as well. Systems with the most modern AMD and Intel processors currently cannot be made as freedom-respecting as LibreBoot-supported hardware.
It is currently necessary to depend upon non-free software if you want to use a laptop. Libreboot does greatly reduce the amount of critical non-free software required to use laptops, desktops and servers.
| Ideal Hardware for GNU/Linux Laptop |
1,429,636,472,000 |
I've been trying to modernise my way with Linux by, for one thing, ditching netstat for ss. I looked up my favourite command line flag for netstat in the ss man pages, and was very glad to find that netstat -lnp is more or less the same command as ss -lnp. Or so I thought...
# ss -lnp | grep 1812
Turns up nothing, but
# netstat -lnp | grep 1812
udp 0 0 0.0.0.0:1812 0.0.0.0:* 11103/radiusd
does. A fact that made that particular troubleshooting unnecessarily harder.
Now I'm trying to understand how I should have used ss to verify that the daemon was listening.
Can someone please explain?
EDIT:
# ss --version
ss utility, iproute2-ss090324
# ss -aunp | grep radi
UNCONN 0 0 *:50482 *:* users:(("radiusd",11103,11))
UNCONN 0 0 127.0.0.1:18120 *:* users:(("radiusd",11103,9))
UNCONN 0 0 *:1812 *:* users:(("radiusd",11103,6))
UNCONN 0 0 *:1813 *:* users:(("radiusd",11103,7))
UNCONN 0 0 *:1814 *:* users:(("radiusd",11103,10))
# ss -lnp | grep radi
#
|
A recent version of ss should also display UDP listeners in that way. You can limit to UDP with ss -unlp.
I have tried a recent Debian version where ss --version reports ss utility, iproute2-ss140804 and that does work.
On a Red Hat 5 system with ss utility, iproute2-ss061002 it doesn't. You do get more info there using ss -aunp although that also shows connected ports.
You can also try:
ss -apu state unconnected 'sport = :1812'
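With a recent iproute2, the netstat-style overview looks like this (-p needs root privileges to show processes owned by other users):

```shell
ss -tulnp                  # TCP and UDP listeners, numeric, with owning process
ss -uln 'sport = :1812'    # only UDP sockets bound to port 1812
```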
| ss is replacing netstat, how can I get it to list ports similarly to what I am used to? |
1,429,636,472,000 |
I tried to use the ls command and got an error:
bash: /bin/ls: cannot execute binary file
What can I use instead of this command?
|
You can use the echo or find commands instead of ls:
echo *
or:
find -printf "%M\t%u\t%g\t%p\n"
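Two related sketches: printf handles arbitrary filenames more robustly than echo (which can mangle names containing backslashes or starting with -), and plain find works even without GNU's -printf:

```shell
printf '%s\n' *        # one entry per line, no option-parsing surprises
find . -maxdepth 1     # current directory entries (GNU/BSD find)
```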
| What to use when the "ls" command doesn't work? |
1,429,636,472,000 |
I'm working on a LAMP web app and there is a scheduled process somewhere which keeps creating a folder called shop in the root of the site. Every time this appears it causes conflicts with rewrite rules in the app, not good.
Until I find the offending script, is there a way to prevent any folder called shop from being created in the root? I know that I can change the permissions on a folder to prevent its contents from being changed, but I have not found a way to prevent a folder of a certain name from being created.
|
You can't, given the user creating the directory has sufficient permission to write on the parent directory.
You can instead leverage the inotify family of system calls provided by the Linux kernel, to watch for the creation (and optionally mv-ing) of directory shop in the given directory, if created (or optionally mv-ed), rm the directory.
The userspace program you need in this case is inotifywait (comes with inotify-tools, install it first if needed).
Assuming the directory shop would be residing in /foo/bar directory, let's set a monitoring for /foo/bar/shop creation, and rm instantly if created:
inotifywait -qme create /foo/bar | \
awk '/,ISDIR shop$/ { system("rm -r -- /foo/bar/shop") }'
inotifywait -qme create /foo/bar watches /foo/bar directory for any file/directory that might be created i.e. watch for any create event
If created, awk '/,ISDIR shop$/ { system("rm -r -- /foo/bar/shop") }' checks if the file happens to be a directory and the name is shop (/,ISDIR shop$/), if so rm the directory (system("rm -r -- /foo/bar/shop"))
You need to run the command as a user that has write permission on directory /foo/bar for removal of shop from the directory.
If you want to monitor mv-ing operations too, add watch for moved_to event too:
inotifywait -qme create,moved_to /foo/bar | \
awk '/,ISDIR shop$/ { system("rm -r -- /foo/bar/shop") }'
Just to note, if you are looking for a file, not directory, named shop:
inotifywait -qme create /foo/bar | \
awk '$NF == "shop" { system("rm -- /foo/bar/shop") }'
inotifywait -qme create,moved_to /foo/bar | \
awk '$NF == "shop" { system("rm -- /foo/bar/shop") }'
| Can I prevent a folder of a certain name being created? |
1,429,636,472,000 |
My PC is dual-boot. I have Red Hat Enterprise Linux 5 along with Windows 7 Ultimate installed. There are some common files which I need in both OSes. Right now I access and manipulate these files via a secondary storage device (USB or DVD-RW) attached to my system.
Is it possible to create a common folder/directory which is accessible to both the Linux and the Windows OS? Can the files within such folders/directories be manipulated from both OSes? How?
|
Of course, and it's very easy. The simplest way is to have a shared partition that uses a filesystem both OSs can understand. I usually have an NTFS-formatted partition which I mount at /data on Linux. This will be recognized as a regular partition on Windows and be assigned a letter (D: for example) just like any other.
You can then use it from both systems and the files will be available to both your OSs.
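On the Linux side this boils down to one fstab entry; an illustrative line (the device name, mount point, and uid/gid are assumptions for your system, and the ntfs-3g driver must be installed):

```
/dev/sda5  /data  ntfs-3g  defaults,uid=1000,gid=1000  0  0
```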
| Is it possible to share files between 2 different os on the same computer? |
1,429,636,472,000 |
In How does Linux “kill” a process? it is explained that Linux kills a process by returning its memory to the pool.
On a single-core machine, how does it actually do this? It must require CPU time to kill a process, and if that process is doing some extremely long running computation without yielding, how does Linux gain control of the processor for long enough to kill off that process?
|
The kernel gains control quite frequently in normal operations: whenever a process calls a system call, and whenever an interrupt occurs. Interrupts happen when hardware wants the CPU’s attention, or when the CPU wants the kernel’s attention, and one particular piece of hardware can be programmed to request attention periodically (the timer). Thus the kernel can ensure that, as long as the system doesn’t lock up so hard that interrupts are no longer generated, it will be invoked periodically.
As a result,
if that process is doing some extremely long running computation without yielding
isn’t a concern: Linux is a preemptive multitasking operating system, i.e. it multitasks without requiring running programs’ cooperation.
When it comes to killing processes, the kernel is involved anyway. If a process wants to kill another process, it has to call the kernel to do so, so the kernel is in control. If the kernel decides to kill a process (e.g. the OOM killer, or because the process tried to do something it’s not allowed to do, such as accessing unmapped memory), it’s also in control.
Note that the kernel can be configured to not control a subset of a system’s CPUs itself (using the deprecated isolcpus kernel parameter), or to not schedule tasks on certain CPUs itself (using cpusets without load balancing, which are fully integrated in cgroup v1 and cgroup v2); but at least one CPU in the system must always be fully managed by the kernel. It can also be configured to reduce the number of timer interrupts which are generated, depending on what a given CPU is being used for.
There’s also not much distinction between single-CPU (single-core, etc.) systems and multi-CPU systems, the same concerns apply to both as far as kernel control is concerned: each CPU needs to call into the kernel periodically if it is to be used for multitasking under kernel control.
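On Linux you can watch the kernel regaining control: the timer interrupt counters in /proc/interrupts keep climbing even while a CPU-bound process runs (a sketch; the exact row names, such as LOC for local timer interrupts, vary by architecture):

```shell
grep -i -e 'LOC' -e 'timer' /proc/interrupts
sleep 1
grep -i -e 'LOC' -e 'timer' /proc/interrupts   # typically higher counts
```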
| How does Linux retain control of the CPU on a single-core machine? |
1,429,636,472,000 |
I used shred to wipe my external hard disk:
sudo shred -vz /dev/sdb
I should also add that the disk had 5 bad sectors.
I want to verify the disk has been zeroed, per https://superuser.com/questions/1510233/is-there-a-faster-way-to-verify-that-a-drive-has-been-fully-zeroed
I'm not that familiar with dd, but I believe that these show it's been zeroed:
sudo dd if=/dev/sdb status=progress | hexdump
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
5000916670976 bytes (5.0 TB, 4.5 TiB) copied, 45754 s, 109 MB/s
9767541167+0 records in
9767541167+0 records out
5000981077504 bytes (5.0 TB, 4.5 TiB) copied, 45756.7 s, 109 MB/s
48c61b35e00
sudo dd if=/dev/sdb status=progress | od | head
5000952267264 bytes (5.0 TB, 4.5 TiB) copied, 45739 s, 109 MB/s
9767541167+0 records in
9767541167+0 records out
5000981077504 bytes (5.0 TB, 4.5 TiB) copied, 45741.1 s, 109 MB/s
0000000 000000 000000 000000 000000 000000 000000 000000 000000
*
110614154657000
But using a simple cmp shows an exception:
sudo cmp /dev/zero /dev/sdb
cmp: EOF on /dev/sdb after byte 5000981077504, in line 1
Has the disk been zeroed?
|
Has the disk been zeroed?
Yes. The output of your dd command shows that it has written 5000981077504 bytes. Your cmp command says that it's reached EOF (end of file) after 5000981077504 bytes, which is the same.
Be aware that this only works well with hard drives. For solid-state devices, features such as wear leveling and overprovisioning space may result in some data not being erased. Furthermore, your drive must not have any damaged sectors, as they will not be erased.
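You can sanity-check how hexdump reports all-zero data on a small file first: it collapses repeated lines into a single *, so an all-zero image produces just the opening line of zeros, the *, and the final offset (the file name is illustrative):

```shell
dd if=/dev/zero of=zero.img bs=1M count=1 status=none
hexdump zero.img         # one zero line, '*', then the end offset
cmp zero.img /dev/zero   # exits 1 with "EOF on zero.img": every byte matched
```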
Note that cmp will not be very efficient for this task. You would be better off with badblocks:
badblocks -svt 0x00 /dev/sdb
From badblocks(8), the -t option can be used to verify a pattern on the disk. If you do not specify -w (write) or -n (non-destructive write), then it will assume the pattern is already present:
-t test_pattern
Specify a test pattern to be read (and written) to disk blocks.
The test_pattern may either be a numeric value between 0 and
ULONG_MAX-1 inclusive, or the word "random", which specifies
that the block should be filled with a random bit pattern. For
read/write (-w) and non-destructive (-n) modes, one or more test
patterns may be specified by specifying the -t option for each
test pattern desired. For read-only mode only a single pattern
may be specified and it may not be "random". Read-only testing
with a pattern assumes that the specified pattern has previously
been written to the disk - if not, large numbers of blocks will
fail verification. If multiple patterns are specified then all
blocks will be tested with one pattern before proceeding to the
next pattern.
Also, using dd with the default block size (512) is not very efficient either. You can drastically speed it up by specifying bs=256k. This causes it to transfer data in chunks of 262,144 bytes rather than 512, which reduces the number of context switches that need to occur. Depending on the system, you can speed it up even more by using iflag=direct, which bypasses the page cache. This can improve read performance on block devices in some situations.
Although you didn't ask, it should be pointed out that shred overwrites a target using three passes by default. This is unnecessary. The myth that multiple overwrites are necessary on hard disks comes from an old recommendation by Peter Gutmann. On ancient MFM and RLL hard drives, specific overwrite patterns were required to avoid theoretical data remanence issues. In order to ensure that all types of disks could be overwritten, he recommended using 35 patterns so that at least one of them would be right for your disk. On modern hard drives using modern data encoding techniques such as EPRML and NPML, there is no need to use multiple patterns. According to Gutmann himself:
In fact performing the full 35-pass overwrite is pointless for any drive since it targets a blend of scenarios involving all types of (normally-used) encoding technology, which covers everything back to 30+-year-old MFM methods (if you don't understand that statement, re-read the paper). If you're using a drive which uses encoding technology X, you only need to perform the passes specific to X, and you never need to perform all 35 passes.
In your position, I would recommend something along this line instead:
dd if=/dev/urandom of=/dev/sdb bs=256k oflag=direct conv=fsync
When it finishes, just make sure it has written enough bytes after it says "no space left on device".
You can also use ATA Secure Erase which initiates firmware-level data erasure. I would not use it on its own because you would be relying on the firmware authors to have implemented the standard securely. Instead, use it in addition to the above in order to make sure dd didn't miss anything (such as bad sectors and the HPA). ATA Secure Erase can be managed by the command hdparm:
hdparm --user-master u --security-set-pass yadayada /dev/sdb
hdparm --user-master u --security-erase yadayada /dev/sdb
Note that this doesn't work on all devices. Your external drive may not support it.
| How can I verify that my hard disk has been zeroed / wiped? |
1,429,636,472,000 |
Are there any GUIs for Linux that don't use X11?
Since X has very poor security :O
e.g.: Ubuntu, Fedora - what else are there?
Goal: having a Desktop Environment without X. - what are the solutions? (e.g.: watch Flash with Google Chrome, Edit docs with LibreOffice, etc., not using text-based webbrowsers)
Maybe with framebuffers? But how? :O
|
Note: apart from this paragraph, this answer was last updated in 2016. Since then, Wayland has become a viable alternative to X11, although it's still mostly used as a backend for X11.
No. X is the only usable GUI on Linux.
There have been competing projects in the past, but none that gained any traction. Writing something like X is hard, and it takes a lot of extra work to get something usable in practice: you need hardware drivers, and you need applications. Since existing applications speak X11, you need either a translation layer (so… have you written something new, or just a new X server?) or to write new applications from scratch.
There is one ongoing project that aims to supplant X: Mir. It's backed by Canonical, who want to standardize on it for Ubuntu — but it hasn't gained a lot of traction outside Ubuntu, so it may not succeed more than Wayland (which was designed for 3D performance, not for security) did. Mir does aim to improve on the X security model by allowing applications limited privileges (e.g. applications have to have some kind of privilege to mess with other applications' input and output); whether that scales when people want to take screenshots and define input methods remains to be seen.
You can run a few graphical applications on Linux without X with SVGAlib. However that doesn't bring any extra security either (in addition to numerous other problems, such as poor hardware support, poor usability, and small number of applications). SVGAlib has had known security holes, and it doesn't get much attention, so it probably has many more. X implementations get a lot more attention, so you can at least mostly expect that the implementation matches the security model.
X has a very easily understood security model: any application that's connected to the X server can do anything. (That's a safe approximation, but a fairly realistic one.) You can build a more secure system on top of this, simply by isolating untrusted applications: put them in their own virtual environment, displaying on their own X server, and show that X server's display in a window. You'll lose functionality from these applications, for example you have to run things like window managers and clipboard managers in the host environment. There's at least one usable project based on this approach: Qubes.
| Are there any GUI's for Linux that doesn't use X11? |
1,429,636,472,000 |
Why does file xxx.src lead to cannot open `xxx.src' (No such file or directory) but has an exit status of 0 (success)?
$ file xxx.src ; echo $?
xxx.src: cannot open `xxx.src' (No such file or directory)
0
Note: to compare with ls:
$ ls xxx.src ; echo $?
ls: cannot access 'xxx.src': No such file or directory
2
|
This behavior is documented on Linux, and required by the POSIX standard. From the file manual on an Ubuntu system:
EXIT STATUS
file will exit with 0 if the operation was successful or >0 if an error was encoun‐
tered. The following errors cause diagnostic messages, but don't affect the pro‐
gram exit code (as POSIX requires), unless -E is specified:
• A file cannot be found
• There is no permission to read a file
• The file type cannot be determined
With -E (as noted above):
$ file -E saonteuh; echo $?
saonteuh: ERROR: cannot stat `saonteuh' (No such file or directory)
1
The non-standard -E option on Linux is documented as
On filesystem errors (file not found etc), instead of handling the error as
regular output as POSIX mandates and keep going, issue an error message and
exit.
The POSIX specification for the file utility says (my emphasis):
If the file named by the file operand does not exist, cannot be read, or the type of the file named by the file operand cannot be determined, this shall not be considered an error that affects the exit status.
| Why does "file xxx.src" lead to "cannot open `xxx.src' (No such file or directory)" but has an exit status of 0 (success)? |
1,429,636,472,000 |
When trying to call /dev/tcp/www.google.com/80 by typing
/dev/tcp/www.google.com/80
Bash says no such file or directory. When looking at other people's code online, they use syntax such as
3<>/dev/tcp/www.google.com/80
I noticed that this works as well:
</dev/tcp/www.google.com/80
Why are these symbols required to call certain things in bash?
|
Because that's a feature of the shell (of ksh, copied by bash), and the shell only.
/dev/tcp/... are not real files, the shell intercepts the attempts to redirect to a /dev/tcp/... file and then does a socket(...);connect(...) (makes a TCP connection) instead of a open("/dev/tcp/..."...) (opening that file) in that case.
Note that it has to be spelled like that. cat < /dev/./tcp/... or ///dev/tcp/... won't work, and will attempt to open those files instead (which on most systems don't exist and you'll get an error).
The direction of the redirection also doesn't matter. Whether you use 3< /dev/tcp/... or 3> /dev/tcp/... or 3<> /dev/tcp/... or even 3>> /dev/tcp/... won't make any difference, you'll be able to both read and write from/to that file descriptor to receive/send data over that TCP socket.
When you do cat /dev/tcp/..., that doesn't work because cat doesn't implement that same special handling, it does a open("/dev/tcp/...") like for every file (except -), only the shell (ksh, bash only) does, and only for the target of redirections.
That cat - is another example of a file path handled specially, this time, by cat, not the shell.
Instead of doing an open("-") and reading the input from the resulting file descriptor, cat reads directly from file descriptor 0 (stdin). cat and many text utilities do that; the shell doesn't for its redirections. To read the content of the - file, you need cat ./-, or cat < - (or cat - < -). On systems that don't have a /dev/stdin, bash will however do something similar for redirections from that (virtual) file. GNU awk does the same for /dev/stdin, /dev/stdout, /dev/stderr even on systems that do have such files, which can cause some surprises on systems like Linux where those files behave differently.
zsh also has TCP (and Unix domain stream) socket support, but that's done with a ztcp (and zsocket) builtins, so it's less limited than the ksh/bash approach. In particular, it can also act as a server which ksh/bash can't do. It's still much more limited than what you can do in a real programming language though.
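Putting it together, a minimal HTTP exchange in bash alone (the host and port are illustrative; file descriptor 3 is opened read-write on the socket):

```shell
exec 3<>/dev/tcp/www.google.com/80
printf 'HEAD / HTTP/1.0\r\nHost: www.google.com\r\n\r\n' >&3
head -n1 <&3          # the status line, e.g. HTTP/1.0 200 OK
exec 3>&- 3<&-        # close both directions of fd 3
```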
| Why are < or > required to use /dev/tcp |
1,429,636,472,000 |
Sometimes I need to add more disk to a database; for that, I need to list the disks to see what disks already exist.
The problem is that the output is always sorted as 1,10,11,12...2,20,21...3 etc.
How can I sort this output the way I want it? A simple sort does not work; I've also tried using sort -t.. -k.. -n.
Example of what I need to sort:
[root@server1 ~]# oracleasm listdisks
DATA1
DATA10
DATA11
DATA12
DATA2
DATA3
DATA4
DATA5
DATA6
DATA7
DATA8
DATA9
FRA1
FRA10
FRA11
FRA2
FRA3
..
OCR1
OCR2
OCR3
....
How I'd like to see the output:
DATA1
DATA2
DATA3
DATA4
DATA5
DATA6
DATA7
DATA8
DATA9
DATA10
DATA11
DATA12
FRA1
FRA2
FRA3
..
..
FRA10
FRA11
..
OCR1
OCR2
OCR3
....
|
Your best bet is piping to GNU sort, with GNU sort's --version-sort option enabled
so that would be oracleasm listdisks | sort --version-sort
From the info page
--version-sort
Sort by version name and number. It behaves like a standard sort,
except that each sequence of decimal digits is treated numerically
as an index/version number. (*Note Details about version sort::.)
On your input it gives me
DATA1
DATA2
DATA3
DATA4
DATA5
DATA6
DATA7
DATA8
DATA9
DATA10
DATA11
DATA12
FRA1
FRA2
FRA3
FRA10
FRA11
OCR1
OCR2
OCR3
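If your sort is too old for --version-sort, you can approximate it for names of this shape (letters followed by digits, an assumption) by splitting the prefix and number into separate fields, sorting on both keys, and rejoining:

```shell
oracleasm listdisks |
  sed -E 's/^([A-Za-z]+)([0-9]+)$/\1 \2/' |  # "DATA10" -> "DATA 10"
  sort -k1,1 -k2,2n |                        # alphabetic, then numeric
  tr -d ' '                                  # rejoin the fields
```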
| How to sort this output 1,10,11..2 |
1,429,636,472,000 |
I have a problem which is reproducible on Linux Ubuntu VMs (14.04 LTS) created in Azure.
After installing systemd package through script, the system refuses new ssh connections, infinitely.
System is booting up.
Connection closed by xxx.xxx.xxx.xxx
The active ssh connection is maintained though. There is no /etc/nologin file present in the system.
The only option I see is a hard reset which solves the problem. But how do I avoid it?
Here is the script I am using:
#!/bin/bash
# Script input arguments
user=$1
server=$2
# Tell the shell to quote your variables to be eval-safe!
printf -v user_q '%q' "$user"
printf -v server_q '%q' "$server"
#
SECONDS=0
address="$user_q"@"$server_q"
function run {
ssh "$address" /bin/bash "$@"
}
run << SSHCONNECTION
# Enable autostartup
# systemd is required for the autostartup
sudo dpkg-query -W -f='${Status}' systemd 2>/dev/null | grep -c "ok installed" > /home/$user_q/systemd-check.txt
systemdInstalled=\$(cat /home/$user_q/systemd-check.txt)
if [[ \$systemdInstalled -eq 0 ]]; then
echo "Systemd is not currently installed. Installing..."
# install systemd
sudo apt-get update
sudo apt-get -y install systemd
else
echo "systemd is already installed. Skipping this step."
fi
SSHCONNECTION
|
I suspect there is a /etc/nologin file (whose content would be "System is booting up.") that is not removed after the systemd installation.
[update] What affects you is a bug that was reported on Ubuntu's BTS last December. It is due to a /var/run/nologin file (= /run/nologin since /var/run is a symlink to /run) that is not removed at the end of the systemd installation.
/etc/nologin is the standard nologin file. /var/run/nologin is an alternate file that may be used by the nologin PAM module (man pam_nologin).
Note that none of the nologin files affect connections by user root, only regular users are prevented from logging in.
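A small sketch for checking which of the usual nologin files exist (the check_nologin helper and its root-prefix argument are my own invention, for illustration):

```shell
# Hypothetical helper: print whichever of the usual nologin files exist
# under a given root prefix (use / for the real filesystem root).
check_nologin() {
    root="${1%/}"
    for f in "$root/etc/nologin" "$root/run/nologin" "$root/var/run/nologin"; do
        [ -e "$f" ] && printf '%s\n' "$f"
    done
    return 0
}

check_nologin /
```

If this prints /run/nologin after the systemd install, removing that file should let regular users log in again.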
| System refuses SSH and stuck on 'booting up' after systemd installation |
1,429,636,472,000 |
Whenever I try to start an NFS mount I get:
Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23582]: Version 1.3.2 starting
Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23582]: Flags: TI-RPC
Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23582]: Running as root. chown /var/lib/nfs to choose different user
Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23582]: failed to create RPC listeners, exiting
Feb 12 00:02:19 martin-xps.lico.nl systemd[1]: rpc-statd.service: control process exited, code=exited status=1
Feb 12 00:02:19 martin-xps.lico.nl systemd[1]: Failed to start NFS status monitor for NFSv2/3 locking..
Feb 12 00:02:19 martin-xps.lico.nl systemd[1]: Unit rpc-statd.service entered failed state.
Feb 12 00:02:19 martin-xps.lico.nl systemd[1]: rpc-statd.service failed.
Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23584]: Version 1.3.2 starting
Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23584]: Flags: TI-RPC
Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23584]: Running as root. chown /var/lib/nfs to choose different user
Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23584]: failed to create RPC listeners, exiting
I tried to chown /var/lib/nfs to rpc, which just gives me the error minus the "Running as root" line:
Feb 12 00:05:09 martin-xps.lico.nl rpc.statd[23773]: Version 1.3.2 starting
Feb 12 00:05:09 martin-xps.lico.nl rpc.statd[23773]: Flags: TI-RPC
Feb 12 00:05:09 martin-xps.lico.nl rpc.statd[23773]: failed to create RPC listeners, exiting
Feb 12 00:05:09 martin-xps.lico.nl systemd[1]: rpc-statd.service: control process exited, code=exited status=1
Feb 12 00:05:09 martin-xps.lico.nl systemd[1]: Failed to start NFS status monitor for NFSv2/3 locking..
Feb 12 00:05:09 martin-xps.lico.nl systemd[1]: Unit rpc-statd.service entered failed state.
Feb 12 00:05:09 martin-xps.lico.nl systemd[1]: rpc-statd.service failed.
Feb 12 00:05:09 martin-xps.lico.nl rpc.statd[23775]: Version 1.3.2 starting
Feb 12 00:05:09 martin-xps.lico.nl rpc.statd[23775]: Flags: TI-RPC
Feb 12 00:05:09 martin-xps.lico.nl rpc.statd[23775]: failed to create RPC listeners, exiting
I have tried to reinstall nfs-utils:
$ pacman -R nfs-utils
$ rm -r /var/lib/nfs
$ pacman -S nfs-utils
It then re-creates the directory owned by the root user. I'm not even sure whether this error is related to rpc.statd not starting.
I also tried to run rpc.statd -F --no-notify in my shell, but that just exits with code 1. No error, no nothing. There's no verbose or debug flag documented in the manpage.
I also tried to empty my /etc/exports, and my system is up to date (pacman -Syu). I didn't change anything, it just stopped working a few hours ago.
Note that using mount -o nolock /data works; so the rest of the NFS/rpc daemons seem to be fine.
|
It would appear that the rpcbind systemd unit files went missing:
$ find /usr/lib/systemd -name 'rpcbind*'
# no output
Reinstalling this solved the issue:
$ pacman -S rpcbind
# [...]
$ find /usr/lib/systemd -name 'rpcbind*
/usr/lib/systemd/system/rpcbind.service
/usr/lib/systemd/system/rpcbind.target
/usr/lib/systemd/system/rpcbind.socket
$ systemctl enable rpcbind
$ systemctl start rpcbind
$ systemctl restart nfs-server
Not sure how these files were missing; perhaps a FS corruption issue?
The strange thing is that nfsd was still running, but statd wasn't. After a reboot, nfsd also didn't work (because it needs rpcbind). It's almost like these files disappeared while the system was running.
Unfortunately systemd doesn't give a clear error message on these kinds of errors (i.e. dependency rpcbind failed to load), which would make it much easier to debug :-(
| NFS no longer mounts: rpc-statd fails to start |
1,429,636,472,000 |
Is there any keyboard shortcut for the "task manager" (like Alt+Ctrl+Del in Windows) when my machine goes into a crashed state?
|
I am going to assume that by "my machine goes into a crashed state" you mean that whatever task is taking up the display you are looking at has stopped responding. (In general, when something crashes on Linux, only that thing crashes and everything else keeps running. It's very rare that the entire machine comes to a halt.)
When all else fails, I like to switch back to a standard terminal interface (text mode as opposed to GUI) by hitting CTRL+Alt+F1. This brings up a login prompt. I then login, and enter the command top to see what is running. The process at the top of the list is the one using the most CPU and usually the problem, so I kill it by pressing k, and entering the process ID (the numbers on the left). I then go back to the GUI by pressing CTRL+Alt+F7 (or sometimes CTRL+Alt+F8, one of those two will work, but it might change). If things are now working, I continue on, if not, I'll try again or may just force a reboot.
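The "find the hog" step can also be scripted; here is a sketch assuming a procps-style ps (the actual kill is left commented out):

```shell
# Print the PID of the process currently using the most CPU.
pid=$(ps -eo pid --sort=-pcpu | awk 'NR==2 {print $1}')
echo "top CPU consumer: PID $pid"
# kill "$pid"   # only when you are sure this is the culprit
```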
| Task manager keyboard shortcut in Linux? |
1,429,636,472,000 |
I understand this is somewhat less Ubuntu related, but it affects it.
So,what is so new about it that Linus decided to name it 3.0? I'm not trying to get information about the drivers that got into it or stuff that always gets improved. I want to know what really made it 3.0. I read somewhere that Linus wanted to get rid of the code that supports legacy hardware. Hm, not sure what that really meant because 3.0 is bigger (in MB), not smaller, than, say, 2.6.38.
What was the cause of naming it 3.0?
|
Nothing new at all. Citation below are from https://lkml.org/lkml/2011/5/29/204
I decided to just bite the bullet, and call the next version 3.0. It
will get released close enough to the 20-year mark, which is excuse
enough for me, although honestly, the real reason is just that I can
no longer comfortably count as high as 40.
I especially like:
The whole renumbering was discussed at last years Kernel Summit, and
there was a plan to take it up this year too. But let's face it -
what's the point of being in charge if you can't pick the bike shed
color without holding a referendum on it? So I'm just going all
alpha-male, and just renumbering it. You'll like it.
And finally:
So what are the big changes?
NOTHING. Absolutely nothing. Sure, we have the usual two thirds driver
changes, and a lot of random fixes, but the point is that 3.0 is
just about renumbering, we are very much not doing a KDE-4 or a
Gnome-3 here. No breakage, no special scary new features, nothing at
all like that. We've been doing time-based releases for many years
now, this is in no way about features. If you want an excuse for the
renumbering, you really should look at the time-based one ("20 years")
instead.
| What is new in Kernel 3.0? |
1,429,636,472,000 |
I want to test some physical links in a setup. The software tooling that I can use to test this requires a block device to read/write from/to. The block devices I have available can't saturate the physical link, so I can't fully test it.
I know I can set up a virtual block device which is backed by a file. So my idea was to somehow set up a virtual block device pointing at /dev/null, but the problem is of course that I can't read from it. Is there a way I could set up a virtual block device that writes to /dev/null but always returns zero when read?
Thank you for any help!
|
https://wiki.gentoo.org/wiki/Device-mapper#Zero
See Documentation/device-mapper/zero.txt for usage. This target has no target-specific parameters.
The "zero" target creates a device that functions similarly to /dev/zero: all reads return binary zero, and all writes are discarded. Normally used in tests [...]
This creates a 1GB (1953125-sector) zero target:
root# dmsetup create 1gb-zero --table '0 1953125 zero'
| Create virtual block device which writes to /dev/null |
1,429,636,472,000 |
I want to automatically test if a piece of software reacts as expected if an essential SQLite DB file fails to be read (causing an I/O error). Exactly that happened some days ago at a client. We manually fixed it but now I want to create automatic code to fix it and need access to a broken file to test that.
As everything in Unix is a file, I suspected that there might be a special file that always causes I/O errors when one tries to read it (e.g. in /dev).
Some similar files (imo) would be:
/dev/full which always says "No space left on device" if you try to write it
/dev/null and /dev/zero
so I assumed there just has to be a file like that (but haven't found one yet).
Does anyone know such a file or any other method for me to get the desired result (an intentionally faulty partition image, a wrapper around open() using LD_PRELOAD, ...)?
What's the best way to go here?
|
You can use dmsetup to create a device-mapper device using either the error or flakey targets to simulate failures.
dmsetup create test --table '0 123 flakey 1 0 /dev/loop0'
Where 123 is the length of the device in sectors, and /dev/loop0 is the original device on which you want to simulate errors. For error, you don't need the subsequent arguments, as it always returns an error.
| Special File that causes I/O error |
1,429,636,472,000 |
#!/bin/bash
function back()
{
sleep $1
exit $2
}
back $1 $2 &
b=$!
if `wait $!`;then
echo success
else
echo failure
fi
bash-3.00# ./back 300 0
failure
bash-3.00# ./back 300 1
failure
I was expecting success as exit status when I send 0, but I am still getting failure.
Also, wait doesn't wait for 300 seconds. Instead, I get the message immediately. I assume $! is the immediate child of $$ in my script. Isn't it?
Is it possible to capture the exit status of wait like exit_status=$(wait $!)?
if ! ((exit_status));then
echo sucess
else
failure
fi
|
The problem is that you're issuing wait in a subshell:
if `wait $!`;then
Because wait is a builtin, not a command, it's operating on the subshell, not your current shell.
The output that you would see but aren't is:
wait: pid 12344 is not a child of this shell
...with a return status of 1.
To perform your test you will need to do it without using a subshell.
#!/bin/bash
function back()
{
sleep $1
exit $2
}
back $1 $2 &
b=$!
wait $b && echo success || echo failure
This gives the result you expect, and waits as long as you expect:
$ time ./test.sh 3 0
success
./test.sh 3 0 0.00s user 0.01s system 0% cpu 3.012 total
$ time ./test.sh 3 1
failure
./test.sh 3 1 0.00s user 0.01s system 0% cpu 3.012 total
You can check the exit status of any command with $?:
$ /bin/true
$ echo $?
0
$ /bin/false
$ echo $?
1
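To the last part of the question: command substitution like exit_status=$(wait $!) is the wrong tool, because it runs wait in a subshell. Instead, run wait in the current shell and read $? — a minimal sketch:

```shell
# wait's own exit status is the background job's exit status.
(exit 3) &
pid=$!
wait "$pid"
status=$?
echo "background job exited with status $status"
```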
There were a couple of other errors in your script. Your #! line was malformed, which I fixed. You assign $! to $b, but don't use $b.
| Wait command usage in Linux? |
1,429,636,472,000 |
Issue
I have a Linux Mint installation. Every time I boot, I need to manually mount the two partitions on my computer (New Volume D and Drive C). If I don't do this, these drives don't show up anywhere. I want to know if there is some way to automate this process.
Goal
Automatically mounting all the partitions on the hard disk each time I boot.
Specs
Linux Mint 14 dual boot with Windows XP SP3
|
You can do this through the file /etc/fstab. Take a look at this link. This tutorial also has good details.
Example steps
First you need to find out the UUID of the hard drives. You can use the command blkid for this. For example:
% sudo blkid
/dev/sda1: TYPE="ntfs" UUID="A0F0582EF0580CC2"
/dev/sda2: UUID="8c2da865-13f4-47a2-9c92-2f31738469e8" SEC_TYPE="ext2" TYPE="ext3"
/dev/sda3: TYPE="swap" UUID="5641913f-9bcc-4d8a-8bcb-ddfc3159e70f"
/dev/sda5: UUID="FAB008D6B0089AF1" TYPE="ntfs"
/dev/sdb1: UUID="32c61b65-f2f8-4041-a5d5-3d5ef4182723" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb2: UUID="41c22818-fbad-4da6-8196-c816df0b7aa8" SEC_TYPE="ext2" TYPE="ext3"
The output from the blkid command above can be used to identify the hard drive when adding entries to /etc/fstab.
Next you need to edit the /etc/fstab file. The lines in this file are organized as follows:
UUID={YOUR-UID} {/path/to/mount/point} {file-system-type} defaults,errors=remount-ro 0 1
Now edit the file:
% sudo vi /etc/fstab
And add a file like this, for example:
UUID=41c22818-fbad-4da6-8196-c816df0b7aa8 /disk2p2 ext3 defaults,errors=remount-ro 0 1
Save the file and then reprocess the file with the mount -a command.
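As a sanity check before editing, you can compose the line in the shell first (the UUID and mount point here are the example's, purely illustrative):

```shell
# Build the fstab line for the second data disk from its UUID.
uuid="41c22818-fbad-4da6-8196-c816df0b7aa8"
line="UUID=$uuid /disk2p2 ext3 defaults,errors=remount-ro 0 1"
echo "$line"
# Append with: echo "$line" | sudo tee -a /etc/fstab   (then: sudo mount -a)
```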
Windows partitions
To mount an ntfs partition you'll need to do something like this in your /etc/fstab file:
/dev/sda2 /mnt/excess ntfs-3g permissions,locale=en_US.utf8 0 2
| Mounting all partitions on hard disk automatically on Linux Mint |
1,396,055,476,000 |
For example, the cut command can take a parameter -f, which according to man
select only these fields; also print any line that contains no
delimiter character, unless the -s option is specified
In this context, what is a field?
|
The term "field" is often times associated with tools such as cut and awk. A field would be similar to a columns worth of data, if you take the data and separate it using a specific character. Typically the character used to do this is a Space.
However as is the case with most tools, it's configurable. For example:
awk = awk -F"," ... - would separate by commas (i.e. ,).
cut = cut -d"," ... - would separate by commas (i.e. ,).
Examples
This first one shows how awk automatically will split on spaces.
$ echo "The rain in Spain." | awk '{print $1" "$4}'
The Spain.
This one shows how cut will split on spaces too.
$ echo "The rain in Spain." | cut -d" " -f1,4
The Spain.
Here we have a CSV list of column data that we're using cut to return columns 1 & 4.
$ echo "col1,col2,col3,co4" | cut -d"," -f1,4
col1,co4
Awk too can do this:
$ echo "col1,col2,col3,co4" | awk -F"," '{print $1","$4}'
col1,co4
Awk is also a little more adept at dealing with a variety of separation characters. Here it's dealing with Tabs along with Spaces where they're inter-mixed at the same time:
$ echo -e "The\t rain\t\t in Spain." | awk '{print $1" "$4}'
The Spain.
What about the -s switch to cut?
With respect to this switch, it's simply telling cut to not print any lines which do not contain the delimiter character specified via the -d switch.
Example
Say we had this file.
$ cat sample.txt
This is a space string.
This is a space and tab string.
Thisstringcontainsneither.
NOTE: There are spaces and tabs in the 2nd string above.
Now when we process these strings using cut with and without the -s switch:
$ cut -d" " -f1-6 sample.txt
This is a space string.
This is a space
Thisstringcontainsneither.
$ cut -d" " -f1-6 -s sample.txt
This is a space string.
This is a space
In the 2nd example you can see that the -s switch has omitted any strings from the output that do not contain the delimiter, Space.
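One more GNU cut convenience worth knowing (an extension, not POSIX): the delimiter can be rewritten on output.

```shell
# Keep fields 1 and 4, but join them with " | " instead of ",".
echo "col1,col2,col3,col4" | cut -d"," -f1,4 --output-delimiter=" | "
```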
| What constitutes a 'field' for the cut command? |
1,396,055,476,000 |
I have a program. When it is running, the CPU temperature rises from
50 to 80 degrees Celsius, which is my major concern.
I can control the CPU frequency to slow it down, but other processes
will be slowed down as well which I don't want.
Is it possible to slow down a particular process without affecting
other processes to keep the CPU cool?
My OS is Ubuntu 10.10.
I tried to set the priority of the process by nice -n 15
myprogram, and am not sure if that will work. The CPU is still at 77
degrees Celsius.
Does nice only set relative priority of a process wrt other
processes? I.e., if other processes are not running, will this niced
process run fast? I would like the process to run slowly the whole time.
|
CPULimit is exactly what you need. You start the program, then run cpulimit against the program name or PID, specifying what percentage you want it limited.
The following command limits the process at PID 7777 to 5% CPU usage.
cpulimit -p 7777 -l 5
Alternatively, you can use the name of the executable:
cpulimit -e myprogram -l 5
Or the absolute path of the executable:
cpulimit -P /path/to/myprogram -l 5
Note the percentage is of all cores; so if you have 4 cores, you could use 400%.
| Slow down just one process to regulate CPU temperature |
1,396,055,476,000 |
When I'm trying to connect to an x11vnc server started on Ubuntu 16.10 with
x11vnc
The "Screen Sharing" app on on OS X 10.11.6 just hangs.
How can I fix this?
|
If you want to connect to x11vnc server using "Screen Sharing" app on OS X, you need to tweak the x11vnc starting command:
x11vnc -display :0 -noxrecord -noxfixes -noxdamage -forever -passwd 123456
You can't use -ncache
You have to use -passwd
[source]
| How to connect to x11vnc server on Linux from OS X (macOS)? |
1,396,055,476,000 |
Is there any way to add an application/script to the Linux startup so every time the system boots it will be executed?
I am looking for some automated way, i.e. the user should not add this by cron job or something like that.
|
Something like Cron?
Note the @reboot entry
This is the most flexible approach, and the one most like Windows' "Scheduled Tasks" (better actually).
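For example, a per-user entry along these lines (the script path is hypothetical) runs once at every boot:

```
@reboot /usr/local/bin/my-startup-script.sh
```

Add it with crontab -e, or use /etc/crontab (which takes an extra user field) for a system-wide job.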
| What is the Linux equivalent of Windows Startup? |
1,396,055,476,000 |
I want to know whether there is an easier way to run a job every 25 minutes. In a cronjob, if you specify the minute parameter as */25, it'll run only on the 0th, 25th and 50th minutes of every hour.
|
The command in crontab is executed with /bin/sh so you can use arithmetic expansion to calculate whether the current minute modulo 25 equals zero:
*/5 * * * * [ $(( $(date +\%s) / 60 \% 25 )) -eq 0 ] && your_command
cron will run this entire entry every 5 minutes, but only if the current minute (in minutes since the epoch) modulo 25 equals zero will it run your_command.
As others have pointed out, 1 day is not evenly divisible by 25 minutes, so this will not cause your_command to run at the same time every day, but it will run every 25 minutes.
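You can sanity-check the arithmetic outside cron (the epoch values here are made up):

```shell
# 1500 s = 25 min since the epoch -> guard passes; 1800 s = 30 min -> it doesn't.
for secs in 1500 1800; do
    if [ $(( secs / 60 % 25 )) -eq 0 ]; then
        echo "$secs: would run"
    else
        echo "$secs: skipped"
    fi
done
```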
| CronJob every 25 minutes |
1,396,055,476,000 |
I have a Linux (Debian-based) server which is configured to allow SSH sessions for the user 'admin', but not the user 'root'. Both these accounts are linked somehow because they share the same password.
During an SSH session as admin, 'sudo' is required to run commands, unless I switch to the user 'root'.
I have some services on which I need to run now and then, or even at system startup. I'm currently using private/public key mechanism to remote execute commands on the server. Some of the commands are manually typed, others are shell scripts that I execute.
Currently the server still asks for a password when a command uses sudo.
Question:
How can remote execute as user 'admin' without supplying the password?
Is it possible to use a private/public key to satisfy sudo?
Or perhaps even a way to start shell scripts as the user 'root'?
Is it even possible to avoid having to type the password when using sudo? If not, are there other alternatives for situations like mine?
|
you can tell sudo to skip the password prompt for some commands.
e.g. in /etc/sudoers
archemar ALL = (www-data) NOPASSWD: /bin/rm -rf /var/www/log/upload.*
this allow me to use
sudo -u www-data /bin/rm -rf /var/www/log/upload.*
as archemar without password.
Note that
sudo -u www-data rm -rf /var/www/log/upload.*
won't work (it will ask for a password), as rm differs from /bin/rm. (*)
Be sure to edit /etc/sudoers using visudo command.
Once you've reached an advanced level, you might wish to have your own sudoers files in /etc/sudoers.d.
(*) This changed in modern OSes (Red Hat 7.x, circa 2022): if the rm in your path matches the /bin/rm in your sudoers entry, you may use plain rm.
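Applied to the question's setup, a hypothetical /etc/sudoers.d entry letting admin run one specific startup script as root without a password might read:

```
admin ALL = (root) NOPASSWD: /usr/local/bin/start-services.sh
```

Then ssh admin@server 'sudo /usr/local/bin/start-services.sh' runs without a password prompt; remember to always edit such files with visudo.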
| How to remote execute ssh command a sudo command without password |
1,396,055,476,000 |
I have an ubuntu 15.10 server which utilizes wpa_supplicant to connect to wireless network profiles created with wpa_passphrase. On a fresh reboot, the first time I call sudo wpa_supplicant -B -i wlp2s0 -c ./MVS (where MVS is the name of a saved profile for a network) I get the output
Successfully initialized wpa_supplicant
Could not read interface p2p-dev-wlp2s0 flags: No such device
but the exit code is zero, and I can confirm that I am in fact connected to the wireless network by running sudo iw wlp2s0 link
However, subsequent calls to wpa_supplicant (for the other profiles or even the same one) yield a more verbose output:
Successfully initialized wpa_supplicant
Could not read interface p2p-dev-wlp2s0 flags: No such device
nl80211: Could not set interface 'p2p-dev-wlp2s0' UP
nl80211: deinit ifname=p2p-dev-wlp2s0 disabled_11b_rates=0
p2p-dev-wlp2s0: Failed to initialize driver interface
P2P: Failed to enable P2P Device interface
wpa_supplicant still returns an exit code of zero, but the wireless device is most definitely not connected to any network this time. Any advice or thoughts would be greatly appreciated, I don't know how to debug this or fix it.
|
I'm embarrassed to say the solution was to kill the already running wpa_supplicant process. The -B argument causes the program to fork into the background, and trying to run it again will fail as long as it is already running. I'm still not sure why it prints that first error message, but it connects to wireless networks without issue.
sudo killall wpa_supplicant
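A guard like this sketch makes the mistake harder to repeat by refusing to start a second instance:

```shell
# Only start wpa_supplicant if no instance is already running.
if pgrep -x wpa_supplicant >/dev/null 2>&1; then
    msg="wpa_supplicant already running; kill it first (sudo killall wpa_supplicant)"
else
    msg="wpa_supplicant not running; safe to start"
fi
echo "$msg"
```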
| Subsequent calls to wpa_supplicant fail - can't connect to wifi |
1,396,055,476,000 |
I'm working on an embedded Linux project where I will be developing a program that will run automatically on bootup and interact with the user via a character display and some sort of button array. If we go with a simple GPIO button array, I can easily write a program that will look for keypresses on those GPIO lines. However, one of our thoughts was to use a USB number pad device instead for user input. My understanding is that those devices will present themselves to the OS as a USB keyboard. If we go down this path, is there a way for my program to look for input on this USB keyboard from within Linux, keeping in mind that there is no virtual terminal or VGA display? When a USB keyboard is plugged in, is there an entity in '/dev' that appears that I can open a file descriptor for?
|
Devices most likely get a file in /dev/input/ named eventN, where N distinguishes the various devices: mouse, keyboard, jack, power buttons, etc.
ls -l /dev/input/by-{path,id}/
should give you a hint.
Also look at:
cat /proc/bus/input/devices
Where Sysfs value is path under /sys.
You can test by e.g.
cat /dev/input/event2 # if 2 is kbd.
To implement use ioctl and check devices + monitor.
EDIT 2:
OK. I'm expanding on this answer based on the assumption /dev/input/eventN is used.
One way could be:
At startup, loop over all event files found in /dev/input/. Use ioctl() to request event bits:
ioctl(fd, EVIOCGBIT(0, sizeof(evbit)), &evbit);
then check if EV_KEY-bit is set.
IFF set then check for keys:
ioctl(fd, EVIOCGBIT(EV_KEY, sizeof(keybit)), &keybit);
E.g. if number-keys are interesting, then check if bits for KEY_0 - KEY9 and KEY_KP0 to KEY_KP9.
IFF keys found then start monitoring event file in thread.
Back to 1.
This way you should get to monitor all devices that meet the wanted criteria. You can't only check for EV_KEY as e.g. power-button will have this bit set, but it obviously won't have KEY_A etc. set.
Have seen false positives for exotic keys, but for normal keys this should suffice. There is no direct harm in monitoring e.g. the event file for a power button or a jack, but those won't emit the events in question (aka. bad code).
More in detail below.
EDIT 1:
In regards to "Explain that last statement …". Going over in stackoverflow land here … but:
A quick and dirty sample in C. You'll have to implement various code to check that you actually get correct device, translate event type, code and value. Typically key-down, key-up, key-repeat, key-code, etc.
I haven't time (and it is too much for here) to add the rest.
Check out linux/input.h, programs like dumpkeys, kernel code etc. for mapping codes. E.g. dumpkeys -l
Anyhow:
Run as e.g.:
# ./testprog /dev/input/event2
Code:
#include <stdio.h>
#include <string.h> /* strerror() */
#include <errno.h> /* errno */
#include <fcntl.h> /* open() */
#include <unistd.h> /* close() */
#include <sys/ioctl.h> /* ioctl() */
#include <linux/input.h> /* EVIOCGVERSION ++ */
#define EV_BUF_SIZE 16
int main(int argc, char *argv[])
{
int fd, sz;
unsigned i;
/* A few examples of information to gather */
unsigned version;
unsigned short id[4]; /* or use struct input_id */
char name[256] = "N/A";
struct input_event ev[EV_BUF_SIZE]; /* Read up to N events at a time */
if (argc < 2) {
fprintf(stderr,
"Usage: %s /dev/input/eventN\n"
"Where X = input device number\n",
argv[0]
);
return EINVAL;
}
if ((fd = open(argv[1], O_RDONLY)) < 0) {
fprintf(stderr,
"ERR %d:\n"
"Unable to open `%s'\n"
"%s\n",
errno, argv[1], strerror(errno)
);
}
/* Error check here as well. */
ioctl(fd, EVIOCGVERSION, &version);
ioctl(fd, EVIOCGID, id);
ioctl(fd, EVIOCGNAME(sizeof(name)), name);
fprintf(stderr,
"Name : %s\n"
"Version : %d.%d.%d\n"
"ID : Bus=%04x Vendor=%04x Product=%04x Version=%04x\n"
"----------\n"
,
name,
version >> 16,
(version >> 8) & 0xff,
version & 0xff,
id[ID_BUS],
id[ID_VENDOR],
id[ID_PRODUCT],
id[ID_VERSION]
);
/* Loop. Read event file and parse result. */
for (;;) {
sz = read(fd, ev, sizeof(struct input_event) * EV_BUF_SIZE);
if (sz < (int) sizeof(struct input_event)) {
fprintf(stderr,
"ERR %d:\n"
"Reading of `%s' failed\n"
"%s\n",
errno, argv[1], strerror(errno)
);
goto fine;
}
/* Implement code to translate type, code and value */
for (i = 0; i < sz / sizeof(struct input_event); ++i) {
fprintf(stderr,
"%ld.%06ld: "
"type=%02x "
"code=%02x "
"value=%02x\n",
ev[i].time.tv_sec,
ev[i].time.tv_usec,
ev[i].type,
ev[i].code,
ev[i].value
);
}
}
fine:
close(fd);
return errno;
}
EDIT 2 (continued):
Note that if you look at /proc/bus/input/devices you have a letter at start of each line. Here B means bit-map. That is for example:
B: PROP=0
B: EV=120013
B: KEY=20000 200 20 0 0 0 0 500f 2100002 3803078 f900d401 feffffdf ffefffff ffffffff fffffffe
B: MSC=10
B: LED=7
Each of those bits corresponds to a property of the device; in the bit-map, 1 indicates the property is present, as defined in linux/input.h. :
B: PROP=0 => 0000 0000
B: EV=120013 => 0001 0010 0000 0000 0001 0011 (Event types sup. in this device.)
| | | ||
| | | |+-- EV_SYN (0x00)
| | | +--- EV_KEY (0x01)
| | +------- EV_MSC (0x04)
| +----------------------- EV_LED (0x11)
+--------------------------- EV_REP (0x14)
B: KEY=20... => OK, I'm not writing out this one as it is a bit huge.
B: MSC=10 => 0001 0000
|
+------- MSC_SCAN
B: LED=7 => 0000 0111 , indicates what LED's are present
|||
||+-- LED_NUML
|+--- LED_CAPSL
+---- LED_SCROLL
Have a look at /drivers/input/input.{h,c} in the kernel source tree. A lot of good code there. (E.g. the device's properties are produced by this function.)
Each of these property maps can be attained by ioctl. For example, if you want to check what LED properties are available say:
ioctl(fd, EVIOCGBIT(EV_LED, sizeof(ledbit)), &ledbit);
Look at definition of struct input_dev in input.h for how ledbit are defined.
To check status for LED's say:
ioctl(fd, EVIOCGLED(sizeof(ledbit)), &ledbit);
If bit 1 in ledbit is 1 then num-lock is lit. If bit 2 is 1 then caps lock is lit, etc.
input.h has the various defines.
Notes when it comes to event monitoring:
Pseudo-code for monitoring could be something in the direction of:
WHILE TRUE
READ input_event
IF event->type == EV_SYN THEN
IF event->code == SYN_DROPPED THEN
Discard all events including next EV_SYN
ELSE
This marks EOF current event.
FI
ELSE IF event->type == EV_KEY THEN
SWITCH ev->value
CASE 0: Key Release (act accordingly)
CASE 1: Key Press (act accordingly)
CASE 2: Key Autorepeat (act accordingly)
END SWITCH
FI
END WHILE
Some related documents:
Documentation/input/input.txt, esp. note section 5.
Documentation/input/event-codes.txt, description of various events etc. Take note to what is mentioned under e.g. EV_SYN about SYN_DROPPED
Documentation/input ... read up on the rest if you want.
| Is it possible for a daemon (i.e. background) process to look for key-presses from a USB keyboard? |
1,396,055,476,000 |
I am wondering what exactly the “Namespaces support” feature in the Linux kernel means. I am using kernel 3.11.1 (the newest stable kernel at this time).
If I decide to disable it, will I notice any change on my system?
And in case somebody decides to make use of namespaces, is it enough to just compile NAMESPACES=Y in the kernel, or does he need userspace tools as well?
|
In a nutshell, namespaces provide a way to build a virtual Linux system inside a larger Linux system. This is different from running a virtual machine that runs as an unprivileged process: the virtual machine appears as a single process in the host, whereas processes running inside a namespace are still running on the host system.
A virtual system running inside a larger system is called a container. The idea of a container is that processes running inside the container believe that they are the only processes in the system. In particular, the root user inside the container does not have root privileges outside the container (note that this is only true in recent enough versions of the kernel).
Namespaces virtualize one feature at a time. Some examples of types of namespaces are:
User namespaces — this allows processes to behave as if they were running as different users inside and outside the namespace. In particular, processes running as UID 0 inside the namespace have superuser privileges only with respect to processes running in the same namespace.
Since Linux kernel 3.8, unprivileged users can create user namespaces. This allows an ordinary user to make use of features that are reserved to root (such as changing routing tables or setting capabilities).
PID namespaces — processes inside a PID namespace cannot kill or trace processes outside that namespace.
Mount namespaces — this allows processes to have their own view of the filesystem. This view can be a partial view, allowing some pieces of the filesystem to be hidden and pieces to be recomposed so that directory trees appear in different places. Mount namespaces generalize the traditional Unix feature chroot, which allows processes to be restricted to a particular subtree.
Network namespaces — allow separation of networking resources (network devices) and thus enhance isolation of processes.
Namespaces rely on the kernel to provide isolation between namespaces. This is quite complicated to get right, so there may still be security bugs lying around. The risk of security bugs would be the primary reason not to enable the feature. Another reason not to enable it would be when you're making a small kernel for an embedded device. In a general-purpose kernel that you'd install on a typical server or workstation, namespaces should be enabled, like any other mature kernel feature.
There are still few applications that make use of namespaces. Here are a few:
LXC is well-established. It relies on cgroups to provide containers.
virt-sandbox is a more recent sandboxing project.
Recent versions of Chromium also use namespaces for sandboxing where available.
The uWSGI framework for clustered applications uses namespaces for improved sandboxing.
See the LWN article series by Michael Kerrisk for more information.
| kernel: Namespaces support |
1,396,055,476,000 |
I'd like to assign a friendly name to a port number, how should I do it?
For example: I'd like "0.0.0.0:my-service-name" translates to "0.0.0.0:1234"
|
Yes, you can do this, by adding your port definition to /etc/services. For a TCP service, you’d add
my-service-name 1234/tcp
Once that’s done, you’ll be able to write “0.0.0.0:my-service-name” instead of “0.0.0.0:1234”.
The canonical list of services is maintained by the IANA, but you can add local definitions; you might even see a “# Local services” comment at the end of your /etc/services file already.
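The /etc/services format itself is simple: a service name, then port/protocol, with optional aliases and # comments. As an illustration (a sketch, not part of the original answer), here is a small parser for lines in that format:

```python
def parse_services(text):
    """Parse /etc/services-style lines into {name: (port, protocol)}."""
    services = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        fields = line.split()
        name, port_proto = fields[0], fields[1]
        port, proto = port_proto.split("/")
        services[name] = (int(port), proto)
    return services

sample = "my-service-name 1234/tcp  # local service\n"
print(parse_services(sample))  # → {'my-service-name': (1234, 'tcp')}
```

In real code you would normally not parse the file yourself but call the resolver, e.g. Python's socket.getservbyname("my-service-name", "tcp"), which consults the same database.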
| How to assign a friendly name to a port number in Linux? |
1,396,055,476,000 |
I want to check the dialect version in SMB connections.
On Windows, Get-SmbConnection will get it.
PS C:\Windows\system32> Get-SmbConnection
ServerName ShareName UserName Credential Dialect NumOpens
---------- --------- -------- ---------- ------- -------
savdal08r2 c$ SAVILLTEC... SAVILLTEC... 2.10 1
savdalfs01 c$ SAVILLTEC... SAVILLTEC... 3.00 1
on macOS, smbutil statshares -a works well.
What should I do on linux?
|
If you are running a Samba server on Linux, smbstatus should show the protocol version used by each client.
If Linux is the client, it depends on which client you're using. With the kernel-level cifs filesystem support, on all but quite new kernels, you look in /proc/mounts to see whether the mount options for that filesystem include a vers= option; if not, assume it uses SMB 1.
SMB protocol autonegotiation in the kernel-level CIFS/SMB support is a rather recent development, and as far as I know, the autonegotiation would only report its result if you enabled CIFS debug messages. Fortunately, the developers have since made the negotiated version always appear in /proc/mounts.
If you use smbclient or other userspace SMB/CIFS clients (e.g. one integrated to your desktop environment), then it might have its own tools and diagnostics.
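To make the /proc/mounts check concrete: each line there has the form "device mountpoint fstype options dump pass", and for a cifs mount the dialect shows up as a vers= entry in the options field. A minimal sketch (not from the original answer; the server name, path, and options below are made up for illustration):

```python
def smb_dialect(mount_line):
    """Given one /proc/mounts line, return the vers= option for a cifs
    mount, or None (e.g. older kernels that defaulted to SMB 1 without
    recording the version)."""
    fields = mount_line.split()
    if len(fields) < 4 or fields[2] != "cifs":
        return None
    for opt in fields[3].split(","):  # options are comma-separated
        if opt.startswith("vers="):
            return opt[len("vers="):]
    return None

line = "//server/share /mnt/share cifs rw,vers=3.0,username=alice 0 0"
print(smb_dialect(line))  # → 3.0
```

On a live system you would feed it the real lines, e.g. smb_dialect(l) for each l in open("/proc/mounts").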
| How to check SMB connections and the dialect that is being used on linux? |