date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,425,396,741,000 |
This is quite related to this question, but since it does not really have any satisfactory answers I figured I could ask a new question.
This screenshot shows htop indicating one core with 100% utilization, but with no process using any large amount of cpu:
I assume this means that the kernel is using this much cpu for some unknown reason, but I haven't found a very good way of investigating this. (Looking into using eBPF for this now) I thought it might have something to do with my disk encryption and disk access, but iotop does not show any significant disk usage. I am running Arch Linux with a completely standard kernel.
The problem has appeared a couple of times lately and always goes away if I reboot, and always takes at least a couple of hours of on-time to appear.
Any ideas and suggestions for how to debug this or what the underlying cause could be would be very welcome.
Edit:
So this new screenshot shows htop set to display both kernel and user threads, but there is still no clear explanation for the high cpu usage:
Edit 2:
The following screenshot shows results from bpftrace when running bpftrace -e 'profile:hz:99 /cpu == 0/ { @[kstack] = count(); }'. It seems that the kernel is spending a lot of time in acpi_os_execute_deferred for some reason.
|
Finally found the answer. It turns out that the issue is the same as this question, with additional information here and here. None of these mention the problem with htop showing zero usage though, so that might be an unrelated problem.
As explained in the links above, the answer was to use sudo grep . -r /sys/firmware/acpi/interrupts/ and then, as root, echo "disable" > /sys/firmware/acpi/interrupts/gpe6D to disable the problematic interrupt (the one with the biggest count attached, in my case gpe6D).
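The grep step can be wrapped in a small helper that prints the noisiest GPE directly. This is only a sketch: the function name is made up, and it assumes the standard sysfs layout where each gpeXX counter file starts with its trigger count.

```shell
# Print "count name" for the GPE with the highest trigger count.
# Pass an alternative directory for testing; defaults to the real sysfs path.
busiest_gpe() {
  dir="${1:-/sys/firmware/acpi/interrupts}"
  for f in "$dir"/gpe[0-9A-Fa-f]*; do
    read -r count _ < "$f"            # first field of each file is the count
    printf '%s %s\n' "$count" "${f##*/}"
  done | sort -rn | head -n 1
}
```

On a system like the one described above this would print something like `12345 gpe6D`, pointing at the interrupt to disable.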
To figure out that this was the problem I used bpftrace, as explained in the question, to do a kernel stack trace and figure out where the cpu was spending time, and then bpftrace -e 'kprobe:acpi_ps_parse_aml /cpu == 0/ { printf("%d\n", tid); }' to find the kernel thread ID of one of the listed functions. It turns out the offending thread was kworker/0:3-kacpi_notify, and from some googling it turns out others have also had similar problems with this kernel thread.
| Htop shows one core at 100% cpu usage, but no processes are using much cpu |
1,425,396,741,000 |
I have an Ubuntu 16.04 based HTPC/media server that's running 24/7. For as long as I can remember using an official Ubuntu distro, I've always had issues with the avahi-daemon. The issue is discussed online fairly often. Some people decide to just delete the daemon; however, I actually need it, as I'm running a CUPS server and use Kodi as my AirPlay receiver.
The issue
mDNS/DNS-SD is inherently incompatible with unicast DNS zones .local. We strongly recommend not to use Avahi or nss-mdns in such a network setup. N.B.: nss-mdns is not typically bundled with Avahi and requires a separate download and install.
(avahi.org)
The symptoms are simple - after around 2-4 days of uptime the network connection will go down and this will be logged
Mar 17 18:33:27 15 avahi-daemon[1014]: Withdrawing address record for 192.168.1.200 on enp3s0.
Mar 17 18:33:27 15 avahi-daemon[1014]: Leaving mDNS multicast group on interface enp3s0.IPv4 with address 192.168.1.200.
Mar 17 18:33:27 15 avahi-daemon[1014]: Interface enp3s0.IPv4 no longer relevant for mDNS.
The network will go back up without issues if you physically reconnect the Ethernet plug, or if you reconnect software-side.
Possible solutions
There are three solutions listed on the official wiki, which has been non-functional since what appears to be June 2016, so I'm providing a non-direct archive.org link
1.) Edit /etc/nsswitch.conf from
"hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4"
to
hosts: files dns mdns4
2.) Modify /etc/avahi/avahi-daemon.conf
from
domain-name=.local
to
domain-name=alocal
3.) "Ask the administrator to move the .local zone" (as said on the wiki)
What I did
The first solution did not appear to work for me: the daemon still works, but the network goes down the same way as before (to be fair, the wiki does say "Your Mileage May Vary").
The second solution causes the daemon to seemingly function properly (nothing wrong if you look at the logs), but iOS devices fail to "see" the machine as a printer or an AirPlay receiver (as does iTunes on my Windows machine).
The third solution is tricky, because I'm not well versed in the ins and outs of how a network functions, and I'm not sure I actually tried it. Here's what I mean: on my Asus router running Asuswrt-Merlin I went into the settings subcategory /LAN/DHCP Server/Basic Config. There I set "RT-AC68U's Domain Name" to "lan" (a domain name I saw advised on the web, because it doesn't conflict with anything, unlike "local"). As far as I can understand, that's what "moving the .local zone" means. If this is in fact correct, then this solution does not work for me either.
Conclusion
So, what should I do? I've been battling with this problem for over 4 months now, and every answer online comes down to those I've already tried; frankly, I'm completely lost.
Thanks in advance!
|
So I tried changing the "host-name" parameter in "avahi-daemon.conf" to something that's not the machine's hostname, and I've been running for 2 weeks without any issues.
Maybe this had to do with the machine also running Samba, and Windows using the ".local" domain for its own purposes?
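For reference, the change amounts to something like this in the [server] section of the stock configuration file (the value shown is an arbitrary example; any name that differs from the machine's real hostname should do):

```
# /etc/avahi/avahi-daemon.conf
[server]
host-name=htpc-mdns
```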
| avahi-daemon and ".local" domain issues |
1,425,396,741,000 |
I want to connect several LANs located on remote buildings.
The "central" site has a Linux computer running OpenVPN. Each remote site also runs OpenVPN.
the central site has a LAN numbered 192.168.0.0/24
several remote sites are also numbered 192.168.0.0/24
I can't/won't/don't want to/whatever modify LAN numbering
I don't have control on most remote OpenVPNs
I then need to:
1. define virtual LANs
2. configure a 1:1 NAT for each site
3. the 1:1 NAT has to be configured on the central router
So each site is seen to have a 10.10.x.0/24 LAN.
When a computer wants to reach, say, 192.168.0.44 on site 12, it just has to send a packet to 10.10.12.44.
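That mapping can be sketched as a tiny shell helper (the function name is hypothetical; it just encodes the 10.10.&lt;site&gt;.&lt;host&gt; scheme described above):

```shell
# Map host H on a site's 192.168.0.0/24 LAN to its virtual 10.10.S.H address.
to_virtual() {
  ip=$1 site=$2
  host=${ip##*.}                        # keep only the last octet
  printf '10.10.%s.%s\n' "$site" "$host"
}

to_virtual 192.168.0.44 12              # prints 10.10.12.44
```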
Operating a VPN is not a problem for me. I currently connect 60+ sites. But I don't find a simple way to do this 1:1 NAT.
Here is an example of a packet sent from the central site to a remote site, and its response packet:
I did some tests with iptables NETMAP but I can't manage to make it work because I don't find a way to modify source+destination after routing decision.
I prefer to avoid the new --client-nat OpenVPN's feature.
Maybe I have to force routing with ip route ? Or to loop twice into the network stack with veth ?
Note : I don't want to use masquerade. Only 1/1 NAT.
EDIT :
It's not possible with a regular openVPN setup. Because a packet from a remote site is indistinguishable from a packet from another site : both have similar source and destination addresses, and both come from the same tun (or tap) interface. So it's not possible to source-NAT it.
Solution 1: do the NAT on the remote sites. Not possible in my case; I have to do it only on the central site.
Solution 2: set up one VPN for each remote site, so I'll have one tun for each. I think this can be OK. Not very memory efficient, but OK.
Solution 3: set up an (unencrypted) tunnel inside the VPN for each site. This will give one interface for each. Simple tunnels are not cross-platform (to my knowledge): GRE, ipip or sit are fine for Linux, but some distant sites run only a single Windows computer with OpenVPN installed on it, so setting up a simple tunnel there is impossible.
Another option is to use a more complicated tunnel (which?), but the overhead on the system and on the sysadmin may be bigger than having multiple VPNs.
Solution 4: compile the latest OpenVPN, because it includes a 1:1 NAT feature.
I will test this this week.
|
A very basic solution is:
1. use OpenVPN 2.3 or more (currently, the latest is 2.3-alpha) for server + clients
2. use the OpenVPN configuration option below
3. don't use anything else (no ipfilter, no tricks)
On the server side, you need to manually distribute VPN addresses (so no server option; you have to use ifconfig or ifconfig-push):
# /etc/openvpn/server.conf
ifconfig 10.99.99.1 10.99.99.2
route 10.99.99.0 255.255.255.0
push "route 10.99.99.0 255.255.255.0"
push "client-nat dnat 10.99.99.11 255.255.255.255 10.10.111.11"
push "client-nat dnat 10.99.99.12 255.255.255.255 10.10.112.12"
push "client-nat dnat 10.99.99.13 255.255.255.255 10.10.113.13"
The route, push "route ..." and client-nat lines are required if you want to communicate directly between routers (e.g. ping 10.99.99.1 from a distant site through the VPN). Otherwise you can discard them.
Now you have to choose a virtual network address. I kept the same one you used in your example: 10.10.0.0/16.
You allow routing for this :
# /etc/openvpn/server.conf
route 10.10.0.0 255.255.0.0
push "route 10.10.0.0 255.255.0.0"
Now you have to instruct the client to use the 1:1 NAT:
# /etc/openvpn/ccd/client_11
ifconfig-push 10.99.99.11 10.99.99.1
push "client-nat snat 10.99.99.11 255.255.255.255 10.10.111.11"
push "client-nat snat 192.168.0.0 255.255.255.0 10.10.11.0"
push "client-nat dnat 10.10.10.0 255.255.255.0 192.168.0.0"
iroute 10.10.11.0 255.255.255.0
iroute 10.10.111.0 255.255.255.0
The first line sets the remote router address. Beware of Windows drivers requiring special addresses.
The second and last lines allow the distant router to communicate from its 10.99.99.x interface.
The third and fourth lines do the source and destination 1:1 NAT.
The fifth line tells OpenVPN what to do with the corresponding packets.
This method allows connecting sites with identical (or different) LAN addresses, without any shadowed host.
| 1:1 NAT with several identical LANs |
1,425,396,741,000 |
I spent quite some time tracking down a problem in production recently, where a database server disappearing would cause a hang of up to 2 hours (long wait for a poll() call in the libpq client library) for a connected client. Digging into the problem, I realized that these kernel parameters should be adjusted way down in order for severed TCP connections to be noticed in a timely fashion:
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_retries2 = 15
The four values above are from an Ubuntu 12.04 machine, and it looks like these defaults are unchanged from current Linux kernel defaults.
These settings seem to be heavily biased towards keeping an existing connection open, and extremely stingy with keepalive probes. AIUI, the default tcp_keepalive_time of 2 hours means that when we're waiting for a response from a remote host, we will wait patiently for 2 hours before initiating a keepalive probe to verify our connection is still valid. And then, if the remote host does not respond to a keepalive probe, we retry those keepalive probes 9 times (tcp_keepalive_probes), spaced 75 seconds apart (tcp_keepalive_intvl), so that's an extra 11 minutes before we decide the connection is really dead.
This matches what I've seen in the field: for example, if I start a psql session connected to a remote PostgreSQL instance, with some query waiting on a response, e.g.
SELECT pg_sleep(30);
and then have the remote server die a horrible death (e.g. drop traffic to that machine), I see my psql session waiting for up to 2 hours and 11 minutes before it figures out its connection is dead. As you might imagine, these default settings cause serious problems for code which we have talking to a database during, say, a database failover event. Turning these knobs down has helped a lot! And I see that I'm not alone in recommending these defaults be adjusted.
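For reference, the worst-case detection time implied by these three knobs is simple arithmetic; here it is as a small sketch (the helper function is hypothetical, with defaults taken from the values above):

```shell
# keepalive_time + probes * interval = seconds until a dead peer is declared
keepalive_worst_case() {
  kat=${1:-7200} probes=${2:-9} intvl=${3:-75}
  echo $(( kat + probes * intvl ))
}

keepalive_worst_case            # 7875 s, i.e. the ~2h11m observed with psql
keepalive_worst_case 60 3 10    # a tuned-down setup notices within 90 s
```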
So my questions are:
How long have the defaults been like this?
What was the original rationale for making these TCP settings the default?
Do any Linux distros change these default values?
And any other history or perspective on the rationale for these settings would be appreciated.
|
RFC 1122 specifies in section 4.2.3.6 that the keep-alive period must not default to less than two hours.
| How were these Linux TCP default settings decided? |
1,425,396,741,000 |
bcache allows one or more fast disk drives such as flash-based solid state drives (SSDs) to act as a cache for one or more slower hard disk drives.
If I understand correctly,
an SSD* could be assigned to cache multiple backing HDDs, and then the resulting cached devices could be RAIDed with mdadm
or
multiple HDDs could be RAIDed into a single backing md device and the SSD assigned to cache that
I'm wondering which is the saner approach. It occurs to me that growing a RAID5/6 may be simpler with one or other technique, but I'm not sure which!
Are there good reasons (eg growing the backing storage or anything else) for choosing one approach over the other (for a large non-root filesystem containing VM backing files)?
* by "an SSD" I mean some sort of redundant SSD device, eg a RAID1 of two physical SSDs
|
I think caching the whole md device makes the most sense.
Putting bcache in front of the whole md device works against the idea of having RAID, because it introduces another single point of failure.
On the other hand, failures of SSDs are relatively rare, and bcache can be put into writethrough/writearound mode (in contrast to writeback mode), where no data is stored only on the cache device and a failure of the cache doesn't destroy the information in the RAID, which makes it a relatively safe option.
Another point is that soft RAID-5 carries significant computational overhead: if you cache each spinning RAID member separately, the computer still has to recalculate all the parities, even on cache hits.
Obviously, you'd also sacrifice some expensive SSD space if you cached each spinning drive separately, unless you plan to use a RAIDed SSD cache.
Neither option greatly affects the duration of the growing process, although caching the spinning drives separately has the potential to be slower due to more bus traffic.
It is a fast and relatively simple process to configure bcache to remove the SSD when you need to replace it. Thanks to the way bcache is structured, it should be possible to migrate the RAID setup both ways in place.
You should also remember that at the moment most (all?) live-CD distributions don't support bcache, so you can't simply access your data with such tools, regardless of which bcache/md-RAID layout option you choose.
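A provisioning sketch for the md-then-bcache layout the answer favors. The device names are purely illustrative and the commands are destructive; don't run them against disks holding data:

```shell
# Build the array from the spinning disks first...
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
# ...then register it as the bcache backing device, and the SSD as cache.
make-bcache -B /dev/md0
make-bcache -C /dev/sdf
# Attach the cache set (UUID from `bcache-super-show /dev/sdf`) and format.
echo "<cache-set-uuid>" > /sys/block/bcache0/bcache/attach
mkfs.ext4 /dev/bcache0
```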
| bcache on md or md on bcache |
1,488,065,492,000 |
How can I create a file that even the root user can't delete?
|
Simple answer: You can't, root can do everything.
You can set the "i" (immutable) attribute with chattr (at least on ext{2,3,4}), which makes a file unchangeable, but root can just unset the attribute and delete the file anyway.
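A quick demonstration of both halves of that statement (requires root and an ext2/3/4 filesystem):

```shell
touch precious
chattr +i precious     # set the immutable attribute
rm -f precious         # fails with "Operation not permitted", even for root
chattr -i precious     # ...but root can simply clear the attribute again
rm precious            # and now the delete succeeds
```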
More complex (and ugly hackish workaround):
Put the directory you want to be unchangeable by root on a remote server and mount it via NFS or SMB. If the server does not offer write permissions, that locks out the local root account. Of course the local root account could just copy the files over locally, unmount the remote share, put the copy in place and change that.
You cannot lock root out of deleting your files. If you cannot trust your root to keep files intact, you have a social problem, not a technical one.
| How to create a file even root user can't delete it |
1,488,065,492,000 |
Which distribution is the one in the picture below? More precisely, in which distribution can I find that top bar with the navigation numbers on the left?
|
Some random distro that happens to be running i3 window manager.
https://i3wm.org/
Per the i3wm site, the window manager is distributed in Debian, Arch, Gentoo, Ubuntu, FreeBSD, NetBSD, OpenBSD, openSUSE, Mageia, Fedora, Exherbo, PiBang and Slackware.
| Which window manager or desktop environment is in this image? |
1,488,065,492,000 |
When I run top, it shows CPU 0-7. When I do:
cat /proc/cpuinfo | grep "cpu cores" | uniq
I get:
cpu cores : 4
If I grep "physical id" I have 1.
I am thinking my command is wrong and top is right. This is not a VM; it is a physical server running RedHat. What am I doing wrong?
I am not sure these answer it:
How to know number of cores of a system in Linux?
Number of processors in /proc/cpuinfo
Edit: Am I correct that if "physical id" only shows 1, then I have one physical chip on the motherboard?
Edit: It is an Intel(R) Xeon(R) CPU X5560 @ 2.80GHz, but the physical id is 1; I thought it would be 0, but there is no physical id 0 in cpuinfo.
Edit: If it matters, I am trying to figure out licensing where they do .5 the core count.
|
What CPU are you using? How many threads are there per physical core?
The cpu cores field in /proc/cpuinfo shows the number of physical cores, whereas top shows the total number of threads (logical CPUs) present.
I think your CPU has 4 physical cores with 2 threads per physical core, so top shows 8.
Moreover, the contents of /proc/cpuinfo are somewhat implementation dependent; in a rooted Android shell, for instance, the cpuinfo file doesn't contain the term cpu cores at all.
However, in cpuinfo each thread is listed as processor : X, where X is the thread number, so the last thread number should match the top/htop output.
The result of nproc --all should also be consistent with top/htop.
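A few standard commands make the cross-check explicit (lscpu ships with util-linux, nproc with coreutils):

```shell
# Topology summary: sockets x cores-per-socket x threads-per-core
lscpu | grep -E '^(Socket|Core|Thread|CPU\(s\))'
# Total logical CPUs - the number top/htop displays
nproc --all
# Same count, taken directly from the per-thread entries in cpuinfo
grep -c '^processor' /proc/cpuinfo
```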
| Why does top show a different number of cores than cpuinfo? |
1,488,065,492,000 |
When using the strace command with the -T flag, what is the time unit used to display the time spent in syscalls? I assume it should be seconds, but I am not quite sure, and it seems to be omitted from the manual.
|
From the source code:
if (Tflag) {
ts_sub(ts, ts, &tcp->etime);
tprintf(" <%ld.%06ld>",
(long) ts->tv_sec, (long) ts->tv_nsec / 1000);
}
This means that the time is shown in seconds, with microseconds (calculated from the nanosecond value) after the decimal point.
| What is the time unit that strace uses when displaying time spent in syscalls? |
1,488,065,492,000 |
I have a directory where lots of cached files are getting generated very quickly. Since these are very small files, it is consuming all my inodes very quickly.
Currently I am running the following command to find all the files older than 12 hours and delete them.
$ find ./cache -mtime +0.5 -exec rm {} \;
But the rate at which this command deletes files is slower than the rate at which files are being generated. Can someone tell me an alternative way to remove a large number of files quickly?
|
Try using xargs:
find ./cache -mtime +0.5 -print0 | xargs -0 rm -f
Update: explanation for @pradeepchhetri
If you use find with -exec, every file that find finds results in one invocation of rm. So if find matches a huge number of files, e.g. 10000 files, rm is invoked 10000 times.
xargs treats the output of find as command arguments to rm, so xargs provides as many arguments as rm can handle at once, i.e. rm -f file1 file2 .... This makes far fewer fork calls and makes the program run faster.
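With GNU findutils there are two more ways to avoid the per-file fork; here they are as a self-contained sketch (a temporary directory stands in for the cache, and -mmin 720 is the 12-hour cutoff written in minutes):

```shell
cache=$(mktemp -d)                     # stand-in for ./cache
touch -d '2 days ago' "$cache/stale"   # much older than 12 hours
touch "$cache/fresh"                   # brand new

# -delete removes matches inside find itself: no rm process at all.
find "$cache" -type f -mmin +720 -delete
# The equivalent batching form (POSIX): find ... -exec rm -f {} +

ls "$cache"                            # only "fresh" is left
```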
| Faster way to delete large number of files [duplicate] |
1,488,065,492,000 |
When I run vmstat -s on my Linux box, I get stats like:
$ vmstat -s
16305800 total memory
16217112 used memory
9117400 active memory
6689116 inactive memory
88688 free memory
151280 buffer memory
I have skipped some of the details shown by this command.
I understand these terms: Active memory is memory that is being used by a particular process. Inactive memory is memory that was allocated to a process that is no longer running.
I just want to know: is there any way I can get the processes with which the inactive memory is associated? Because top and vmstat still show used memory as the sum of active and inactive memory, and I can see only the processes that are using active memory; which processes are using the inactive memory is still a question for me.
|
There are cases where looking at inactive memory is interesting, a high ratio of active to inactive memory can indicate memory pressure for example, but that condition is usually accompanied by paging/swapping which is easier to understand and observe. Another case is being able to observe a ramping up or saw-tooth for active memory over time – this can give you some forewarning of inefficient software (I've seen this with naïve software implementations exhibiting O(n) type behavior and performance degradation).
The file /proc/kpageflags contains a 64-bit bitmap for every physical memory page, you can get a summary with the program page-types which may come with your kernel.
Your understanding of active and inactive is incorrect however
active memory are pages which have been accessed "recently"
inactive memory are pages which have not been accessed "recently"
"recently" is not an absolute measure of time, but depends also on activity
and memory pressure (you can read some of the technical details in the free book Understanding the Linux Virtual Memory Manager, Chapter 10 is relevant here), or the kernel documentation (pagemap.txt).
Each list is stored as an LRU (more or less). Inactive memory pages are good candidates for writing to the swapfile, either pre-emptively (before free memory pages are required) or when free memory drops below a configured limit and free pages are (expected to be imminently) needed.
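For a quick system-wide view of those two LRU lists, /proc/meminfo reports their sizes directly (recent kernels further split them into anonymous and file-backed pages):

```shell
grep -E '^(Active|Inactive)' /proc/meminfo
```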
Either flag applies to pages allocated to running processes. With the exception of persistent or shared memory, all memory is freed when a process exits; it would be considered a bug otherwise.
This low level page flagging doesn't need to know the PID (and a memory page can have more than one PID with it mapped in any case), so the information required to provide the data you request isn't in one place.
To do this on a per-process basis you need to extract the virtual address ranges from /proc/PID/maps, convert them to PFNs (physical pages) with /proc/PID/pagemap, and index into /proc/kpageflags. It's all described in pagemap.txt, and takes about 60-80 lines of C. Unless you are troubleshooting the VM system, the numbers aren't very interesting. One thing you could do is count the inactive and swap-backed pages per process; these numbers should indicate processes which have a low RSS (resident) size compared with VSZ (total VM size). Another thing might be to infer a memory leak, but there are better tools for that task.
| Linux inactive memory |
1,488,065,492,000 |
I need this for a unit test. There's a function that does lstat on the file path passed as its parameter. I have to trigger the code path where the lstat fails (because the code coverage has to reach 90%)
The test can run only under a single user, therefore I was wondering if there's a file in Ubuntu that always exists, but normal users have no read access to it, or to its folder. (So lstat would fail on it unless executed as root.)
A non-existent file is not a solution, because there's a separate code path for that, which I'm already triggering.
EDIT: Lack of read access to the file alone is not enough; lstat can still be executed in that case. I was able to trigger the failure (on my local machine, where I have root access) by creating a folder in /root with a file in it, and setting permission 700 on the folder. So I'm searching for a file that is in a folder that is only accessible by root.
|
On modern Linux systems, you should be able to use /proc/1/fdinfo/0 (information for the file descriptor 1 (stdout) of the process of id 1 (init in the root pid namespace which should be running as root)).
You can find a list with (as a normal user):
sudo find /etc /dev /sys /proc -type f -print0 |
perl -l -0ne 'print unless lstat'
(remove -type f if you don't want to restrict to regular files).
/var/cache/ldconfig/aux-cache is another potential candidate if you only need to consider Ubuntu systems. It should work on most GNU systems as /var/cache/ldconfig is created read+write+searchable to root only by the ldconfig command that comes with the GNU libc.
| Is there a file that always exists and a 'normal' user can't lstat it? |
1,488,065,492,000 |
poweroff complains that it can't connect to systemd via DBus (of course, it's not alive). I did sync followed by kill $$, thinking that pid 1 dying would cue the kernel to poweroff, but that caused a kernel panic. I then held the power button to force the poweroff.
What's the most proper way to power-off in this scenario?
|
Unmount the filesystems that you had mounted. The root filesystem is a special case; for this you can use mount / -o remount,ro. On Linux, umount / also happens to work, because it is effectively converted to the former command.
That said, you don't need to worry about unmounting too much, unless
You have mounted an old filesystem like FAT - as used by the EFI system partition - or ext2, which does not implement journalling or equivalent. With a modern filesystem, sync is supposed to be enough, and the filesystem will repair itself very quickly on the next boot.
You might have left a running process that writes to the filesystem, and you had intended to shut it down cleanly. In that case it's useful to attempt to umount the filesystems, because umount would fail and show a busy error to remind you about the remaining writer.
The above is the important part. After that, you can also conveniently power off the hardware using poweroff -f. Or reboot with reboot -f.
There is a systemd-specific equivalent of poweroff -f: systemctl poweroff -f -f. However poweroff -f does the same thing, and systemd supports this command even if it has been built without SysV compatibility.
Technically, I remember my USB hard drive was documented as requiring Windows "safe remove" or equivalent. But this requirement is not powerfail safe, and Linux does not do this during a normal shutdown anyway. It's better interpreted as meaning that you shouldn't jog the hard drive while it is spinning - including by trying to unplug it. A full power off should stop the drive spinning. You can probably hear, feel, or see if it does not stop :-).
| How to poweroff when there's no systemd/init (e.g. using init=/bin/bash)? |
1,488,065,492,000 |
Let's say that I've done a find for .gif files and got a bunch of files back. I now want to test them to see if they are animated GIFs. Can I do this via the command line?
I have uploaded a couple of examples in the following, in case you want to experiment on them.
Animated GIF image
Static GIF Image
|
This can easily be done using ImageMagick
identify -format '%n %i\n' -- *.gif
12 animated.gif
1 non_animated.gif
identify -format %n prints the number of frames in the GIF; for animated GIFs, this number is greater than 1.
(ImageMagick is probably readily available in your distro's repositories for an easy install)
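To cover subdirectories too (as the title asks) and reduce the listing to just the animated files, identify's output can be post-processed. A sketch: only_animated is a hypothetical helper, it assumes file names without whitespace, and it deduplicates in case your identify version emits one line per frame rather than one per file.

```shell
# Keep file names whose frame count exceeds 1, each printed once.
only_animated() { awk '$1 > 1 && !seen[$2]++ { print $2 }'; }

# Recursive usage:
#   find . -name '*.gif' -exec identify -format '%n %i\n' {} + | only_animated
```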
| Find all animated gif files in a directory and its subdirectories |
1,488,065,492,000 |
I am configuring security on my server. To make the firewall easier to manage, I installed UFW. I did some configuration in UFW and allowed some ports, but when I enabled it, the DNS service stopped responding.
I tried running dig www.domain.com.br to test DNS, but it did not succeed. The command runs without problems when UFW is disabled. I have already allowed port 53 (TCP and UDP), but DNS still does not work.
My UFW settings:
Status: active
Logging: on (low)
Default: deny (incoming), deny (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
21/tcp ALLOW IN Anywhere
16/tcp ALLOW IN Anywhere
443/tcp ALLOW IN Anywhere
80 ALLOW IN Anywhere
53 ALLOW IN Anywhere
465 ALLOW IN Anywhere
25/tcp ALLOW IN Anywhere
22 ALLOW IN Anywhere
21/tcp (v6) ALLOW IN Anywhere (v6)
16/tcp (v6) ALLOW IN Anywhere (v6)
443/tcp (v6) ALLOW IN Anywhere (v6)
80 (v6) ALLOW IN Anywhere (v6)
53 (v6) ALLOW IN Anywhere (v6)
465 (v6) ALLOW IN Anywhere (v6)
25/tcp (v6) ALLOW IN Anywhere (v6)
22 (v6) ALLOW IN Anywhere (v6)
|
I solved this problem. Since my UFW default policy denies outgoing traffic (see the settings above), I had to allow outgoing connections on port 53, the DNS service port:
sudo ufw allow out 53
| UFW is blocking DNS |
1,488,065,492,000 |
I was hiding some of the folders on my Ubuntu machine. By mistake, I hid the bin folder too, by using
cd /
mv bin .bin
Now I can cd into /.bin, but I am not able to unhide the bin directory. Can someone help? I was trying the following command:
mv .bin bin
I am getting the following error
bash: /bin/mv: No such file or directory
I tried to log in as root, but my machine is asking me to install login. On doing apt-get install login, I get the message that login is already the latest version.
|
If you still have a root shell open, run
cd /
/.bin/mv .bin bin
Your shell can’t find mv because it’s no longer on the path; giving the full path to it will allow it to run.
(As a general rule, it’s best not to rename directories outside of your home directory — they are managed by the package manager, and you are likely to confuse it and prevent updates from being applied in the future.)
| How to mv .bin bin |
1,488,065,492,000 |
Suppose I work in an environment with a number of domain changes and slow propagation.
I want to test the domain configuration immediately after a settings change, but the propagation is slow.
So I want to add more nameservers to the /etc/resolv.conf file on my Debian 6.0.3 laptop, in particular the name servers of my domain registrar.
I do it by adding:
append domain-name-servers 85.128.130.10, 194.204.152.34;
to my /etc/dhcp/dhclient.conf file. After reconnecting to the network my /etc/resolv.conf is updated properly, but with following message:
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
So is there a way to use more than 3 nameservers at a time?
|
Put only 127.0.0.1 as a name server in /etc/resolv.conf, and run a DNS cache locally. I recommend Dnsmasq, it's lightweight and easy to setup. On distributions such as Debian and Ubuntu, I also recommend installing the resolvconf package, which takes care of maintaining /etc/resolv.conf when you aren't running a local DNS cache, or of maintaining the DNS cache program's configuration when you are.
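A minimal Dnsmasq setup along those lines might look like this (the upstream addresses are the ones from the question; the libc limit of three doesn't apply here, since dnsmasq rather than the libc resolver talks to the upstreams):

```
# /etc/dnsmasq.conf
no-resolv                 # don't take upstreams from /etc/resolv.conf
server=85.128.130.10      # registrar's name servers; add as many
server=194.204.152.34     # more server= lines as you like
```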
| How to overcome libc resolver limitation of maximum 3 nameservers? |
1,488,065,492,000 |
When printing with Linux is there an alternative to using CUPS?
After using it for a couple of weeks, printing suddenly stopped working. The 'processing' light would come on and that would be the end. CUPS would list the job as finished. I cannot find any solution that works.
|
There are two traditional printing interfaces in the unix world: lp (the System V interface) and lpr (the BSD interface). There are three printing systems available on Linux: the traditional BSD lpr, the newer BSD system LPRng, and CUPS. LPRng and CUPS both provide a BSD-style interface and a System-V-style interface.
Nowadays, CUPS is the de facto standard printing system for unix; it's the default or only system under Mac OS X and most Linux distributions as well as recent versions of Solaris, and it's available as a package on all major BSD distributions. Nonetheless your distribution may provide lpr and LPRng, typically in packages with these names.
CUPS has better support for input and output filters (automatically converting various input format, giving access to printer features such as paper source selection and double-sided printing). If you install an alternative, you're likely to need to tune them quite a bit to get these extra features working. And there's no guarantee that these systems will work better than CUPS anyway. So I'd recommend fixing whatever's broken (given your description, it could be the printer itself!).
| Is there an alternative to CUPS? |
1,488,065,492,000 |
Zombie processes get created on Unix/Linux systems.
We can remove them via the kill command.
But is there any built-in clean-up mechanism in Linux to handle zombie processes?
|
Zombie processes are already dead. You cannot kill them. The kill command or system call has no effect on a zombie process. (You can make a zombie go away with kill, but you have to shoot the parent, not the zombie, as we'll see in a minute.)
A zombie process is not really a process, it's only an entry in the process table. There are no other resources associated with the zombie process: it doesn't have any memory or any running code, it doesn't hold any files open, etc.
When a process dies, the last thing to go, after all other resources are cleaned up, is the entry in the process table. This entry is kept around, forming a zombie, to allow the parent process to track the exit status of the child. The parent reads the exit status by calling one of the wait family of syscalls; at this point, the zombie disappears. Calling wait is said to reap the child, extending the metaphor of a zombie being dead but in some way still not fully processed into the afterlife. The parent can also indicate that it doesn't care (by ignoring the SIGCHLD signal, or by calling sigaction with the SA_NOCLDWAIT flag), in which case the entry in the process table is deleted immediately when the child dies.
Thus a zombie only exists when a process has died and its parent hasn't called wait yet. This state can only last as long as the parent is still running. If the parent dies before the child or dies without reading the child's status, the zombie's parent process is set to the process with PID 1, which is init. One of the jobs of init is to call wait in a loop and thus reap any zombie process left behind by its parent.
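The reaping step can be seen from the shell, which is itself a parent that must wait on its children; a minimal sketch:

```shell
#!/bin/sh
# Minimal sketch of reaping: the parent (this shell) starts a child and
# later calls `wait`, which reads the child's exit status and removes its
# process-table entry. If the parent never did this and then exited,
# init (PID 1) would reap the child instead.
sh -c 'exit 7' &    # child that exits with a distinctive status
child=$!
wait "$child"       # reap: collect the status, freeing the table entry
status=$?
echo "child $child reaped with status $status"
```

Between the child's exit and the `wait` call, the child would show up in ps output with state Z (zombie).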
| How does Linux handle zombie processes |
1,488,065,492,000 |
I can list them use
sudo yum list installed
but how to make them display when each were installed?
|
As root (or using sudo), use the yum option history.
[root@fedora ~]# yum history list
Loaded plugins: langpacks, presto, refresh-packagekit
ID | Command line | Date and time | Action(s) | Altered
-------------------------------------------------------------------------------
250 | -y update google-chrome- | 2013-01-30 18:02 | Update | 1 EE
249 | -y update | 2013-01-25 07:11 | Update | 22
248 | -y update | 2013-01-23 17:56 | Update | 12
247 | -y update | 2013-01-23 08:41 | Update | 9 EE
246 | -y update | 2013-01-20 21:49 | Update | 4
245 | -x kernel* update | 2013-01-07 08:11 | Update | 3
You can view the packages and changes for a specific yum transaction:
[root@fedora ~]# yum history info 250
Loaded plugins: langpacks, presto, refresh-packagekit
Transaction ID : 250
Begin time : Wed Jan 30 18:02:31 2013
Begin rpmdb : 1624:34a60f2e27ebe4d959f1473055da42645705b96f
End time : 18:02:59 2013 (28 seconds)
End rpmdb : 1624:f4ef7af3a97b1f922f41803ba6b9578a7abe3e71
User : User <user>
Return-Code : Success
Command Line : -y update google-chrome-stable.x86_64
Transaction performed with:
Installed rpm-4.9.1.3-1.fc16.x86_64 @updates
Installed yum-3.4.3-25.fc16.noarch @updates
Installed yum-metadata-parser-1.1.4-5.fc16.x86_64 @koji-override-0/$releasever
Installed yum-presto-0.7.1-1.fc16.noarch @koji-override-0/$releasever
Packages Altered:
Updated google-chrome-stable-24.0.1312.56-177594.x86_64 @google-chrome
Update 24.0.1312.57-178923.x86_64 @google-chrome
Scriptlet output:
1 Redirecting to /bin/systemctl start atd.service
You can view the history of specific packages with:
[root@fedora ~]# yum history packages-list yum
Loaded plugins: langpacks, presto, refresh-packagekit
ID | Action(s) | Package
-------------------------------------------------------------------------------
148 | Updated | yum-3.4.3-24.fc16.noarch EE
148 | Update | 3.4.3-25.fc16.noarch EE
94 | Updated | yum-3.4.3-23.fc16.noarch
94 | Update | 3.4.3-24.fc16.noarch
52 | Updated | yum-3.4.3-7.fc16.noarch
52 | Update | 3.4.3-23.fc16.noarch
2 | Updated | yum-3.4.3-5.fc16.noarch EE
2 | Update | 3.4.3-7.fc16.noarch EE
1 | Install | yum-3.4.3-5.fc16.noarch
man 8 yum or yum help history will list more options that are possible with the history option.
| How to list all the installed packages in Fedora with the time of installation |
1,488,065,492,000 |
> uname -r
FATAL: kernel too old
> cat /proc/cmdline
FATAL: kernel too old
There are 3 *.vmlinuz-linux files in /boot. How do I determine which kernel is currently running?
Note that I'm running in a limited environment with a minimal shell. I've also tried:
> sh -c 'read l < /proc/version; echo $l'
FATAL: kernel too old
> dd if=/proc/version
FATAL: kernel too old
Any thoughts?
|
You have upgraded your libc (the most basic system library) and now no program works. To be precise, no dynamically linked program works.
In your particular scenario, rebooting should work. The now-installed libc requires a newer kernel, and if you reboot, you should get that newer kernel.
As long as you still have a running shell, there's often a way to recover, but it can be tricky if you didn't plan for it. If you don't have a shell then usually there's no solution other than rebooting.
Here you may not be able to recover without rebooting, but you can at least easily find out what kernel is running. Just use a way to read /proc/version that doesn't require an external command.
read v </proc/version; echo $v
echo $(</proc/version) # in zsh/bash/ksh
If you still have a copy of the old libc around, you can run programs with it. For example, if the old libc is in /old/lib and you have executables that work with this old libc in /old/bin, you can run
LD_LIBRARY_PATH=/old/lib /old/lib/ld-linux.so.2 /old/bin/uname
If you have some statically linked binaries, they'll still work. I recommend installing statically linked system utilities for this kind of problem (but you have to do it before the problem starts). For example, on Debian/Ubuntu/Mint/…, install one or more of busybox-static (a collection of basic Linux command line tools including a shell), sash (a shell with some extra builtins), zsh-static (just a shell, but with quite a few handy tools built in).
busybox-static uname
sash -c '-cat /proc/version'
zsh-static -c '</proc/version'
| uname is broken: how do I determine the currently running kernel? |
1,488,065,492,000 |
On my personal machine, I often type sudo in front of certain commands in order to accomplish administrative tasks. I had hoped to avoid doing this throughout the day by typing su root and providing the same password I usually do for sudo. However, the two passwords are not the same (I don't know how to log in to su root). Is running a command with sudo different than logging in with su root and running the same command?
I think sudo and su root are the same, because when I type sudo whoami, I get root, as opposed to just whoami where I get my user-name.
|
Contrary to what their most common use would lead you to think, su and sudo are not just meant for logging in (or performing actions) as root.
su allows you to switch your identity with that of someone else. For this reason, when you type su, the system needs to verify that you have the credentials for the target user you're trying to change into.
sudo is a bit different. Using sudo allows you to run certain (or all, depending on the configuration) commands as someone else. Your own identity is used to determine what types of commands sudo will run for you under someone else's identity: if you're a trusted user (in the sense that the sysadmin trusts you), you'll be allowed more free rein than, say, an intern. This is why sudo needs to verify your own identity rather than that of the target user.
In other words, trying to su to someone you're not is like attempting to charge your purchases to a stolen credit card while using sudo is like selling your friend's car by legal proxy.
As for what you were trying to do, just sudo su root, or even more simply sudo su and type your regular user password. This would roughly amount to replacing your friend's credit card credentials with your own using the legal proxy they gave you :). It of course assumes the sudo configuration allows you to run su with escalated privileges.
Also, systems that come pre-configured with sudo access typically have the root account disabled (no root password); you can enable it by setting a password with the passwd command after becoming root via sudo su.
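For reference, the "trusted user vs. intern" distinction is expressed directly in the sudoers grammar; a sketch with hypothetical user names (always edit this file with visudo):

```
# /etc/sudoers excerpt (sketch; user names are hypothetical)
#  who     hosts = (run-as user : run-as group)  commands
alice    ALL=(ALL:ALL) ALL              # trusted: any command, as anyone
intern   ALL=(www-data) /usr/bin/whoami # narrow grant: one command, one identity
```

Here sudo checks alice's or intern's own password, then consults these rules to decide what it will run on their behalf.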
| Why is the 'sudo' password different than the 'su root' password |
1,488,065,492,000 |
Possible Duplicate:
Linux distribution geared towards developers
I don't know if this question qualifies to be a question to be asked here. But can anyone tell me what's the best (linux) distro out there for programmers? I program in multiple languages, including java and lisp. I would be happy to know of a distro that's good enough for programmers yet remaining small enough for a quick download+install. Any suggestions and constructive criticisms are welcome. This is my first question here, please help me out.
IDEs I use: Eclipse/Emacs/KDevelop
Languages : C, C++, Java, Python, Perl, Lisp
|
Well, the "best" thing about Linux is that it's up to you how you're going to use it, for whatever purpose, and it's all free :p
I would suggest you start with Debian: it's hairy enough to meet a beginner Linux programmer's needs on the software side, yet minimalistic and flexible enough for "users".
I would not suggest Ubuntu, since it's not free enough :P
| Best distro for programming [duplicate] |
1,488,065,492,000 |
root@macine:~# getcap ./some_bin
./some_bin =ep
What does "ep" mean? What are the capabilities of this binary?
|
# getcap ./some_bin
./some_bin =ep
That binary has ALL the capabilities permitted (p) and effective (e) from the start.
In the textual representation of capabilities, a leading = is equivalent to all=. From the cap_to_text(3) manpage:
In the case that the leading operator is =, and no list of capabilities is provided, the action-list is assumed to refer to all capabilities. For example, the following three clauses are equivalent to each other (and indicate a completely empty capability set): all=; =; cap_chown,<every-other-capability>=.
Such a binary can do whatever it pleases, limited only by the capability bounding set, which on a typical desktop system includes everything (otherwise setuid binaries like su wouldn't work as expected).
Notice that this is just a "gotcha" of the textual representation used by libcap: for a file whose security.capability extended attribute has all the meaningful bits turned on, getcap will print /file/path =ep; for an empty security.capability, /file/path = (with the = not followed by anything) will be printed instead.
If someone is still not convinced, here is a small experiment:
# cp /bin/ping /tmp/ping # will wipe setuid bits and extended attributes
# su user -c '/tmp/ping localhost'
ping: socket: Operation not permitted
# setcap =ep /tmp/ping
# su user -c '/tmp/ping localhost' # will work because of cap_net_raw
PING localhost(localhost (::1)) 56 data bytes
64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.073 ms
^C
# setcap = /tmp/ping
# su user -c '/tmp/ping localhost'
ping: socket: Operation not permitted
Notice that an empty file capability is also different from a removed capability (setcap -r /file/path); an empty file capability will block the Ambient set from being inherited when the file executes.
A subtlety of the =ep file capability is that if the bounding set is not a full one, then the kernel will prevent a program with =ep on it from executing (as described in the "Safety checking for capability-dumb binaries" section of the capabilities(7) manpage).
| What does the "ep" capability mean? |
1,488,065,492,000 |
I know that Ctrl+Alt+FX (X=1 to X=7) are 7 different ttys.
Suddenly, I tried to find out what other combinations Ctrl+Alt+FX (X=8 to X=12) leads to.
After pressing the combinations, I found a black screen with just a cursor blinking. Can somebody please explain what this means? After pressing Ctrl+Alt+F7 again, I can go back to X.
|
All Alt + F-key combinations lead to different virtual terminals or virtual consoles (they're also ttys, but not all ttys are virtual terminals/consoles).
If you're in X, you need to add Ctrl to that by default. This combination also works on the console these days, presumably to keep things consistent. Additionally, you can cycle through all of your allocated virtual consoles using Alt+← and Alt+→ (only works on the console). If you're running X, this will eventually lead you back to your X session.
The only difference is what's running on each terminal. Generally, the first few terminals allow you to log in. If your distribution uses init (i.e. not recent Ubuntus), you can change what terminals do that by editing /etc/inittab, then typing sudo init q to activate the new configuration. Search for ‘tty1’ and you'll find the right place. Or do man 5 inittab to get all the information.
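On a sysvinit system the relevant inittab lines look roughly like this (a sketch; the exact getty invocation varies by distribution):

```
# /etc/inittab excerpt (sketch)
# id:runlevels:action:process
1:2345:respawn:/sbin/getty 38400 tty1
2:2345:respawn:/sbin/getty 38400 tty2
# add or remove lines like these to change which consoles run login prompts
```

After editing, `sudo init q` (as described above) makes init re-read the file.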
Unused consoles
A black (or white, depending on your terminal setup and platform) screen with a cursor blinking (or not, depending on your terminal setup and platform :) ) means that particular virtual terminal isn't virtually connected to anything. You can activate it by sending it something. Just type
ls -la >/dev/tty8 # if you're root
ls -la | sudo tee /dev/tty8 # if you're not
Then, with Ctrl+Alt+F8, you should see the output of ls -la.
Virtual consoles may also run other things than getty (a terminal manager program that initialises a virtual/physical terminal or modem and runs login to ask for your username and password). On some installations, one of the consoles outputs system logs. On most installations, the kernel also outputs its critical messages (or, if you're really unlucky, all of its messages) to one or more of these consoles — it could be console 1, or it could be whichever console is active.
Unallocated Consoles
The kernel saves memory by allocating a new virtual console when it's first used. If a console is unallocated, pressing its key combination does nothing, and using Alt and the arrow keys skips past it. This may make it seem like only a few of the Alt and F-key combinations are mapped to consoles, when in fact they all are.
More consoles than you know what to do with
When I first read the kernel code pertaining to this functionality, I found the kernel supported up to 63 virtual consoles. If your keyboard has more than 12 function keys, additional consoles may be mapped to the extra ones. Also, additional consoles are mapped to various key combinations. On my Debian box, 36 consoles are mapped to three sets of F-key combinations:
Alt+F1 – Alt+F12: tty1 – tty12
AltGr+F1 – AltGr+F12: tty13 – tty24
AltGr+Shift+F1 – AltGr+Shift+F12: tty25 – tty36
The rest can be made accessible via custom keymapping or using Alt and the arrow keys.
Graphically Challenged
Having lots of consoles used to be very useful. Many of us used to develop code on the consoles, not X (X was quite heavy on my i486/33 with its 16 megs of RAM), so several high-resolution consoles would replace the tabs on a modern, graphical terminal.
| Ctrl+Alt+F8 meaning |
1,488,065,492,000 |
I have a laptop with dual boot elementaryOS Loki and Windows 10. Until recently everything was fine, but now suddenly the wifi in elementaryOS is extremely slow (~0.5Mbit download, most speed tests don't even start the upload test). With Ethernet, I get the normal 80 MBit download. I also tried it with Windows where it's still 25 MBit via Wifi.
Edit:
lspci -knn | grep Net -A2
01:00.0 Network controller [0280]: Intel Corporation Centrino Advanced-N 6235 [8086:088e] (rev 24)
Subsystem: Intel Corporation Centrino Advanced-N 6235 AGN [8086:4060]
Kernel driver in use: iwlwifi
uname -a
Linux tobias-530U3BI-530U4BI-530U4BH 4.10.0-38-generic #42~16.04.1-Ubuntu SMP Tue Oct 10 16:32:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
|
To ameliorate the connection through the intel wifi card you can:
Disable 802.11n
Enable software encryption
Enable the transmission antenna aggregation
Disable bluetooth coexistence
Create a /etc/modprobe.d/iwlwifi.conf with the following content:
options iwlwifi 11n_disable=1
options iwlwifi swcrypto=1
options iwlwifi 11n_disable=8
options iwlwifi bt_coex_active=0
Note that 11n_disable is a bitmask and appears twice here; a repeated module parameter only takes its last value, so to apply both bits at once (disable 802.11n and enable transmission antenna aggregation) use a single options iwlwifi 11n_disable=9 line instead.
iwlwifi troubleshooting on arch-linux
| Wifi suddenly extremely slow |
1,488,065,492,000 |
I am running Oracle Linux 7 (CentOS / RedHat based distro) in a VirtualBox VM on a Mac with OS X 10.10. I have a Synology Diskstation serving as an iscsi target.
I have successfully connected to the Synology, partitioned the disk and created a filesystem. It is referenced as /dev/sdb and the partition is /dev/sdb1. Now, what I would like to do is create a mount point so I can easily access it:
mount /dev/sdb1 /mnt/www
That command works. But obviously, it isn't persistent across a reboot. No worries...into /etc/fstab we go.
First, I got the UUID of the partition to ensure I am always using the correct device:
blkid /dev/sdb1
Result:
/dev/sdb1: UUID="723eb295-8fe0-409f-a75f-a26eede8904f" TYPE="ext3"
Now, I inserted the following line into my /etc/fstab
UUID=723eb295-8fe0-409f-a75f-a26eede8904f /mnt/www ext3 defaults 0 0
Upon reboot, the system crashes and goes into maintenance mode. If I remove the line I inserted, all works again. However, I am following the instructions verbatim from Oracle-Base
I know I am missing something..can anyone point me in the right direction?
|
Just replace the "defaults" parameter with "_netdev", like this:
UUID=723eb295-8fe0-409f-a75f-a26eede8904f /mnt/www ext3 _netdev 0 0
This way the mount point will be mounted only after the network has started correctly.
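If you also want the system to keep booting even when the iSCSI target is unreachable, the nofail option can be combined with _netdev (a sketch, reusing the UUID from the question):

```
UUID=723eb295-8fe0-409f-a75f-a26eede8904f  /mnt/www  ext3  _netdev,nofail  0  0
```

With nofail, a missing device is skipped at boot instead of dropping the system into maintenance mode.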
| Mount iscsi drive at boot - system halts |
1,488,065,492,000 |
We have RH based Linux images; on which I have to "apply" some "special archive" in order to upgrade them to the latest development version of our product.
The person creating the archive figured that within our base image, some permissions are wrong; so we were told to run
sudo chgrp -R nobody /whatever
We did that; and later on, when our application is running, obscure problems came up.
What I found later on: the call to chgrp will clear the setuid bit information on our binaries within /whatever.
And the actual problem is: some of our binaries must have that setuid bit set in order to function properly.
Long story short: is there a way to run that "chgrp" command without killing my setuid bits?
I just ran the following on my local Ubuntu; leading to the same result:
mkdir sticky
cd sticky/
touch blub
chmod 4755 blub
ls -al blub
--> shows me file name with red background --> so, yep, setuid
chgrp -R myuser .
ls -al blub
--> shows me file name without red background --> setuid is gone
|
If you want to implement your chgrp -R nobody /whatever while retaining the setuid bit you can use these two find commands
find /whatever ! -type l -perm -04000 -exec chgrp nobody {} + \
-exec chmod u+s {} +
find /whatever ! -type l ! -perm -04000 -exec chgrp nobody {} +
The find ... -perm -04000 option picks up files with the setuid bit set. The first command then applies the chgrp and then a chmod to reinstate the setuid bit that has been knocked off. The second one applies chgrp to all files that do not have a setuid bit.
In any case, you don't want to call chgrp or chmod on symlinks as that would affect their targets instead, hence the ! -type l.
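The effect and the repair can be reproduced safely in a scratch directory; a sketch mirroring the experiment from the question:

```shell
#!/bin/sh
# Reproduce the problem in a throwaway directory, then show the repair:
# chgrp typically knocks off the setuid bit, and chmod u+s puts it back.
dir=$(mktemp -d)
touch "$dir/blub"
chmod 4755 "$dir/blub"
echo "before chgrp:    $(stat -c %a "$dir/blub")"   # 4755: setuid set
chgrp "$(id -gn)" "$dir/blub"     # on Linux, chown/chgrp clears setuid
echo "after chgrp:     $(stat -c %a "$dir/blub")"
chmod u+s "$dir/blub"             # reinstate, as the find command above does
restored=$(stat -c %a "$dir/blub")
echo "after chmod u+s: $restored"
rm -rf "$dir"
```

The octal mode printed by stat makes the lost and restored setuid bit (the leading 4) easy to see.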
| How to prevent chgrp from clearing the “setuid bit”? |
1,488,065,492,000 |
On one of our MySQL masters, the OOM Killer got invoked and killed the MySQL server, which led to a big outage. Following is the kernel log:
[2006013.230723] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
[2006013.230733] Pid: 1319, comm: mysqld Tainted: P 2.6.32-5-amd64 #1
[2006013.230735] Call Trace:
[2006013.230744] [<ffffffff810b6708>] ? oom_kill_process+0x7f/0x23f
[2006013.230750] [<ffffffff8106bde2>] ? timekeeping_get_ns+0xe/0x2e
[2006013.230754] [<ffffffff810b6c2c>] ? __out_of_memory+0x12a/0x141
[2006013.230757] [<ffffffff810b6d83>] ? out_of_memory+0x140/0x172
[2006013.230762] [<ffffffff810baae8>] ? __alloc_pages_nodemask+0x4ec/0x5fc
[2006013.230768] [<ffffffff812fca02>] ? io_schedule+0x93/0xb7
[2006013.230773] [<ffffffff810bc051>] ? __do_page_cache_readahead+0x9b/0x1b4
[2006013.230778] [<ffffffff810652f8>] ? wake_bit_function+0x0/0x23
[2006013.230782] [<ffffffff810bc186>] ? ra_submit+0x1c/0x20
[2006013.230785] [<ffffffff810b4e53>] ? filemap_fault+0x17d/0x2f6
[2006013.230790] [<ffffffff810cae1e>] ? __do_fault+0x54/0x3c3
[2006013.230794] [<ffffffff812fce29>] ? __wait_on_bit_lock+0x76/0x84
[2006013.230798] [<ffffffff810cd172>] ? handle_mm_fault+0x3b8/0x80f
[2006013.230803] [<ffffffff8103a9a0>] ? pick_next_task+0x21/0x3c
[2006013.230808] [<ffffffff810168ba>] ? sched_clock+0x5/0x8
[2006013.230813] [<ffffffff81300186>] ? do_page_fault+0x2e0/0x2fc
[2006013.230817] [<ffffffff812fe025>] ? page_fault+0x25/0x30
This machine has 64GB RAM.
Following are the mysql config variables:
innodb_buffer_pool_size = 48G
innodb_additional_mem_pool_size = 512M
innodb_log_buffer_size = 64M
Except for some nagios plugins and metric collection scripts, nothing else runs on this machine. Can someone help me find out why the OOM killer got invoked, and how I can prevent it from being invoked in the future? Is there any way I can tell the OOM killer not to kill the MySQL server? I know we can set a process's oom_adj to a very low value to prevent it from getting killed by the OOM killer. But is there any other way to prevent this?
|
Linux does memory overcommit. That means it allows processes to request more memory than is really available on the system. When a program tries to malloc(), the kernel says "OK, you got the memory", but doesn't actually reserve it. The memory will only be reserved when the process writes something into this space.
To see the difference, you have 2 indicators: Virtual Memory and Resident Memory.
Virtual is the memory requested by the process, Resident is the memory really used by the process.
With this system, you may go into "overbooking": the kernel grants more memory than is available. Then, when your system reaches 0 bytes of free memory and swap, it must sacrifice (kill) a process to gain free memory.
That's when the OOM Killer goes into action. It selects a process based on its memory consumption and many other elements (a parent gains 1/2 of the score of its children; if it's a root-owned process, the score is divided by 4; etc.). Have a look at Linux-MM.org/OOM_Killer
You can influence the OOM scoring by tuning the /proc/MySQL_PID/oom_adj file. By setting it to -17, your process will never be killed. But before doing that, you should tweak your MySQL configuration file in order to limit MySQL's memory usage. Otherwise, the OOM Killer will kill other system processes (like SSH, crontab, etc.) and your server will be in a very unstable state, maybe leading to data corruption, which is worse than anything.
Also, you may consider using more swap.
[EDIT]
You may also change its overcommit behaviour via these two sysctls:
vm.overcommit_memory
vm.overcommit_ratio
As stated in Kernel Documentation
overcommit_memory:
This value contains a flag that enables memory overcommitment.
When this flag is 0, the kernel attempts to estimate the amount
of free memory left when userspace requests more memory.
When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.
When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.
Note that user_reserve_kbytes affects this policy.
This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.
The default value is 0.
See Documentation/vm/overcommit-accounting and
security/commoncap.c::cap_vm_enough_memory() for more information.
overcommit_ratio:
When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM. See above.
[/EDIT]
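Note that on current kernels oom_adj is deprecated in favour of /proc/PID/oom_score_adj, which ranges from -1000 to 1000, with -1000 exempting the process entirely. A small sketch for inspecting a process's current values (the current shell stands in for the MySQL PID):

```shell
#!/bin/sh
# Read the kernel's OOM scoring for a process. Writing a negative
# oom_score_adj for another process normally requires root.
pid=$$   # stand-in for the MySQL server's PID
score=$(cat "/proc/$pid/oom_score")
adj=$(cat "/proc/$pid/oom_score_adj")
echo "pid=$pid oom_score=$score oom_score_adj=$adj"
```

The modern equivalent of the oom_adj=-17 trick would be writing -1000 into the daemon's oom_score_adj file.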
| OOM Killer - killed MySQL server |
1,488,065,492,000 |
I would like to understand in detail the difference between fork() and vfork(). I was not able to digest the man page completely.
I would also like to clarify one of my colleagues' comments: "In current Linux, there is no vfork(); even if you call it, it will internally call fork()."
|
Man pages are usually terse reference documents. Wikipedia is a better place to turn to for conceptual explanations.
Fork duplicates a process: it creates a child process which is almost identical to the parent process (the most obvious difference is that the new process has a different process ID). In particular, fork (conceptually) must copy all the parent process's memory.
As this is rather costly, vfork was invented to handle a common special case where the copy is not necessary. Often, the first thing the child process does is to load a new program image, so this is what happens:
if (fork()) {
# parent process …
} else {
# child process (with a new copy of the process memory)
execve("/bin/sh", …); # discard the process memory
}
The execve call loads a new executable program, and this replaces the process's code and data memory by the code of the new executable and a fresh data memory. So the whole memory copy created by fork was all for nothing.
Thus the vfork call was invented. It does not make a copy of the memory. Therefore vfork is cheap, but it's hard to use since you have to make sure you don't access any of the process's stack or heap space in the child process. Note that even reading could be a problem, because the parent process keeps executing. For example, this code is broken (it may or may not work depending on whether the child or the parent gets a time slice first):
if (vfork()) {
# parent process
cmd = NULL; # modify the only copy of cmd
} else {
# child process
execve("/bin/sh", "sh", "-c", cmd, (char*)NULL); # read the only copy of cmd
}
Since the invention of vfork, better optimizations have been invented. Most modern systems, including Linux, use a form of copy-on-write, where the pages in the process memory are not copied at the time of the fork call, but later when the parent or child first writes to the page. That is, each page starts out as shared, and remains shared until either process writes to that page; the process that writes gets a new physical page (with the same virtual address). Copy-on-write makes vfork mostly useless, since fork won't make any copy in the cases where vfork would be usable.
Linux does retain vfork. The fork system call must still make a copy of the process's virtual memory table, even if it doesn't copy the actual memory; vfork doesn't even need to do this. The performance improvement is negligible in most applications.
| What's the difference between fork() and vfork()? |
1,488,065,492,000 |
I have a VM running Debian Wheezy on which some hostname lookups take several seconds to complete, even though the resolver replies immediately. Strangely, lookups with getaddrinfo() are affected, but gethostbyname() is not.
I've switched to the Google resolvers to exclude the possibility that the local ones are broken, so my /etc/resolv.conf looks like:
search my-domain.com
nameserver 8.8.4.4
nameserver 8.8.8.8
My nsswitch.conf has the line:
hosts: files dns
and my /etc/hosts doesn't contain anything unusual.
If I try telnet webserver 80, it hangs for several seconds before getting a name resolution. An ltrace output [1] shows that the hang is in a getaddrinfo() call:
getaddrinfo("ifconfig.me", "telnet", { AI_CANONNAME, 0, SOCK_STREAM, 0, 0, NULL, '\000', NULL }, 0x7fffb4ffc160) = 0 <5.020621>
However, tcpdump reveals that the nameserver replied immediately, and it was only on the second reply that telnet unblocked. The replies look identical:
05:52:58.609731 IP 192.168.1.75.43017 > 8.8.4.4.53: 54755+ A? ifconfig.me. (29)
05:52:58.609786 IP 192.168.1.75.43017 > 8.8.4.4.53: 26090+ AAAA? ifconfig.me. (29)
05:52:58.612188 IP 8.8.4.4.53 > 192.168.1.75.43017: 54755 4/0/0 A 219.94.235.40, A 133.242.129.236, A 49.212.149.105, A 49.212.202.172 (93)
[...five second pause...]
05:53:03.613811 IP 192.168.1.75.43017 > 8.8.4.4.53: 54755+ A? ifconfig.me. (29)
05:53:03.616424 IP 8.8.4.4.53 > 192.168.1.75.43017: 54755 4/0/0 A 219.94.235.40, A 133.242.129.236, A 49.212.149.105, A 49.212.202.172 (93)
05:53:03.616547 IP 192.168.1.75.43017 > 8.8.4.4.53: 26090+ AAAA? ifconfig.me. (29)
05:53:03.618907 IP 8.8.4.4.53 > 192.168.1.75.43017: 26090 0/1/0 (76)
I've checked host firewall logs and nothing on port 53 is being blocked.
What is causing the first DNS reply to be ignored?
[1] I've added a couple of lines to my ltrace.conf so I can see inside the addrinfo struct.
|
This was caused by an overly restrictive ruleset on a Juniper firewall that sits in front of the VMware infrastructure.
I built a test resolver so that I could see both sides of the conversation, and the missing packet identified by Kempniu in his excellent answer was indeed being dropped somewhere along the way. As noted in that answer, getaddrinfo() with no address family specified will wait for answers relating to all supported families before returning (or, in my case, timing out).
My colleague who runs the network noted that
The default behavior on the Juniper firewall is to close a DNS-related
session as soon as a DNS reply matching that session is received.
So the firewall was seeing the IPv4 response, noting that it answered the VM's query, and closing the inbound path for that port. The following IPv6 reply packet was therefore dropped. I've no idea why both packets made it through the second time, but disabling this feature on the firewall fixed the problem.
This is a related extract from the Juniper KB:
Here's a scenario where DNS Reply packets are dropped:
A session for DNS traffic is created when the first DNS query packet hits the firewall and there is a permitting policy configured. The default timeout is 60 sec.
Immediately before the session is closed, a new DNS query is transmitted, and since it matches an existing session (the source and destination port/IP pair is always the same), it is forwarded by the firewall. Note that the session timeout is not refreshed according to any newly arriving packet.
The created DNS session is aged out when the first DNS query response (reply) hits the device, regardless of how much of the timeout remains.
When a DNS reply is passed through the firewall, the session is aged out.
All subsequent DNS replies are dropped by the firewall, since no session exists.
If you're thinking of upvoting this answer, please also upvote Kempniu's answer. Without it I'd still be thrashing around trying to find some configuration problem on the VM.
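For anyone who cannot change the firewall, glibc's resolver has options aimed at exactly this failure mode; a sketch (single-request and single-request-reopen are real resolv.conf options, but they only work around, rather than fix, the dropped packet):

```
# /etc/resolv.conf (sketch)
options single-request          # send A and AAAA queries sequentially
# or: options single-request-reopen  # retry from a fresh port after a timeout
search my-domain.com
nameserver 8.8.4.4
nameserver 8.8.8.8
```

Serializing the two queries means the firewall sees only one reply per session, so the AAAA answer no longer arrives on an already-closed session.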
| DNS lookups sometimes take 5 seconds |
1,488,065,492,000 |
I've been having trouble with some network configuration lately which has been tricky to resolve.
It seems this would be much easier to diagnose if I knew which direction the traffic was failing to get through. Since all ping requests receive no responses back I'd like to know if the ping-request packets are getting through and the responses failing, or if it's the requests themselves that are failing.
To be clear, standard utilities like ping and traceroute rely on sending a packet out from one machine and receiving a packet in response back to that same machine. When no response comes back, it's impossible to tell whether the initial request failed to get through, whether the response to it was blocked, or whether the response was simply never sent. It's this specific detail, "which direction is the failure?", that I'd like to analyse.
Are there any utilities commonly available for Linux which will let me monitor for incoming ICMP ping requests?
|
tcpdump can do this, and is available pretty much everywhere:
tcpdump -n -i enp0s25 icmp
will dump all incoming and outgoing ICMP packets on enp0s25.
To see only ICMP echo requests:
tcpdump -n -i enp0s25 "icmp[0] == 8"
(-n avoids DNS lookups, which can delay packet reporting and introduce unwanted traffic of their own.)
This lets you find out whether the machine is receiving the packets from the other machine (from which you would, e.g., ping it). If it is, the problem is with the return path; if not, the requests themselves aren't arriving.
| Is there any utility for performing ICMP testing ("ping") in only one direction? |
1,488,065,492,000 |
I run Linux Live CD and I need to extract a specific file from a wim-archive that is located on a disk drive. I know a full path to the file in the archive:
xubuntu@xubuntu:~$ 7z l winRE.wim | grep -i bootrec.exe
2009-08-28 15:02:29 ....A 299008 134388 Windows/System32/BootRec.exe
I am short on disk space and do not have a possibility to unpack the whole archive.
How could I extract that specific file from the archive?
I tried the -i option, but that did not work:
xubuntu@xubuntu:~$ 7z x -i Windows/System32/BootRec.exe winRE.wim
Error:
Incorrect command line
|
The man 7z page says:
-i[r[-|0]]{@listfile|!wildcard}
Include filenames
You need to explicitly specify ! before the file name and protect the switch from bash expansion with single quotes: 7z x '-i!Windows/System32/BootRec.exe' winRE.wim
xubuntu@xubuntu:~$ 7z x '-i!Windows/System32/BootRec.exe' winRE.wim
7-Zip [64] 9.20 Copyright (c) 1999-2010 Igor Pavlov 2010-11-18
p7zip Version 9.20 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,4 CPUs)
Processing archive: winRE.wim
Extracting Windows/System32/BootRec.exe
Everything is Ok
Size: 299008
Compressed: 227817568
(You can avoid keeping the full path by using the e function letter: 7z e '-i!Windows/System32/BootRec.exe' winRE.wim.)
By the way, if you leave the -i option unquoted or quote it with double quotes (which do not suppress bash's history expansion of !), you get an error:
xubuntu@xubuntu:~$ 7z x "-i!Windows/System32/BootRec.exe" winRE.wim
bash: !Windows/System32/BootRec.exe: event not found
| Extracting a specific file from an archive using 7-Zip |
1,488,065,492,000 |
I have a requirement in a shell script: I have to download a file from a URL with curl. Before downloading the file, I want to obtain its size so I can process it in a method.
Can anyone help me with this?
|
Use the -I option to only retrieve the headers, and look for the “Content-Length” header. Add the -L option if necessary to follow redirects. For example:
$ curl -L -I https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-9.4.0-amd64-netinst.iso
HTTP/1.1 302 Found
Date: Mon, 18 Jun 2018 09:51:32 GMT
Server: Apache/2.4.29 (Unix)
Location: https://gensho.ftp.acc.umu.se/debian-cd/current/amd64/iso-cd/debian-9.4.0-amd64-netinst.iso
Cache-Control: max-age=300
Expires: Mon, 18 Jun 2018 09:56:32 GMT
Content-Type: text/html; charset=iso-8859-1
HTTP/1.1 200 OK
Date: Mon, 18 Jun 2018 09:51:32 GMT
Server: Apache/2.4.29 (Unix)
Last-Modified: Sat, 10 Mar 2018 11:56:52 GMT
Accept-Ranges: bytes
Content-Length: 305135616
Age: 228
Content-Type: application/x-iso9660-image
This shows that the file is 305,135,616 bytes in size.
You can filter this using Gawk for example:
$ curl -s -L -I https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-9.4.0-amd64-netinst.iso | gawk -v IGNORECASE=1 '/^Content-Length/ { print $2 }'
305135616
(The -s option tells curl not to print progress information, which it does by default when its output is redirected.)
Note that this information isn’t always available so your script should be prepared to deal with that.
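For scripting, the same extraction also works with plain POSIX awk (no gawk needed) by lowercasing the header name yourself; note that real HTTP header lines end in CR LF, so the carriage return should be stripped. A sketch on simulated headers:

```shell
# Simulated `curl -s -L -I` output (header names are case-insensitive,
# and real responses terminate lines with CR LF, hence the tr -d '\r').
headers='HTTP/1.1 200 OK
content-length: 305135616
Content-Type: application/x-iso9660-image'

size=$(printf '%s\n' "$headers" | tr -d '\r' |
       awk 'tolower($1) == "content-length:" { print $2 }')
echo "$size"
```

With real curl output, replace the canned `headers` variable with the `curl -s -L -I …` pipeline from the answer.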
| How to retrieve downloadable file size with curl command? |
1,488,065,492,000 |
I'm building a headless Steam gameserver which utilises Steam in-home streaming to let two people play at the same time. The multiseat part of the setup is done and functional, but getting it to work wireless is quite troublesome.
Only one Steam client can enable in-home streaming at a time. This is most likely due to using the same ports and IP address. How can I assign each user their own IP address?
Streaming will only be done from within the home network. The machine itself already has 3 IPs on a single interface.
|
You can assign a different network configuration to a process using Linux network namespaces. In theory it should be possible to configure PAM* to put each user in its own separate network namespace, but it is likely simpler to launch the application in question in its own namespace instead.
A common setup involves creating a Linux bridge interface to connect the namespaces to the network. A somewhat simpler setup can be achieved using an ipvlan device (included in kernel versions 3.19 and above) or a macvlan device (though macvlan cannot be used over wireless). The Linux kernel documentation has a detailed example for setting up ipvlan in a network namespace.
Following the example in the documentation:
Create a network namespace ns0
ip netns add ns0
Create ipvlan slave on eth0 (master device)
ip link add link eth0 ipvl0 type ipvlan mode l2
Assign slaves to the network namespace ns0
ip link set dev ipvl0 netns ns0
Configure the slave device in network namespace ns0
ip netns exec ns0 ip link set dev ipvl0 up
ip netns exec ns0 ip link set dev lo up
ip netns exec ns0 ip -4 addr add 127.0.0.1 dev lo
ip netns exec ns0 ip -4 addr add $IPADDR dev ipvl0
ip netns exec ns0 ip -4 route add default via $ROUTER dev ipvl0
Provide host and router addresses in $IPADDR and $ROUTER.
Run your application in network namespace using ip exec
ip netns exec ns0 <command>
To run the command as different user, use the usual su <user> -c -- <command>.
* EDIT: From theory to practice: I've written a simple PAM module to demonstrate how to change the network namespace per user. You need to configure a network namespace with ip netns as above and map specific users to specific namespaces. Afterwards all of a user's processes will be in their configured namespace instead of the default one. The code is hosted on GitHub. Use at your own peril.
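The steps above can be collected into a small function. Here is a dry-run sketch: the interface name and addresses passed at the bottom are placeholders, and the real commands need root plus an ipvlan-capable kernel, so with DRY_RUN=1 the ip commands are only printed instead of executed.

```shell
# Wraps the ipvlan/namespace steps from the answer.
netns_setup() {
    ns=$1 iface=$2 ipaddr=$3 router=$4
    run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }
    run ip netns add "$ns"
    run ip link add link "$iface" ipvl0 type ipvlan mode l2
    run ip link set dev ipvl0 netns "$ns"
    run ip netns exec "$ns" ip link set dev ipvl0 up
    run ip netns exec "$ns" ip link set dev lo up
    run ip netns exec "$ns" ip -4 addr add 127.0.0.1 dev lo
    run ip netns exec "$ns" ip -4 addr add "$ipaddr" dev ipvl0
    run ip netns exec "$ns" ip -4 route add default via "$router" dev ipvl0
}

DRY_RUN=1
out=$(netns_setup ns0 eth0 192.168.1.50/24 192.168.1.1)
printf '%s\n' "$out"
```

Run it for real (as root, with DRY_RUN unset), then launch the application with `ip netns exec ns0 su <user> -c -- <command>` as described above.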
| How can you assign one IP per user? |
1,488,065,492,000 |
I'd like to use the new codec x265 (libx265) to encode my video collection.
For this I created a lovely bash script under Linux, which in general works very well! But something is strange:
I suppress the output of ffmpeg in my own way. With x264 (the "old" one) everything works fine. But as soon as I use x265 I always get this kind of output on my terminal:
x265 [info]: HEVC encoder version 1.7
x265 [info]: build info [Linux][GCC 5.1.0][64 bit] 8bpp
x265 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64
x265 [info]: Main profile, Level-2.1 (Main tier)
x265 [info]: Thread pool created using 2 threads
x265 [info]: frame threads / pool features : 1 / wpp(5 rows)
x265 [info]: Coding QT: max CU size, min CU size : 64 / 8
x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
x265 [info]: ME / range / subpel / merge : hex / 57 / 2 / 2
x265 [info]: Keyframe min / max / scenecut : 25 / 250 / 40
x265 [info]: Lookahead / bframes / badapt : 20 / 4 / 2
x265 [info]: b-pyramid / weightp / weightb / refs: 1 / 1 / 0 / 3
x265 [info]: AQ: mode / str / qg-size / cu-tree : 1 / 1.0 / 64 / 1
x265 [info]: Rate Control / qCompress : CRF-28.0 / 0.60
x265 [info]: tools: rd=3 psy-rd=0.30 signhide tmvp strong-intra-smoothing
x265 [info]: tools: deblock sao
This is the way I encode my video with ffmpeg:
ffmpeg -i /input/file -c:v libx265 -c:a copy -loglevel quiet /output/file.mp4 <>/dev/null 2>&1
I thought that the
<>/dev/null 2>&1
and the
-loglevel quiet
will do this but apparently I'm mistaken.
How can I solve this problem?
Thanks for your help!
|
Solution
You need to add an additional parameter
-x265-params log-level=xxxxx, as in
ffmpeg -i /input/file -c:v libx265 -c:a copy -loglevel quiet -x265-params log-level=quiet \
/output/file.mp4 <>/dev/null 2>&1
Note that, while the FFmpeg option is -loglevel,
the x265 option is log-level, with a - between log and level;
see the x265 Command Line Options documentation.
Explanation
The FFmpeg command you wrote should have worked
(see: ffmpeg documentation);
however, it looks like FFmpeg doesn't tell the x265 encoder
to use the loglevel you're telling FFmpeg to use.
So, assuming you want the whole FFmpeg command to run quietly
(i.e., suppress the messages
from both the main FFmpeg program and the x265 encoder),
you need to explicitly set the log level options for both of them.
Analogously, if you have an FFmpeg command that looks like this:
ffmpeg -loglevel error -stats -i "inputfile.xyz" -c:v libx265 -x265-params parameter1=value:parameter2=value outputfile.xyz
You can add the log-level=error option
to the list of x265-params like this:
ffmpeg -loglevel error -stats -i "inputfile.xyz" -c:v libx265 -x265-params log-level=error:parameter1=value:parameter2=value …
| bash: ffmpeg libx265 prevent output |
1,488,065,492,000 |
It is explained here: Will Linux start killing my processes without asking me if memory gets short? that the OOM-Killer can be configured via overcommit_memory and that:
2 = no overcommit. Allocations fail if asking too much.
0, 1 = overcommit (heuristically or always). Kill some process(es) based on some heuristics when too much memory is actually accessed.
Now, I may completely misunderstand that, but why isn't there an option (or why isn't it the default) to kill the very process that actually tries to access too much memory it allocated?
|
Consider this scenario:
You have 4GB of memory free.
A faulty process allocates 3.999GB.
You open a task manager to kill the runaway process. The task manager allocates 0.002GB.
If the process that got killed was the last process to request memory, your task manager would get killed.
Or:
You have 4GB of memory free.
A faulty process allocates 3.999GB.
You open a task manager to kill the runaway process. The X server allocates 0.002GB to handle the task manager's window.
Now your X server gets killed.
It didn't cause the problem; it was just "in the wrong place at the wrong time". It happened to be the first process to allocate more memory when there was none left, but it wasn't the process that used all the memory to start with.
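The overcommit policy mentioned in the question is just a sysctl. On a Linux system you can check which mode a machine is running in like this (changing it needs root, e.g. `sysctl -w vm.overcommit_memory=2`):

```shell
# 0 = heuristic overcommit (the default), 1 = always overcommit,
# 2 = strict accounting: allocations fail instead of invoking the OOM killer.
mode=$(cat /proc/sys/vm/overcommit_memory)
echo "vm.overcommit_memory = $mode"
```

Mode 2 is the closest you can get to "the process that asks for too much is the one that fails": the over-large allocation itself returns an error, and no other process gets killed.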
| Why can't the OOM-Killer just kill the process that asks for too much? |
1,488,065,492,000 |
I found a guide that explains how to set a user's password. I'm trying to automate it and send an e-mail to the user like:
userid created with password XYZ.
request to change the initial password.
According to the doc above, an encrypted password needs to be created using Python and fed to the usermod command like this:
usermod -p "<encrypted-password>" <username>
Are there any other simpler ways to do this? I don't want to download any special utility to do it; it should be generalized as much as possible.
Edit: Even the method given in the above link doesn't seem to work for me:
bash-3.00# python
Python 2.4.6 (#1, Dec 13 2009, 23:43:51) [C] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
>>> import crypt; print crypt.crypt("<password>","<salt>")
<sOMrcxm7pCPI
>>> ^D
bash-3.00# useradd -g other -p "sOMrcxm7pCPI" -G bin,sys -m -s /usr/bin/bash mukesh2
UX: useradd: ERROR: project sOMrcxm7pCPI does not exist. Choose another.
UX: useradd: sOMrcxm7pCPI name should be all lower case or numeric.
|
You can use chpasswd to do it, like this:
echo "username:newpassword" | chpasswd
You can pipe into chpasswd from programs other than echo, if convenient, but this will do the trick.
Edit: To generate the password within the shell script and then set it, you can do something like this:
# Change username to the correct user:
USR=username
# This will generate a random, 8-character password:
PASS=`tr -dc A-Za-z0-9_ < /dev/urandom | head -c8`
# This will actually set the password:
echo "$USR:$PASS" | chpasswd
For more information on chpasswd, see http://linux.die.net/man/8/chpasswd
(Command to generate password was from http://nixcraft.com/shell-scripting/13454-command-generate-random-password-string.html)
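The generator above can be sanity-checked on its own. This is the same pipeline with `$(...)` instead of backticks, plus `LC_ALL=C` so tr handles the raw bytes from /dev/urandom predictably:

```shell
# Produce an 8-character random password from the [A-Za-z0-9_] set.
PASS=$(LC_ALL=C tr -dc 'A-Za-z0-9_' < /dev/urandom | head -c 8)
echo "$PASS"
```

Feed the result to chpasswd exactly as in the script above: `echo "$USR:$PASS" | chpasswd` (that step needs root).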
| How can I assign an initial/default password to a user in Linux? |
1,488,065,492,000 |
I am trying to find out how I can use:
grep -i
With multiple strings, after using grep on another command. For example:
last | grep -i abc
last | grep -i uyx
I wish to combine the above into one command, but when searching on the internet I can only find references on how to use multiple strings with grep when grep is used with a file, not with a command. I have tried something like this:
last | grep -i (abc|uyx)
Or
last | grep -i 'abc|uyx'
But that doesn't work. What is the correct syntax to get the results I expect?
Thanks in advance.
|
Many options with grep alone, starting with the standard ones:
grep -i -e abc -e uyx
grep -i 'abc
uyx'
grep -i -E 'abc|uyx'
With some grep implementations, you can also do:
grep -i -P 'abc|uyx' # perl-like regexps, sometimes also with
# --perl-regexp or -X perl
grep -i -X 'abc|uyx' # augmented regexps (with ast-open grep) also with
# --augmented-regexp
grep -i -K 'abc|uyx' # ksh regexps (with ast-open grep) also with
# --ksh-regexp
grep -i 'abc\|uyx' # with the \| extension to basic regexps supported by
# some grep implementations. BREs are the
# default but with some grep implementations, you
# can make it explicit with -G, --basic-regexp or
# -X basic
You can add (...)s around abc|uyx (\(...\) for BREs), but that's not necessary. The (s and )s, like | also need to be quoted for them to be passed literally to grep as they are special characters in the syntax of the shell language.
Case insensitive matching can also be enabled as part of the regexp syntax with some grep implementations (not standardly).
grep -P '(?i)abc|uyx' # wherever -P / --perl-regexp / -X perl is supported
grep -K '~(i)abc|uyx' # ast-open grep only
grep -E '(?i)abc|uyx' # ast-open grep only
grep '\(?i\)abc|uyx' # ast-open grep only which makes it non-POSIX-compliant
Those don't really bring much advantage over the standard -i option. Where it could be more interesting would be for instance if you want abc matching to be case sensitive and uyx not, which you could do with:
grep -P 'abc|(?i)uyx'
Or:
grep -P 'abc|(?i:uyx)'
(and equivalent variants with other regexp syntaxes).
The standard equivalent of that would look like:
grep -e abc -e '[uU][yY][xX]'
(bearing in mind that case-insensitive matching is often locale-dependent; for instance, whether uppercase i is I or İ may depend on the locale according to grep -i i).
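A quick check that the standard forms agree, on input mimicking `last` output (the usernames are made up):

```shell
input='abc     pts/0   192.0.2.10
UYX     pts/1   192.0.2.11
def     pts/2   192.0.2.12'

# Repeated -e options and an ERE alternation should select the same two lines.
out1=$(printf '%s\n' "$input" | grep -i -e abc -e uyx)
out2=$(printf '%s\n' "$input" | grep -i -E 'abc|uyx')
printf '%s\n' "$out1"
```

Both forms match the `abc` line and, thanks to -i, the `UYX` line.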
| How to grep multiple strings when using with another command? |
1,488,065,492,000 |
I want to replace /home with a symlink to my nfs-mounted home dirs.
Only root is logged in, /home is not a separate filesystem, lsof shows no locks, selinux is permissive. What am I missing?
I'm logged in directly as root via ssh:
[root@usil01-sql01 /]# uname -a
Linux usil01-sql01 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@usil01-sql01 /]# w
15:30:33 up 1:41, 1 user, load average: 0.00, 0.02, 0.22
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/2 10.50.11.114 15:13 1.00s 0.19s 0.01s w
[root@usil01-sql01 /]# lsof | grep /home
[root@usil01-sql01 /]# lsof +D /home
[root@usil01-sql01 /]# df -h /home
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 63G 4.1G 56G 7% /
[root@usil01-sql01 /]# mount | grep -w /
/dev/sda2 on / type ext4 (rw,relatime,seclabel,data=ordered)
[root@usil01-sql01 /]# ls -lFd /home
drwxr-xr-x. 3 root root 4096 Mar 7 13:36 /home/
[root@usil01-sql01 /]# getenforce
Permissive
[root@usil01-sql01 /]# mv /home /home-old
mv: cannot move "/home" to "/home-old": Device or resource busy
What else can I check?
More system info:
[root@usil01-sql01 /]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 836.6G 0 disk
|-sda1 8:1 0 768.6G 0 part /storage
|-sda2 8:2 0 64G 0 part /
`-sda3 8:3 0 4G 0 part [SWAP]
sr0 11:0 1 1024M 0 rom
[root@usil01-sql01 /]# blkid
/dev/sda2: UUID="5ba6a429-4c65-4023-82b4-3673bfcf6a88" TYPE="ext4"
/dev/sda3: UUID="b5eb680f-8789-43b2-9f7e-c52570b0eb73" TYPE="swap"
/dev/sda1: UUID="cb22d57d-4a5b-4963-a990-890abe0c56dc" TYPE="ext4"
|
mv: cannot move "/home" to "/home-old": Device or resource busy
The only "use"[*] I can think of, which holds the name of a file from changing, is a mount point.
What else can I check?
I am not certain, but perhaps this could happen if the mount still exists in another mount namespace. Because it's not getting unmounts propagated from the root namespace, for some reason? Or looking at the result on my system, maybe systemd services with ProtectHome?
$ grep -h home /proc/*/task/*/mountinfo | sort -u
121 89 0:22 /systemd/inaccessible/dir /home ro,nosuid,nodev shared:142 master:24 - tmpfs tmpfs rw,seclabel,mode=755
275 243 253:2 / /home ro,relatime shared:218 master:33 - ext4 /dev/mapper/alan_dell_2016-home rw,seclabel,data=ordered
321 288 253:2 / /home rw,relatime shared:262 master:33 - ext4 /dev/mapper/alan_dell_2016-home rw,seclabel,data=ordered
84 64 253:2 / /home rw,relatime shared:33 - ext4 /dev/mapper/alan_dell_2016-home rw,seclabel,data=ordered
85 46 253:2 / /home rw,relatime master:33 - ext4 /dev/mapper/alan_dell_2016-home rw,seclabel,data=ordered
Note this issue - unable to rename /home despite it not showing as a mount point (in the current namespace) - should be fixed in Linux kernel version 3.18+.
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit/?h=linux-3.18.y&id=8ed936b5671bfb33d89bc60bdcc7cf0470ba52fe
how to find out namespace of a particular process?
lsns might be useful if you can install it. More possible commands:
List mount namespaces:
# readlink /proc/*/task/*/ns/mnt | sort -u
Identify root mount namespace:
# readlink /proc/1/ns/mnt
Find processes with a given mount namespace
# readlink /proc/*/task/*/ns/mnt | grep 4026531840
Inspect the namespace of a given process:
# cat /proc/1/task/1/mountinfo
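For instance, any process can confirm which mount namespace it is in without root; the inode number in brackets identifies the namespace, and two processes are in the same namespace exactly when these links resolve to the same inode:

```shell
# Every process exposes its mount namespace as a magic symlink under /proc.
ns=$(readlink /proc/self/ns/mnt)
echo "$ns"
```

Comparing this against `readlink /proc/1/ns/mnt` (which needs appropriate privileges) tells you whether your shell shares the root mount namespace.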
[*] EBUSY The rename fails because oldpath or newpath is a directory that
is in use by some process (perhaps as current working directory,
or as root directory, or because it was open for reading) or is
in use by the system (for example as mount point), while the
system considers this an error. (Note that there is no require‐
ment to return EBUSY in such cases—there is nothing wrong with
doing the rename anyway—but it is allowed to return EBUSY if the
system cannot otherwise handle such situations.)
| mv: cannot move "home" to "home-old": Device or resource busy |
1,488,065,492,000 |
I am looking for the Linux command and option combination to display the contents of a given file byte by byte with the character and its numerical representation.
I was under the impression that in order to do this, I would use the following:
od -c [file]
However, I have been told this is incorrect.
|
The key is
the character and its numerical representation
so -c only gives you half of that. One solution is
od -c -b file
but of course there are lots of different number representations to choose from.
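For example, on a two-byte file (-c prints each byte as a character, and -b prints the same byte in octal on the line directly underneath):

```shell
# 'H' is octal 110 and 'i' is octal 151, so both values should appear
# below their characters in the dump.
printf 'Hi' > /tmp/od_demo
out=$(od -c -b /tmp/od_demo)
printf '%s\n' "$out"
```
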
| Linux command to display the contents of a given file byte by byte with the character and its numerical representation displayed for each byte [closed] |
1,488,065,492,000 |
I know about swap - this question isn't about that. In dmesg, the Linux (x86-64) kernel tells me this about how much memory I have:
[ 0.000000] Memory: 3890880k/4915200k available (6073k kernel code, 861160k absent, 163160k reserved, 5015k data, 1596k init)
cat /proc/meminfo tells me that I have
MemTotal: 3910472 kB
And by my calculations, I think I should have exactly 4*1024*1024=4194304k RAM. Which is way smaller than the second figure in the dmesg line above!
What's with all these different figures?
By the way, uname -a outputs:
Linux pavilion 3.2.2-1.fc16.x86_64 #1 SMP Thu Jan 26 03:21:58 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
|
You should read the dmesg values "Memory Akb/Bkb available" as:
There is A available for use right now, and the system's highest page frame number multiplied by the page size is B.
This is from arch/x86/mm/init_64.c:
printk(KERN_INFO "Memory: %luk/%luk available (%ldk kernel code, "
"%ldk absent, %ldk reserved, %ldk data, %ldk init)\n",
nr_free_pages() << (PAGE_SHIFT-10),
max_pfn << (PAGE_SHIFT-10),
codesize >> 10,
absent_pages << (PAGE_SHIFT-10),
reservedpages << (PAGE_SHIFT-10),
datasize >> 10,
initsize >> 10);
nr_free_pages() returns the amount of physical memory, managed by the kernel, that is not currently in use. max_pfn is the highest page frame number (the PAGE_SHIFT shift converts that to kb). The highest page frame number can be (much) higher than what you could expect - the memory mapping done by the BIOS can contain holes.
How much these holes take up is tracked by the absent_pages variable, displayed as kB absent. This should explain most of the difference between the second number in the "available" output and your actual, installed RAM.
You can grep for BIOS-e820 in dmesg to "see" these holes. The memory map is displayed there (right at the top of dmesg output after boot). You should be able to see at what physical addresses you have real, usable RAM.
(Other x86 quirks and reserved memory areas probably account for the rest - I don't know the details there.)
MemTotal in /proc/meminfo indicates RAM available for use. Right at the end of the boot sequence, the kernel frees init data it doesn't need any more, so the value reported in /proc/meminfo could be a bit higher than what the kernel prints out during the initial parts of the boot sequence.
(meminfo uses indirectly totalram_pages for that display. For x86_64, this is calculated in arch/x86/mm/init_64.c too via free_all_bootmem() which itself is in mm/bootmem.c for non-NUMA kernels.)
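To pull the MemTotal figure out of /proc/meminfo in a script (remember this is the RAM usable by the kernel after firmware holes and permanent reservations, not the size of the installed modules):

```shell
# MemTotal is reported in kB.
memtotal=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)
echo "MemTotal: $memtotal kB"
```
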
| Why does Linux show both more and less memory than I physically have installed? |
1,486,499,744,000 |
This question is associated with Where is core file with abrt-hook-cpp installed? .
While I was trying to generate a core file for an intentionally-crashing program, core file generation at first seemed to be stymied by abrt-ccpp. So I tried manually editing /proc/sys/kernel/core_pattern with vim:
> sudo vim /proc/sys/kernel/core_pattern
When I tried to save the file, vim reported this error:
"/proc/sys/kernel/core_pattern" E667: Fsync failed
I thought this was a permission problem, so I tried to change permissions:
> sudo chmod 666 /proc/sys/kernel/core_pattern
chmod: changing permissions of '/proc/sys/kernel/core_pattern': Operation not permitted
Finally, based on this post, I tried this:
>sudo bash -c 'echo /home/user/foo/core.%e.%p > /proc/sys/kernel/core_pattern'
This worked.
Based on the working solution, I also tried these, which failed:
> echo "/home/user/foo/core.%e.%p" > /proc/sys/kernel/core_pattern
-bash: /proc/sys/kernel/core_pattern: Permission denied
>
> sudo echo "/home/user/foo/core.%e.%p" > /proc/sys/kernel/core_pattern
-bash: /proc/sys/kernel/core_pattern: Permission denied
Question:
Why is it that editing, chmoding, and redirecting echo output to the file /proc/sys/kernel/core_pattern all failed, and only the noted invocation of sudo bash... was able to overwrite/edit the file?
Question:
Specifically, regarding the failed attempts above that invoked sudo: why did they fail? I thought sudo executed the subsequent command with root privileges, which I thought lets you do anything in Linux.
|
Entries in procfs are managed by ad hoc code. The code that would set permissions and ownership on the files under /proc/sys (proc_sys_setattr) rejects changes of permissions and ownership with EPERM. So it isn't possible to change the permissions or ownership of these files, full stop. Such changes are not implemented, so being root doesn't help.
When you try to write as a non-root user, you get a permission error. Even with sudo echo "/home/user/foo/core.%e.%p" > /proc/sys/kernel/core_pattern, you're trying to write as a non-root user: sudo runs echo as root, but the redirection happens in the shell from which sudo is executed, and that shell has no elevated privileges. With sudo bash -c '… >…', the redirection is performed in the bash instance which is launched by sudo and which runs as root, so the write succeeds.
The reason only root must be allowed to set the kernel.core_pattern sysctl is that it allows a command to be specified and, since this is a global setting, this command could be executed by any user. This is in fact the case for all sysctl settings to various degrees: they're all global settings, so only root can change them. kernel.core_pattern is just a particularly dangerous case.
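A common alternative to `sudo bash -c` is to let a sudo-run `tee` open the file, so the write happens in a root process rather than in the caller's unprivileged shell. The mechanics are sketched here against a temporary file, since writing the real /proc/sys/kernel/core_pattern needs root:

```shell
# On a real system you would run:
#   echo '/home/user/foo/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern
# (or: sudo sysctl -w 'kernel.core_pattern=...').
# The target below is an ordinary temp file so this can be shown unprivileged.
pattern='/home/user/foo/core.%e.%p'
printf '%s\n' "$pattern" | tee /tmp/core_pattern_demo >/dev/null
cat /tmp/core_pattern_demo
```

The key point is the same as in the answer: it is tee (not the invoking shell) that opens the destination file for writing.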
| Why is editing core_pattern restricted? |
1,486,499,744,000 |
I accidentally wrote a 512 bytes binary to the wrong USB disk with dd and the device doesn't show any partitions with fdisk anymore.
I thought all the data was gone, but dd if=/dev/sdx | strings shows that the data seems to be still there, since dd fortunately limited itself to the first 512 bytes. Is there any way to recover it?
The disk had two partitions: an ext4 (~4GB) one and the remaining of 16GB were formatted as NTFS.
|
To restore the ext4 partition and its data, I thought about creating a single, disk-wide ext4 partition. This allowed me to get access to the data and retrieve information about the partition with tune2fs -l, as suggested by @thkala. This information looks very plausible and, therefore, unaltered.
Very interestingly, gparted is somewhat able to figure out the actual partition size, as shown by this warning:
10.96 GiB of unallocated space within the partition. To grow the file system to fill the partition, select the partition and choose the menu
item:
because 11 GiB was roughly the size of the NTFS partition.
Notice the unused space which, IIRC, was the space the ext4 filesystem still had free. The unallocated space gparted recognizes seems to be the NTFS partition; now, how can I restore that one too, maybe by finding out where the first partition ends, i.e. its total byte count?
Finally TestDisk handled that effortlessly.
| Deleted first 512 bytes of disk; how can I recover my data? |
1,486,499,744,000 |
Some processes spend most of their "lives" in a sleeping state: daemons, servers, and listeners in general come to mind. I was wondering if they get the same CPU time in that state? On something like my laptop, that wouldn't be very optimal!
I vaguely remember from my operating system course that there are different approaches to scheduling. I am interested to find out more about my current Linux (Debian) box.
How can I find out about my current scheduling policies? Can I change them dynamically? With a pretty userland app?
|
Processes do not consume CPU resources while they are sleeping. They may add some overhead, since the kernel has to juggle them around, but that is very insignificant.
However, because of the way the question is worded, I should mention that Linux's CFS (Completely Fair Scheduler) attempts to give programs increased CPU time in proportion to the time they sleep - that is, if a process sleeps a lot, when it is resumed it gets a higher priority.
See http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/ for a description of CFS.
| Do sleeping processes get the same CPU time? |
1,486,499,744,000 |
No, I'm not looking to become a cracker or anything like that, but I'm trying to figure out the process (more from a programming perspective).
So I'm assuming (guessing) a cracker's main goal is to gain root access to install whatever software (or script) he's written, right? Or maybe installing his own kernel module (that's devious for whatever reason).
How exactly does a person go about doing this?
I know people use scripts to check for exploits... but I don't see how, and I also don't exactly see what they do with them once they find them. Are they checking versions for known exploits... and then once they find one...?
I know this all sounds very newbish, but I'm just trying to get an idea of how it works, since I know Linux/Unix systems are supposed to be very secure, but I'm trying to figure out how someone would even go about (the process of) gaining root access.
|
There are countless reasons one might try to compromise a system's security. In broad strokes:
To use the system's resources (e.g. send spam, relay traffic)
To acquire information on the system (e.g. get customer data from an ecommerce site).
To change information on the system (e.g. deface a web site, plant false information, remove information)
Only sometimes do these things require root access. For example, entering a malformed search query on a site that doesn't properly sanitize user input can reveal information from the site's database, such as user names / passwords, email addresses, etc.
Many computer criminals are just "script kiddies"; i.e. people who don't actually understand systems security, and may not even code, but run exploits written by others. These are usually pretty easily defended against because they don't have the ability to adapt; they are limited to exploiting known vulnerabilities. (Though they may leverage botnets -- large groups of compromised computers -- which can mean a danger of DDoS attacks.)
For the skilled attacker, the process goes something like this:
Figure out what the goal is, and what the goal is worth. Security -- maintaining it or compromising it -- is a risk/reward calculation. The riskier and more costly something will be, the more enticing the reward must be to make an attack worthwhile.
Consider all the moving parts that affect whatever the goal is -- for example, if you want to send spam, you could attack the mail server, but it may make more sense to go after a different network-facing service, as all you really need is use of the target's net connection. If you want user data, you'd start looking at the database server, the webapp and web server that have the ability to access it, the system that backs it up, etc.
Never discount the human factor. Securing a computer system is far easier than securing human behavior. Getting someone to reveal information they shouldn't, or run code they shouldn't, is both easy and effective. In college, I won a bet with a friend that involved breaking into his uber-secure corporate network by donning a revealing outfit and running into a lecherous Vice President -- my friend's technical expertise far outweighed mine, but nothing trumps the power of a 17yo co-ed in a short skirt!
If you lack boobs, consider offering up a pointless game or something that idiots will download for fun without considering what it really might be doing.
Look at each part you've identified, and consider what it can do, and how that could be tweaked to do what you want -- maybe the help desk resets passwords for users frequently without properly identifying the caller, and calling them sounding confused will get you someone else's password. Maybe the webapp isn't checking what is put in the search box to make sure it isn't code before sticking it in a function it runs. Security compromises usually start with something purposely exposed that can be made to behave in a way it shouldn't.
| How exactly do people "crack" Unix/Linux Systems? |
1,486,499,744,000 |
I have to find out the type of compression of the Linux kernel of my Arch Linux system, but I can't find a way to determine it beyond the theory: now bzip2 (bz), formerly gzip (z).
In my computer I run the command:
$ file /boot/vmlinuz-linux
/boot/vmlinuz-linux: Linux kernel x86 boot executable bzImage, version 5.3.11-arch1-1 (linux@archlinux) #1 SMP PREEMPT Tue, 12 Nov 2019 22:19:48 +0000, RO-rootFS, swap_dev 0x5, Normal VGA
Looking at the theory, I see that bzImage must be compressed by gzip (z), but I can't prove it:
The bzImage was compressed using gzip until Linux 2.6.30 which introduced more algorithms. Although there is the popular misconception that the bz prefix means that bzip2 compression is used (the bzip2 package is often distributed with tools prefixed with bz, such as bzless, bzcat, etc.), this is not the case.
Is there any way to prove it on my own machine? or is the theory itself, in this case, "empirical"?
|
To conclusively determine what compression was used for a given kernel image, without needing to run it or find its configuration, you can follow the approach used by the kernel’s own extract-vmlinux script:
look for the compressor’s signature in the image:
gunzip: \037\213\010
xz: \3757zXZ\000
bzip2: BZh
lzma: \135\0\0\0
lzo: \211\114\132
lz4: \002!L\030
zstd: (\265/\375
try to extract the data from the image, starting at the offset of any signature you’ve found;
check that the result (if any) is an ELF image.
I’ve adapted the script here so that it only reports the compression type. I’m not including it here because it is licensed under the GPL 2 only.
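As a rough illustration (my own sketch, not the author's GPL-licensed adaptation), a first-pass signature scan could look like this. Note that the short lzma magic in particular can occur by accident in unrelated data, which is why extract-vmlinux also attempts the actual decompression at each hit:

```python
# Magic-byte signatures from the list above; scanning alone is a
# heuristic, extract-vmlinux additionally tries to decompress at
# every hit to weed out false positives.
SIGNATURES = {
    b"\x1f\x8b\x08":     "gzip",
    b"\xfd7zXZ\x00":     "xz",
    b"BZh":              "bzip2",
    b"\x5d\x00\x00\x00": "lzma",
    b"\x89LZ":           "lzo",
    b"\x02!L\x18":       "lz4",
    b"(\xb5/\xfd":       "zstd",
}

def guess_kernel_compression(path):
    """Return (offset, algorithm) of the earliest signature found, or None."""
    with open(path, "rb") as f:
        data = f.read()
    hits = [(data.find(magic), name)
            for magic, name in SIGNATURES.items()
            if magic in data]
    return min(hits) if hits else None
```

Running this against /boot/vmlinuz-linux should report the compressed payload's offset and algorithm after the uncompressed boot stub.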
| How do I demonstrate the type of kernel compression in practice? |
1,486,499,744,000 |
I'm curious, how many folders can be nested, and why? Is there a limit?
What I mean by nested is when folders are in this structure:
folder
|_ folder
|_ folder
|_ folder
|_ ...
Not like this:
folder
|_ folder
|_ folder
|_ folder
|_ ...
If there is a limit, is it set by the operating system, or by the file system?
|
The limit will be the number of inodes on your partition since directories, like regular files, take an inode each.
Nothing would stop you from creating a directory inside a directory inside another directory and so on until you run out of inodes.
Note that there is a maximum length for any single path passed to a system call (PATH_MAX, typically 4096 bytes on Linux), which can cause issues with really long absolute paths, but it would still be possible to cd progressively towards the target file using relative paths.
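To make the "cd progressively" point concrete, here is a small sketch that creates a directory chain deeper than PATH_MAX by descending one level at a time; no single path handed to the kernel ever exceeds the limit, even though the absolute path of the innermost directory does:

```python
import os
import tempfile

def make_deep_tree(levels, name="d"):
    """Create `levels` nested directories by repeated mkdir+chdir,
    so every path passed to the kernel stays short even when the
    resulting absolute path exceeds PATH_MAX (4096 bytes on Linux)."""
    top = tempfile.mkdtemp()
    os.chdir(top)
    for _ in range(levels):
        os.mkdir(name)
        os.chdir(name)
    return top

# 3000 levels of "d/" is roughly 6000 characters of absolute path,
# well past PATH_MAX, yet every mkdir/chdir call succeeds.
top = make_deep_tree(3000)
```

The only hard limit hit in practice is the number of free inodes on the filesystem, exactly as the answer says.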
| How many directories can be nested? |
1,486,499,744,000 |
Whenever I'm trying to execute this line to configure SELinux to install xrdp from this tutorial:
# chcon --type=bin_t /usr/sbin/xrdp
# chcon --type=bin_t /usr/sbin/xrdp-sesman
I get these errors:
chcon: can't apply partial context to unlabeled file '/usr/sbin/xrdp'
chcon: can't apply partial context to unlabeled file '/usr/sbin/xrdp-sesman'
I'm on CentOS 7.2 64 bit.
|
Your chcon command has to be given more complete information. This has been discussed before (but I see no exact duplicates).
For example,
in chcon: can't apply partial context to unlabeled file while installing nagios with SELinux, Sergei Lomakov pointed out that it was first necessary to label the files using semanage.
in Linux chcon: can't apply partial context to unlabeled file, the suggested solution uses the complete type in the chcon command (but you would have to first determine the type using ls -Z). The complete type would usually have a colon (:) in the name, because it represents a hierarchy.
For example, ls -lZ gives these tags for a sample listing:
$ ls -lZ msginit msgmerge msgunfmt
-rwxr-xr-x. root root unconfined_u:object_r:bin_t:s0 msginit
-rwxr-xr-x. root root unconfined_u:object_r:bin_t:s0 msgmerge
-rwxr-xr-x. root root unconfined_u:object_r:bin_t:s0 msgunfmt
and chcon is expecting something like unconfined_u:object_r:bin_t:s0 in its argument. A bin_t is only partial information.
The referenced procedure should have worked, and the use of chcon redundant. Checking my CentOS7, I happen to have xrdp installed, and a listing shows
$ ls -lZ xrdp xrdp-chansrv xrdp-sesman xrdp-sessvc
-rwxr-xr-x. root root system_u:object_r:bin_t:s0 xrdp
-rwxr-xr-x. root root system_u:object_r:bin_t:s0 xrdp-chansrv
-rwxr-xr-x. root root system_u:object_r:bin_t:s0 xrdp-sesman
-rwxr-xr-x. root root system_u:object_r:bin_t:s0 xrdp-sessvc
The system_u field is the SELinux user, the object_r field is the role, bin_t is the type and s0 is the (default) level. The files in /usr/sbin get their context from a pattern shown by semanage fcontext -l (but there are a lot of matches). In following the guide, you may have removed the pattern for xrdp — or even for /usr/sbin. However, you can be more explicit, specifying the user and role as well as the type with chcon:
chcon -u system_u -r object_r --type=bin_t /usr/sbin/xrdp
chcon -u system_u -r object_r --type=bin_t /usr/sbin/xrdp-sesman
Alternatively, if the patterns are intact but (for instance) you had moved the files rather than installing them, you could repair things using
restorecon -v /usr/sbin/xrdp
restorecon -v /usr/sbin/xrdp-sesman
Further reading:
5.6. SELinux Contexts – Labeling Files
5.6.2. Persistent Changes: semanage fcontext
restorecon - restore file(s) default SELinux security contexts.
chcon - change file SELinux security context
| chcon: can't apply partial context to unlabeled file '/usr/sbin/xrdp' |
1,486,499,744,000 |
$ uname -a
Linux 3.13.0-29-generic #53-Ubuntu SMP Wed Jun 4 21:00:20 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Running ubuntu 12.04.1 LTS. Why does it have the architecture (x86_64) listed thrice?
|
I checked uname manual (man uname) and it says the following for the "-a" option:
print all information, in the following order, except omit -p and -i if unknown
In Ubuntu, I guess, options "-m", "-p" and "-i" (machine, processor and hardware-platform) are returning the machine architecture. For example, if you use the command
uname -mpi
You will see:
x86_64 x86_64 x86_64
On the other hand, if you choose all the option:
uname -snrvmpio
You will get the same result as:
uname -a
Output:
Linux <hostname> 3.13.0-29-generic #53-Ubuntu SMP Wed Jun 4 21:00:20 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
I also executed "uname" with options "-m", "-p" and "-i" on an ARCHLINUX distro and I got a different answer:
x86_64 unknown unknown
In fact, when I asked for "uname -a" on the ARCHLINUX distro the answer was:
Linux <hostname> xxxxxx-ARCH #1 SMP PREEMPT Mon Feb 14 20:40:47 CEST 2015 x86_64 GNU/Linux
While when executed "uname -snrvmpio" on the ARCHLINUX distro I got:
Linux <hostname> xxxxxx-ARCH #1 SMP PREEMPT Mon Feb 14 20:40:47 CEST 2015 x86_64 unknown unknown GNU/Linux
| Why is architecture listed thrice in uname -a? |
1,486,499,744,000 |
Does the Ghost Vulnerability require access (as in being a logged-in user) to the affected OS in question? Can someone clarify 'remote attacker that is able to make an application call'? I only seem to find tests to run on the local system directly, but not from a remote host.
All the information I have gathered so far about the Ghost Vulnerability from multiple sources (credits to those sources) I have posted below in an answer in case anyone else is curious.
Edit, I found my answer:
During a code audit Qualys researchers discovered a buffer overflow in
the __nss_hostname_digits_dots() function of glibc. This bug can be
triggered both locally and remotely via all the gethostbyname*()
functions. Applications have access to the DNS resolver primarily
through the gethostbyname*() set of functions. These functions convert
a hostname into an IP address.
|
Answer to my question, from Qualys:
During our testing, we developed a proof-of-concept in which we send a
specially created e-mail to a mail server and can get a remote shell
to the Linux machine. This bypasses all existing protections (like
ASLR, PIE and NX) on both 32-bit and 64-bit systems.
My compiled research below for anyone else looking:
Disclaimer
Despite what a lot of other threads/blogs might tell you, I suggest not to immediately update every single OS you have blindly without thoroughly testing these glibc updates. It has been reported that the glibc updates have caused massive application segfaults forcing people to roll back their glibc updates to their previous version.
One does not simply mass-update a production environment without testing.
Background Information
GHOST is a 'buffer overflow' bug affecting the gethostbyname() and gethostbyname2() function calls in the glibc library. This vulnerability allows a remote attacker that is able to make an application call to either of these functions to execute arbitrary code with the permissions of the user running the application.
Impact
The gethostbyname() function calls are used for DNS resolving, which is a very common event. To exploit this vulnerability, an attacker must trigger a buffer overflow by supplying an invalid hostname argument to an application that performs a DNS resolution.
Current list of affected Linux distros
RHEL (Red Hat Enterprise Linux) version 5.x, 6.x and 7.x
RHEL 4 ELS fix available ---> glibc-2.3.4-2.57.el4.2
Desktop (v. 5) fix available ---> glibc-2.5-123.el5_11.1
Desktop (v. 6) fix available ---> glibc-2.12-1.149.el6_6.5
Desktop (v. 7) fix available ---> glibc-2.17-55.el7_0.5
HPC Node (v. 6) fix available ---> glibc-2.12-1.149.el6_6.5
HPC Node (v. 7) fix available ---> glibc-2.17-55.el7_0.5
Server (v. 5) fix available ---> glibc-2.5-123.el5_11.1
Server (v. 6) fix available ---> glibc-2.12-1.149.el6_6.5
Server (v. 7) fix available ---> glibc-2.17-55.el7_0.5
Server EUS (v. 6.6.z) fix available ---> glibc-2.12-1.149.el6_6.5
Workstation (v. 6) fix available ---> glibc-2.12-1.149.el6_6.5
Workstation (v. 7) fix available ---> glibc-2.17-55.el7_0.5
CentOS Linux version 5.x, 6.x & 7.x
CentOS-5 fix available ---> glibc-2.5-123.el5_11
CentOS-6 fix available ---> glibc-2.12-1.149.el6_6.5
CentOS-7 fix available ---> glibc-2.17-55.el7_0.5
Ubuntu Linux version 10.04, 12.04 LTS
10.04 LTS fix available ---> libc6-2.11.1-0ubuntu7.20
12.04 LTS fix available ---> libc6-2.15-0ubuntu10.10
Debian Linux version 6.x, 7.x
6.x squeeze vulnerable
6.x squeeze (LTS) fix available ---> eglibc-2.11.3-4+deb6u4
7.x wheezy vulnerable
7.x wheezy (security) fix available ---> eglibc-2.13-38+deb7u7
Linux Mint version 13.0
Mint 13 fix available ---> libc6-2.15-0ubuntu10.10
Fedora Linux version 19 (or older should upgrade)
Fedora 19 - vulnerable - EOL on Jan 6, 2014 (upgrade to Fedora 20/21 for patch)
SUSE Linux Enterprise
Server 10 SP4 LTSS for x86 fix available ---> glibc-2.4-31.113.3
Server 10 SP4 LTSS for AMD64 and Intel EM64T fix available ---> glibc-2.4-31.113.3
Server 10 SP4 LTSS for IBM zSeries 64bit fix available ---> glibc-2.4-31.113.3
Software Development Kit 11 SP3 fix available ---> glibc-2.11.3-17.74.13
Server 11 SP1 LTSS fix available ---> glibc-2.11.1-0.60.1
Server 11 SP2 LTSS fix available ---> glibc-2.11.3-17.45.55.5
Server 11 SP3 (VMware) fix available ---> glibc-2.11.3-17.74.13
Server 11 SP3 fix available ---> glibc-2.11.3-17.74.13
Desktop 11 SP3 fix available ---> glibc-2.11.3-17.74.13
openSUSE (versions older than 11 should upgrade)
11.4 Evergreen fix available ---> glibc-2.11.3-12.66.1
12.3 fix available ---> glibc-2.17-4.17.1
What packages/applications are still using the deleted glibc?
(credits to Gilles)
For CentOS/RHEL/Fedora/Scientific Linux:
lsof -o / | awk '
BEGIN {
while (("rpm -ql glibc | grep \\\\.so\\$" | getline) > 0)
libs[$0] = 1
}
$4 == "DEL" && $8 in libs {print $1, $2}'
For Ubuntu/Debian Linux:
lsof -o / | awk '
BEGIN {
while (("dpkg -L libc6:amd64 | grep \\\\.so\\$" | getline) > 0)
libs[$0] = 1
}
$4 == "DEL" && $8 in libs {print $1, $2}'
What C library (glibc) version does my Linux system use?
The easiest way to check the version number is to run the following command:
ldd --version
Sample outputs from RHEL/CentOS Linux v6.6:
ldd (GNU libc) 2.12
Copyright (C) 2010 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
Sample outputs from Ubuntu Linux 12.04.5 LTS:
ldd (Ubuntu EGLIBC 2.15-0ubuntu10.9) 2.15
Copyright (C) 2012 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
Sample outputs from Debian Linux v7.8:
ldd (Debian EGLIBC 2.13-38+deb7u6) 2.13
Copyright (C) 2011 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
GHOST vulnerability check
The University of Chicago is hosting the below script for easy downloading:
$ wget https://webshare.uchicago.edu/orgs/ITServices/itsec/Downloads/GHOST.c
[OR]
$ curl -O https://webshare.uchicago.edu/orgs/ITServices/itsec/Downloads/GHOST.c
$ gcc GHOST.c -o GHOST
$ ./GHOST
[responds vulnerable OR not vulnerable ]
/* ghosttester.c: GHOST vulnerability tester */
/* Credit: http://www.openwall.com/lists/oss-security/2015/01/27/9 */
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#define CANARY "in_the_coal_mine"
struct {
  char buffer[1024];
  char canary[sizeof(CANARY)];
} temp = { "buffer", CANARY };

int main(void) {
  struct hostent resbuf;
  struct hostent *result;
  int herrno;
  int retval;

  /*** strlen (name) = size_needed - sizeof (*host_addr) - sizeof (*h_addr_ptrs) - 1; ***/
  size_t len = sizeof(temp.buffer) - 16*sizeof(unsigned char) - 2*sizeof(char *) - 1;
  char name[sizeof(temp.buffer)];
  memset(name, '0', len);
  name[len] = '\0';

  retval = gethostbyname_r(name, &resbuf, temp.buffer, sizeof(temp.buffer), &result, &herrno);

  if (strcmp(temp.canary, CANARY) != 0) {
    puts("vulnerable");
    exit(EXIT_SUCCESS);
  }
  if (retval == ERANGE) {
    puts("not vulnerable");
    exit(EXIT_SUCCESS);
  }
  puts("should not happen");
  exit(EXIT_FAILURE);
}
Compile and run it as follows:
$ gcc ghosttester.c -o ghosttester
$ ./ghosttester
[responds vulnerable OR not vulnerable ]
Red Hat Access Lab: GHOST tool. Do not use this tool; its reporting is wrong. The vulnerability checker from Qualys is accurate.
Patching
CentOS/RHEL/Fedora/Scientific Linux
sudo yum clean all
sudo yum update
Now restart for the update to take effect:
sudo reboot
Alternatively, if your mirrors don’t contain the newest packages, just download them manually (note: for more advanced users).
CentOS 5
http://mirror.centos.org/centos/5.11/updates/x86_64/RPMS/
CentOS 6
mkdir ~/ghostupdate
cd ~/ghostupdate
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-devel-2.12-1.149.el6_6.5.x86_64.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-common-2.12-1.149.el6_6.5.x86_64.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/nscd-2.12-1.149.el6_6.5.x86_64.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-static-2.12-1.149.el6_6.5.x86_64.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-headers-2.12-1.149.el6_6.5.x86_64.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-utils-2.12-1.149.el6_6.5.x86_64.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-2.12-1.149.el6_6.5.x86_64.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-static-2.12-1.149.el6_6.5.i686.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-devel-2.12-1.149.el6_6.5.i686.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-2.12-1.149.el6_6.5.i686.rpm
yum localupdate *.rpm [OR] rpm -Uvh *.rpm
Ubuntu/Debian Linux
sudo apt-get clean
sudo apt-get update
sudo apt-get dist-upgrade
Restart:
sudo reboot
SUSE Linux Enterprise
To install this SUSE Security Update use YaST online_update. Or use the following commands as per your version:
SUSE Linux Enterprise Software Development Kit 11 SP3
zypper in -t patch sdksp3-glibc-10206
SUSE Linux Enterprise Server 11 SP3 for VMware
zypper in -t patch slessp3-glibc-10206
SUSE Linux Enterprise Server 11 SP3
zypper in -t patch slessp3-glibc-10206
SUSE Linux Enterprise Server 11 SP2 LTSS
zypper in -t patch slessp2-glibc-10204
SUSE Linux Enterprise Server 11 SP1 LTSS
zypper in -t patch slessp1-glibc-10202
SUSE Linux Enterprise Desktop 11 SP3
zypper in -t patch sledsp3-glibc-10206
Finally run for all SUSE linux version to bring your system up-to-date:
zypper patch
OpenSUSE Linux
To see a list of available updates including glibc on a OpenSUSE Linux, enter:
zypper lu
To simply update installed glibc packages with their newer available versions, run:
zypper up
Nearly every program running on your machine uses glibc. You need to restart every service or app that uses glibc to ensure the patch takes effect. Therefore, a reboot is recommended.
How to restart init without restarting or affecting the system?
telinit u
From man telinit: U or u requests that the init(8) daemon re-execute itself. This is not recommended, since Upstart is currently unable to preserve its state, but it is necessary when upgrading system libraries.
A way to immediately mitigate the threat in a limited manner is to disable reverse DNS checks in all your public-facing services. For example, you can disable reverse DNS checks in SSH by setting UseDNS to no in your /etc/ssh/sshd_config.
Sources (and more information):
https://access.redhat.com/articles/1332213
http://www.cyberciti.biz/faq/cve-2015-0235-patch-ghost-on-debian-ubuntu-fedora-centos-rhel-linux/
http://www.openwall.com/lists/oss-security/2015/01/27/9
https://security.stackexchange.com/questions/80210/ghost-bug-is-there-a-simple-way-to-test-if-my-system-is-secure
http://bobcares.com/blog/ghost-hunting-resolving-glibc-remote-code-execution-vulnerability-cve-2015-0235-in-centos-red-hat-ubuntu-debian-and-suse-linux-servers
https://community.qualys.com/blogs/laws-of-vulnerabilities/2015/01/27/the-ghost-vulnerability
https://security-tracker.debian.org/tracker/CVE-2015-0235
| Ghost Vulnerability - CVE-2015-0235 |
1,486,499,744,000 |
I'm attempting to limit a process to a given number of CPU cores. According to the taskset man page and this documentation, the following should work:
[fedora@dfarrell-opendaylight-cbench-devel ~]$ taskset -pc 0 <PID>
pid 24395's current affinity list: 0-3
pid 24395's new affinity list: 0
To put it simply - this doesn't work. Putting the process under load and watching top, it sits around 350% CPU usage (same as without taskset). It should max out at 100%.
I can properly set affinity via taskset -c 0 <cmd to start process> at process spawn time. Using cpulimit -p <PID> -l 99 also kinda-works. In both cases, putting the process under the same load results in it maxing out at 100% CPU usage.
What's going wrong here?
|
Update: Newer versions of taskset have a -a/--all-tasks option that "operates on all the tasks (threads) for a given pid" and should solve the behavior I show below.
I wrote a Python script that simply spins up some threads and burns CPU cycles. The idea is to test taskset against it, as it's quite simple.
#!/usr/bin/env python
import threading

def cycle_burner():
    while True:
        meh = 84908230489 % 323422

for i in range(3):
    thread = threading.Thread(target=cycle_burner)
    print "Starting a thread"
    thread.start()
Just running the Python script eats up about 150% CPU usage.
[~/cbench]$ ./burn_cycles.py
Starting a thread
Starting a thread
Starting a thread
Launching my Python script with taskset works as expected. Watching top shows the Python process pegged at 100% usage.
[~/cbench]$ taskset -c 0 ./burn_cycles.py
Starting a thread
Starting a thread
Starting a thread
Interestingly, launching the Python script and then immediately using taskset to set the just-started process' affinity caps the process at 100%. Note from the output that the Linux scheduler finished executing the Bash commands before spawning the Python threads. So, the Python process was started, then it was set to run on CPU 0, then it spawned its threads, which inherited the proper affinity.
[~/cbench]$ ./burn_cycles.py &; taskset -pc 0 `pgrep python`
[1] 8561
pid 8561's current affinity list: 0-3
pid 8561's new affinity list: 0
Starting a thread
[~/cbench]$ Starting a thread
Starting a thread
That result contrasts with this method, which is exactly the same but allows the Python threads to spawn before setting the affinity of the Python process. This replicates the "taskset does nothing" results I described above.
[~/cbench]$ ./burn_cycles.py &
[1] 8996
[~/cbench]$ Starting a thread
Starting a thread
Starting a thread
[~/cbench]$ taskset -pc 0 `pgrep python`
pid 8996's current affinity list: 0-3
pid 8996's new affinity list: 0
What's going wrong here?
Apparently threads spawned before the parent process' affinity is changed don't inherit the affinity of their parent. If someone could edit in a link to documentation that explains this, that would be helpful.
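The same experiment can be reproduced without taskset at all, using os.sched_setaffinity, which with pid 0 pins only the calling thread, just like plain taskset -p on a single task. This sketch (not part of the original answer) shows that a thread spawned before the main thread's affinity change keeps the old mask, while one spawned after inherits the new one:

```python
import os
import threading
import time

masks = {}

def record(label):
    time.sleep(0.2)                      # let the main thread change its mask first
    masks[label] = os.sched_getaffinity(0)

initial = os.sched_getaffinity(0)

t_before = threading.Thread(target=record, args=("spawned before",))
t_before.start()

one_cpu = {min(initial)}                 # pick a single allowed CPU
os.sched_setaffinity(0, one_cpu)         # pins only the calling (main) thread

t_after = threading.Thread(target=record, args=("spawned after",))
t_after.start()

t_before.join(); t_after.join()
print(masks)   # "spawned before" keeps the full mask; "spawned after" gets one_cpu
```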
| Setting running process affinity with taskset fails |
1,486,499,744,000 |
The Insert is located right next to Backspace.
So when I am using Leafpad, Gedit, etc., I hit Insert by accident often, which causes the cursor to turn into a bold box which
overwrites text as I type.
How do I disable it?
|
First, find the keysym which corresponds to Insert
$ xmodmap -pke | grep -i insert
This is probably key 118. To disable it globally run
$ xmodmap -e "keycode 118 ="
which causes that key to map to nothing at all.
How to run this command automatically every time your X server starts depends on which distribution and session manager you are using.
| How to permanently disable the Insert key on Linux? |
1,486,499,744,000 |
I'm trying to figure out how to blacklist modules, and I'm testing it on USB storage. Unfortunately it seems to have no effect, and the module still gets loaded even though it's (apparently) not in use.
My experiment is taking place on an Ubuntu 12.04.3 LTS.
raptor@raptor-VirtualBox:/etc/modprobe.d$ lsmod | grep usb
usb_storage 39720 0
usbhid 46054 0
hid 82511 2 hid_generic,usbhid
raptor@raptor-VirtualBox:/etc/modprobe.d$ cat blacklist.conf | grep usb
blacklist usb_storage
blacklist usbmouse
blacklist usbkbd
|
Your problem probably results from the fact that a copy of /etc/modprobe.d/blacklist.conf is located in the initramfs. When you reboot your computer, it is still using the old copy that doesn't contain your change. Try to rebuild the initramfs with the following command and then reboot:
sudo update-initramfs -u
| Kernel module blacklist not working |
1,486,499,744,000 |
How or where does Linux determine the assignment of a network device? Specifically, wlan0 or wlan1 for wireless USB devices.
I plugged in a TP USB wireless a while ago, and it was assigned wlan0. I removed it. This week I plugged in an Edimax USB wireless device and it comes up as wlan1. I removed it today to try a second Edimax USB wireless device (I bought two) and now it comes up wlan2.
I know enough of Unix/Linux to know this is being configured somewhere, and if I delete the unused config file I can make the latest Edimax become wlan0. But how/where?
|
Udev is the system component that determines the names of devices under Linux — mostly file names under /dev, but also the names of network interfaces.
Versions of udev from 099 to 196 come with rules to record the names of network interfaces and always use the same number for the same device. These rules are disabled by default starting from udev 174, but may nonetheless be enabled by your distribution (e.g. Ubuntu keeps them). Some distributions provide different rule sets.
The script that records and reserves interface names for future use is
/lib/udev/rules.d/75-persistent-net-generator.rules. It writes rules in
/etc/udev/rules.d/70-persistent-net.rules. So remove the existing wlan0 and wlan1 entries from your /etc/udev/rules.d/70-persistent-net.rules, and change wlan2 to wlan0. Run udevadm trigger --attr-match=vendor='Edimax' (or whatever --attr-match parameter you find matches your device) to reapply the rules to the already-plugged-in device.
| wlan number assignment |
1,486,499,744,000 |
I have listed the names of the files to be deleted in a file. How can I pass that file to the rm command so that it deletes them one by one?
|
If you have one file per line, one way to do it is:
tr '\n' '\0' < list_of_files_to_be_deleted.txt | xargs -0 -r rm --
The file list is given as input to the tr command which changes the file separator from linefeed to the null byte and the xargs command reads files separated by null bytes on input and launches the rm command with the files appended as arguments.
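For comparison, the same operation done without a shell looks like this (a sketch; it handles names containing spaces, but, like any newline-separated list, it cannot represent names that themselves contain newlines — that is exactly what the NUL-separator trick above solves):

```python
import os

def delete_listed(list_path):
    """Remove every file named (one per line) in list_path.
    Blank lines are skipped; a missing file raises OSError,
    mirroring rm without -f."""
    with open(list_path) as f:
        for line in f:
            name = line.rstrip("\n")
            if name:
                os.remove(name)
```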
| How to execute command on list of file names in a file? |
1,486,499,744,000 |
When I use this, the quality doesn't look bad.
ffmpeg -same_quant -i video.mp4 video.avi
In the ffmpeg documentation is written: "Note that this is NOT SAME QUALITY. Do not use this option unless you know you need it."
Does -same_quant give me the best quality, or is there a more recommended option that preserves the quality of the input?
|
(adapted from comments above)
Depending on the codecs used (some codecs are incompatible with some containers), you could always simply copy the streams (-codec copy). That is the best way to avoid quality changes, as you're not reencoding the streams, just repackaging those in a different container.
When dealing with audio/video files, it is important to keep in mind that containers are mostly independent from the used codecs. It is common to see people referring to files as "AVI video" or "MP4 video", but those are containers and tell us little about whether a player will be able to play the streams, as, apart from technical limitations (for example, AVI may have issues with h264 and Ogg Vorbis), you could use any codec.
-same_quant seems to be a way to tell ffmpeg to try to achieve a similar quality, but as soon as you reencode the video (at least with lossy codecs), you have no way to get the same quality. If you're concerned with quality, a good rule of thumb is to avoid reencoding the streams when possible.
So, in order to copy the streams with ffmpeg, you'd do:
ffmpeg -i video.mp4 -codec copy video.avi
(As @Peter.O mentioned, option order is important, so that's where -codec copy must go. You could still keep -same_quant, but it won't have any effect as you're not reencoding the streams.)
| How get the best quality when converting from mp4 to avi with ffmpeg? |
1,486,499,744,000 |
btrfs has finally found its way into the latest kernels. Is it considered stable and safe enough to use in a home backup scenario (as an alternative to ZFS)?
|
No, and while fuse-ZFS is the bee's knees (having tried it) I wouldn't use it either. It's not a stability issue - both are fairly stable - but one of code maturity.
| Is btrfs stable enough for home usage? [closed] |
1,486,499,744,000 |
If two processes are connected by a pipe,
> cmd1 | cmd2
is there any way for cmd1 to find out the name (or PID) of the process on the other side of the pipe (cmd2)?
Also, vice versa, is there any way for cmd2 to get the name/PID of cmd1?
I know that there is isatty(3) to check if the output goes to (or the input comes from) a terminal, so I wondered if there is a way to find out a little bit more about the other hand side.
|
You can see the pipe in /proc/$PID/fd. The descriptor is a symlink to something like pipe:[188528098]. With that information you can search for the other process:
$ lsof -n | grep -w 188528098
sleep 1565 hl 1w FIFO 0,12 0t0 188528098 pipe
sleep 1566 hl 0r FIFO 0,12 0t0 188528098 pipe
Or, if you want to be sure (for automatic processing) that the number is the socket and not part of a file name:
$ lsof -n | awk 'NF==9 && $5=="FIFO" && $9=="pipe" && $8==188528098'
With lsof 4.88 and above, you can also use the -E or +E flags:
In combination with -p <pid>, -d <descriptor>, you can get the endpoint information for a specific descriptor of a given pid.
$ sleep 1 | sh -c 'lsof -E -ap "$$" -d 0; exit'
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sh 27176 chazelas 0r FIFO 0,10 0t0 2609460 pipe 27175,sleep,1w
Above telling us that fd 0 of sh is a pipe with fd 1 of sleep at the other end. If you change -E to +E, you also get the full information for that fd of sleep:
$ sleep 1 | sh -c 'lsof +E -ap "$$" -d 0; exit'
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sleep 27066 chazelas 1w FIFO 0,10 0t0 2586272 pipe 27067,sh,0r 27068,lsof,0r
sh 27067 chazelas 0r FIFO 0,10 0t0 2586272 pipe 27066,sleep,1w
(see how lsof also has the pipe on its stdin)
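The /proc side of this can also be walked without lsof. A sketch that resolves a pipe descriptor to its inode and then scans every process's fd table for the same inode (finding other users' processes requires permission to read their /proc entries):

```python
import os
import re

r, w = os.pipe()

# Each end of a pipe appears under /proc/<pid>/fd as a "pipe:[inode]" symlink.
inode = re.fullmatch(r"pipe:\[(\d+)\]",
                     os.readlink(f"/proc/self/fd/{r}")).group(1)

def pids_sharing_pipe(inode):
    """Scan every readable /proc/<pid>/fd for descriptors on this pipe."""
    target = f"pipe:[{inode}]"
    pids = set()
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            for fd in os.listdir(f"/proc/{pid}/fd"):
                if os.readlink(f"/proc/{pid}/fd/{fd}") == target:
                    pids.add(int(pid))
        except OSError:
            continue    # process exited, or we lack permission
    return pids

print(pids_sharing_pipe(inode))   # includes our own PID (we hold both ends)
```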
| Name of the process on the other end of a unix pipe? |
1,486,499,744,000 |
Is it possible to add a name as a group member when the name has a space? For example "foo bars" is the name and I want it to add to the group called "reindeers".
This group is created in AD and it is quite common for names to have spaces. I won't be able to change the name.
Apologies if this has already been asked here. I just could not find any references. I did find solutions/discussions to adding a username with a space in the sudoers config file by replacing the space with a "_" instead, or escaping the space with a backslash. Not sure if this works with regards to adding it to a group.
Thanks,
Mrky
|
Group and user names aren’t allowed to contain the space character on POSIX-style systems; see Command line login failed with two strings ID in Debian Stretch for references (the restrictions apply to groups as well as users).
In your case you might be able to work around the limitation by managing your groups in AD rather than in /etc/group. But I’d recommend trying to convince the powers that be to drop spaces entirely...
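For reference, the name rule documented in useradd(8) on many distributions is the regular expression [a-z_][a-z0-9_-]*[$]?, which a quick check makes concrete (a sketch, not from the original answer):

```python
import re

# Pattern documented in useradd(8) on many Linux distributions;
# the trailing optional '$' exists for Samba machine accounts.
VALID_NAME = re.compile(r"[a-z_][a-z0-9_-]*\$?")

def is_portable_name(name: str) -> bool:
    return VALID_NAME.fullmatch(name) is not None

print(is_portable_name("reindeers"))   # True
print(is_portable_name("foo bars"))    # False: the space is rejected
```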
| How to add to group when name has a space? |
1,486,499,744,000 |
recently I had to clean up a hacked server. The malicious process would appear as "who" or "ifconfig eth0" or something like that in "ps aux" output, even though the executable was just a jumble of letters, which was shown in /proc/[pid]/status.
I'm curious as to how the process managed to mask itself like that.
|
Manipulating the name in the process list is a common practice. E.g. I have in my process listing the following:
root 9847 0.0 0.0 42216 1560 ? Ss Aug13 8:27 /usr/sbin/dovecot -c /etc/dovecot/d
root 20186 0.0 0.0 78880 2672 ? S Aug13 2:44 \_ dovecot-auth
dovecot 13371 0.0 0.0 39440 2208 ? S Oct09 0:00 \_ pop3-login
dovecot 9698 0.0 0.0 39452 2640 ? S Nov07 0:00 \_ imap-login
ericb 9026 0.0 0.0 48196 7496 ? S Nov11 0:00 \_ imap [ericb 192.168.170.186]
Dovecot uses this mechanism to easily show what each process is doing.
It's basically as simple as manipulating the argv[0] parameter in C. argv is an array of pointers to the parameters with which the process has been started. So a command ls -l /some/directory will have:
argv[0] -> "ls"
argv[1] -> "-l"
argv[2] -> "/some/directory"
argv[3] -> null
By allocating some memory, putting some text in that memory, and then putting the address of that memory in argv[0] the process name shown will have been modified to the new text.
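The answer above describes the classic argv[0] rewrite in C. Linux also keeps a second, related name, the kernel "comm" field, settable via prctl(PR_SET_NAME). This sketch (not from the original answer) drives it from Python via ctypes; note it changes what /proc/<pid>/comm and ps -o comm report, is truncated to 15 bytes, and leaves /proc/<pid>/cmdline untouched:

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)
PR_SET_NAME = 15          # from <linux/prctl.h>

def set_comm(name: bytes):
    """Rename the calling thread's kernel 'comm' (max 15 bytes + NUL)."""
    if libc.prctl(PR_SET_NAME, name, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_NAME) failed")

set_comm(b"not-python")
with open(f"/proc/{os.getpid()}/comm") as f:
    print(f.read().strip())    # now reports "not-python"
```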
| How can a process appear to have different name in ps output? |
1,486,499,744,000 |
A PCB or process control block, is defined like this on Wikipedia
Process Control Block (PCB, also called Task Controlling Block,[1]
Task Struct, or Switchframe) is a data structure in the operating
system kernel containing the information needed to manage a particular
process. The PCB is "the manifestation of a process in an operating
system
and its duty is:
Process identification data
Processor state data
Process control data
So where can the PCB of a process be found?
|
In the Linux kernel, each process is represented by a task_struct in a doubly-linked list, the head of which is init_task (pid 0, not pid 1). This is commonly known as the process table.
In user mode, the process table is visible to normal users under /proc. Taking the headings for your question:
Process identification data is the process ID (which is in the path /proc/<process-id>/...), the command line (cmd), and possibly other attributes depending on your definition of 'identification'.
Process state data includes scheduling data (sched, stat and schedstat), what the process is currently waiting on (wchan), its environment (environ) etc.
Process control data could be said to be its credentials (uid_map) and resource limits (limits).
So it all depends how you define your terms... but in general, all data about a process can be found in /proc.
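For instance, a few of those task_struct fields can be pulled straight out of /proc/<pid>/stat (a sketch; the parenthesis-splitting is needed because the comm field may itself contain spaces or parentheses):

```python
import os

def pcb_snapshot(pid="self"):
    """Return a few task_struct fields as exposed by /proc/<pid>/stat."""
    with open(f"/proc/{pid}/stat") as f:
        raw = f.read()
    # Field 2 (comm) is parenthesised and may contain spaces, so split
    # on the first '(' and the last ')'.
    pid_part, _, rest = raw.partition("(")
    comm, _, tail = rest.rpartition(")")
    fields = tail.split()
    return {
        "pid": int(pid_part),
        "comm": comm,
        "state": fields[0],    # R, S, D, Z, T, ...
        "ppid": int(fields[1]),
    }

print(pcb_snapshot())
```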
| Where is PCB on Linux |
1,486,499,744,000 |
I want to know which ports are used by which processes in embedded-linux.
Since it is a minimal embedded Linux, there are no networking command-line tools such as netstat or lsof
(only basic tools such as cat, cp, echo, etc. exist).
A partial solution seems to be to use "cat /proc/net/tcp" and "cat /proc/net/udp" command lines.
However, I am not sure the output of those commands shows all ports in use, and the list does not show which process is bound to a certain port.
Any comments would be appreciated.
|
You should be able to find all open ports in /proc/net/tcp and /proc/net/udp. Each of those files have an inode column, which can be used to find the process owning that socket.
Once you have an inode number, you can run an ls command such as ls -l /proc/*/fd/* | grep socket:.$INODE to find the processes using that socket. In case a process has been set up with different file descriptors for different threads, you may need to extend the command to ls -l /proc/*/task/*/fd/* | grep socket:.$INODE in order to find them all.
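Putting the two steps together needs nothing but /proc (a sketch, assuming IPv4 TCP; /proc/net/udp and the IPv6 variants parse the same way):

```python
import os

def tcp_sockets():
    """Map socket inode -> local port from /proc/net/tcp."""
    table = {}
    with open("/proc/net/tcp") as f:
        next(f)                               # skip the header line
        for line in f:
            fields = line.split()
            local_port = int(fields[1].split(":")[1], 16)  # hex port
            inode = int(fields[9])
            table[inode] = local_port
    return table

def pid_owning_inode(inode):
    """Find which process holds a descriptor for this socket inode."""
    target = f"socket:[{inode}]"
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            for fd in os.listdir(f"/proc/{pid}/fd"):
                if os.readlink(f"/proc/{pid}/fd/{fd}") == target:
                    return int(pid)
        except OSError:
            continue                          # exited, or not ours to read
    return None
```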
| WITHOUT using network command lines in linux, how to know list of open ports and the process that owns them? |
1,486,499,744,000 |
Is there any way to anonymize http requests through the command line? In other words, is it possible to wget a page without the requester's IP showing up?
|
One method of anonymizing HTTP traffic from the command line is to use tor. This article discusses the method, titled: How to anonymize the programs from your terminal with torify.
General steps from article
You can install the tor package as follows:
Fedora/CentOS/RHEL
$ sudo yum install tor
Ubuntu/Debian
$ sudo apt-get install tor
Edit this file /etc/tor/torrc so that the following lines are present and uncommented:
ControlPort 9051
CookieAuthentication 0
Start the tor service
$ sudo /etc/init.d/tor restart
Testing setup
Real IP
$ curl ifconfig.me
67.253.170.83
anonymized IP
$ torify curl ifconfig.me 2>/dev/null
46.165.221.166
As you can see the ifconfig.me website thinks our IP address is now 46.165.221.166. You can tell tor to start a new session triggering a new IP address for us:
$ echo -e 'AUTHENTICATE ""\r\nsignal NEWNYM\r\nQUIT' | nc 127.0.0.1 9051
250 OK
250 OK
250 closing connection
$ torify curl ifconfig.me 2>/dev/null
37.252.121.31
Do it again to get another different IP
$ echo -e 'AUTHENTICATE ""\r\nsignal NEWNYM\r\nQUIT' | nc 127.0.0.1 9051
250 OK
250 OK
250 closing connection
$ torify curl ifconfig.me 2>/dev/null
91.219.237.161
Downloading Pages
$ torify curl www.google.com 2>/dev/null
Browsing the internet via elinks
$ torify elinks www.google.com
References
Tor docs
How to anonymize the programs from your terminal with torify
| anonymous url navigation in command line? |
1,486,499,744,000 |
What is the reason for the root filesystem being mounted ro in the initramfs (and in initrd).
For example the Gentoo initramfs guide mounts the root filesystem with:
mount -o ro /dev/sda1 /mnt/root
Why not the following?
mount -o rw /dev/sda1 /mnt/root
I can see that there is probably a good reason (and it probably involves switchroot), however it does not seem to be documented anywhere.
|
The initial ramdisk (initrd) is typically a stripped-down version of the root filesystem containing only that which is needed to mount the actual root filesystem and hand off booting to it.
The initrd exists because in modern systems, the boot loader can't be made smart enough to find the root filesystem reliably. There are just too many possibilities for such a small program as the boot loader to cover. Consider NFS root, nonstandard RAID cards, etc. The boot loader has to do its work using only the BIOS plus whatever code can be crammed into the boot sector.
The initrd gets stored somewhere the boot loader can find, and it's small enough that the extra bit of space it takes doesn't usually bother anyone. (In small embedded systems, there usually is no "real" root, just the initrd.)
The initrd is precious: its contents have to be preserved under all conditions, because if the initrd breaks, the system cannot boot. One design choice its designers made to ensure this is to make the boot loader load the initrd read-only. There are other principles that work toward this, too, such as that in the case of small systems where there is no "real" root, you still mount separate /tmp, /var/cache and such for storing things. Changing the initrd is done only rarely, and then should be done very carefully.
Getting back to the normal case where there is a real root filesystem, it is initially mounted read-only because initrd was. It is then kept read-only as long as possible for much the same reasons. Any writing to the real root that does need to be done is put off until the system is booted up, by preference, or at least until late in the boot process when that preference cannot be satisfied.
The most important thing that happens during this read-only phase is that the root filesystem is checked to see if it was unmounted cleanly. That is something the boot loader could certainly do instead of leaving it to the initrd, but what then happens if the root filesystem wasn't unmounted cleanly? Then it has to call fsck to check and possibly fix it. So, where would initrd get fsck, if it was responsible for this step instead of waiting until the handoff to the "real" root? You could say that you need to copy fsck into the initrd when building it, but now it's bigger. And on top of that, which fsck will you copy? Linux systems regularly use a dozen or so different filesystems. Do you copy only the one needed for the real root at the time the initrd is created? Do you balloon the size of initrd by copying all available fsck.foo programs into it, in case the root filesystem later gets migrated to some other filesystem type, and someone forgets to rebuild the initrd?
The Linux boot system architects wisely chose not to burden the initrd with these problems. They delegated checking of the real root filesystem to the real root filesystem, since it is in a better position to do that than the initrd.
Once the boot process has proceeded far enough that it is safe to do so, the initrd gets swapped out from under the real root with pivot_root(8), and the filesystem is remounted in read-write mode.
| Why does initramfs mount the root filesystem read-only |
1,486,499,744,000 |
I am trying to apply below nftables rule which I adopted from this guide:
nft add rule filter INPUT tcp flags != syn counter drop
somehow this is ending up with:
Error: Could not process rule: No such file or directory
Can anyone spot what exactly I might be missing in this rule?
|
You're probably missing your table or chain.
nft list ruleset
will give you what you are working with. If it prints out nothing, you're missing both.
nft add table ip filter # create table
nft add chain ip filter INPUT { type filter hook input priority 0 \; } # create chain
Then you should be able to add your rule to the chain.
NOTE: If you're logged in with ssh, your connection will be suspended.
| nftables rule: No such file or directory error |
1,486,499,744,000 |
When configuring a chain in nftables, one has to provide a priority value. Almost all online examples set a piority of 0; sometimes, a value of 100 gets used with certain hooks (output, postrouting).
The nftables wiki has to say:
The priority can be used to order the chains or to put them before or after some Netfilter internal operations. For example, a chain on the prerouting hook with the priority -300 will be placed before connection tracking operations.
For reference, here's the list of different priority used in iptables:
NF_IP_PRI_CONNTRACK_DEFRAG (-400): priority of defragmentation
NF_IP_PRI_RAW (-300): traditional priority of the raw table placed before connection tracking operation
NF_IP_PRI_SELINUX_FIRST (-225): SELinux operations
NF_IP_PRI_CONNTRACK (-200): Connection tracking operations
NF_IP_PRI_MANGLE (-150): mangle operation
NF_IP_PRI_NAT_DST (-100): destination NAT
NF_IP_PRI_FILTER (0): filtering operation, the filter table
NF_IP_PRI_SECURITY (50): Place of security table where secmark can be set for example
NF_IP_PRI_NAT_SRC (100): source NAT
NF_IP_PRI_SELINUX_LAST (225): SELinux at packet exit
NF_IP_PRI_CONNTRACK_HELPER (300): connection tracking at exit
This states that the priority controls interaction with internal Netfilter operations, but only mentions the values used by iptables as examples.
In which cases is the priority relevant (i.e. has to be set to a value ≠ 0)? Only for multiple chains with same hook? What about combining nftables and iptables? Which internal Netfilter operations are relevant for determining the correct priority value?
|
UPDATE: iptables-nft (rather than iptables-legacy) is using the nftables kernel API and in addition a compatibility layer to reuse xtables kernel modules (those described in iptables-extensions) when there's no native nftables translation available. It should be treated as nftables in most regards, except for this question that it has fixed priorities like the legacy version, so nftables' priorities still matter here.
iptables (legacy) and nftables both rely on the same netfilter infrastructure, and use hooks at various places. It's explained here: Netfilter hooks, or there's this systemtap manpage, which documents a bit of the hook handling:
PRIORITY is an integer priority giving the order in which the probe
point should be triggered relative to any other netfilter hook
functions which trigger on the same packet. Hook functions execute on
each packet in order from smallest priority number to largest priority
number. [...]
or also this blog about netfilter: How to Filter Network Packets using Netfilter–Part 1 Netfilter Hooks (blog disappeared, using a Wayback Machine link instead.)
All this together tells us that various modules/functionalities can register at each of the five possible hooks (in the IPv4 case), and within each hook they'll be called in order of their registered priority.
Those hooks are not only for iptables or nftables. There are various other users, like systemtap above, or even netfilter's own submodules. For example, with IPv4 when using NAT either with iptables or nftables, nf_conntrack_ipv4 will register in 4 hooks at various priorities for a total of 6 times. This module will in turn pull nf_defrag_ipv4 which registers at NF_INET_PRE_ROUTING/NF_IP_PRI_CONNTRACK_DEFRAG and NF_INET_LOCAL_OUT/NF_IP_PRI_CONNTRACK_DEFRAG.
So yes, the priority is relevant only within the same hook. But in this same hook there are several users, and they have already their predefined priority (with often but not always the same value reused across different hooks), so to interact correctly around them, a compatible priority has to be used.
For example, if rules have to be applied early on non-defragmented packets, and then later (as usual) on defragmented packets, just register two nftables chains in prerouting: one at a priority <= -401 (e.g. -450), the other between -399 and -201 (e.g. -300). The best iptables could do until recently was -300, i.e. it couldn't see fragmented packets whenever conntrack (and thus early defragmentation) was in use. Since kernel 4.15, with the option raw_before_defrag, it will register at -450 instead, but it can't do both at once, and iptables-nft doesn't appear to offer such a choice.
So now about the interactions between nftables and iptables: both can be used together, with the exception of NAT in older kernels, where they both compete over netfilter's nat resource: only one should register nat, unless using a kernel >= 4.18 as explained in the wiki. The example nftables settings just ship with the same priorities as iptables, with minor differences.
If both iptables and nftables are used together and one should run before the other because there are interactions and an order of effect is needed, just slightly lower or raise nftables' priority accordingly, since iptables' priorities can't be changed.
For example, in a mostly-iptables setting, one can use nftables with a specific match feature not available in iptables to mark a packet, and then handle this mark in iptables because it has support for a specific target (e.g. the fancy iptables LED target to blink a LED) not available in nftables. Just register a slightly lower priority value for the nftables hook to be sure it runs first. For a usual input filter rule, that would be for example -5 instead of 0. Then again, this value shouldn't be lower than -149 or it will execute before iptables' INPUT mangle chain, which is perhaps not what is intended. That's the only other low value that matters in the input case. For example, there's no NF_IP_PRI_CONNTRACK threshold to consider, because conntrack doesn't register anything at this priority in NF_INET_LOCAL_IN, and neither does SELinux register anything in this hook that would matter here, so -225 has no special meaning in this case.
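As a hedged illustration of the defragmentation example above, an nftables configuration could declare two prerouting base chains bracketing NF_IP_PRI_CONNTRACK_DEFRAG (-400); the table and chain names here are invented:

```
# Sketch: -450 runs before defragmentation (sees raw fragments),
# -300 runs after it (sees reassembled packets).
table ip myfilter {
    chain pre_frag {
        type filter hook prerouting priority -450;
        # rules here run on possibly-fragmented packets
    }
    chain post_frag {
        type filter hook prerouting priority -300;
        # rules here run after conntrack defragmentation
    }
}
```

Loading such a file (e.g. with nft -f) registers both chains on the same hook, ordered purely by their priority values.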
| When and how to use chain priorities in nftables |
1,486,499,744,000 |
I am very much new to embedded Linux. We use the Poky build system and just run the bitbake linux-imx command to build the kernel. It generates some files
(zImage, rootfs, u-boot) and also an sdcard image. We just copy the sdcard image and run Linux on our custom board.
My question is: what do rootfs and zImage actually contain?
|
To understand what every file is responsible for, you should understand how the MPU starts up.
As I understand from your question, you use the NXP (Freescale) i.MX microprocessor family. It includes a small ROM loader, which does basic system setup (interfaces to memory, clock tree, etc.), searches for the media to boot from (based on burned OTP bits or GPIOs), finds the bootloader (u-boot in your case) at the exact address specified in the datasheet, then loads and starts it. U-boot will init more interfaces (e.g. Ethernet), find the arguments that should be passed to the kernel (screen settings, console, network settings if you use NFS), copy the kernel to DDR and pass it all the arguments. The kernel will load all drivers and search for the rootfs with all the libraries, applications, etc. After this the kernel will start the init scripts, which initialize the whole system and start your application.
u-boot is the first thing that will start after the ROM bootloader.
You can replace it with your own code if you would like the MPU to run
bare-metal code without an OS (like a microcontroller).
zImage is a compressed version of the Linux kernel image that is
self-extracting.
rootfs is the root file system, which contains all
applications and libraries, and in most cases everything else, including the home
folder.
The sdcard image is just all the stuff mentioned above, laid out so it can
be copied (with dd) to the card. After copying you will see a FAT
partition with the kernel and device tree, and an EXT partition with the rootfs;
u-boot sits in the unpartitioned area before the FAT partition (in case you use an i.MX6,
that's at 0x80000). It's there just to make your life easier.
| What is zImage, rootfs |
1,486,499,744,000 |
This question originated with a joke between co-workers about increasing performance by moving swap files to a tmpfs. Clearly even if this is possible, it's not a good idea. All I want to know is, can it be done?
I'm currently on Ubuntu 14.04, but I'd imagine the process is similar for most Linux/Unix machines. Here's what I'm doing:
> mkdir /mnt/tmp
> mount -t tmpfs -o size=10m tmpfs /mnt/tmp
> dd if=/dev/zero of=/mnt/tmp/swapfile bs=1024 count=10240
> chmod 600 /mnt/tmp/swapfile
> mkswap /mnt/tmp/swapfile
# So far, so good!
> swapon /mnt/tmp/swapfile
swapon: /mnt/tmp/swapfile: swapon failed: Invalid argument
So, on either linux or unix (I'm interested in any solution) can you somehow set up swap on a file/partition residing in ram? Is there a way around the Invalid argument error I'm getting above?
Again, just want to emphasize that I'm not expecting this to be a solution to a real-world problem. Just a fun experiment, I guess.
|
So, on either linux or unix (I'm interested in any solution) can you
somehow set up swap on a file/partition residing in ram?
Sure. On FreeBSD:
# swapinfo -h
Device 1024-blocks Used Avail Capacity
/dev/mirror/swap.eli 4194300 0B 4.0G 0%
That shows that currently, I have a 4G encrypted swap partition with mirrored redundancy. I'll add another 4G of non-redundant, non-encrypted swap:
First create a 4G RAM-backed "memory disk" (md) device:
# mdconfig -a -t malloc -s 4g; mdconfig -lv
md0
md0 malloc 4096M -
Then tell swapon to add that to the pool of available swap devices, and swapinfo confirms that I now have 8G of swap:
# swapon /dev/md0; swapinfo -h
Device 1024-blocks Used Avail Capacity
/dev/mirror/swap.eli 4194300 0B 4.0G 0%
/dev/md0 4194304 0B 4.0G 0%
Total 8388604 0B 8.0G 0%
| Swap on tmpfs (Obviously a bad idea, but is it possible?) |
1,486,499,744,000 |
Shouldn't a bridge (or a switch) work without having an IP address? I believe I can have a bridge br0 set up with eth0 and eth1 as members, both having no IP addresses.
I can't understand why an address should be allocated to br0.
|
A bridge does not need an IP address to function. Without one it will just perform layer 2 switching, spanning tree protocol and filtering (if configured).
An IP address is required if you want your bridge to take part in layer 3 routing of IP packets.
As an example you can setup a bridge without an IP address in Debian/Ubuntu using the following in /etc/network/interfaces
auto br0
iface br0 inet manual
bridge_ports eth0 eth1
| Why IP address for Linux Bridge which is layer 2 virtual device? |
1,486,499,744,000 |
I use Knoppix (or other Live CDs/DVDs) as a secure environment for creating valuable crypto keys. Unfortunately entropy is a limited resource in such environments. I just noticed that each program start consumes quite some entropy. This seems to be due to some stack protection feature that needs address randomization.
Nice feature but completely useless and - worse - destructive in my scenario. Is there any possibility to disable this feature? I would prefer one that allows me to continue using the original Knoppix (or whatever) image and just need some configuration at runtime.
I read that this was caused by glibc. I am surprised that an strace -p $PID -f -e trace=open against bash does not show any accesses to /dev/random when I start programs. But I am not familiar with the interaction of execve() and the linker.
|
If this is indeed due to address randomization (ASLR has to do with where the program is loaded, see here: http://en.wikipedia.org/wiki/Address_space_layout_randomization) then you can disable it by passing norandmaps to the kernel in the boot options (see here: http://www.linuxtopia.org/online_books/linux_kernel/kernel_configuration/re30.html).
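For completeness, a runtime counterpart worth knowing about (whether it helps in the Live-CD scenario is an assumption on my part) is the kernel.randomize_va_space sysctl, which can be checked without rebooting:

```shell
# 0 = ASLR off, 1 = partial randomization, 2 = full (the usual default).
cat /proc/sys/kernel/randomize_va_space

# Disabling it at runtime requires root, e.g.:
#   sysctl -w kernel.randomize_va_space=0
```

This toggles the same mechanism as the norandmaps boot parameter, so it can be flipped on a running Knoppix session without rebuilding the image.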
| Can entropy consumption at program start be prevented? |
1,486,499,744,000 |
When I execute ls /directory | grep '[^term]' in Bash I get a regular listing, as if the grep command is ignored somehow. I tried the same thing with egrep, and I tried both double and single quotes, but with no better results. When I try ls /directory | grep '^[term]' I get all entries beginning with term - as expected.
I have tried out this command in an online editor, where I can test my regex and it worked as it should. But not in Bash. So it works in a simulation, but not in real life.
I work on Crunchbang Linux 10. I hope this is enough information and am looking forward to every hint, because failing to execute on such a basic level and wasting hours of time is really frustrating!
|
Are you sure what you want is happening?
When you run ls /directory | grep '[^term]' you are essentially grepping for not the letters t e r m. This means if a file has other letters in its name it will still appear in the output of ls. Take the following directory for instance:
$ ls
alpha brave bravo charlie delta
Now if I run ls |grep '^[brav]' I get the following:
$ ls |grep '^[brav]'
alpha
brave
bravo
As you can see, not only did I get brave and bravo I also got alpha because the character class [] will get any letter from that list.
Consequently, if I run ls |grep '[^brav]' I will get all the files whose names contain at least one character other than b r a v (which, here, is every file).
$ ls |grep '[^brav]'
alpha
bravo
brave
charlie
delta
If you notice it included the entire directory listing because all the files had at least one letter that was not included in the character class.
So as Kanvuanza said, to grep for the inverse of "term" as opposed to the characters t e r m you should do it using grep -v.
For instance:
$ ls |grep -v 'brav'
alpha
charlie
delta
Also if you don't want the files that have any characters in the class use grep -v '[term]'. That will keep any files from showing up that have any of those characters. (Kanvuanza's answer)
For instance:
$ ls |grep -v '[brav]'
As you can see there were no files listed because all the files in this directory included at least one letter from that class.
Addendum:
I wanted to add that using PCRE it is possible to use just regex to filter out using negate expressions. To do this you would use something known as a negative look-ahead regex: (?!<regex>).
So using the example above, you can do something like this to get results you want without using grep flags.
$ ls | grep -P '^(?!brav)'
alpha
charlie
delta
To deconstruct that regex, it first matches on a start of a line ^ and then looks for strings that do not match brav to follow afterwards. Only alpha, charlie, and delta match so those are the only ones that are printed.
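The whole contrast can be reproduced without creating a directory, by piping a fixed list (the same file names as in the examples above) through grep:

```shell
list='alpha
brave
bravo
charlie
delta'

# Negated class: matches any name containing a char outside {b,r,a,v},
# which is every name in this list.
printf '%s\n' "$list" | grep '[^brav]'

# Inverted match: drops any name containing b, r, a or v,
# which is every name here, so this prints nothing.
# (grep exits 1 on no match; || true keeps the demo's exit status clean.)
printf '%s\n' "$list" | grep -v '[brav]' || true
```

The first command echoes the full list back; the second is empty, which is exactly the difference between negating a character class and negating the match.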
| Regular expression problem(s) in Bash: [^negate] doesn't seem to work |
1,486,499,744,000 |
What I already know:
An ELF executable has a number of sections, obviously the .text and .data sections get loaded into memory as these are the main parts of the program. But for a program to work, it needs more info, especially when linked dynamically.
What I'm interested in are sections like .plt, .got, .dynamic, .dynsym, .dynstr etcetera. The parts of the ELF that are responsible for the linking of functions to addresses.
From what I've been able to figure out so far, things like .symtab and .strtab do not get loaded (or do not stay) in memory. But are .dynsym and .dynstr used by the linker? Do they stay in memory? Can I access them from program code?
And are there any parts of an executable that reside in kernel memory?
My interest in this is mostly forensic, but any information on this topic will help. The resources I've read about these tables and dynamic linking are more high level, they only explain the workings, not anything practical about the contents in memory.
Let me know if anything is unclear about my question.
|
The following is a really good reference: http://www.ibm.com/developerworks/linux/library/l-dynamic-libraries/. It contains a bibliography at the end of a variety of different references at different levels. If you want to know every gory detail you can go straight to the source: http://www.akkadia.org/drepper/dsohowto.pdf. (Ulrich Drepper wrote the Linux dynamic linker.)
You can get a really good overview of all the sections in your executable by running a command like "objdump -h myexe" or "readelf -S myexe".
The .interp section contains the name of the dynamic loader that will be used to dynamically link the symbols in this object.
The .dynamic section is a distillation of the program header that is formatted to be easy for the dynamic loader to read. (So it has pointers to all the other sections.)
The .got (Global Offset Table) and .plt (Procedure Linkage Table) are the two main structures that are manipulated by the dynamic linker. The .got is an indirection table for variables and the .plt is an indirection table for functions. Each executable or library (which are called "shared objects") has its own .got and .plt and these are tables of the symbols referenced by that shared object that are actually contained in some other shared object.
The .dynsym contains all the information about the symbols in your shared object (both the ones you define and the external ones you need to reference). The .dynsym doesn't contain the actual symbol names. Those are contained in .dynstr, and .dynsym has pointers into .dynstr. .gnu.hash is a hash table used for quick lookup of symbols by name. It also contains only pointers (pointers into .dynstr, and pointers used for making bucket chains.)
When your shared object dereferences some symbol "foo" the dynamic linker has to go look up "foo" in all the dynamic objects you are linked against to figure out which one contains the "foo" you are looking for (and then what the relative address of "foo" is inside that shared object.) The dynamic linker does this by searching the .gnu.hash section of all the linked shared objects (or the .hash section for old shared objects that don't have a .gnu.hash section.) Once it finds the correct address in the linked shared object it puts it in the .got or .plt of your shared object.
| Which parts of an ELF executable get loaded into memory, and where? |
1,486,499,744,000 |
Assume that, besides the Apache web server logs, I have never had any contact with any kind of (professional) logs on any operating system. So logging, although I understand some basics, is altogether a pretty new topic. At the moment the investment needed to fully learn this topic seems quite huge, yet I don't even know whether it is worth knowing more than the most abstract concepts.
Which resources would you suggest should someone in that situation consume (tutorials, man pages, books) to learn about Logging?
Which logs should a normal Linux user read on a daily/monthly basis? Is the assumption even correct that they are written for human readability or are they generally evaluated and used by other tools?
What should the normal *nix user and software developer know about these logs?
What do you need to know about log rotation, if you are not expected to manage professional web servers with huge loads of events?
|
[This was written a few years before the widespread adoption of journald on systemd systems and does not touch on it. Currently (late 2018) both journald and (r)syslog, described below, are used on distros such as Debian. On others, you may have to install rsyslog if you want to use it alongside, but the integration with journald is straightforward.]
I won't discuss logging with regard to ubuntu specifically much, since the topic is standardized for linux in general (and I believe most or all of what I have to say is also true in general for any flavor *nix, but don't take my word for that). I also won't say much about "how to read logs" beyond answering this question:
Is the assumption even correct that they are written for human
readability or are they generally evaluated and used by other tools?
I guess that depends on the application, but in general, at least with regard to what goes into syslog (see below), they should be human readable. "Meaningful to me" is another issue, lol. However, they may be also be structured in a way that makes parsing them with standard tools (grep, awk, etc) for specific purposes easier.
Anywho, first, there is a distinction between applications which do their own logging and applications which use the system logger. Apache by default is the former, although it can be configured to do the latter (which I think most people would consider undesirable). Applications which do their own logging could do so in any manner using any location for the file(s), so there is not much to say about that. The system logger is generally referred to as syslog.
syslog
"Syslog" is really a standard that is implemented with a daemon process generically called syslogd (d is for daemon!). The predominant syslog daemon currently in use on linux, including ubuntu, is rsyslogd. Rsyslogd can do a lot, but as configured out of the box on most distros it emulates a traditional syslog, which sorts stuff into plain text files in /var/log. You might find documentation for it in /usr/share/doc/rsyslog-doc-[version] (beware, there is also a /usr/share/doc/rsyslog-[version], but that's just notices from the source package such as NEWS and ChangeLog). If it's there, it's html, but Stack Exchange doesn't permit embedding local file links:
file://usr/share/doc/rsyslog-doc/index.html
So you could try copy pasting that. If it's not there, it may be part of a separate package that is not installed. Query your packaging system (eg, apt-cache search rsyslog | grep doc).
The configuration is in /etc/rsyslog.conf, which has a manual page, man rsyslog.conf, although while the manual page makes a fine reference, it may be less penetrable as an introduction. Fortunately, the fundamentals of the stock rsyslog.conf conform to those of the traditional syslog.conf, for which there are many introductions and tutorials around. This one, for example; what you want to take away from that, while peering at your local rsyslog.conf, is an understanding of facilities and priorities ("priority" is sometimes referred to as loglevel), since these are part of the aforementioned syslog standard. The reason this standard is important is because rsyslog actually gets its stuff via the kernel, and what the kernel implements is the standard.
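To tie facilities and priorities together, a minimal selector block in the traditional syslog.conf style might look like this (rsyslog accepts the same syntax; the paths are typical Debian/Ubuntu defaults, not mandates):

```
# facility.priority    action
mail.info              /var/log/mail.log    # mail facility, info and above
kern.*                 /var/log/kern.log    # everything from the kernel
*.emerg                *                    # emergencies to all logged-in users
```

You can exercise such rules by hand with e.g. logger -p mail.info "hello" and then watching the target file.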
With regard to the $ directives in rsyslog.conf, these are rsyslog specific and if you install that optional doc package you'll find a guide to them in rsyslog_conf_global.html.
Have fun...if you are curious about how applications use the system logger, look at man logger and man 3 syslog.
Log Rotation
The normative means of rotating logs is via a tool called logrotate (and there is a man logrotate). The normative method of using logrotate is via the cron daemon, although it does not have to be done that way (e.g., if you tend to turn your desktop off everyday, you might as well just do it once at boot before syslog starts but, obviously, after the filesystem is mounted rw).
There's a good introduction to logrotate here. Note that logrotate is not just for syslog stuff, it can be used with any file at all. The base configuration file is /etc/logrotate.conf, but since the configuration has an "include" directive, commonly most stuff goes into individual files in the /etc/logrotate.d directory (here d is for directory, not daemon; logrotate is not a daemon).
An important thing to consider when using logrotate is how an application will react when its log file gets "rotated" -- in other words, moved -- while the application is running. WRT (r)syslogd, it will just stop writing to that log (I think there is a security justification for this). The usual way to deal with that is to tell syslog to restart (and re-open all its files), which is why you will see a postrotate directive in logrotate conf files sending SIGHUP to the syslog daemon.
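A sketch of what such a snippet might look like, for a hypothetical /etc/logrotate.d/myapp (all names and values here are illustrative; the directives themselves are documented in man logrotate):

```
/var/log/myapp.log {
    weekly
    rotate 4            # keep four old logs
    compress
    missingok           # don't complain if the log is absent
    notifempty          # skip rotation when the log is empty
    postrotate
        # tell the writer to reopen its file, e.g. for rsyslog:
        invoke-rc.d rsyslog rotate > /dev/null 2>&1 || true
    endscript
}
```

You can dry-run a configuration with logrotate -d <file> to see what it would do without touching anything.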
| learning about general logging/logrotation on linux? |
1,486,499,744,000 |
Possible Duplicate:
/proc/PID/fd/X link number
I have a question regarding file descriptors and their linkage in the proc file system. I've observed that if I list the file descriptors of a certain process from proc with ls -la /proc/1234/fd I get the following output:
lr-x------ 1 root root 64 Sep 13 07:12 0 -> /dev/null
l-wx------ 1 root root 64 Sep 13 07:12 1 -> /dev/null
l-wx------ 1 root root 64 Sep 13 07:12 2 -> /dev/null
lr-x------ 1 root root 64 Sep 13 07:12 3 -> pipe:[2744159739]
l-wx------ 1 root root 64 Sep 13 07:12 4 -> pipe:[2744159739]
lrwx------ 1 root root 64 Sep 13 07:12 5 -> socket:[2744160313]
lrwx------ 1 root root 64 Sep 13 07:12 6 -> /var/lib/log/some.log
I get the meaning of a file descriptor, and I understand from my example the file descriptors 0, 1, 2 and 6: they are tied to physical resources on my computer, and I also guess 5 is connected to some resource on the network (because of the socket). But what I don't understand is the meaning of the numbers in the brackets. Do they point to some property of the resource? Also, why are some of the links broken? And lastly, as long as I asked a question already :) what is a pipe?
|
Do they point to some property of the resource?
Yes. They're a unique identifier that allows you to identify the resource.
Also why are some of the links broken?
Because they're links to things that don't live in the filesystem, you can't follow the link the normal way. Essentially, links are being abused as a way to return the resource type and unique identifier.
what is pipe?
As the name suggests, a pipe is a connection between two points such that anything put in one end comes out the other end.
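A quick Linux-only demonstration: when a process's stdout is an anonymous pipe, its /proc fd entry reads pipe:[inode], and both ends of one pipe report the same inode:

```shell
# Left side: ask a child shell what its stdout (the pipe's write end) is.
# Right side: read that answer, then look at our own stdin (the read end).
sh -c 'readlink /proc/self/fd/1' |
{
  read -r writer_end                       # e.g. "pipe:[123456]"
  reader_end=$(readlink /proc/self/fd/0)   # same pipe, read end
  echo "writer: $writer_end"
  echo "reader: $reader_end"
}
```

Both lines print the same pipe:[N], which is exactly how the inode in the brackets lets you match up the two processes sharing a pipe.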
| File descriptor linked to socket or pipe in proc [duplicate] |
1,486,499,744,000 |
Let's say there are two users on the LAN, A and B. How do I restrict user A from internet access using iptables rules, and save the rules so that after a reboot they are still effective? Suppose also that I want to grant that user access again at some point; how do I enable it? I am using Ubuntu Linux 10.04. It would be nice if anybody could show me how to do it from the command line, as I often log in to the machine using a local ssh login.
|
I assume that users A and B are using the same Linux machine(s) where you are the administrator. (It's not completely clear from your question. If A and B have their own computers which they are administrators on, it's a completely different problem.)
The following command will prevent the user with uid 1234 from sending packets on the interface eth0:
iptables -t mangle -A OUTPUT -o eth0 -m owner --uid-owner 1234 -j DROP
ip6tables -t mangle -A OUTPUT -o eth0 -m owner --uid-owner 1234 -j DROP
I recommend reading the Ubuntu iptables guide to get basic familiarity with the tool (and refer to the man page for advanced things like the mangle table).
The user will still be able to run ping (because it's setuid root), but not anything else. The user will still be able to connect to a local proxy if that proxy was started by another user.
To remove this rule, add -D to the command above.
To make the rule permanent, add it to /etc/network/if-up.d/my-user-restrictions (make that an executable script beginning with #!/bin/sh). Or use iptables-save (see the Ubuntu iptables guide for more information).
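A minimal sketch of what such a hook script might look like (uid 1234 and eth0 are the example values from above; on Debian/Ubuntu, ifupdown exports IFACE to if-up.d scripts as the interface being brought up):

```shell
#!/bin/sh
# /etc/network/if-up.d/my-user-restrictions -- sketch; adjust uid/interface
# Only act when the relevant interface comes up
[ "$IFACE" = "eth0" ] || exit 0
iptables  -t mangle -A OUTPUT -o eth0 -m owner --uid-owner 1234 -j DROP
ip6tables -t mangle -A OUTPUT -o eth0 -m owner --uid-owner 1234 -j DROP
```

Don't forget to make it executable; the rules are then re-added every time eth0 comes up.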
| How to restrict internet access for a particular user on the lan using iptables in Linux |
1,486,499,744,000 |
I have a disk with some pending unreadable sectors, according to smartd. What would be the easiest way to make the disk remap them and stop smartd from complaining?
Today, I get two of these every hour:
Sep 10 23:15:35 hylton smartd[3353]: Device: /dev/sdc, 1 Currently unreadable (pending) sectors
The system is an x86 system running Ubuntu Linux 9.10 (jaunty). The disk is part of an LVM group. This is how smartctl identifies the disk:
Model Family: Western Digital Caviar Second Generation Serial ATA family
Device Model: WDC WD5000AAKS-00TMA0
Serial Number: WD-WCAPW4207483
Firmware Version: 12.01C01
User Capacity: 500,107,862,016 bytes
|
A pending unreadable sector is one that returned a read error and which the drive has marked for remapping at the first possible opportunity. However, it can't do the remapping until one of two things happens:
The sector is reread successfully
The sector is rewritten
Until then, the sector remains pending. So you have two corresponding ways to deal with this:
Keep trying to reread the sector until you succeed
Overwrite that sector with new data
Obviously, (1) is non-destructive, so you should probably try it first, although keep in mind that if the drive is starting to fail in a serious way then continual reading from a bad area is likely to make it fail much more quickly. If you have a lot of pending sectors and other errors, and you care about the data on the drive, I recommend taking it out of service and using the excellent tool ddrescue to recover as much data as possible. Then discard the drive.
If the sector in question contains data you don't care about, or can restore from a backup, then overwriting it is probably the quickest and simplest solution. You can then view the reallocated and pending counts for the drive to make sure the sector was taken care of.
How do you find out what the sector corresponds to in the filesystem? There is an excellent article about this on the smartmontools website, although it's fairly technical and is specific to the ext2/3/4 and reiser file systems.
A simpler approach, which I used on one of my own (Mac) drives, is to use find / -xdev -type f -print0 | xargs -0 ... to read every file on the system. Make a note of the pending count before running this. If the sector is inside a file, you will get an error message from the tool you used to read the files (eg md5sum) showing you the path to it. You can then focus your attentions on re-reading just this file until it reads successfully. Often this will solve the problem, if it's an infrequently-used file which just needed to be reread a few times. If the error goes away, or you don't encounter any errors in reading all the files, check the pending count to see if it's decreased. If it has, the problem was solved by reading.
If the file cannot be read successfully after multiple tries (eg 20) then you need to overwrite the file, or the block within the file, to allow the drive to reallocate the sector. You can use ddrescue on the file (rather than the partition) to overwrite just the one sector, by copying to a temporary file and then copying back again. Note that just removing the file at this point is a bad idea, because the bad sector will go into the free list where it will be harder to find. Completely overwriting it is bad too, because again the sectors will go into the free list. You need to rewrite the existing blocks. The notrunc option of dd is one way to do this.
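As a sketch of the "rewrite the existing blocks" step (the block number 42 and the filename are made up for illustration; double-check offsets before doing this on real data): read the block out, then write it back in place. The conv=notrunc option stops dd from truncating the file after the written block.

```shell
# Copy block 42 of 'somefile' out, then write it back over itself in place
dd if=somefile of=blk.tmp bs=4096 skip=42 count=1
dd if=blk.tmp of=somefile bs=4096 seek=42 count=1 conv=notrunc
```

If the block itself is unreadable, use ddrescue for the first step instead of plain dd, so a best-effort copy is written back.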
If you encounter no errors, and the pending count did not decrease, then the sector must be in the freelist or in part of the filesystem infrastructure (eg an inode table). You can try filling up all the free space with cat /dev/zero >tempfile, and then check the pending count. If it goes down, the problem was in the free list and has now gone away.
If the sector is in the infrastructure, you have a more serious problem, and you will probably encounter errors just walking the directory tree. In this situation, I think the only sensible solution is to reformat the drive, optionally using ddrescue to recover data if necessary.
Keep a very close eye on the drive. Sector reallocation is a very good canary in the coal mine, potentially giving you early warning of a drive that is failing. By taking early action you can prevent a later catastrophic and very painful landslide. I'm not suggesting that a few sector reallocations are an indication that you should discard the drive. All modern drives need to do some reallocation. However, if the drive isn't very old (< 1 yr) or you are getting frequent new reallocations (> 1/month) then I recommend you replace it asap.
I don't have empirical evidence to prove it, but my experience suggests that disk problems can be reduced by reading the whole disk once in a while, either by a dd of the raw disk or by reading every file using find. Almost all the disk problems I've experienced in the past several years have cropped up first in rarely-used files, or on machines that are not used much. This makes sense heuristically, too, in that if a sector is being reread frequently the drive has a chance to reallocate it when it first detects a minor problem with that sector rather than waiting until the sector is completely unreadable. The drive is powerless to do anything with a sector unless the host accesses it somehow, either by reading or writing it or by conducting one of the SMART tests.
I'd like to experiment with the idea of a nightly or weekly cron job that reads the whole disk. Currently I'm using a "poor man's RAID" in which I have a second hard drive in the machine and I back up the main disk to it every night. In some ways, this is actually better than RAID mirroring, because if I goof and delete a file by mistake I can get yesterday's version immediately from the backup disk. On the other hand, I believe a hardware RAID controller does a lot of good work in the background to monitor, report and fix disk problems as they emerge. My current backup script uses rsync to avoid copying data that hasn't changed, but in view of the need to reread all sectors maybe it would be better to copy everything, or to have a separate script that reads the entire raw disk every week.
| How do I make my disk unmap pending unreadable sectors |
1,486,499,744,000 |
I've got a few commands that I run in rc.local so they are run last in the startup sequence. I would like to know if there is a similar facility for undoing the results of those commands at shutdown, like an rc.shutdown. Ideally, it would be run before any of the other /etc/init.d scripts.
|
Not really (at least, to my knowledge).
If you've got SystemV style init scripts, you could create something along the lines of /etc/rc6.K00scriptname and /etc/rc0.d/K00scriptname, which should get executed prior to any of the other scripts in there.
| On Linux, is there an rc.local equivalent for shutdown? |
1,486,499,744,000 |
I have a desktop system where Centos 7 is installed. It has 4 core and 12 GB memory. In order to find memory information I use free -h command. I have one confusion.
[user@xyz-hi ~]$ free -h
total used free shared buff/cache available
Mem: 11G 4.6G 231M 94M 6.8G 6.6G
Swap: 3.9G 104M 3.8G
In the total column, it says the total is 11GB (that's correct); the last column, available, says 6.6GB, and used is 4.6G.
If used memory is 4.6GB then the remaining should be 6.4GB (11 - 4.6 = 6.4). What is the correct interpretation of the above output?
What is the difference between total and available and free memory?
Am I out of memory is above case if I need 1 GB more for some new application?
|
Reading man free solved my problem.
DESCRIPTION
free displays the total amount of free and used physical and swap mem‐
ory in the system, as well as the buffers and caches used by the ker‐
nel. The information is gathered by parsing /proc/meminfo. The dis‐
played columns are:
total Total installed memory (MemTotal and SwapTotal in /proc/meminfo)
used Used memory (calculated as total - free - buffers - cache)
free Unused memory (MemFree and SwapFree in /proc/meminfo)
shared Memory used (mostly) by tmpfs (Shmem in /proc/meminfo, available
on kernels 2.6.32, displayed as zero if not available)
buffers
Memory used by kernel buffers (Buffers in /proc/meminfo)
cache Memory used by the page cache and slabs (Cached and Slab in
/proc/meminfo)
buff/cache
Sum of buffers and cache
available
Estimation of how much memory is available for starting new
applications, without swapping. Unlike the data provided by the
cache or free fields, this field takes into account page cache
and also that not all reclaimable memory slabs will be reclaimed
due to items being in use (MemAvailable in /proc/meminfo, avail‐
able on kernels 3.14, emulated on kernels 2.6.27+, otherwise the
same as free)
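Plugging the rounded numbers from the question into the man page's formula is a quick sanity check: used + free + buff/cache should come close to total.

```shell
# Rough check with the rounded figures from the question (in GB):
# 4.6 (used) + 0.231 (free) + 6.8 (buff/cache)
awk 'BEGIN { printf "%.3f\n", 4.6 + 0.231 + 6.8 }'
# 11.631 -- close to the 11G total; the difference is rounding by free -h
```

So the "missing" memory is in buff/cache, most of which can be reclaimed; available (6.6G) is the realistic figure for starting a new 1GB application.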
| What is difference between total and free memory |
1,486,499,744,000 |
http://linuxg.net/how-to-transform-a-process-into-a-daemon-in-linux-unix/ gives an example of daemonizing a process in bash:
$ nohup firefox& &> /dev/null
If I am correct, the command is the same as "nohup and background a process".
But isn't a daemon more than a nohupped and background process?
What steps are missing here to daemonize a process?
For example, isn't changing the parent process necessary when daemonizing a process? If yes, how do you do that in bash? I am still trying to understand a related reply https://unix.stackexchange.com/a/177361/674.
What other steps and conditions?
See my related question https://stackoverflow.com/q/35705451/156458
|
From the Wikipedia article on daemon:
In a Unix environment, the parent process of a daemon is often, but not always, the init process. A daemon is usually either created by a process forking a child process and then immediately exiting, thus causing init to adopt the child process, or by the init process directly launching the daemon. In addition, a daemon launched by forking and exiting typically must perform other operations, such as dissociating the process from any controlling terminal (tty). Such procedures are often implemented in various convenience routines such as daemon(3) in Unix.
Read the manpage of the daemon function.
Running a background command from a shell that immediately exits results in the process's PPID becoming 1. Easy to test:
# bash -c 'nohup sleep 10000 &>/dev/null & jobs -p %1'
1936
# ps -p 1936
PID PPID PGID WINPID TTY UID STIME COMMAND
1936 1 9104 9552 cons0 1009 17:28:12 /usr/bin/sleep
As you can see, the process is owned by PID 1, but still associated with a TTY. If I log out from this login shell, then log in again, and do ps again, the TTY becomes ?.
Read here why it's important to detach from TTY.
Using setsid (part of util-linux):
# bash -c 'cd /; setsid sleep 10000 </dev/null &>/dev/null & jobs -p %1'
9864
# ps -p 9864
PID PPID PGID WINPID TTY UID STIME COMMAND
9864 1 9864 6632 ? 1009 17:40:35 /usr/bin/sleep
I think you don't even have to redirect stdin, stdout and stderr.
| Daemonize a process in shell? |
1,486,499,744,000 |
I wanted to know what mathematical connection is there between the SZ, RSS and VSZ output in ps output e.g.
ps -p 2363 -o sz,rss,vsz
|
sz and vsz represent the same thing, but sz is in page units, while vsz is in 1024 byte units.
To get your system's page size, you can use:
$ getconf PAGE_SIZE
4096
rss is the subset of the process's memory that is currently loaded in RAM (in kilobytes). This is necessarily smaller than vsz.
So the "mathematical" connections are:
vsz * 1024 = sz * page_size
rss <= vsz
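A quick way to check both relations on a live process is to run them against the current shell ($$). This assumes a Linux procps-style ps:

```shell
# Compare sz (pages) and vsz (KiB) for the current shell process
page=$(getconf PAGE_SIZE)
set -- $(ps -p "$$" -o sz=,rss=,vsz=)
sz=$1 rss=$2 vsz=$3
echo "sz*page=$((sz * page)) vsz*1024=$((vsz * 1024)) rss=${rss}K"
```

The first two numbers should be identical, and rss should never exceed vsz.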
| Mathematical connection between SZ RSS and VSZ in ps o/p? |
1,486,499,744,000 |
I would like to set an environment variable so that it is set when I launch a specific Flatpak application, and only set for this application. How do I go about doing this in a permanent manner?
|
You can do this via the flatpak override command.
To set only one environment variable you can use this syntax:
flatpak override --env=VARIABLE_NAME=VARIABLE_VALUE full.application.Name
To set multiple environment variables you can use this syntax:
flatpak override --env=VARIABLE_NAME_ONE=VARIABLE_VALUE_ONE --env=VARIABLE_NAME_TWO=VARIABLE_VALUE_TWO full.application.Name
This will set it globally and therefore requires you to run the command as root. If you want to do this for your current user, you can add the --user parameter to the command, like so:
flatpak override --user --env=VARIABLE_NAME=VARIABLE_VALUE full.application.Name
Source and further reading: http://docs.flatpak.org/en/latest/flatpak-command-reference.html#flatpak-override
| How do I permanently set an environment variable for a specific Flatpak application? |
1,486,499,744,000 |
I just started learning how Everything Is A File™ on Linux, which made me wonder what would happen if I literally read from /dev/stdout:
$ cat /dev/stdout
^C
$ tail /dev/stdout
^C
(The ^C is me killing the program after it hangs).
When I try with vim, I get the unthinkable message: "/dev/stdout" is not a file. Gasp!
So what gives, why am I getting hangups or error messages when I try to read these "files"?
|
why am I getting hangups
You aren't getting "hangups" from cat(1) and tail(1), they're just blocking on read. cat(1) waits for input, and prints it as soon as it sees a complete line:
$ cat /dev/stdout
foo
foo
bar
bar
Here I typed fooEnterbarEnterCTRL-D.
tail(1) waits for input, and prints it only when it can detect EOF:
$ tail /dev/stdout
foo
bar
foo
bar
Here I typed again fooEnterbarEnterCTRL-D.
or error messages
Vim is the only one that gives you an error. It does that because it runs stat(2) against /dev/stdout, and it finds it doesn't have the S_IFREG bit set.
/dev/stdout is a file, but not a regular file. In fact, there's some dance in the kernel to give it an entry in the filesystem. On Linux:
$ ls -l /dev/stdout
lrwxrwxrwx 1 root root 15 May 8 19:42 /dev/stdout -> /proc/self/fd/1
On OpenBSD:
$ ls -l /dev/stdout
crw-rw-rw- 1 root wheel 22, 1 May 7 09:05:03 2015 /dev/stdout
On FreeBSD:
$ ls -l /dev/stdout
lrwxr-xr-x 1 root wheel 4 May 8 21:35 /dev/stdout -> fd/1
$ ls -l /dev/fd/1
crw-rw-rw- 1 root wheel 0x18 May 8 21:35 /dev/fd/1
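You can reproduce Vim's check from the shell: stat with link-following reports the real file type, which is why an S_IFREG ("regular file") test fails for /dev/stdout. (GNU coreutils stat, Linux):

```shell
# -L follows the symlink chain, %F prints the file type
stat -L -c '%n: %F' /dev/null /dev/stdout
# On a terminal, /dev/stdout shows up as a character special file;
# piped, it shows up as a fifo -- but never as a regular file
```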
| Why can't I read /dev/stdout with a text editor? |
1,486,499,744,000 |
While I was reading a C source code files, I found this declarations. (This source code was written for linux system program. This is very important information)
#include <time.h>
#include <stdio.h>
static timer_t* _interval_timer;
...
At first, I wanted to know more about the 'timer_t'. So I googled 'time.h' to get header information. But, there wasn't any words about 'timer_t', only mentioning about 'time_t'.
Out of curiosity, I searched for and opened the 'time.h' C standard library file on my Mac (as you know, the /usr/include folder stores the C standard library headers). But this file was the same as the one I had googled.
Finally, I booted my Linux OS (Ubuntu) in a virtual machine and opened 'time.h' in the Linux C standard library folder (the folder path is the same as on OS X). As I expected, 'time.h' on Linux has a declaration of timer_t.
I added the code lines which declare the 'timer_t' type below.
#if !defined __timer_t_defined && \
((defined _TIME_H && defined __USE_POSIX199309) || defined __need_timer_t)
# define __timer_t_defined 1
# include <bits/types.h>
/* Timer ID returned by `timer_create'. */
typedef __timer_t timer_t;
My question is this.
Why is 'timer_t' only defined in the Linux C standard library?
Does this commonly happen? I mean, are there functions or attributes that are defined differently between different OSes?
|
Unix and C have an intertwined history, as they were both developed around the same time at Bell Labs in New Jersey and one of the major purposes of C was to implement Unix using a high level, architecture independent, portable language. However, there wasn't any official standardization until 1983. POSIX, the "portable operating system interface" is an IEEE operating system standard dating back to the time of the "Unix Wars". It has been evolving ever since and is now the most widely implemented such standard. OSX is officially POSIX compliant, and linux unofficially is -- there are logistics and costs associated with official compliance that linux distros do not partake in.
Much of what POSIX has focussed on is the elaboration of things not part of ISO C. time.h is part of ISO C, but the ISO version does not include the timer_t type or any functions which use it. Those are from the POSIX extension, hence this reference in the linux header:
#if !defined __timer_t_defined && \
((defined _TIME_H && defined __USE_POSIX199309)
The __USE_POSIX199309 is an internal glibc symbol that is set in features.h when _POSIX_C_SOURCE >= 199309L, meaning that POSIX.1b is to be supported (see the feature_test_macros manpage). This is also supported with _XOPEN_SOURCE >= 600.
are there any differently defined functions or attributes between different OS?
I think with regard to C, amongst POSIX systems, there is an effort to avoid that, but it does happen. There are some GNU extensions (e.g. strerror_r()) that have incompatible signatures from their POSIX counterparts. Possibly this happens when POSIX takes up the extension but modifies it, or else they are just alternatives dreamed up by GNU -- you can opt for one or the other by using an appropriate #define.
| why is "timer_t" defined in "time.h" on Linux but not OS X |
1,486,499,744,000 |
The default PID max number is 32768. To get this information type:
cat /proc/sys/kernel/pid_max
32768
or
sysctl kernel.pid_max
kernel.pid_max = 32768
Now, I want to change this number... but I can't. Well, actually I can change it to a lower value or the same. For example:
linux-6eea:~ # sysctl -w kernel.pid_max=32768
kernel.pid_max = 32768
But I can't do it for a greater value than 32768. For example:
linux-6eea:~ # sysctl -w kernel.pid_max=32769
error: "Invalid argument" setting key "kernel.pid_max"
Any ideas ?
PS: My kernel is Linux linux-6eea 3.0.101-0.35-pae #1 SMP Wed Jul 9 11:43:04 UTC 2014 (c36987d) i686 i686 i386 GNU/Linux
|
The value can only be extended up to a theoretical maximum of 32768 for 32 bit systems or 4194304 for 64 bit.
From man 5 proc:
/proc/sys/kernel/pid_max
This file (new in Linux 2.5) specifies the value at which PIDs wrap around
(i.e., the value in this file is one greater than the maximum PID). The
default value for this file, 32768, results in the same range of PIDs as
on earlier kernels. On 32-bit platforms, 32768 is the maximum value for
pid_max. On 64-bit systems, pid_max can be set to any value up to 2^22
(PID_MAX_LIMIT, approximately 4 million).
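The 64-bit ceiling quoted above is PID_MAX_LIMIT = 2^22, which you can confirm with shell arithmetic:

```shell
# 2^22, the largest pid_max accepted on 64-bit kernels
echo $(( 1 << 22 ))
# 4194304
```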
| How to change the kernel max PID number? [duplicate] |
1,486,499,744,000 |
Well, obviously there is a difference, but I'm curious about the rationale behind why some things go under /usr/include/sys and others go under /usr/include/linux, and have the same header file name. Does this have something to do with POSIX vs non-POSIX?
Also, I've managed to populate /usr/include/linux with headers on my Fedora system by grabbing the kernel-headers package. Is there a standard package name for me to get the header files that go under /usr/include/sys? I haven't been able to find it.
|
The headers under /usr/include/linux and under /usr/include/asm* are distributed with the Linux kernel. The other headers (/usr/include/sys/*.h, /usr/include/bits/*.h, and many more) are distributed with the C library (the GNU C library, also known as glibc, on all non-embedded Linux systems). There's a little explanation in the glibc manual.
Note that /usr/include/linux and /usr/include/asm should contain the headers that were used when compiling the C library, not the headers from the running kernel. Otherwise, if some constants or data structures changed, there will be an inconsistency between the compiled program and the C library, which is likely to result in a crash or worse. (If the headers match the C library but the C library doesn't match the kernel, what actually happens is that the kernel is designed to keep a stable ABI and must detect that it's called under a different ABI and interpret syscall arguments accordingly. The kernel must do this for statically compiled programs anyway.)
I remember a heated debate between Debian and Red Hat a while (a decade?) ago on the /usr/include/linux issue; apparently each side is sticking to its position. (As far as I understand it, Debian is right, as explained above.) Debian currently distributes /usr/include/linux and friends in the linux-libc-dev package, which is compiled from kernel sources but not upgraded with the kernel. Kernel headers are in version-specific packages providing the linux-headers-2.6 metapackage; this is what you need to compile a module for a particular kernel version.
The package you're looking for is the C library headers. I don't know what it's called, but you can find out with yum provides /usr/include/sys/types.h.
| Difference between /usr/include/sys and /usr/include/linux? |
1,486,499,744,000 |
I have a script that streams a pattern in a single line (no linebreaks). I want to grep stock_ticker in that line and output it as soon as I have found one. Now the script is never ending and is essentially looping indefinitely.
One alternative I thought of is to split the input stream into lines and pipe that to grep. I understand that you can grep a stream; however, as far as I can tell, grep, sed and awk all read line by line.
Is there any way I can change this behavior?
./a.out | grep 'stock_ticker'
currently outputs Memory exhausted. This is because grep buffers a whole line before emitting output. I want to change that behavior. Any ideas how?
|
Without line breaks, grep is buffering all of the input, so that it can show you the "line" where the string appears.
Two questions:
Do you need the adjacent content?
are there spaces or other characters separating the tokens?
If you don't need the context of the adjacent content and there are spaces separating tokens, just use tr to turn spaces into linefeeds:
./a.out | tr ' ' '\n' | grep 'stock_ticker'
If you want the adjacent content, just add one of the options -C -A or -B to the grep command. That lets grep show the "lines" before and/or after the ticker in the search pattern.
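You can try the tr approach on a fabricated single-line stream (the ticker names below are made up for illustration):

```shell
# Simulate the unbroken stream with printf, split it on spaces, then grep
printf 'AAPL 101.2 stock_ticker GOOG 99.9' | tr ' ' '\n' | grep 'stock_ticker'
# stock_ticker
```

Because tr translates byte-by-byte, it never buffers a whole "line", so this also works on an endless stream.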
| Grep on single line [duplicate] |
1,410,708,734,000 |
I have an Intel Bay Trail Z3735D tablet which comes with a 32bit UEFI BIOS.
After some searching I've found that most Linux distros don't come with a 32bit EFI file.
How can I insert one (or build a new ISO)?
According to
https://wiki.archlinux.org/index.php/HCL/Firmwares/UEFI#Intel_Atom_SoC_Bay_Trail
, this should be possible.
|
The Baytrail tablets run a 64b processor and a 32b EFI, for reasons best known to Intel.
Grub2 (compiled for 32b EFI) will start a 64b UEFI operating system from a 32b EFI.
Just like a 64b or 32b CPU processor calling into a traditional 16b BIOS, a thunk is needed in the operating system to marshal the arguments from 64b to 32b, change the processor mode, call the firmware, and then restore the processor mode and marshal the arguments from 32b to 64b. A x86-64 Linux kernel built with the option CONFIG_EFI_MIXED=y includes such a thunk to allow the x86-64 kernel to call to a i686 EFI.
At this point in time there is no thunk for AMD's AtomBIOS, and thus the "radeon" module fails. This isn't an issue for the Baytrail tablets, as they use the Intel GPU.
I would look at the Ubuntu operating system when considering Baytrail, as Fedora is yet to build their stock kernels with CONFIG_EFI_MIXED=y. Use a USB stick like Super Grub2 Disk to get to the Grub2 (32b) command line and then load and run the x86-64 installer kernel from the Grub2 command line. Once you have installed Ubuntu go back and install Grub2 32b bootloader to the EFI partition by hand and remove the Grub2 64b bootloader.
The lack of advanced video driver is a showstopper for the MacBookPro2,2 as it uses the AMD Radeon X1600. Linux can boot using the EFI "UGA" driver (roughly equivalent to using the VESA option in BIOS-land). But the result is so much overhead that then fans run at full rate continually. Note that the "radeon" module copies the AtomBIOS contents into RAM, and thus a small change to the driver to allow the AtomBIOS to be loaded from disk is a path to solving this issue. Probably the best approach on a early Mac is to run a 32b operating system, although most of the popular distributions do not support EFI in their i686 32b builds.
| Installing linux on an 32bit UEFI only machine |
1,410,708,734,000 |
I'm confused as to which is best and in which circumstances:
invoke-rc.d apache2 restart
or
service apache2 restart
Is there a real difference?
man service has the following interesting bit:
service runs a System V init script in as predictable environment as possible, removing most environment variables and with current working directory set to /.
I'm interested mainly in Debian, but also Mint (also based on Debian).
|
The official Debian wiki page on daemons says to use service:
# service ssh restart
Restarting OpenBSD Secure Shell server: sshd.
Functionally service and invoke-rc.d are mostly equivalent, however:
invoke-rc.d is the preferred command for packages' maintainer scripts, according to the command's man page
service has a unique --status-all option, that queries status of all available daemons
It seems like service is the user-oriented command, while invoke-rc.d is there for other uses.
| Should "invoke-rc.d" or "service" be used to restart services? |
1,410,708,734,000 |
According to the man page of lsmod the command shows “what kernel modules are currently loaded”.
I wrote a script that uses modinfo to show what kernel object (.ko) files are actually in use:
#!/bin/sh
for i in `lsmod | awk '{print $1}' | sed -n '1!p'`; do
echo "###############################$i###############################"
echo ""
modinfo $i
echo ""
echo ""
done
Now I found out that modinfo nvidia shows the following output:
ERROR: modinfo: could not find module nvidia
Do you guys have any explanation for this?
|
Your nvidia module is perfectly loaded and working. The problem lies in modinfo.
modinfo fetch the list of known modules by reading the /lib/modules/$(uname -r)/modules.* files, which are usually updated with depmod.
If depmod -a has not been run after installing the nvidia module, then modinfo does not know about it. This does not prevent anybody from loading the module with insmod, and lsmod will show it just fine if loaded.
| Why does modinfo say “could not find module”, yet lsmod claims the module is loaded? |
1,410,708,734,000 |
Suppose that /etc/nsswitch.conf file contains
hosts: files dns
and /etc/host.conf file has
order bind,hosts
then in which order the system would use /etc/hosts and DNS look-up to resolve a host name? In other words, which of the two configuration files takes precedence?
|
/etc/nsswitch.conf is the default file for domain name resolution these days. I have the following line at the top of my /etc/host.conf file:-
# The "order" line is only used by old versions of the C library.
nsswitch.conf is used by pretty much everything on my Debian box for name resolution. So, given the above lines in your files, the default name resolution order would be to check /etc/hosts first, and then use the nameservers configured in /etc/resolv.conf to do a DNS lookup.
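A quick way to see this order in action is getent, which resolves names through the NSS "hosts" line rather than calling DNS directly:

```shell
# Resolves via nsswitch.conf: with "hosts: files dns", the /etc/hosts
# entry for localhost wins before any DNS query is made
getent hosts localhost
# typically prints something like "127.0.0.1  localhost" (or the ::1 entry)
```

Compare this with a pure DNS tool like dig or host, which bypasses /etc/hosts entirely.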
Lately (since about Ubuntu 11.10), the /etc/resolv.conf is by default configured to use the localhost interface (127.0.0.1), where a daemon program dnsmasq listens on port 53 for DNS requests. This in turn usually does DNS resolution as configured by your LAN's DHCP server, but this can be manually overridden in the OS's network configuration GUI.
Note: You didn't mention what OS you are using, and the above comes from personal experience with Debian/Ubuntu. The defaults might be different on different flavours of Linux.
| nsswitch.conf versus host.conf |
1,410,708,734,000 |
I have mounted /dev and immediately tried to unmount:
$ sudo mount -o rbind /dev m
$ sudo umount m
umount: /tmp/m: target is busy.
$ sudo lsof m
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kdevtmpfs 55 root cwd DIR 0,6 4420 2 m
kdevtmpfs 55 root rtd DIR 0,6 4420 2 m
I have read that fuser can kill processes accessing the mount point, but I would like to understand what is happening in this simple case. According to the lsof output, does something use the mountpoint as its current working directory (cwd)?
I do not want to use lazy unmount.
|
You used rbind to mount a filesystem and submounts. In order to unmount a filesystem, you must unmount its submounts first (and the same for their submounts, recursively). But take care!
mount --make-rslave m
umount -R m
Without the first command, you risk unmounting all the sub-mounts on the source, due to mount propagation. In this case that means all the sub-mounts of /dev, which would have bad effects on your running system ;-).
Basically mount propagation is a massive pit-trap waiting for you to fall into it :-). It seems like it would have been better if bind mounts disabled it by default.
kdevtmpfs is the kernel thread that maintains devtmpfs. It does not prevent unmounting devtmpfs. This is because the kernel thread runs on a separate mount (like a bind mount). You can't see that original mount; it is in a separate mount namespace. If you want to try and work out why kdevtmpfs shows up in lsof, I don't know, maybe consider that a separate question.
| umount: target is busy |
1,410,708,734,000 |
This works as expected:
$ echo a b c | xargs --replace="{}" echo x "{}" y
x a b c y
This does too:
$ echo a b c | xargs --max-args=1 echo x
x a
x b
x c
But this doesn't work as expected:
$ echo a b c | xargs --max-args=1 --replace="{}" echo x "{}" y
x a b c y
And neither does this:
$ echo a b c | xargs --delimiter=' ' --max-args=1 --replace="{}" echo x "{}" y
x a y
x b y
x c
y
I expected this output:
x a y
x b y
x c y
As a workaround, I am using printf and two xargs, but that is ugly:
$ echo a b c | xargs printf '%s\0' | \
> xargs --null --max-args=1 --replace="{}" echo x "{}" y
x a y
x b y
x c y
Any idea why this is happening?
|
According to the POSIX documentation, xargs should run the given utility with arguments delimited by either spaces or newlines, and this is what happens in the two first examples of yours.
However, when --replace (or -I) is used, only newlines will delimit arguments. The remedy is to give xargs arguments on separate lines:
$ printf '%s\n' a b c | xargs --max-args=1 --replace="{}" echo x "{}" y
x a y
x b y
x c y
Using POSIX options:
printf '%s\n' a b c | xargs -n 1 -I "{}" echo x "{}" y
Here, I give xargs not one line but three. It takes one line (at most) and executes the utility with that as the argument.
Note also that -n 1 (or --max-args=1) in the above is not needed as it's the number of replacements made by -I that determines the number of arguments used:
$ printf '%s\n' a b c | xargs -I "{}" echo x "{}" y
x a y
x b y
x c y
In fact, the Rationale section of the POSIX spec on xargs says (my emphasis)
The -I, -L, and -n options are mutually-exclusive. Some implementations use the last one specified if more than one is given on a command line; other implementations treat combinations of the options in different ways.
While testing this, I noticed that OpenBSD's version of xargs will do the following if -n and -I are used together:
$ echo a b c | xargs -n 1 -I "{}" echo x "{}" y
x a y
x b y
x c y
This is different from what GNU coreutils' xargs does (which produces x a b c y). This is due to the implementation accepting spaces as argument delimiter with -n, even though -I is used. So, don't use -I and -n together (it's not needed anyway).
| Problem using xargs --max-args --replace with default delimiter |
I'm trying to set an RSA key as an environment variable which, as a text file, contains newline characters.
Whenever I attempt to read from the file and pass it into an environment variable, it will just stop at the newline character on the first line. How can I prevent this?
|
Note that, except in zsh, shell variables cannot store arbitrary sequences of bytes. Variables in all other shells can't contain the NUL byte, and in yash they can't contain bytes that don't form valid characters.
For files that don't contain NUL bytes, in POSIX-like shells, you can do:
var=$(cat file; echo .); var=${var%.}
We add a .\n and strip the trailing . to work around the fact that $(...) strips all trailing newline characters.
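As a quick check of the sentinel trick (the file name and contents here are just for illustration):

```shell
# Create a file whose content ends in several newlines (14 bytes total).
printf 'line1\nline2\n\n\n' > /tmp/demo.txt

# Plain $(cat /tmp/demo.txt) would strip the three trailing newlines;
# the "." sentinel preserves them, and is then removed with ${var%.}.
var=$(cat /tmp/demo.txt; echo .); var=${var%.}

# The variable holds exactly the file's 14 bytes.
printf %s "$var" | wc -c
```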
The above would also work in zsh for files that contain NULs though in zsh you could also use the $mapfile special associative array:
zmodload zsh/mapfile
var=$mapfile[file]
In zsh or bash, you can also use:
{ IFS= read -rd '' var || :; } < file
That reads up to the first NUL byte. It will return a non-zero exit status unless a NUL byte is found. We use the command group here to be able to at least catch errors when opening the file, but we won't be able to detect read errors via the exit status.
Remember to quote that variable when passed to other commands. Newline is in the default value of $IFS, so would cause the variable content to be split when left unquoted in list contexts in POSIX-like shells other than zsh (not to mention the other problems with other characters of $IFS or wildcards).
So:
printf %s "$var"
for instance (not printf %s $var, certainly not echo $var which would add echo's problems in addition to the split+glob ones).
With non-POSIX shells:
Bourne shell:
The Bourne shell supported neither the $(...) form nor the ${var%pattern} operator, so it can be quite hard to achieve there. One approach is to use eval and quoting:
eval "var='`printf \\' | cat file - | awk -v RS=\\' -v ORS= -v b='\\\\' '
NR > 1 {print RS b RS RS}; {print}; END {print RS}'`"
With (t)csh, it's even worse, see there.
With rc, you can use the ``(separator){...} form of command substitution with an empty separator list:
var = ``(){cat file}
| How can I set an environment variable which contains newline characters? |
I've seen on various Linux systems where instead of the real device node (for example: /dev/sda1), the root device appears as /dev/root, or instead of the real filesystem, mtab says it is a filesystem called rootfs (which appears as a real filesystem in /proc/filesystems, but doesn't have code in <linux-kernel-source-tree>/fs). Various utilities have been made to use certain attributes to determine the real root device node (such as rdev, and the Chromium OS rootdev).

I can find no logical explanation for this other than reading somewhere that very small embedded devices don't always have to have a /dev device node for their root device. (Is this true, and if so, is that the answer to my question?)

Why does mtab sometimes say /dev/root (and I think I might have seen it say rootdev once) instead of the real device node, and how can I make it always say the real device node? The kernel first mounts the root device following the root parameter in the cmdline, then init/systemd re-mounts it according to the fstab, correct? If so, then I presume init maintains mtab. If my theory is correct, how can I make init write the real root device node to mtab?

I noticed that /etc/mtab is actually a symbolic link to /proc/mounts, which would mean mtab is maintained by the kernel. So how do I configure/patch a kernel to, instead of saying the root device's node path is /dev/root, have mtab contain the real device node?
|
This is generally an artifact of using an initramfs.
From the kernel documentation (https://www.kernel.org/doc/Documentation/filesystems/ramfs-rootfs-initramfs.txt)
What is rootfs?
Rootfs is a special instance of ramfs (or tmpfs, if that's enabled),
which is always present in 2.6 systems. You can't unmount rootfs for
approximately the same reason you can't kill the init process; rather
than having special code to check for and handle an empty list, it's
smaller and simpler for the kernel to just make sure certain lists
can't become empty.
Most systems just mount another filesystem over rootfs and ignore it.
The amount of space an empty instance of ramfs takes up is tiny.
Thus rootfs is the root filesystem that was created for the initramfs, and can't be unmounted.
In regards to /dev/root, I'm less certain on this, but if I recall correctly /dev/root is created when using an initrd (not the same as an initramfs).
| Why on some Linux systems, does the root filesystem appear as /dev/root instead of /dev/<real device node> in mtab? |
I managed to create a small and fully functional live Linux CD which contains only kernel (compiled with default options) and BusyBox (compiled with default options + static, all applets present, including /sbin/init). I had no issues to create initrd and populate /dev, /proc and /sys and also I had no issues at all with my /init shell script.
Recently I read that BusyBox supports /etc/inittab configurations (at least to some level) and I very much would like to do either of the following:
Forget about my /init shell script and rely entirely on /etc/inittab configuration.
Use both /init shell script and /etc/inittab configuration.
Now the actual problem - it seems that /etc/inittab is completely ignored when my distro boots up. The symptoms are:
When I remove /init and leave only /etc/inittab I end up with kernel panic. My assumption is that the kernel doesn't execute /sbin/init at all, or that /sbin/init doesn't find (or read) /etc/inittab.
I read that BusyBox should work fine even without /etc/inittab. So, I removed both /init and /etc/inittab and guess what - kernel panic again.
I tried to execute /sbin/init from my shell and after several guesses which included exec /sbin/init, setsid /sbin/init and exec setsid /sbin/init I ended up with kernel panic. Both with and without /etc/inittab being present on the file system.
Here is the content of my /init shell script:
#!/bin/sh
dmesg -n 1
mount -t devtmpfs none /dev
mount -t proc none /proc
mount -t sysfs none /sys
setsid cttyhack /bin/sh
At this point I don't care what the content of the /etc/inittab would be, as long as I have a way to know that the configuration there actually works. I tried several /etc/inittab configurations, all based on the information which I found here.
As a bare minimum my /etc/inittab contained just this one line:
::sysinit:/bin/sh
Again - I ended up with kernel panic and it seems that /etc/inittab was ignored.
Any suggestions how to force my little live distro to work fine with BusyBox's /etc/inittab are highly appreciated!
Update:
Just to make it clear - I do not have kernel panic troubles with my current /init shell script both with and without /etc/inittab. It all works fine, my /bin/ash console works great and I don't experience any unexpected troubles. The only issue is that /etc/inittab is completely ignored, as I described above.
I examined 3 different live Linux distributions: Slax, Finnix and SysResCD. All of them have /init and none of them have /etc/inittab. In addition, this Wiki article confirms my suspicion that /sbin/init is not invoked at all.
|
OK, I did a lot of extensive research and I found out what was wrong. Let's start one by one:
When we use initramfs boot scheme the first process which the kernel invokes is the /init script. The kernel will never try to execute /sbin/init directly.
/init is assigned process identifier 1. This is very important!
The problem now is that /sbin/init can only be started as PID 1 but we are already running /init as PID 1.
The solution is to execute the command line exec /sbin/init while we are still inside /init. In this way the new process (which is /sbin/init) will inherit the PID from its parent (/init with PID 1) and that's all we have to do.
The problem I experienced with my initial configuration (see the question) was due to the fact that the last thing my /init script does is to spawn new /bin/sh process which is assigned brand new PID. From this point it's impossible to run /sbin/init directly from interactive console because even when we execute the command line exec /sbin/init, the best we achieve is to assign the same PID which has already been assigned to the shell and this PID is definitely not PID 1.
Long story short - execute the command line exec /sbin/init directly from /init and that's all.
| Minimal Linux with kernel and BusyBox: /etc/inittab is ignored, only /init is executed |
I'm writing a program, and would like it to store a log file. Problem is, the program really shouldn't be ran as root.
So if I wanted to uphold the traditions of where files are placed, where could I keep the log file, if not in /var/log, so that a normal user would have permission to write it?
Edit: I'm using Arch linux.
|
Since you haven't said which operating system you are using, a more generic approach could be: create a directory with the name of your app (say, foo) inside /var/log:
# mkdir /var/log/foo
Most Unix-like OSs will allow you to navigate through the /var/log folders, but not to view the log files' contents (as expected).
Give the ownership to the user that you are using to run your program, and permission for this user (only) to see/write those logfiles:
# chown userfoo /var/log/foo
# chmod 700 /var/log/foo
(A directory needs the execute bit for its owner to enter it, hence 700 rather than 600.)
You could play with groups too, giving read access to operators for example (and, of course, a different permission set, like 750 on the directory and 640 on the logfiles).
Done. This should be generic enough for any Unix-like system, and maybe a better approach than adding a user to administrative groups.
| Where are all the posibilities of storing a log file |
I've installed Fedora on my machine with / partition, swap partition and ESP partition for EFI booting.
Now, I was installing Elementary OS instead of Fedora.
I have formatted the / partition (/dev/sda3)
Formatted the swap partition (/dev/sda4)
But did not format the EFI boot partition (/dev/sda1)
Now when I boot, I get my old grub menu that was installed by Fedora.
I can only boot into Elementary OS by:
Entering the boot menu.
Selecting boot from EFI file
Navigate through /dev/sda1 to get to the elementary directory that contains the grubx64.efi file, which is /boot/efi/EFI/elementary/grubx64.efi.
How can I fix that? I thought of formatting the boot partition /dev/sda1 with FAT16 or something, then re-installing grub on it.
My /dev/sda1 now contains this:
root@rafael:/home/rafael# ls /boot/efi/
EFI mach_kernel System
root@rafael:/home/rafael# ls /boot/efi/EFI/
BOOT/ elementary/ fedora/
root@rafael:/home/rafael# ls /boot/efi/EFI/fedora/
BOOT.CSV fonts gcdx64.efi grub.cfg grubx64.efi MokManager.efi shim.efi shim-fedora.efi
root@rafael:/home/rafael# ls /boot/efi/EFI/elementary/
grubx64.efi
Here's my efibootmgr output:
BootCurrent: 003D
Timeout: 0 seconds
BootOrder: 2001,2002,2003
Boot0000* Notebook Hard Drive
Boot0010* Internal CD/DVD ROM Drive
Boot0011* Internal CD/DVD ROM Drive (UEFI)
Boot0012* Fedora
Boot0013* Fedora
Boot0014* Fedora
Boot0015* Fedora
Boot0016* Fedora
Boot0017* Fedora
Boot0018* Fedora
Boot0019* Fedora
Boot001A* Fedora
Boot001B* Fedora
Boot001C* Fedora
Boot001D* Fedora
Boot001E* Fedora
Boot001F* elementary
Boot2001* USB Drive (UEFI)
Boot2002* Internal CD/DVD ROM Drive (UEFI)
Any help would be appreciated.
|
I did it!
First of all, I removed all the unnecessary boot entries by:
efibootmgr -b <entry_hex_number> -B
Then I reformatted the ESP partition with a FAT32 filesystem:
mkfs.vfat -F32 /dev/sda1
Then I installed grub to /dev/sda, NOT /dev/sda1:
grub-install /dev/sda
| How to recreate EFI boot partition? |
I'm running Arch Linux, and I have a udev rule which starts a service when a device is inserted. In this case, it dials a connection when a 3G modem is plugged in.
KERNEL=="ttyUSB*", SYMLINK=="gsmmodem", TAG+="systemd", ENV{SYSTEMD_WANTS}="[email protected]"
However, if the device is removed, systemd won't stop the service, and hence when it is plugged in again, it won't start the service, since it's already running.
What I need is a matching udev rule which runs when the device is removed to stop the service.
Update
Using the answer below, what I now have is the following udev rule
KERNEL=="ttyUSB*", SYMLINK=="gsmmodem", TAG+="systemd", ENV{SYSTEMD_WANTS}="vodafone.service"
with the following service file (which was basically copied and pasted from the netcfg service file):
[Unit]
Description=Netcfg networking service for Vodafone Dongle
Before=network.target
Wants=network.target
BindsTo=dev-gsmmodem.device
After=dev-gsmmodem.device
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/netcfg check-iface wvdial
ExecStop=-/usr/bin/netcfg down wvdial
KillMode=none
[Install]
WantedBy=multi-user.target
I'm using netcfg-wvdial from the AUR to do the dialing.
|
Your problem may be solved using systemd solely, by simply specifying that your service Requires or, even better, BindsTo the given device.
Quoting:
"If one of the other [required/bound to] units gets deactivated or its activation fails, this unit [service] will be deactivated"
You just need to edit your service file like the following.
[Unit]
<...>
BindsTo=<DEVICE UNIT HERE>.device
<...>
After=<DEVICE UNIT HERE>.device
Note: to get a list of all available device units, use systemctl list-units --all --full | grep ".device"
| What is the correct way to write a udev rule to stop a service under systemd |
I like to work in Linux without using the mouse, because of that I would like to know if there is any method to set a keyboard shortcut to set gnome-terminal tab title.
|
From Edit -> Keyboard Shortcuts... you can set a shortcut to Set Title. I don't have a default one.
| Keyboard shortcut to set gnome-terminal tab title |