1,394,071,337,000 |
I've installed the latest Firefox linux-x86_64 build from ftp.mozilla.org on a USB device and created a new profile with the -P option. Unfortunately, the application does not recognize the Flash plugin that is already installed on the operating system.
How can I enable the flash plugin on the portable version?
|
How to Use Mozilla Firefox, Portable with flash plugin
Make your firefox portable for Linux (all versions):
Download the latest release of Firefox and unpack it on your usb device: http://ftp.mozilla.org/pub/mozilla.org/firefox/releases/
Go to unpack_directory/firefox/browser/plugins (firefox 22+).
Add a symlink to the Flash plugin binary (libflashplayer.so) installed on your system. It's usually in /usr/lib64/flash-plugin/.
Optionally: download the Unix version of the Flash plugin binary from adobe.com and extract it from the archive. Remember: the Flash plugin is a binary file, so no compilation is needed!
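The symlink step can be sketched as follows (the paths are examples; adjust them to your unpack directory and to wherever your distribution installs the plugin):

```shell
# Hypothetical locations -- change both to match your setup.
FIREFOX_DIR="$HOME/usb/firefox"
FLASH_SO="/usr/lib64/flash-plugin/libflashplayer.so"

# Firefox 22+ scans browser/plugins inside the unpack directory.
mkdir -p "$FIREFOX_DIR/browser/plugins"
ln -sf "$FLASH_SO" "$FIREFOX_DIR/browser/plugins/libflashplayer.so"
ls -l "$FIREFOX_DIR/browser/plugins/"
```

A plain copy of libflashplayer.so works too if the USB filesystem cannot hold symlinks (e.g. FAT32).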
1. Copy the firefox directory to your portable device
2. Create a simple shortcut:
Here's my startup.sh that I have placed on my USB device ($PWD is the current directory; example: USB_DEVICE/firefox_x64):
#!/bin/sh
"$PWD/firefox_x64/firefox" -no-remote -profile "$PWD/../.mozilla/firefox/YOUR_PROFILE_ID"
3. Run firefox with command line to create a new profile:
You can create a new profile with the -P command as shown below.
I've created my profile inside USB_DEVICE/.mozilla/firefox. You can change this path later. This is Mozilla's default folder skeleton for application settings (also used by SeaMonkey, Thunderbird, and B2G). To create a new profile, run:
[user@home]# cd /USB_DEVICE/firefox_x64
[user@home firefox_x64]# ./firefox -no-remote -P
FAQ: How to use the new USB profile with windows:
On Windows, just use Portable Firefox from portableapps.com and run the same commands as in step 3 (simply add the -profile option when launching the .exe).
| Portable Firefox Linux |
1,394,071,337,000 |
Can someone explain why two sets of drivers are needed, one in the Linux kernel and one in X?
I understand that the device drivers are in the kernel, but what is the role of those in the xserver?
Does wayland require such drivers to run?
|
Linux graphics support has been a heavily mutating thing for most of the life of the kernel. Initially, the kernel only talked to the graphics card for text mode purposes. Back then, X used its drivers to do everything, so it worked as a huge kernel-outside-the-kernel.
Later, with Direct Rendering Infrastructure (DRI), some of the code for accelerated graphics features moved kernel-side (called Direct Rendering Manager, DRM — nothing to do with digital rights management) to provide a consistent, abstracted interface to 3D acceleration features.
Currently, you don't need to have a kernel-side DRM module loaded. But if you don't have one, chances are your X session will fall back to software-rendered 3D which is considerably slower and power-hungrier than hardware 3D. Running glxinfo will show info on this.
Wayland is a slightly different story. It sits between the kernel and client applications. With Wayland, the X server is another client application, displaying its root window as just another thing. Wayland takes on the duties of talking to the hardware (X talking to Wayland instead). Since the project is still heavily in development, there's no way to know where it'll end up, but the way I understand it is it still needs kernel support for 3D rendering.
It's obvious from the Wayland architecture diagrams, too: the left one shows the current state of affairs for a modern X desktop, the right one the proposed Wayland architecture. The Wayland compositor replaces the X server as the thing that talks to the hardware, but it doesn't replace the kernel infrastructure, so you'd still need appropriate kernel support. In fact, given the aims of the project, more stuff should move to the kernel for even better abstraction. Wayland, like the X server, is still graphics-hardware-dependent.
| Why need drivers for both x server and the linux kernel? |
1,394,071,337,000 |
After configuring and building the kernel with make, why don't I have vmlinuz-<version>-default.img and initrd-<version>.img, but only a huge vmlinux binary (~150 MB)?
|
The compressed images are under arch/xxx/boot/, where xxx is the arch. For example, for x86 and amd64, I've got a compressed image at /usr/src/linux/arch/x86/boot/bzImage, along with /usr/src/linux/vmlinux.
If you still don't have the image, check if bzip2 is installed and working (but I guess if that were the problem, you'd get a descriptive error message, such as "bzip2 not found").
Also, the kernel config allows you to choose the compression method, so the actual file name and compression algorithm may differ if you changed that kernel setting.
As others already mentioned, initrds are not generated by the kernel build process but by separate tools (such as mkinitramfs or dracut). Note that unless you need external files for some reason (e.g. you need modules or udev to identify or mount /), you don't need an initrd to boot.
| vmlinuz and initrd not found after building the kernel? |
1,394,071,337,000 |
On Mac OS X there's a very handy command called textutil that can be invoked from the terminal and lets you convert a document from one format to another. Sometimes I use it to convert an RTF file into HTML, but it can also convert doc, docx, odt, and other formats.
I used to believe it was a standard Unix command, but I cannot find it, and when I try sudo apt-get install textutil, Ubuntu says it has no idea what textutil is... maybe I have searched in the wrong place?
Do you know if something similar exists for Linux? I need to invoke that command from a script that will run on a linux server.
|
GNU unrtf handles the RTF part of what you want.
Pandoc can do a lot more.
| Is there a linux equivalent of the Mac OS X command "textutil"? |
1,394,071,337,000 |
I would like something that allows me to:
Inspect all HTTP(S) traffic between my computer and the Internet, including 127.0.0.1
Modify incoming or outgoing data
It would also be nice if it had a scripting subsystem for setting rules and events
I prefer it be a GUI application.
Please do not answer with Wireshark. I am aware of Wireshark, have used it many times, and it's a great app. I would like something that restricts its captures to the application layer and HTTP(S) traffic only and ignores the other layers of the Internet protocol suite. Also, Wireshark doesn't have some of the features I listed above.
|
Here are a couple:
WebScarab: http://www.owasp.org/index.php/OWASP_WebScarab_Project
Burp http://portswigger.net/proxy/
| Can anybody recommend an HTTP debugging proxy? |
1,394,071,337,000 |
My strings are file paths like s/14/11/13/15/n7ce49B_235_25ed2d70.jpg; my patterns are quite simple ones, all like n7ce49B_.+.
I'm running GNU grep 2.6.3 under Debian 6.0.10 on a Dell DL360G7 server (mentioned just to give a sense of the machine's performance) with 15k-rpm HDDs, and this command: time LC_ALL=C grep -E -f path_to_patterns_file path_to_strings_file just can't complete; the server swaps too badly. With 20k patterns it takes more than 3 hours.
That seems unreasonable to me.
Per comment request, here are the files: file paths, 20k patterns
One may also test and adjust the number of input lines and patterns with:
xxd -p /dev/urandom | fold -sw 100 | head -n 1250 |
grep -Ef <(xxd -p /dev/urandom | fold -sw 10 | head -n 20000)
|
You ran into a performance problem in older versions of GNU grep (bug #22357) that was addressed by this commit, released in 2.28. That change introduced some regressions of its own, though, so you would want to get GNU grep 3.0 or newer instead.
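As an aside not taken from the original answer: if upgrading grep is not an option and the trailing `.+` in every pattern is redundant (it usually is when you only care whether the line matches at all), converting the patterns to fixed strings lets grep's -F mode bypass the regex engine entirely. A sketch with hypothetical file names:

```shell
# Build tiny stand-ins for the real pattern/string files.
printf 'n7ce49B_.+\nab12cd_.+\n' > patterns.txt
printf 's/14/11/13/15/n7ce49B_235_25ed2d70.jpg\ns/1/2/3/other.jpg\n' > paths.txt

# Strip the literal ".+" suffix so every pattern becomes a fixed string...
sed 's/\.+$//' patterns.txt > fixed.txt
# ...then search with -F (fixed strings), which scales far better.
LC_ALL=C grep -F -f fixed.txt paths.txt
```

This prints only the n7ce49B_ line; substring matching with -F uses an algorithm that handles many patterns without the pathological regex behaviour.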
| Why is matching 1250 strings against 90k patterns so slow? |
1,394,071,337,000 |
I'm wondering if there's a way to get rid of audio distortion at high volume levels on headphones. When I dual-booted Linux with Windows, I'd usually just boot into Windows and then back into Linux to fix the issue and avoid distortion at higher levels on headphones.
Now I have a machine with only Linux on it and can't seem to stop the sound from distorting at higher levels whenever I plug in headphones, even though I've adjusted the ALSA mixer PCM volume, changed the headphone volume in alsamixer, and also tried different headphones.
Even when I get the distortion to stop by adjusting the headphone, master, or PCM volumes, the sound tends to be somewhat weak even with my headphones turned all the way up. I'm also wondering if I need to wait for a firmware/kernel update, since I'm using relatively new hardware with a Realtek ALC295 sound card, or whether I should just swap out the card, though I'd rather not do that if there's a simpler fix.
Thanks!
|
Turning all hardware mixers up worked.
Edit: I actually found another, slightly hacky, workaround after the issue came back a few boots later, using a LADSPA limiter and compressor on Arch. Something similar could probably work on other distros too.
Install ladspa plugins:
pacman -S ladspa-plugins
Create an .asoundrc file in the home folder:
Paste this code into the .asoundrc file:
pcm.pulse {
type pulse
}
ctl.pulse {
type pulse
}
pcm.default pcm.pulse
ctl.default ctl.pulse
Paste these commands into /etc/pulse/default.pa:
load-module module-ladspa-sink sink_name=ladspa_output.fastLookaheadLimiter label=fastLookaheadLimiter plugin=fast_lookahead_limiter_1913 control=20,0,0.3
load-module module-ladspa-sink sink_name=ladspa_output.dysonCompress label=dysonCompress plugin=dyson_compress_1403 master=ladspa_output.fastLookaheadLimiter control=0,0.5,0.5,0.99
set-default-sink ladspa_output.dysonCompress
Remove pulseaudio-alsa, since it conflicts with ladspa-plugins
Reboot!
Turn volume down
Edit 2:
Just added some slightly tuned default.pa settings to help eliminate distorted frequencies. It's not perfect but worked fairly well on my hardware:
load-module module-ladspa-sink sink_name=ladspa_output.fastLookaheadLimiter label=fastLookaheadLimiter plugin=fast_lookahead_limiter_1913 control=5,0,0.8
load-module module-ladspa-sink sink_name=ladspa_output.dysonCompress label=dysonCompress plugin=dyson_compress_1403 master=ladspa_output.fastLookaheadLimiter control=-14,1,0.5,0.99
set-default-sink ladspa_output.dysonCompress
| Distortion At High Volume On Headphones |
1,394,071,337,000 |
Linux seems to have a default limit of 128KiB (131072) on the length of any single environment variable -- any attempt to set an envvar longer than this and then run any program will result in an 'Argument list too long' error.
This seems like it should be a configuration parameter, but I've been unable to find any way to raise it. Is there any way to increase it?
It is problematic for tools like "automake" which try to pull together long lists of files or tests in an environment variable as part of their building and testing process.
|
MAX_ARG_STRLEN is a constant defined as PAGESIZE*32 in include/uapi/linux/binfmts.h. Its value cannot be changed without recompiling the kernel.
/*
* These are the maximum length and maximum number of strings passed to the
* execve() system call. MAX_ARG_STRLEN is essentially random but serves to
* prevent the kernel from being unduly impacted by misaddressed pointers.
* MAX_ARG_STRINGS is chosen to fit in a signed 32-bit integer.
*/
#define MAX_ARG_STRLEN (PAGE_SIZE * 32)
#define MAX_ARG_STRINGS 0x7FFFFFFF
| Raise 128KiB limit on environment variables in Linux |
1,394,071,337,000 |
Linux uses the unused portions of memory for file caching, and it cleans up the space when needed.
My question is about how it picks a victim page for replacement?
There are various algorithms (LRU, FIFO, LFU and random replacement)
I'd like to know
1) What page replacement algorithms are used in Linux kernel for OS file cache?
2) If possible, I'd like to know how it has evolved over time in the Linux kernel. I assume the algorithm and its implementation change over time to follow reasonable shifts in workloads and trends. How can I find those changes? Do I need to read the kernel source code?
|
Linux memory management ("MM") does seem to be a bit arcane and challenging to track down.
Linux literature makes heavy mention of LRU (Least Recently Used), in the context of memory management. I haven't noticed any of the other terms being mentioned.
I found an interesting introduction (first four paragraphs) in this article on the incomparable LWN.net. It explains how basic LRU can be implemented in practice for virtual memory. Read it.
True LFU (Least Frequently Used) replacement is not considered practical for virtual memory. The kernel can't count every single read of a page, when mmap() is used to access file cache pages - e.g. this is how most programs are loaded in memory. The performance overhead would be far too high.
To go beyond that simple concept, there is a design outline around Linux version 2.6.28-32 here:
http://linux-mm.org/PageReplacementDesign
It suggests Clock-PRO is used for file pages. There is an original paper available on it. There is an old description of Clock-PRO on LWN.net, again including some practical implementation details. Apparently Clock-PRO "attempts to move beyond the LRU approach", a variant of which is "used in most systems". It seems to put more weight on frequency.
Linux-mm has another design document for implementing Clock-PRO in Linux. This one doesn't talk about it being merged in; it was written a few months before the LWN article about it. http://linux-mm.org/ClockProApproximation
More recent descriptions are that Linux merely "uses some ideas" from Clock-PRO, and is actually a "mash of a number of different algorithms with a number of modifications for catching corner cases and various optimisations".
The above quote is from an answer to a question by Adrian McMenamin. McMenamin went on to complete an MSc project in 2011, testing modifications to Linux page replacement based on the "working set model". It includes a brief description of Linux page replacement. The "variant of LRU" is named as "the 2Q [two-queue] approach for database management"; a number of references are provided, and there is a diagram illustrating movement between the two queues and other state transitions. He also describes Linux as using a partial implementation of CLOCK-PRO.
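An aside not drawn from the sources above: the two-queue split is observable on any running kernel, since the per-list page counts are exported in /proc/meminfo:

```shell
# Anonymous and file-backed pages each get an active and an inactive LRU list.
grep -E '^(Active|Inactive)' /proc/meminfo
```

The Active(file)/Inactive(file) pair is the file-cache portion the question asks about; pages are promoted and demoted between the two lists roughly as the 2Q diagram describes.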
I expect the LRU concept, as opposed to the other possibilities you mention, was established from the start. And the most significant change was the introduction of Clock-PRO based features, i.e. putting some more weight on frequency.
In 2013 Linux gained "thrash detection-based file cache sizing". This is probably also relevant to the question.
| What page replacement algorithms are used in Linux kernel for OS file cache? |
1,394,071,337,000 |
I have files created in my home directory with only user read permission (r-- --- ---). I want to copy these files to another directory, /etc/test/, which has folder permissions of 744 (rwx r-- r--). I need the copied file to inherit the permissions of the folder it is copied into, because so far when I copy it, the file's permissions stay the same (r-- --- ---). I have tried the setfacl command, but it did not work. Please help.
PS: I can't just chmod -R /etc/test/ because many files will be copied into this folder over time, and I don't want to run chmod every time a file is copied over.
|
Permissions are generally not propagated by the directory that files are being copied into, rather new permissions are controlled by the user's umask. However when you copy a file from one location to another it's a bit of a special case where the user's umask is essentially ignored and the existing permissions on the file are preserved. Understanding this concept is the key to getting what you want.
So to copy a file but "drop" its current permissions you can tell cp to "not preserve" using the --no-preserve=all switch.
Example
Say I have the following file like you.
$ mkdir -m 744 somedir
$ touch afile
$ chmod 400 afile
$ ll
total 0
-r--------. 1 saml saml 0 Feb 14 15:20 afile
And as you've confirmed if we just blindly copy it using cp we get this:
$ cp afile somedir/
$ ls -l somedir/
total 0
-r--------. 1 saml saml 0 Feb 14 15:20 afile
Now let's repeat this but this time tell cp to "drop permissions":
$ rm -f somedir/afile
$ cp --no-preserve=all afile somedir/
$ ls -l somedir/
total 0
-rw-rw-r--. 1 saml saml 0 Feb 14 15:21 afile
So the copied file now has its permissions set to 664, where did it get those?
$ umask
0002
If I changed my umask to something else we can repeat this test a 3rd time and see the effects that umask has on the un-preserved cp:
$ umask 037
$ rm somedir/afile
$ cp --no-preserve=all afile somedir/
$ ls -l somedir/
total 0
-rw-r-----. 1 saml saml 0 Feb 14 15:29 afile
Notice the permissions are no longer 664, but are 640? That was dictated by the umask. It was telling any commands that create a file to disable the lower 5 bits in the permissions ... these guys: (----wxrwx).
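The arithmetic can be reproduced directly: new files start from mode 666, and the umask bits are cleared from that, so 666 & ~037 = 640:

```shell
cd "$(mktemp -d)"     # work in a throwaway directory
umask 037
touch afile
stat -c '%a' afile    # prints 640
```

The same reasoning predicts the earlier 664 result: with the default umask of 002, 666 & ~002 = 664.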
| File inheriting permission of directory it is copied in? |
1,394,071,337,000 |
I'm learning numeric computation and have a Core i5 laptop with 4 GB of RAM, which I find slow for some tasks.
I've read that a single PS3 has the processing power of 30 clustered PCs.
Basically, I'm thinking of purchasing a PS3, installing Linux on it, and then running my Python programs on it.
I've read that Sony has disabled the ability to install Linux with firmware update 3.21
Is there a way to run Linux on recent versions of the PS3? Is there a hack around the new limitation? If I went out and bought one, would I be able to run Linux or not?
|
As matters currently stand, there is no "safe" way to use Linux on a PS3 you buy brand new from a retail store. Since the firmware will not provide you low level access to the hypervisor, it's impossible to install Linux without first replacing the firmware. The console will only install firmware with Sony's cryptographic signature, and you are not allowed to downgrade the firmware; it is not possible to overwrite the firmware unless you can build your own and forge Sony's signing key.
To directly answer your questions:
Are there ways? Yes, because Sony is not very good at keeping their signing keys a secret. You will need to do research on custom firmware. Using such firmware would void your warranty, and you risk having your console banned from the Playstation Network if you connect to it and Sony detects that you're not running an official firmware release. Even if a firmware is "safe" one day, it might not be the next.
Would you be able to run Linux on one that you bought? "Maybe." Do your research, and pay very close attention to any commentary about whether or not the hacks work with newer hardware revisions. Do not buy unless you're sure that the hack you intend to use will work with that console; proceeding recklessly could permanently damage your purchase.
Instructions that are more specific than this are unlikely to be posted as answers, because nobody wants Sony breathing down their neck.
| How to run linux on PS3? |
1,394,071,337,000 |
I have two commands: one that lets me record my screen to an AVI video file, and another that lets me stream a video file as a (fake) "webcam". This is really useful in apps that don't support selecting one screen to share (I'm looking at you, Slack).
command #1 (https://askubuntu.com/a/892683/721238):
ffmpeg -y -f alsa -i hw:0 -f x11grab -framerate 30 -video_size 1920x1080 -i :0.0+1920,0 -c:v libx264 -pix_fmt yuv420p -qp 0 -preset ultrafast screenStream.avi
command #2 (https://unix.stackexchange.com/a/466683/253391):
ffmpeg -re -i screenStream.avi -map 0:v -f v4l2 /dev/video1
Why can't I just run both of these in parallel? Because the second command starts streaming from the beginning of the file whenever I use my "webcam", so I have to time it really closely, otherwise there is latency.
I've tried lots and lots of solutions (including solutions with gstreamer instead of ffmpeg), can't get anything to work. This is my last hope.
How can I stream my desktop/screen to /dev/video1 as a (fake) "webcam" on Ubuntu?
|
Solved.
Steps to solve:
Unload any previously loaded v4l2loopback module: sudo modprobe -r v4l2loopback
git clone https://github.com/umlaeute/v4l2loopback/
cd v4l2loopback && make && sudo make install (if you're using Secure Boot, you'll need to sign the module first: https://ubuntu.com/blog/how-to-sign-things-for-secure-boot)
sudo depmod -a
Load the videodev driver: sudo modprobe videodev
sudo insmod ./v4l2loopback.ko devices=1 video_nr=2 exclusive_caps=1 (change video_nr based on how many cams you already have; it is zero-indexed)
ls -al /dev/video* and use /dev/video[video_nr] with ffmpeg
sudo ffmpeg -f x11grab -r 60 -s 1920x1080 -i :0.0+1920,0 -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f v4l2 -vf 'hflip,scale=640:360' /dev/video2
Go to https://webcamtests.com and test your dummy cam
Profit!
If you want this to persist between boots, https://askubuntu.com/a/1024786/721238 should do it.
| How can I stream my desktop/screen to /dev/video1 as a (fake) "webcam" on Linux? |
1,394,071,337,000 |
Short question:
How do I connect to a local Unix socket (~/test.sock) via ssh? This socket forwards to an actual ssh server. The obvious approach does not work, and I can't find any documentation:
public> ssh /home/username/test.sock
"ssh: Could not resolve hostname: /home/username/test.sock: Name of service not known"
Long Question:
The problem I am trying to solve: connect from my (public) university server to my (local) PC, which is behind NAT and not publicly visible.
The canonical solution is to create an ssh reverse tunnel on public that leads back to local:
local> ssh -NR 2222:localhost:22 public
But this is not possible, as the administrators prohibit opening listening ports.
So I have thought about using UNIX socket instead, which works:
local> ssh -NR /home/username/test.sock:localhost:22 public
But now, how can I connect to it with ssh?
|
You should be able to do this using socat and the ProxyCommand option of ssh. ProxyCommand configures the ssh client to use a proxy process for communicating with your server; socat establishes two-way communication between STDIN/STDOUT (the ssh client side) and your UNIX socket.
ssh -o "ProxyCommand socat - UNIX-CLIENT:/home/username/test.sock" foo
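The same proxy can be made permanent in ~/.ssh/config (the host alias below is hypothetical), after which a plain ssh local-pc works:

```
Host local-pc
    ProxyCommand socat - UNIX-CLIENT:/home/username/test.sock
```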
| SSH connect to a UNIX socket instead of hostname |
1,394,071,337,000 |
I have a Linux machine (RHEL 6.7) with two IPs configured on a single NIC (eth1). The primary address, and therefore the address that all traffic appears to come from, is 10.0.0.23. The other is 10.0.0.160.
I am looking for a way to use iptables to change the source IP based on the destination address of a packet. Normally the traffic will 'go out on' 10.0.0.23, but say my packet is destined for 10.0.0.1, I want that packet to 'go out on' 10.0.0.160.
The reason for this is firewalls on the network that are out of my control. There are rules in place allowing traffic from 10.0.0.160 to 10.0.0.1, but not from 10.0.0.23 to 10.0.0.1.
I don't want all traffic to originate from 10.0.0.160, only that destined for 10.0.0.1.
I was looking at using the nat table and maybe a prerouting rule, but don't see a way to change the source address.
If it would help I could create an alias for eth1 (so there would be eth1 and eth1:0) but would like to see if there's a solution in the current config.
Thanks in advance for any advice.
|
Here are two different methods of achieving the desired behaviour:
1. Using iptables
The SNAT target in iptables allows the source address to be modified as you requested. The man page for iptables-extensions has this to say about SNAT:
This target is only valid in the nat table, in the POSTROUTING and
INPUT chains,
and user-defined chains which are only called from those chains. It specifies
that the source address of the packet should be modified (and all future packets
in this connection will also be mangled), and rules should cease being examined.
Based on your question, the following rule will change the source address of packets destined for 10.0.0.1 to 10.0.0.160:
$ iptables -t nat -A POSTROUTING --destination 10.0.0.1/32 -j SNAT --to-source 10.0.0.160
2. Using a static route
Alternatively, instead of an iptables rule, add a static route for the destination host to the routing table, using the following syntax:
$ ip route add <destination>/32 via <gateway> src <alias>
Based on the information you provided, you would use:
$ ip route add 10.0.0.1/32 via <gateway> src 10.0.0.160
Replace <gateway> with the actual IP address of your gateway, as this wasn't provided in your question.
Traffic destined for 10.0.0.1 will now originate from 10.0.0.160. Any other traffic takes the default route, originating from 10.0.0.23.
| iptables: change local source address if destination address matches |
1,394,071,337,000 |
I am no longer able to forward X11 using KiTTY/PuTTY to CygwinX.
I am connecting to an Ubuntu Server 14.10 machine that is correctly configured to allow X11 forwarding. I am able to initiate X11 forwarding using Cygwin xterm and from other linux machines.
I am using CygwinX [1.7.34(0.285/5/3)] and KiTTY 0.64.0.1 (PuTTY fork, I have also tried PuTTY), on Win7.
I have verified my display variable and have tried disabling xhost access control in Cygwin xterm.
$ echo $DISPLAY
:1
$ xhost +
access control disabled, clients can connect from any host
My KiTTY/PuTTY is configured to enable X11 forwarding and the correct display is set. I've tried :1 and :1.0.
When I SSH to the server my DISPLAY variable is set and xauth is updated. I have deleted my .Xauthority and recreated it to verify.
user@server:~$ echo $DISPLAY
localhost:10.0
user@server:~$ xauth list
server/unix:10 MIT-MAGIC-COOKIE-1 3983b2d7f3d5f9f66d9796997771bf82
When I attempt to launch an X11 application I get the following error.
user@server:~$ xterm
KiTTY X11 proxy: unable to connect to forwarded X server: Network error: Connection refused
xterm: Xt error: Can't open display: localhost:10.0
XWin.exe is listening on port 34576 if that matters.
[XWin.exe]
TCP 127.0.0.1:34576 0.0.0.0:0 LISTENING
I believe there is a software or configuration issue I am missing as I am seeing this with multiple server and client machines. Any help would be appreciated.
|
Ok, I figured out the solution to my own problem.
By default, CygwinX no longer listens for TCP connections (Cygwin's SSH uses Unix sockets to connect). To enable TCP connections, "-listen tcp" needs to be added to the command-line parameters. In my case, I changed the "XWin Server" icon to read:
C:\cygwin64\bin\run.exe --quote /usr/bin/bash.exe -l -c "cd; /usr/bin/startxwin -- -multiwindow -listen tcp"
| PuTTY, CygwinX, and X11 forwarding connection refused |
1,394,071,337,000 |
I use XIM under Debian Linux with Gnome 2. Many compose sequences are already defined in the system, including the one for ± sign:
palec@Palec:~$ grep PLUS-MINUS /usr/share/X11/locale/en_US.UTF-8/Compose
<Multi_key> <plus> <minus> : "±" plusminus # PLUS-MINUS SIGN
This does not work with numpad of my Lenovo G550’s keyboard.
I noticed that numbers on numpad require KP_ prefix to be matched, so I tried adding copy of the original rule with keys changed to KP_plus and KP_minus to my ~/.XCompose, where I already have other rules I use in addition to those from the system Compose file. No luck though.
I did not manage to find any useful documentation for XIM or ~/.XCompose. Is there any? Most of the information on XIM and compose sequences I got from forums found by googling. How do I get the name of a key for use in ~/.XCompose? In particular, what are the names for the numpad + and -?
I do not insist on XIM, but I want to be able to configure custom compose sequences. If there is another, preferably better documented solution, I’d like to hear about it.
|
How to get the name of a key: Use the command xev and press the keys of interest. The name is shown as last word in the parenthesis in the terminal output.
Particularly, what are the names for numpad + and −: xev tells me that they are KP_Add and KP_Subtract.
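With those names, the keypad variant of the plus-minus rule in ~/.XCompose would look like this (an untested sketch mirroring the system rule quoted in the question):

```
<Multi_key> <KP_Add> <KP_Subtract> : "±" plusminus # PLUS-MINUS SIGN
```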
| How to find name of key for use in ~/.XCompose? (specifically keypad plus and minus) |
1,394,071,337,000 |
As I was trying in vain to fix a faulty ethernet controller here, one thing I tried was running tcpdump on the machine.
I found it interesting that tcpdump was able to detect that some of the ICMP packets the ping application thought it was sending were not actually going out on the wire, even though it was running on the same machine. I have reproduced those tcpdump results here:
14:25:01.162331 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 1, length 64
14:25:02.168630 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 2, length 64
14:25:02.228192 IP 74.125.224.80 > debian.local: ICMP echo reply, id 2334, seq 2, length 64
14:25:07.236359 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 3, length 64
14:25:07.259431 IP 74.125.224.80 > debian.local: ICMP echo reply, id 2334, seq 3, length 64
14:25:31.307707 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 9, length 64
14:25:32.316628 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 10, length 64
14:25:33.324623 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 11, length 64
14:25:33.349896 IP 74.125.224.80 > debian.local: ICMP echo reply, id 2334, seq 11, length 64
14:25:43.368625 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 17, length 64
14:25:43.394590 IP 74.125.224.80 > debian.local: ICMP echo reply, id 2334, seq 17, length 64
14:26:18.518391 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 30, length 64
14:26:18.537866 IP 74.125.224.80 > debian.local: ICMP echo reply, id 2334, seq 30, length 64
14:26:19.519554 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 31, length 64
14:26:20.518588 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 32, length 64
14:26:21.518559 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 33, length 64
14:26:21.538623 IP 74.125.224.80 > debian.local: ICMP echo reply, id 2334, seq 33, length 64
14:26:37.573641 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 35, length 64
14:26:38.580648 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 36, length 64
14:26:38.602195 IP 74.125.224.80 > debian.local: ICMP echo reply, id 2334, seq 36, length 64
Notice how the seq number jumps several times... that indicates packets that the ping application generated but that never actually left the box.
Which brings me to my question: how was tcpdump able to detect that the ICMP packets weren't actually going out? Is it able to somehow directly monitor what is on the wire?
If it does accomplish this, I assume it is by interfacing with some part of the kernel, which in turn interfaces with hardware that is a standard part of a network controller.
Even so, that's pretty cool! If that is not actually how tcpdump functions, can someone explain to me how it detected the missing packets in software?
|
Yes. By putting network interfaces into promiscuous mode, tcpdump is able to see exactly what is going out of (and coming into) the network interface.
tcpdump operates at layer 2 and above. It can be used to look at Ethernet, FDDI, PPP and SLIP, Token Ring, and any other protocol supported by libpcap, which does all of tcpdump's heavy lifting.
Have a look at the pcap_datalink() section of the pcap man page for a complete list of the layer 2 protocols that tcpdump (via libpcap) can analyze.
A read of the tcpdump man page will give you a good understanding of how exactly, tcpdump and libpcap interface with the kernel and network interfaces to be able to read the raw data link layer frames.
| what level of the network stack does tcpdump get its info from? |
1,394,071,337,000 |
I have several unistd.h files in my Ubuntu Linux. I've one on /usr/include/asm/unistd.h. This file has this directives:
# ifdef __i386__
# include "unistd_32.h"
# else
# include "unistd_64.h"
# endif
In that folder, I can find those files (unistd_32.h and unistd_64.h).
But in /usr/src/linux-headers-2.6.31-22/include/asm-generic/ there's another unistd.h that starts with this directives:
#if !defined(_ASM_GENERIC_UNISTD_H) || defined(__SYSCALL)
#define _ASM_GENERIC_UNISTD_H
So, the question is: How can I know which one is loaded? Is there any way to check it in runtime with Java?
|
The exact rules followed by the gcc compiler for finding include files are explained at: http://gcc.gnu.org/onlinedocs/cpp/Search-Path.html
A quick command-line trick to find out where an include file comes from is the following:1
echo '#include <unistd.h>' | gcc -E -x c - > unistd.preprocessed
Then, if you look at the unistd.preprocessed file, you will notice lines like:
# 1 "/usr/include/unistd.h" <some numbers>
These tell you that the following block of lines (until the next # number ... line)
come from file /usr/include/unistd.h.
So, if you want to know the full list of files included, you can grep for the # number lines:
echo '#include <unistd.h>' | gcc -E -x c - | egrep '# [0-9]+ ' | awk '{print $3;}' | sort -u
On my Ubuntu 10.04 / gcc 4.4.3 system, this produces:
$ echo '#include <unistd.h>' | gcc -E -x c - | egrep '# [0-9]+ ' | awk '{print $3;}' | sort -u
"<built-in>"
"<command-line>"
"<stdin>"
"/usr/include/bits/confname.h"
"/usr/include/bits/posix_opt.h"
"/usr/include/bits/predefs.h"
"/usr/include/bits/types.h"
"/usr/include/bits/typesizes.h"
"/usr/include/bits/wordsize.h"
"/usr/include/features.h"
"/usr/include/getopt.h"
"/usr/include/gnu/stubs-64.h"
"/usr/include/gnu/stubs.h"
"/usr/include/sys/cdefs.h"
"/usr/include/unistd.h"
"/usr/lib/gcc/x86_64-linux-gnu/4.4.3/include/stddef.h"
1 Note: The search path for include files is modified by
the -I command-line option; so, you should add any -I path
arguments to the gcc invocation. Also, if you are compiling a C++
source, you should substitute -x c with -x c++.
| How can i know which unistd.h file is loaded? |
1,394,071,337,000 |
I want a service to start on demand rather than on boot. To do that I could use systemd socket activation (with the service and socket files).
But this is a resource limited server, so after some time (e.g. 1 hour) of inactivity, I want to stop the service (until it is triggered again). How can I do that?
I looked through some of the documentation but I can't figure out if this is supported.
Update:
Assuming this is unsupported, the use case is still probably quite common. What would be a good way / workaround to achieve this?
|
Socket activation in systemd can work in two modes:
Accept=true: systemd keeps the listening socket, accepts every incoming connection, spawns a new process for each connection and passes the established socket to it. This case is trivial (each process exits when it's done).
Accept=false: systemd creates the listening socket and watches it for incoming connection. As soon as one comes in, systemd spawns the service and passes the listening socket to it. The service then accepts the incoming connection and any subsequent ones. Systemd doesn't track what's happening on the socket anymore, so it can't detect inactivity.
In the latter case, I think the only truly clean solution is to modify the application to make it exit when it's idle for some time. If you can't do that, a crude workaround could be to set up cron or a systemd timer to kill the service once an hour. This could be a reasonable approximation if the service is only spawned really infrequently.
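The systemd-timer variant of that crude workaround could look like the following sketch (the myservice names are placeholders; both files would go under /etc/systemd/system/):

```
# myservice-stop.timer -- fires once an hour
[Unit]
Description=Hourly stop of myservice (crude idle workaround)

[Timer]
OnCalendar=hourly

[Install]
WantedBy=timers.target

# myservice-stop.service -- the matching unit the timer activates
[Service]
Type=oneshot
ExecStart=/bin/systemctl stop myservice.service
```

Enable it with systemctl enable --now myservice-stop.timer. Because the .socket unit keeps listening, the next incoming connection simply starts the service again.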
Note that the use case is probably pretty rare. A process sitting in poll()/select() waiting for a connection doesn't consume any CPU time, so the only resource that's used in that situation is memory. It's probably both easier and more efficient to just set up some swap and let the kernel decide whether it's worth keeping the process in RAM all the time or not.
| Deactivate a systemd service after idle time |
1,394,071,337,000 |
I have to make a configuration file available to a guest OS running on top of the KVM hypervisor.
I have already read about folder sharing options between host and guest in KVM with 'qemu' and 9P virtio support. I would like to know about any simple procedure which can help in one time file transfer from host to guest.
Please let me know how to transfer a file while the guest OS is running, as well as a possible way to make the file available to the guest OS by the time it starts running (like packaging the file and integrating it with the disk image, if possible).
Host OS will be linux.
|
Just hit upon two different ways:
Transfer files via the network. For example, you can run httpd on the host and use any web browser or wget/curl to download files. Probably the easiest and most convenient option.
Build an ISO image on the host with the files you want to transfer. Then attach it to the guest's CD drive.
genisoimage -o image.iso -r /path/to/dir
virsh attach-disk guest image.iso hdc --driver file --type cdrom --mode readonly
You can use mkisofs instead of genisoimage.
You can use GUI like virt-manager instead of virsh CUI to attach an ISO image to the guest.
You need to create a VM beforehand and supply that VM's ID as guest. You can see existing VMs with virsh list --all.
| How to send/upload a file from Host OS to guest OS in KVM?(not folder sharing) |
1,394,071,337,000 |
In Linux, is there any difference between the state after ip link down and real link absence (e.g. the switch's port burned down, or someone tripped over a wire)?
By difference I mean some signs in the system that can be used to distinguish these two conditions.
E.g. will routing table be identical in these two cases? Will ethtool or something else show the same things? Is there some tool/utility which can distinguish these conditions?
|
There are differences between an interface which is administratively up but disconnected and one which is administratively down.
Disconnected
The interface gets a carrier down status. Its proper handling might depend on the driver for the interface and the kernel version. Normally it's available with ip link show. For example with a virtual ethernet veth interface:
# ip link add name vetha up type veth peer name vethb
# ip link show type veth
2: vethb@vetha: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 02:a0:3b:9a:ad:4d brd ff:ff:ff:ff:ff:ff
3: vetha@vethb: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
link/ether 36:e3:62:1b:a8:1f brd ff:ff:ff:ff:ff:ff
vetha, which is itself administratively UP, displays the NO-CARRIER and the equivalent operstate LOWERLAYERDOWN flags: it's disconnected.
Equivalent /sys/ entries exist too:
# cat /sys/class/net/vetha/carrier /sys/class/net/vetha/operstate
0
lowerlayerdown
In usual settings, for an interface which is administratively up the carrier and operstate match (NO-CARRIER <=> LOWERLAYERDOWN or LOWER_UP <=> UP). One exception would be for example when using IEEE 802.1X authentication (advanced details of operstate are described in this kernel documentation: Operational States, but it's not needed for this explanation).
ethtool queries a lower level API to retrieve this same carrier status.
Having no carrier doesn't prevent any layer 3 settings from staying in effect. The kernel doesn't change addresses or routes when this happens. It's just that in the end a packet that should be emitted won't be emitted by the interface, and of course no reply will come either. So for example trying to connect to another IPv4 address will sooner or later trigger again an ARP request which will fail, and the application will receive a "No route to host". Established TCP connections will just bide their time and stay established.
Administratively down
Above, vethb has operstate DOWN and doesn't display any carrier status (the interface has to be up for a carrier to be detected; a physical Ethernet interface of course behaves the same).
When the interface is brought down (ip link set ... down), the carrier can't be detected anymore, since the underlying hardware device was very possibly powered off, and the operstate becomes "down". ethtool will also just say there's no link, so it can't be used reliably to distinguish the two cases (it will surely display a few unknown entries too, but is there a reliable scheme for that?).
This time this will have an effect on layer 3 network settings. The kernel will refuse to add routes using this interface and will remove any previous routes related to it:
the automatic (proto kernel) LAN routes added when adding an address
any other route added (e.g. the default route) in any routing table (not only the main routing table) depending directly on the interface (scope link) or on other previously deleted routes (probably then scope global). As these won't reappear when the interface is brought back up (ip link set ... up) they are lost until a userspace tool adds them back.
Userspace interactions
When using recent tools like NetworkManager, one can get confused and think a disconnect is similar to an interface down. That's because NM monitors links and takes action when such events happen. To get an idea, the ip monitor tool can be used to monitor from scripts, but it doesn't currently have a stable/parsable output (no JSON output available), so its use is limited.
So when a wire is disconnected, NM will very likely consider it's not using the current configuration anymore, unless a specific setting prevents it: it will then delete the addresses and routes itself. When the wire is connected back, NM will apply its configuration again: it adds back addresses and routes (using DHCP if relevant). This looks the same but isn't. All this time the interface stayed up, or it wouldn't even have been possible for NM to be warned when the connection was back.
Summary
It's easy to distinguish the two cases: ip link show will display NO-CARRIER+LOWERLAYERDOWN for a disconnected interface, and DOWN for an interface administratively brought down.
setting an interface administratively down (and up) can lose routes
losing carrier and recovering it doesn't disrupt network settings. If the delay is short enough it should not even disrupt ongoing network connections
but applications managing network might react and change network settings, sometimes with a result similar to administratively down case
you can use commands like ip monitor link to receive events about interfaces set administratively down/up or carrier changes, or ip monitor to receive all the multiple related events (including address or route changes) that would happen at this time or shortly after.
Most ip commands (but not ip monitor) have a JSON output available with ip -json ... to help scripts (along with jq).
Example (continuing from the first veth example):
vethb is still down:
# ip -j link show dev vethb | jq '.[].operstate'
"DOWN"
# ip -j link show dev vetha | jq '.[].operstate'
"LOWERLAYERDOWN"
Set vethb up, which now gets a carrier on both:
# ip link set vethb up
# ip -j link show dev vetha | jq '.[].operstate'
"UP"
This tells about the 3 usual states: administratively down, lowerlayerdown (ie: up but disconnected) or up (ie: operational).
| The difference between ip link down and physical link absence |
1,394,071,337,000 |
The Linux Programming Interface shows the layout of a virtual address space of a process:
Is the kernel in physical memory completely or partially mapped to the "Kernel" part at the top, from 0xC0000000 to 0xFFFFFFFF, in the virtual address space of each process?
If partially, which part of the kernel in the physical memory is mapped to
the "Kernel" part in the virtual address space of each process, and which part isn't?
Does the "Kernel" part in the virtual address space of a process store exactly the part of the kernel code which is accessible to the process when it is running in kernel mode, not the part of the kernel code which isn't?
Do the virtual address spaces of all the processes have the same content in their "Kernel" parts?
|
The answer depends on whether kernel page-table isolation is enabled (which depends on the architecture and whether it supports KPTI).
Without KPTI, the kernel is fully mapped in each process’ address space, but as mentioned in the diagram, those mappings are inaccessible from user space (barring side-channel leaks).
With KPTI, the kernel page tables are separate from the userspace page tables, and only a minimal set of mappings are left in each process’ address space, as required to allow user space to call into the kernel, and to enable the processor to give control to the kernel when dealing with interrupts or exceptions.
In both cases, all processes have the same mappings for the kernel.
See also LWN’s article on KAISER.
| Do the virtual address spaces of all the processes have the same content in their "Kernel" parts? |
1,394,071,337,000 |
It seems that Linux supports changing the owner of a symbolic link (i.e. lchown) but changing the mode/permission of a symbolic link (i.e. lchmod) is not supported. As far as I can see this is in accordance with POSIX. However, I do not understand why one would support either one of these operations but not both. What is the motivation behind this?
|
Linux, like most Unix-like systems (Apple OS X being one of the rare exceptions), ignores permissions on symlinks, for instance when it comes to resolving their targets.
However ownership of symlinks, like other files, is relevant when it comes to the permission to rename or unlink their entries in directories that have the t bit set, such as /tmp.
To be able to remove or rename a file (symlink or not) in /tmp, you need to be the owner of the file. That's one reason one might want to change the ownership of a symlink (to grant or remove permission to unlink/rename it).
$ ln -s / /tmp/x
$ rm /tmp/x
# OK removed
$ ln -s / /tmp/x
$ sudo chown -h nobody /tmp/x
$ rm /tmp/x
rm: cannot remove ‘/tmp/x’: Operation not permitted
Also, as mentioned by Mark Plotnick in his now-deleted answer, backup and archive applications need lchown() to restore symlinks to their original owners. Another option would be to switch euid and egid before creating the symlink, but that would not be efficient and would complicate rights management on the directory the symlink is extracted into.
| Why do Linux/POSIX have lchown but not lchmod? |
1,394,071,337,000 |
xvfb is supposed to let me run X programs in a headless environment. But when I run xvfb-run glxgears, I get:
libGL error: failed to load driver: swrast
libGL error: Try again with LIBGL_DEBUG=verbose for more details.
Error: couldn't get an RGB, Double-buffered visual
When I run LIBGL_DEBUG=verbose xvfb-run glxgears, I get:
libGL: OpenDriver: trying /usr/lib/x86_64-linux-gnu/dri/tls/swrast_dri.so
libGL: OpenDriver: trying /usr/lib/x86_64-linux-gnu/dri/swrast_dri.so
libGL error: failed to load driver: swrast
Error: couldn't get an RGB, Double-buffered visual
I'm running stock Lubuntu 13.10 x64 with Intel Ivy Bridge integrated graphics. libgl1-mesa-dri is installed and /usr/lib/x86_64-linux-gnu/dri/swrast_dri.so exists. Running as root doesn't help.
What's going wrong?
|
In case anyone finds this old question: there is a solution, mentioned in a bug report linked from another unix.stackexchange.com question. It was enough to change the default server arguments (-s/--server-args) from -screen 0 640x480x8 to -screen 0 640x480x24, i.e. anything with 24-bit color depth, e.g. xvfb-run -s '-screen 0 640x480x24' glxgears.
| Why does `xvfb-run glxgears` fail with an swrast error? |
1,394,071,337,000 |
$ xdg-open
The program 'xdg-open' is currently not installed. You can install it by typing:
sudo apt-get install xdg-utils
$ sudo apt-get install xdg-utils
Reading package lists... Done
Building dependency tree
Reading state information... Done
xdg-utils is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 89 not upgraded.
$ whereis xdg-open
xdg-open: /usr/bin/xdg-open /usr/bin/X11/xdg-open /usr/share/man/man1/xdg-open.1.gz
$ which xdg-open
$ xdg-open
The program 'xdg-open' is currently not installed. You can install it by typing:
sudo apt-get install xdg-utils
No, I didn't mean "recursion".
I'm on Linux Mint 15 MATE, but instead of MATE I'm using the i3 window manager.
Edit taking @slm's advice
$ type -a xdg-open
type: xdg-open not found
But it's in /usr/bin/xdg-open. I checked.
$ dpkg -S /usr/bin/xdg-open
xdg-utils: /usr/bin/xdg-open
The next one was even more interesting.
$ dpkg -S xdg-open
git-annex: /usr/share/doc/git-annex/html/bugs/Fix_for_opening_a_browser_on_a_mac___40__or_xdg-open_on_linux__47__bsd__63____41__.html
xdg-utils: /usr/bin/xdg-open
xdg-utils: /usr/share/man/man1/xdg-open.1.gz
The bug-fix is just a mail archive of a patch for an OSX problem. Anyway, I guess I could try using the full path:
$ /usr/bin/xdg-open
/usr/bin/xdg-open: No such file or directory
|
This sounds like your package database is screwed up. First I'd identify all the versions of xdg-open that you have on your system. The type builtin should always be used for this task; never rely on which or whereis.
Example
Identify all xdg-open's.
$ type -a xdg-open
xdg-open is /usr/bin/xdg-open
Find out which packages they're a part of.
$ dpkg -S /usr/bin/xdg-open
xdg-utils: /usr/bin/xdg-open
You'll want to either repeat the above dpkg -S .. for each match returned by type -a or use this dpkg -S .. search instead.
$ dpkg -S xdg-open
xdg-utils: /usr/bin/xdg-open
xdg-utils: /usr/share/man/man1/xdg-open.1.gz
I would do each, one at a time.
Reinstalling xdg-utils
If you'd like to refresh this package's installation do this:
$ sudo apt-get install --reinstall xdg-utils
| xdg-open is installed yet also is not installed |
1,394,071,337,000 |
I am trying to sort my understanding of the different part of graphics on Linux, and I am confused as to the roles played by each of the following concepts.
Display Server
Window Manager
Graphics Driver
My questions:
Are graphics drivers implemented inside the Linux Kernel or outside? If outside the kernel why are they excluded when network, disk, file system are all inside the kernel?
X Windows, Gnome, Ubuntu Unity, KDE, Mir, Wayland who does what in terms of Display Server, Window Manager, and Graphics Driver?
My goal for this question is to understand which projects are contributing what parts of the Linux Graphics experience?
UPDATE http://blog.mecheye.net/2012/06/the-linux-graphics-stack/ has a lot of the details that I was looking for.
|
The term "graphics driver" is used to refer to several different things. One of them is a kernel driver. The kernel driver mostly just sets the video mode and facilitates passing data to/from the card. It also usually downloads the firmware into the GPU on the card. The firmware is a program that the GPU itself runs, but unfortunately, graphics vendors only provide it as a binary blob so you can't look at its source code.
Above that you usually have Xorg running, which has its own driver that translates generic X11 or OpenGL drawing calls into commands the card understands, and sends them down to the card to execute. It also may do some of the work itself depending on what commands the gpu does and does not support. In the case of the OpenGL calls, the Direct Rendering Infrastructure allows this part of the driver to actually execute directly in the client application rather than the X server, in order to get acceptable performance. It also allows the driver in the client application to send its commands directly to the gpu, thanks to coordination with and help from Xorg and the kernel driver at startup.
Wayland and Mir are supposed to replace Xorg as a simplified type of display server.
Unity is both a shell (it provides the desktop/launcher) and a compositing window manager in one.
GNOME and KDE are desktop environments. They are large projects consisting of many components. The core of them are their respective application toolkits, which are GTK for GNOME and Qt for KDE. This is a library framework that an application is written with and provides the foundation on which everything else is built. Some of the basic services they provide are event and object handling, Windows, basic drawing functions, I/O, and much more.
| Display Server vs. Window Manager vs. Graphics Driver? |
1,394,071,337,000 |
I would like to understand cgroups better and would like to understand the use-cases for applying cgroups.
Are cgroups a good way for prioritizing different applications (i.e, giving higher priority to specific types of applications like web servers)?
|
There are several uses for cgroups. From a system administration point of view, probably the most important one is limiting resources -- the classical example here being CPU access. If you create a group for e.g. sshd and give it some non-negligible CPU time share (compared to other groups, or the default under which all unsorted processes fall), you are guaranteed to be able to log in even at times when the machine is running CPU-intensive tasks.
More interestingly, if you give these "remote access" processes a much higher CPU share than the rest, you will be able to log in almost instantly (because the ssh daemon will be prioritised over the rest of the running processes) while you won't hurt the overall computational strength of the machine, since the resources are allocated only on a per-need basis. You usually want to do this together with I/O (including network) prioritisation. However, as John correctly points out in the comment below, one doesn't want to do these things carelessly (since they might backfire in an unexpected way). An important thing to bear in mind is that cgroups are inherited by default -- i.e. one doesn't want to start a memory/CPU hog from such an ssh session. Yet for this there are mechanisms that can assign processes to cgroups as they start.
Another use is isolating the processes from each other -- in combination with other features (namespace isolation) in recent Linux kernels they are used to create OS-level virtualization like the LXC (Linux Containers).
Other than that you can do various accounting and control stuff (freezing some processes groups, assigning them to specific CPU cores etc.).
The two links here, should be a reasonable starting place if you are looking for more information. You may also want to check Documentation/cgroups directory in the Linux kernel source tree.
| controlling priority of applications using cgroups |
1,394,071,337,000 |
I have spent 2 hours reading questions about this matter, and still there is some misunderstanding.
I have this process:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1452 0.4 1.8 1397012 19308 ? Sl 04:23 3:48 ./transaction_server
This shows it uses 19.3MB of system resident memory (I have no swap file), and around 1.8% of the whole 1GB system memory, but the virtual size is 1.39GB?!? I have read that ulimit -m doesn't work. People use ulimit -v to set virtual memory limit for the process. Is this virtual memory is the one VSZ listed with ps? What value I should set if I want to restrict this process to use 100MB system memory at most? I have read documentation for setrlimit and this seems legit:
RLIMIT_AS
This is the maximum size of a process' total available memory,
in bytes. If this limit is exceeded, the malloc() and mmap()
functions shall fail with errno set to [ENOMEM]. In addition,
the automatic stack growth fails with the effects outlined above.
But other versions of the documentation say this RLIMIT_AS parameter sets virtual memory size. What is the truth?
|
Yes, VSZ is virtual memory. As to RLIMIT_AS, where did you find the paragraph quoted above? Since setrlimit(2) is a Linux system call, I do not see how it could possibly monitor malloc(3), a library function. Instead, it can only work with brk(2), sbrk(2), and mmap(2) -- this is also what its manpage (checked on Scientific Linux) suggests. However, the total amount of memory requested via these functions is virtual memory, so RLIMIT_AS indeed limits virtual memory. (This is, again, in accordance with the setrlimit(2) manpage.)
Unfortunately, you cannot limit RSS under Linux (this would be ulimit -m). You can try ulimit -d (RLIMIT_DATA), but this will include mmap(2) only since Linux 4.7, typically used for large allocations. Another possibility would be to limit virtual memory, but with such a large difference between RSS and VSZ, this might be difficult.
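From the shell, the corresponding knob is ulimit -v, which takes KiB and sets RLIMIT_AS for the commands started afterwards. A sketch of capping a command at 100 MB of virtual memory — python3 is used here only as a convenient way to force a large allocation:

```shell
# Apply the limit in a subshell so it affects only the command run there.
# 102400 KiB = 100 MiB of address space; a 150 MB allocation should then fail.
( ulimit -v 102400; python3 -c 'bytearray(150 * 1024 * 1024)' ) \
  || echo 'allocation refused: RLIMIT_AS enforced'
```

Because RLIMIT_AS counts every mapping (shared libraries, stacks, allocator arenas), the cap usually needs to be set noticeably higher than the RSS you actually intend to allow.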
| How to limit application memory usage? |
1,326,719,357,000 |
I have what I believe is a system file, /etc/cron.daily/ntpupdate which runs ntpdate ntp.ubuntu.com daily to sync with the network time. Every day it generates output very similar to this:
/etc/cron.daily/ntpupdate:
16 Jan 06:30:42 ntpdate[21446]:
step time server 91.189.94.4 offset -12.646804 sec
I'm not positive what the 91.189.94.4 means but I'm pretty sure -12.646804 sec means that my server is off by around 12 seconds. But I don't know why it is off by around the same amount every day. This is an Amazon EC2 instance running Ubuntu.
I can only guess that either it is losing / gaining 12 seconds per day, or something else is syncing the time with another clock that is off by 12 seconds and then I am re-syncing it.
What should I do to try and track this down further? I don't see any other cron jobs in the /etc/cron.* directories or in the users' cron jobs...
UPDATE
Just thought I'd share that I started running this hourly to see if there would be a big jump at a certain hour. This is what the hourly output is:
16 Jan 15:17:04 ntpdate[8346]:
adjust time server 91.189.94.4 offset -0.464418 sec
So apparently every hour the clock is off by around half a second, so that makes sense that each day (24 hours) the clock would be off by around 12 seconds. Guess the clock is just running fast! Thanks!
|
There are a number of factors that might make a software clock run slow or fast. Clocks on virtual servers are especially prone to a whole class of these problems. 12 seconds a day is pretty bad until you come across virtual boxes with clocks that run at 180–200% speed! Clocks on laptops that suspend can suffer from time-keeping issues too.
You should consider dropping ntpdate in favour of ntpd. The package name is ntp on Debian (and presumably Ubuntu too). The NTP daemon keeps your time in sync a lot more proactively than a cron job, synchronising with one or more other NTP servers and keeping your clock much more accurate. It's another implementation of the same protocol ntpdate uses, except ntpd monitors the time continuously.
If you don't want the (very small) overhead of ntpd, you might consider running ntpdate once an hour. Assuming you're 0.5s off every hour, that should be sufficient.
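An hourly crontab entry for that could look like the following sketch (the server name is taken from the question; the path and the -s flag, which diverts ntpdate's output to syslog instead of cron mail, are assumptions to adjust):

```
# m h  dom mon dow   command
0 * * * *   /usr/sbin/ntpdate -s ntp.ubuntu.com
```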
| Why is my EC2 server's time off by ~10 seconds every day? |
1,326,719,357,000 |
As I was reading Linux source code, and more specifically the system calls code, I came across sys_reboot implementation: http://lxr.free-electrons.com/source/kernel/reboot.c#L199.
199 SYSCALL_DEFINE4(reboot, int, magic1, int, magic2, unsigned int, cmd,
200 void __user *, arg)
201 {
202 ...
...
286 }
In the middle, there is this specific piece of code:
209
210 /* For safety, we require "magic" arguments. */
211 if (magic1 != LINUX_REBOOT_MAGIC1 ||
212 (magic2 != LINUX_REBOOT_MAGIC2 &&
213 magic2 != LINUX_REBOOT_MAGIC2A &&
214 magic2 != LINUX_REBOOT_MAGIC2B &&
215 magic2 != LINUX_REBOOT_MAGIC2C))
216 return -EINVAL;
I wonder what kind of "safety" it actually provides. I mean, is it to prevent misuse? In that case, since the parameters are public, any library or application could still misuse the system call simply by passing the required values. What did I miss?
|
This question has been answered in this Super User question:
What is the purpose of the magic numbers in Linux reboot?
Basically, a bit flip in an address can cause a program to think it is calling one system call when, in fact, it's calling the reboot() system call. Because reboot() is a very destructive, non-syncing operation that erases the state of the system -- thus erasing the evidence of the bit-flip problem that would otherwise be exposed as a program error or panic -- Linux includes extra protections around its successful use.
Interestingly enough, the second set of magic numbers correspond to the birthdays of Linus and his three daughters:
Magic numbers of the Linux reboot() system call
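You can check the birthday encoding yourself: printed in hexadecimal, each of the four MAGIC2 constants from include/uapi/linux/reboot.h reads as a date in DDMMYYYY form.

```shell
# LINUX_REBOOT_MAGIC2, MAGIC2A, MAGIC2B, MAGIC2C, decimal -> hex:
for magic in 672274793 85072278 369367448 537993216; do
    printf '%d -> 0x%08x\n' "$magic" "$magic"
done
# 672274793 -> 0x28121969: 28 Dec 1969, Linus Torvalds's birthday, and so on.
```

LINUX_REBOOT_MAGIC1 itself is the more whimsical 0xfee1dead.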
| What is the use of "magic arguments" in Linux reboot system call? |
1,326,719,357,000 |
I get an error when I uncompress my tar.
I do this:
tar xvf VM_DECOMPTES.tar
and after some time I get the following error:
tar: short read
What is going wrong here?
tar: unrecognized option `--version'
BusyBox v1.9.1-VMware-visor-klnext-2965 (2010-04-19 12:53:48 PDT) multi-call binary
|
I suspect that your tarfile is corrupted or truncated.
The header of a tarfile contains a size field that gives the length of the file.¹ If the actual file is shorter than the header says it should be, tar will try to read past the filesystem end of file and get back a read shorter than it expected, thus generating the message you see.
¹ This feature dates to when tar was used primarily for Tape ARchiving where you could only know the length of a "file" by reading until you hit an EOF marker on the tape. It was retained for backwards compatibility and also provides a nice (if kinda cheap) check on header and file consistency.
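You can look at that size field directly: in the (us)tar format it is stored as octal ASCII in a 12-byte field at offset 124 of each 512-byte header. A sketch assuming GNU tar and dd:

```shell
# Archive a file of known length, then read the octal size field from
# its header (offset 124; the first 11 bytes are zero-padded octal digits).
printf 'hello' > demo.txt        # exactly 5 bytes
tar cf demo.tar demo.txt
dd if=demo.tar bs=1 skip=124 count=11 2>/dev/null; echo
```

With GNU tar this should print 00000000005, i.e. octal for the 5-byte demo.txt. tar compares this value against what it can actually read from the archive, which is how the short read is detected.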
| tar: short read |
1,326,719,357,000 |
Let's say I want to create an internal network with 4 subnets. There is no central router or switch. I have a "management subnet" available to link the gateways on all four subnets (192.168.0.0/24). The general diagram would look like this:
10.0.1.0/24 <-> 10.0.2.0/24 <-> 10.0.3.0/24 <-> 10.0.4.0/24
In words, I configure a single linux box on each subnet with 2 interfaces, a 10.0.x.1 and 192.168.0.x. These function as the gateway devices for each subnet. There will be multiple hosts for each 10.x/24 subnet. Other hosts will only have 1 interface available as a 10.0.x.x.
I want each host to be able to ping each other host on any other subnet. My question is first: is this possible. And second, if so, I need some help configuring iptables and/or routes. I've been experimenting with this, but can only come up with a solution that allow for pings in one direction (icmp packets are only an example, I'd ultimately like full network capabilities between hosts e.g. ssh, telnet, ftp, etc).
|
Ok, so you have five networks 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24, 10.0.4.0/24 and 192.168.0.0/24, and four boxes routing between them. Let's say the routing boxes have addresses 10.0.1.1/192.168.0.1, 10.0.2.1/192.168.0.2, 10.0.3.1/192.168.0.3, and 10.0.4.1/192.168.0.4.
You will need to add static routes to the other 10.0.x.0/24 networks on each router box, with commands something like this (EDITED!):
# on the 10.0.1.1 box
ip route add 10.0.2.0/24 via 192.168.0.2
ip route add 10.0.3.0/24 via 192.168.0.3
ip route add 10.0.4.0/24 via 192.168.0.4
and the corresponding routes on the other router boxes. On the non-routing boxes with only one interface, set the default route to point to 10.0.x.1. Of course you will also have to add the static addresses and netmasks on all the interfaces.
Also note that Linux does not function as a router by default; you will need to enable packet forwarding with:
echo 1 > /proc/sys/net/ipv4/ip_forward
The ip commands above do not make the settings persistent, how to do that is dependent on the distribution.
As I said, I haven't tested this and may have forgotten something.
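One common way to make the forwarding switch persistent is a sysctl configuration fragment (a sketch; the exact file and mechanism are distribution-dependent, and the file name below is just an example):

```
# /etc/sysctl.conf, or e.g. /etc/sysctl.d/30-ipforward.conf
net.ipv4.ip_forward = 1
```

It can be applied without rebooting with sysctl -p.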
| Routing Between Multiple Subnets |
1,326,719,357,000 |
I'm not sure what is wrong here but when running fdisk -l I don't get an output, and when running
fdisk /dev/sdb # I get this
fdisk: unable to open /dev/sdb: No such file or directory
I'm running Ubuntu 12.10 Server
Can someone please tell me what I'm doing wrong? I want to delete /dev/sdb2-3 and just have one partition for sdb
The only thing I've done differently with the setup of this server is use ext4 instead of ext3, I figured the extra speed of ext4 would help since I am using SSDs now
root@sb8:~# ll /dev/sd*
brw-rw---- 1 root disk 8, 1 Nov 23 14:58 /dev/sda1
brw-rw---- 1 root disk 8, 2 Nov 23 14:55 /dev/sda2
brw-rw---- 1 root disk 8, 17 Nov 23 19:20 /dev/sdb1
brw-rw---- 1 root disk 8, 18 Nov 23 15:45 /dev/sdb2
brw-rw---- 1 root disk 8, 19 Nov 23 14:51 /dev/sdb3
brw-rw---- 1 root disk 8, 33 Nov 23 15:47 /dev/sdc1
brw-rw---- 1 root disk 8, 49 Nov 23 15:48 /dev/sdd1
root@sb8:~# cat /proc/partitions
major minor #blocks name
8 0 117220824 sda
8 1 112096256 sda1
8 2 5119968 sda2
8 16 117220824 sdb
8 17 20971520 sdb1
8 18 95718400 sdb2
8 19 526304 sdb3
8 48 1953514584 sdd
8 49 1863013655 sdd1
8 32 1953514584 sdc
8 33 1863013655 sdc1
root@sb8:~# ll /dev/disk/by-path/
total 8
drwxr-xr-x 2 root root 4096 Nov 23 15:48 ./
drwxr-xr-x 5 root root 4096 Nov 23 15:42 ../
lrwxrwxrwx 1 root root 10 Nov 23 14:58 pci-0000:00:1f.2-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Nov 23 19:20 pci-0000:00:1f.2-scsi-1:0:0:0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Nov 23 15:45 pci-0000:00:1f.2-scsi-1:0:0:0-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Nov 23 15:47 pci-0000:00:1f.2-scsi-2:0:0:0-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Nov 23 15:48 pci-0000:00:1f.2-scsi-3:0:0:0-part1 -> ../../sdd1
root@sb8:~# df -T /dev
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/root ext4 111986032 1993108 104388112 2% /
|
On most non-embedded Linux installations, and many embedded installations, /dev is on a RAM-backed filesystem, not on the root partition. Most current installations have /dev as a tmpfs filesystem, with the udev daemon creating entries when notified by the kernel that some hardware is available. Recent kernels offer the possibility of having /dev mounted as the devtmpfs filesystem, which is directly populated by the kernel.
I think Ubuntu 12.10 still uses udev. Either way, /dev should not be on the root partition, yet your df /dev output shows that it is: it should be on its own filesystem. Did you accidentally unmount /dev?
The first thing you should try is to reboot: this should mount /dev properly. Before that, check that you haven't added an entry for /dev in /etc/fstab (there should be no line with /dev in the second column).
Even with /dev on the root partition, you can create /dev/sdb by running
cd /dev
sudo MAKEDEV sdb
But not having /dev managed dynamically isn't a stable configuration, you'll run into similar problems for a lot of other hardware.
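If MAKEDEV isn't available, the same node can be created by hand with mknod, using the major/minor numbers shown in your /proc/partitions output (8 16 for sdb). A sketch, to be run as root:

```shell
cd /dev
mknod sdb b 8 16     # block device, major 8, minor 16 (from /proc/partitions)
chown root:disk sdb
chmod 660 sdb        # match the brw-rw---- mode of the other sd* nodes
```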
| /dev/sdb: No such file or directory (but /dev/sdb1 etc. exists) |
1,326,719,357,000 |
Intro: I like learning by reading sources. But it's tiring to search for them across the internet, split over many, many different project sites. I'd love to see a central browsable repo with the sources of many, many apps in one place.
When someone want to find documentation of some Linux tool, best
place is : man toolname.
When I want to browse Linux sources "on-demand" I can always jump to
: Linux Cross Reference.
When I want to find most common staff, I can find all sources in
Coreutils.
When I want to check how to build something, I can (for example) jump
into http://www.archlinux.org/packages/ and check its PKGBUILD.
Is there any repo that holds sources of most of tools in one place ?
- just like man holds documentation or Linux Cross Reference kernel sources.
I mean something for "rapid", "on-demand" checking of how stuff is implemented. (Yes, I know about Google, but I am tired of the routine: 1. search for the project site 2. browse the repo, or even worse, check out the repo 3. delete it when finished)
REMARK:
I've stressed that I'd like to check tools rapidly, fast, on demand.
It means: I don't want to install a whole app with its sources just to take a look into them. (btw. a web resource is preferable, so I could check sources from many computers; I do not have admin rights on all of them)
|
Let me respond to your question with an alternative answer. I guess you want to read the code of the traditional Unix command line tools, not only the GNU versions of them. Reading the code of similar tools from different projects is a good practice for learning different ideas and implementations.
GNU has a nice web interface for the repo of coreutils: http://git.savannah.gnu.org/cgit/coreutils.git
The BSD family has similar web interfaces for the repos:
OpenBSD: http://www.openbsd.org/cgi-bin/cvsweb/src/
DragonFly BSD: http://gitweb.dragonflybsd.org/dragonfly.git/tree
NetBSD: http://cvsweb.netbsd.org/bsdweb.cgi/src/
FreeBSD: http://svnweb.freebsd.org/base/head/
The BSD codebase is interesting because it usually uses less code for the same tools, i.e.: it only supports traditional options, no extra options, sh is a real shell and not just a link to another big shell (bash), etc. Tools similar to coreutils are within bin, sbin, usr.bin and usr.sbin.
You can also browse the same web interfaces if you want to read the code used to build third-party software (similar to Arch's PKGBUILD). NetBSD and DragonFly use pkgsrc from the NetBSD repo. OpenBSD and FreeBSD have these frameworks within their respective repos.
Other repos interesting for your purpose are:
Illumos: http://src.illumos.org/source/xref/
Minix: http://git.minix3.org/?p=minix.git;a=tree
| Where is best place to find sources of standard linux command line tools? [closed] |
1,326,719,357,000 |
In the GNU OS a process can only write data to a pipe if another process reads the same data (from the same pipe) at the same time.
Is there something like a pipe which lets the 1st process write and buffers the data until the 2nd one reads it?
|
This question is rather old now - but the buffer command provides the ability to, well, buffer data in a pipe.
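The buffer command may not be installed everywhere, but the blocking behaviour it works around is easy to demonstrate with a named pipe: a writer blocks until a reader opens the other end. A minimal sketch:

```shell
# A writer on a FIFO blocks until a reader shows up.
fifo=/tmp/fifo_demo.$$
mkfifo "$fifo"
( echo "hello" > "$fifo" ) &     # writer starts, then blocks in open()
sleep 0.2                        # still blocked: nobody is reading yet
read line < "$fifo"              # opening for reading releases the writer
echo "$line"
rm "$fifo"
```

With buffer (or, on systems that have it, pv with a buffer size) inserted between the two processes, the writer instead fills the tool's internal buffer and can continue before the consumer reads anything.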
| Buffering (named) pipe in GNU OS |
1,326,719,357,000 |
I've been trying to understand the difference in use cases for Zswap, Zram, and Zcache. Apologies in advance for the long/slightly sloppily worded question.
I've done a bunch of googling, and I understand that zram is basically a block device for compressed swap, while zswap compresses in kernel using the frontswap api.
It appears that one advantage of zswap is that it can move some pages to a backing swap when under pressure in a LRU manner, while zram can't do that (please confirm, not sure if this is true).
So here's my question:
1.) As a desktop user, what is the performance difference between zcache/zswap/zram, especially zswap and zram? For example, is one much better/worse at memory fragmentation (the kind that leads to excessive memory usage and waste)?
Bonus question:
2.) Is there a likely ideal combination of the above (say, zram+zswap, or zram+zcache) for desktop performance (including responsiveness of desktop, plus minimally disruptive swap behavior and sane memory management)?
*Citation of sources is greatly appreciated.
I should add that I'm a decently experienced Linux user (5 years), and have tried to really understand how my system including the kernel works. However, I'm not a programmer, and only have very basic programming knowledge (3 credits college course). But be technical if you need to; I'll parse your meaning on my own time.
System specs:
Linux Mint 15
Processor:Core 2 Quad 6600 (2.4ghz)
Ram: 8G
linux kernel: liquorix 3.11 series
Storage: 128 GB SSD, 1TB HDD 5400rpm
No "buy more ram" comments, please! I've maxed the ram on this motherboard, and have a $0 upgrade budget for the foreseeable future. However I like to keep open memory intensive programs (multiple browsers being the main consumers of my ram) so I don't mind swapping within reasonable performance degradation limits.
|
The best way I can attempt to answer those questions is to say what those three actually are.
zRAM
zRAM is nothing more than a swap device in essence. The memory management will push pages out to the swap device and zRAM will compress that data, allocating memory as needed.
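To make that concrete, setting up a zRAM swap device by hand looks roughly like this (run as root; on older kernels disksize may need to be given in plain bytes, so treat this as a sketch):

```shell
modprobe zram                          # some kernels: modprobe zram num_devices=1
echo 512M > /sys/block/zram0/disksize  # size of the uncompressed view
mkswap /dev/zram0
swapon -p 10 /dev/zram0                # higher priority than any disk swap
```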
Zswap
Zswap is a compressed swap space that is allocated internally by the kernel and does not appear as a swap device. It is used by frontswap in the same way a swap device may be used, but in a more efficient manner.
Zcache
Zcache is a compressed in-kernel backend for frontswap and cleancache.
Zcache supersedes zRAM so you don't really want both of them fighting over resources, although there is some talk about how the two can work well together given the right circumstances. For now I wouldn't bother trying and leave it up to the experts to figure that one out.
Some reading:
Cleancache vs zram?
https://lwn.net/Articles/454795/
https://www.kernel.org/doc/Documentation/vm/zswap.txt
http://www.zeropoint.com/Support/ZCache/ZCachePro/ZCacheAdvantages.html
Personally, I have just disabled zRAM and enabled Zcache on all my systems that have a new enough kernel (zRAM is still enabled on the Android devices).
As for performance: that's something you'd have to look into yourself. Everybody is different. In theory, though, Zcache should be much more memory efficient than zRAM and it works on two levels (frontswap and cleancache), and it can page out to a swap device as needed (on the hard drive, for example). You can also choose which compression algorithm to use, should it be using too much CPU (which I can't imagine it will).
Update: Zcache has been removed from the 3.11 kernel (for now), so zRAM has again become the only option in newer kernels.
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1256503/comments/3
http://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?id=96256460487387d28b8398033928e06eb9e428f7
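For reference, on kernels where zswap is available it is toggled from the kernel command line or at runtime via module parameters (paths per the kernel's zswap.txt; writing the values needs root):

```shell
# At boot: add to the kernel command line, e.g.
#   zswap.enabled=1 zswap.compressor=lzo
# Or at runtime:
echo 1   > /sys/module/zswap/parameters/enabled
echo lzo > /sys/module/zswap/parameters/compressor
grep -r . /sys/module/zswap/parameters/   # inspect current settings
```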
| Zswap, Zram, Zcache desktop usage scenarios |
1,326,719,357,000 |
Is there a Linux equivalent of the note-taking software Notational Velocity?
|
NVpy is "a cross-platform simplenote-syncing note-taking app inspired by Notational Velocity."
Looking at screenshots NVpy looks like a closer match than TomBoy.
Here's a little write-up of NVpy on Lifehacker.
Though this guy didn't like it and tweaked gvim a bit instead.
Looking more, I found NVVim which "is a clone of the mac app Notational Velocity in vim. It is designed for fast plain-text note-taking and retrieval."
| Note-taking software like "Notational Velocity" for Linux |
1,326,719,357,000 |
I am trying to add space to my root partition and am really not sure the safest way to go about it. I have read this thread, Can I resize the root partition without uninstalling and reinstalling Linux (or losing data)? but I don't think the information lines up with my system.
Any help would be thoroughly appreciated. Also, in that post, F1234K asked for recommendations on good reading to learn this stuff, but no one replied. I would also be very interested in some learning material on the subject.
Thanks!
edit: I should add that my goal is to take 20 GB from sdb6 and put it in sdb1. sdb6 is a logical partition inside the extended partition sdb2.
sudo fdisk -l
Disk /dev/sdb: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000babf
Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 19531775 9764864 83 Linux
/dev/sdb2 19533822 1465147391 722806785 5 Extended
/dev/sdb5 19533824 76765183 28615680 82 Linux swap / Solaris
/dev/sdb6 76767232 1465147391 694190080 83 Linux
Disk /dev/sda: 1000.2 GB, 1000200658432 bytes
255 heads, 63 sectors/track, 121600 cylinders, total 1953516911 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000a9997
Device Boot Start End Blocks Id System
/dev/sda1 2048 1953515519 976756736 83 Linux
Disk /dev/sdc: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x927a1713
Device Boot Start End Blocks Id System
/dev/sdc1 * 2048 206847 102400 7 HPFS/NTFS/exFAT
/dev/sdc2 206848 976771071 488282112 7 HPFS/NTFS/exFAT
WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdd: 999.5 GB, 999501594624 bytes
256 heads, 63 sectors/track, 121041 cylinders, total 1952151552 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x809da6bc
Device Boot Start End Blocks Id System
/dev/sdd1 1 4294967295 2147483647+ ee GPT
df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 9.2G 8.8G 0 100% /
udev 10M 0 10M 0% /dev
tmpfs 2.4G 896K 2.4G 1% /run
/dev/disk/by-uuid/d7968f08-4108-4382-a585-0b4a3850ec63 9.2G 8.8G 0 100% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 11G 692K 11G 1% /run/shm
/dev/sdb6 652G 581G 39G 94% /home
/dev/sr0 4.4G 4.4G 0 100% /media/cdrom0
/dev/sr1 354M 354M 0 100% /media/cdrom1
/dev/sdd2 931G 759G 173G 82% /media/zacharydimaria
mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=3091362,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=2474360k,mode=755)
/dev/disk/by-uuid/d7968f08-4108-4382-a585-0b4a3850ec63 on / type ext4 (rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=10671840k)
/dev/sdb6 on /home type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
/dev/sr0 on /media/cdrom0 type udf (ro,nosuid,nodev,noexec,relatime,utf8,user=zachary)
/dev/sr1 on /media/cdrom1 type udf (ro,nosuid,nodev,noexec,relatime,utf8,user=zachary)
/dev/sdd2 on /media/zacharydimaria type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
pvdisplay does not return anything.
edit: cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sdb1 during installation
UUID=d7968f08-4108-4382-a585-0b4a3850ec63 / ext4 errors=remount-ro 0 1
# /home was on /dev/sdb6 during installation
UUID=ec5e593e-a36f-4b88-b210-8666128b4bf1 /home ext4 defaults 0 2
# swap was on /dev/sdb5 during installation
UUID=4525d76b-dd14-48e4-b60d-dffee5b08245 none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
/dev/sr1 /media/cdrom1 udf,iso9660 user,noauto 0 0
|
If done carefully, you can use gparted to resize your partitions safely.
You should boot from a live image, since you can't resize mounted partitions, and make sure you have a valid backup of your data!!
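A hedged outline of what that gparted session might involve for the layout shown in the question (one possible sequence only; verify each step against your own partition table before applying, because the swap partition sdb5 sits between sdb1 and sdb6):

```shell
# From the live image, with nothing on /dev/sdb mounted:
# 1. In gparted, shrink /dev/sdb6 (/home) by ~20 GB from its *start*.
# 2. Move /dev/sdb5 (swap) right, into the freed space.
# 3. Shrink the extended partition /dev/sdb2 from its start.
# 4. Grow /dev/sdb1 (/) into the now-unallocated space.
# gparted resizes the ext4 filesystems together with the partitions,
# so no separate resize2fs step should be needed.
```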
| How can I resize my root partition in Debian? |
1,326,719,357,000 |
Imagine two processes, a reader and a writer, communicating via a regular file on an ext3 fs. Reader has an inotify IN_MODIFY watch on the file. Writer writes 1000 bytes to the file, in a single write() call. Reader gets the inotify event, and calls fstat on the file. What does Reader see?
Is there any guarantee that Reader will get back at least 1000 for st_size on the file? From my experiments, it seems not.
Is there any guarantee that Reader can actually read() 1000 bytes?
This is happening on a seriously I/O bound box. For example, sar shows an await time of about 1 second. In my case the Reader is actually waiting 10 seconds AFTER getting the inotify event before calling stat, and getting too-small results.
What I had hoped was that the inotify event would not be delivered until the file was ready. What I suspect is actually happening is that the inotify event fires DURING the write() call in the Writer, and the data is actually available to other processes on the system whenever it happens to be ready. In this case, 10s is not enough time.
I guess I am just looking for confirmation that the kernel actually implements inotify the way I am guessing. Also, if there are any options, possibly, to alter this behavior?
Finally- what is the point of inotify, given this behavior? You're reduced to polling the file/directory anyway, after you get the event, until the data is actually available. Might as well be doing that all along, and forget about inotify.
*** EDIT ****
Okay, as often happens, the behavior I am seeing actually makes sense, now that I understand what I am really doing. ^_^
I am actually responding to an IN_CREATE event on the directory the file lives in. So I am actually stat()'ing the file in response to the creation of the file, not necessarily the IN_MODIFY event, which may be arriving later.
I am going to change my code so that, once I get the IN_CREATE event, I will subscribe to IN_MODIFY on the file itself, and I won't actually attempt to read the file until I get the IN_MODIFY event. I realize that there is a small window there in which I may miss a write to the file, but this is acceptable for my application, because in the worst case, the file will be closed after a maximum number of seconds.
|
From what I see in the kernel source, inotify only fires after a write has completed (i.e. your guess is wrong). After the notification is triggered, only two more things happen in sys_write, the function that implements the write syscall: setting some scheduler parameters, and updating the position on the file descriptor. This code has been similar as far back as 2.6.14. By the time the notification fires, the file already has its new size.
Check for things that may go wrong:
Maybe the reader is getting old notifications, from the previous write.
If the reader calls stat and then calls read or vice versa, something might happen in between. If you keep appending to the file, calling stat first guarantees that you'll be able to read that far, but it's possible that more data has been written by the time the reader calls read, even if it hasn't yet received the inotify notification.
Just because the writer calls write doesn't mean that the kernel will write the requested number of characters. There are very few circumstances where atomic writes are guaranteed up to any size. Each write call is guaranteed atomic, however: at some point the data isn't written yet, and then suddenly n bytes have been written, where n is the return value of the write call. If you observe a partially-written file, it means that write returned less than its size argument.
Useful tools to investigate what's going on include:
strace -tt
the auditd subsystem
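The two-stage approach described in the question's edit can be sketched with inotifywait from the inotify-tools package, if installed (process_file is a placeholder for your own handler):

```shell
inotifywait -m -e create --format '%w%f' /watched/dir |
while read -r path; do
    # wait for the first write before touching the file
    inotifywait -e modify "$path"
    process_file "$path"
done
```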
| Does inotify fire a notification when a write is started or when it is completed? |
1,326,719,357,000 |
When compiling a 3.3 kernel, I noticed that a new driver called teaming was added to the networking system. According to the relevant commit teaming is a userspace-driven alternative to bonding.
Has anyone been testing this out? Is it faster or better than the old tried-and-true bonding driver? What would be the advantages of changing?
|
It looks like the advantages of changing right now are "none at all" inasmuch as the project has only just been added to the kernel, has very little documentation, and is self-described as being "still in its dipers[sic] atm".
In the long run, a userspace networking bonding driver could have some of the same benefits that FUSE (the userspace filesystem interface) brings to the world of filesystems -- primarily that it's much easier to develop and experiment with different policies, protocol implementations, and so forth. By simplifying the in-kernel code and pushing the complexity into userspace, you can also end up with a solution that is more robust in the event of failures and that allows for more agile responses to bugs and feature requests and so forth.
This presentation (warning:PDF) describes the motivation and goals of the project. Primarily, they're looking to replace the legacy bonding code which is bloated and complicated with something that is smaller, easier to maintain, and more performant.
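For the curious, the userspace daemon (teamd) is driven by a JSON config; a minimal round-robin example adapted from the project's samples (interface names are placeholders):

```json
{
    "device": "team0",
    "runner": { "name": "roundrobin" },
    "ports": { "eth0": {}, "eth1": {} }
}
```

Started with something like `teamd -f team0.conf -d`.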
| What are the benefits of the new teaming driver? |
1,326,719,357,000 |
I saw a kernel option today in menuconfig that used braces for its checkbox.
{*} Button
This isn't listed in the legend at the top of the screen.
[*] built-in [ ] excluded <M> module < > module capable
What do the braces signify?
|
It represents an option that has been forced to a value by another option; you can typically still toggle it between built-in {*} and module {M}, but not disable it.
The Gentoo wiki has a clear explanation and lists all the checkbox types that menuconfig can display; the hyphen, for example, is also listed there.
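A hypothetical Kconfig fragment showing how this arises (BAR would be displayed in braces because FOO selects it, so it can be switched between m and y but never turned off):

```
config FOO
        tristate "Foo driver"
        select BAR

config BAR
        tristate "Bar support"
```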
| What do the kernel options in braces mean? |
1,326,719,357,000 |
I would like to have a log file that contains an entry for every time a user runs any suid program, containing the user name, the program and any command line arguments passed to it. Is there a standard way to achieve this on Linux?
|
You can log all invocations of a specific executable (setuid or not) through the audit subsystem. The documentation is rather sparse; start with the auditctl man page, or perhaps this tutorial. Most recent distributions ship an auditd package. Install it and make sure the auditd daemon is running, then do
auditctl -A exit,always -F path=/path/to/executable -S execve
and watch the calls get logged in /var/log/audit/audit.log (or wherever your distribution has set this up).
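The resulting records are easiest to read back with ausearch (run as root):

```shell
ausearch -f /path/to/executable -i   # -i interprets uids and syscall numbers
```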
| Log every invocation of every SUID program? |
1,326,719,357,000 |
I'm having CentOS 7 64 installed on my desktop. After recent system update, I am getting below error while booting the CentOS 7.
Sometimes the system is able to boot and I can work on it, but it gives the same error at the next boot.
after entering this:
systemctl status kdump.service
I get this:
● kdump.service - Crash recovery kernel arming
Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled)
Active: failed (Result: exit-code) since Thu 2015-01-22 02:55:49 MST; 39min ago
Main PID: 1139 (code=exited, status=1/FAILURE)
Jan 22 02:55:49 localhost.localdomain kdumpctl[1139]: No memory reserved for crash kernel.
Jan 22 02:55:49 localhost.localdomain kdumpctl[1139]: Starting kdump: [FAILED]
Jan 22 02:55:49 localhost.localdomain systemd[1]: kdump.service: main process exited, code=exited, status=1/FAILURE
Jan 22 02:55:49 localhost.localdomain systemd[1]: Failed to start Crash recovery kernel arming.
Jan 22 02:55:49 localhost.localdomain systemd[1]: Unit kdump.service entered failed state.
Jan 22 02:55:49 localhost.localdomain systemd[1]: kdump.service failed.
system-config-kdump:
command not found...
Adding image
|
Install the required packages
yum --enablerepo=debug install kexec-tools crash kernel-debug kernel-debuginfo-`uname -r`
Modify grub
A kernel argument must be added to enable kdump. It's called crashkernel and it can be either auto or a fixed value, e.g. 128M, 256M, 512M etc. On CentOS 7 it goes in the GRUB_CMDLINE_LINUX line of /etc/default/grub.
The line will look similar to the following:
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/swap crashkernel=auto rd.lvm.lv=rhel/root rhgb quiet"
Change the value crashkernel=auto to crashkernel=128M, crashkernel=256M, ...
Regenerate grub configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg
On a system with UEFI firmware, execute the following instead:
grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
On IBM System z (s390x), open the /etc/zipl.conf configuration file instead,
locate the parameters= section, and edit the crashkernel= parameter (or add it if not present). For example, to reserve 128 MB of memory, use crashkernel=128M, then save and exit.
Regenerate the zipl configuration by running: zipl
Enabling the Service
To start the kdump daemon at boot time, type the following command as root:
chkconfig kdump on
This will enable the service for runlevels 2, 3, 4, and 5. (On CentOS 7, chkconfig and service are forwarded to systemd, so this is equivalent to systemctl enable kdump.service.)
Similarly, typing chkconfig kdump off will disable it for all runlevels.
To start the service in the current session, use the following command as root:
service kdump start
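After rebooting, you can check that the memory reservation took effect before trusting kdump (a quick sanity check; values will differ per system):

```shell
grep crashkernel /proc/cmdline        # the parameter reached the kernel
cat /sys/kernel/kexec_crash_size      # bytes actually reserved (non-zero)
systemctl status kdump.service        # should now be active (exited)
```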
| Kdump.service FAILED centOS 7 |
1,326,719,357,000 |
fdisk -l output:
.
.
Disk label type: dos
Disk identifier: 0x0006a8bd
.
.
What are Disk label type and Disk identifier?
Also, apart from the manuals, where else can I find more information about disk management / partitioning etc..?
|
The disk label type is the type of partition table; dos means a traditional Master Boot Record. See http://en.wikipedia.org/wiki/Master_boot_record. The disk identifier is a randomly generated 32-bit number stored in the MBR.
In terms of tools for looking at disks, fdisk is on its way to being deprecated, if it isn't already. parted is the replacement for fdisk, and gparted provides a graphical interface on top of parted (although certainly other tools exist as well).
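The identifier is just four bytes at offset 440 of the MBR, right before the partition table. You can see this without touching a real disk by building a throwaway image file; a sketch assuming a little-endian host, reproducing the 0x0006a8bd value from the question:

```shell
img=/tmp/fake_mbr.$$
dd if=/dev/zero of="$img" bs=440 count=1 2>/dev/null   # pad up to offset 440
printf '\275\250\006' >> "$img"                        # bytes bd a8 06
dd if=/dev/zero bs=1 count=1 >> "$img" 2>/dev/null     # final 00 byte
id=$(od -An -tx4 -j440 -N4 "$img" | tr -d ' ')         # read it back as hex
echo "$id"
rm "$img"
```

On a real disk the same four bytes can be read (as root) with something like `dd if=/dev/sda bs=1 skip=440 count=4 | od -An -tx4`.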
| "fdisk -l" output: what are Disk label type" and "Disk identifier" |
1,326,719,357,000 |
Well, I've been around computers since the late 80's (I was about 3, actually). Went the whole mile: Atari XL-XE, MS-DOS, Windows 3.1, 95, etc. Then I started using Linux because of the looks (yes, I know, Compiz Fusion was the real reason to explore Linux) and now it is installed on all my machines. I even have it in Windows 10.
I've assembled machines from scratch before and you could always boot to "MS-DOS", that is what I remember and that made me wonder.
How was Unix installed back in the 80's or late 70's (I wasn't even alive)? Was it trivial, like booting a big floppy, or was black magic involved?
It happens that I can't find any references to it and people in my country just don't get Free Software thing.
EDIT:
I've skipped a crucial part, I feel dumb because I didn't think about that in the first place.
Everything starts with the boot sequence, which isn't an operating system but lives in ROM, like the BIOS (in my mind a very minimalistic OS for machine configuration). At this stage it will look for the devices listed in the BIOS and iterate over them in order, until one device responds with boot instructions, like the ones on the tapes. So no initial OS is necessary, and Unix can be installed.
Dumb mistakes take you the long way, but surely you learn more.
|
My experience with installing Unix in the 80's was on a PDP-11, and the installation process is actually pretty interesting. I actually did it tonight (on an emulator), for the first time in years...
Unix V7 for the PDP-11 was distributed on tape. The tape contained several files, one after the other.
The first file on the tape was a boot loader. It came in two parts. The first part was the boot block, and it knew just enough to read the second part of the bootloader from tape into memory, and then transfer control to it. The code for this was less than 512 bytes. The second part was bigger, it had stripped-down "standalone drivers" for a couple of different types of disk and tape, and it knew just enough about the Unix filesystem to be able to find files either on tape, or in the root directory of a filesystem on a hard drive, load them, and run them. The complete size of the boot loader (the total size of both parts) was about 8K bytes.
The second file on the tape was a standalone cat program. When I say "standalone", I mean it ran directly on the bare metal (without any operating system at all); it was written with the same standalone device drivers and filesystem drivers as the boot loader. You could load and run this using the boot loader. When it started up, you tell it what device you want to read a file from, and what file to read. It reads it, prints it out, and then exits. That's all it does. This was of limited usefulness.
The third file on the tape was just a text file that had a listing of what files were on the tape. Almost no one ever even looked at this. If you were using one of these distribution tapes, you pretty much already knew what was on it...
The fourth file on the tape was a standalone mkfs program. This was built with the same library of standalone device drivers and filesystem drivers as the other standalone programs, and it too ran on the bare metal, without an operating system. You could load and run this using the boot loader, it would ask you what disk (and partition) you wanted to make a filesystem on, and how big the filesystem was supposed to be, and then it would write out the initial filesystem structure on the device and partition you told it to. Then it would exit.
The fifth file on the tape was a standalone restor program (yes, much like the creat() system call, restor was spelled without an 'e'...). You could load and run this using the boot loader. Again, it ran on the bare metal, no operating system. It would ask for a tape file containing a filesystem dump, and a disk partition on which to restore it. And then, it would do that. Then it would exit.
The sixth file on the tape was just a filesystem dump of the root filesystem.
The seventh file on the tape was just a filesystem dump of the /usr filesystem.
And that's it - that's what you get.
So, if you had this tape, you had to get the process started somehow. Some PDP-11's had boot ROMs that knew how to load the first block off of a device (like a tape or disk) and jump it it. (And for this tape, the first block is less than 512 bytes of executable code, that knows how to load the rest of the boot loader.) The first PDP-11 that I used, however, did not have a bootstrap ROM. Every time we booted the machine, we had to enter in the boot code to load the first block off of a device and jump to it. By hand. In binary... Fortunately, it was pretty short (for example, the code to read the first block off of a TU16 or TE16 tape drive and jump to it was only 6 words, or 12 bytes), and we had the boot code written down on a piece of paper taped to the machine. Needless to say, we did our best to avoid needing to reboot the machine at all costs...
So, given all that ... the general process to install the system was:
Use the boot ROM (or key in the boot code by hand...) to load the so-called "block-zero boot loader" into memory, and that is then used to load the rest of the boot loader.
Use the boot loader to load the standalone mkfs program (the fourth file on the tape), to lay down the structure of the root filesystem on a hard disk partition.
Use the boot loader to load the standalone restor program (the fifth file on the tape), to restore the filesystem dump of the root filesystem (the sixth file on the tape) on to your hard disk.
Use the boot loader to load the Unix kernel out of a file in the root filesystem on the hard drive (that you just restored from tape), and transfer control to it. At this point, Unix is now running.
Use the normal Unix mkfs and restor commands to create the /usr filesystem on another partition of the hard disk, and restore the filesystem dump of the /usr filesystem to the partition you just prepared.
And then, you're pretty much done, except for installing the boot code in the first disk block on the hard disk (so either your boot ROM, or your hand-entered boot code, can run it whenever you reboot your system), a few items of system tuning, and setting some things up the way you want them to be.
Procedures like this were how many Unix distributions were installed, for a long time, in the 1970's and 1980's. Berkeley Unix (4.2BSD and later) provided a distribution tape with a very similar structure, and a very similar installation procedure.
If you want to see Charles Haley's and Dennis Ritchie's own instructions for installing V7 Unix on a PDP-11, you can find them here. I just followed these instructions tonight, and they work fine. ;-)
| How was Unix installed in the 70's-80's? |
1,326,719,357,000 |
My desktop is usually very responsive, even under heavy load. But when I copy files to a USB drive, it always locks up after some time. By "lock up", I mean:
Moving focus from one window to another can take 10-20s
Switching desktops can take 10-20s
Videos don't update anymore (in YouTube, the audio continues to play, only the video freezes)
The system load isn't exceptionally high when this happens. Sometimes, I see a lot of white on xosview indicating that the kernel is busy somewhere.
At first glance, it looks as if copying files to the USB drive would interfere somehow with compiz but I can't imagine what the connection could be.
Here is the output of htop:
Here is the output of iostat -c -z -t -x -d 1 during a 2 minute hang:
19.07.2012 20:38:22
avg-cpu: %user %nice %system %iowait %steal %idle
1,27 0,00 0,38 37,52 0,00 60,84
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdg 0,00 2,00 0,00 216,00 0,00 109248,00 1011,56 247,75 677,69 0,00 677,69 4,63 100,00
As you can see, only the external harddisk is active. Here is the complete log: http://pastebin.com/YNWTAkh4
The hang started at 20:38:01 and ended at 20:40:19.
Software information:
openSUSE 12.1
KDE 4.7.x
Filesystems: reiserfs and btrfs on my internal harddisk, btrfs on the USB drive
|
My first guess was btrfs since the I/O processes of this file system sometimes take over. But it wouldn't explain why X locks up.
Looking at the interrupts, I see this:
# cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7
0: 179 0 0 0 0 0 0 0 IR-IO-APIC-edge timer
1: 6 0 0 0 0 0 0 0 IR-IO-APIC-edge i8042
8: 1 0 0 0 0 0 0 0 IR-IO-APIC-edge rtc0
9: 0 0 0 0 0 0 0 0 IR-IO-APIC-fasteoi acpi
12: 10 0 0 0 0 0 0 0 IR-IO-APIC-edge i8042
16: 3306384 0 0 0 0 0 0 0 IR-IO-APIC-fasteoi ehci_hcd:usb1, nvidia, mei, eth1
Well, duh. The USB driver uses the same IRQ as the graphics card and it is first in the chain. If it locks up (because the file system does something expensive), the graphics card starves (and the network, too).
| Why does my desktop lock up when I copy lots of files to a USB drive? |
1,326,719,357,000 |
I have been using Linux on my Acer 5740 for a couple of years now. Lately, I noticed that my computer starts heating up and steadies at around 70 degrees. If I fire up Eclipse or ffmpeg or something, the computer shoots to 85-90 degrees. Maybe this has happened before but I might have ignored it.
I have a dual-boot with Windows 7 and 70 degrees is the maximum even when I play games.
I expect Linux to heat up a little because of drivers but 70 degrees @ idling is a little too much.
My prior research on this shows:
A friend of mine with the exact same laptop but with an ATI card instead of the Intel one (present on mine) was struggling with heat problems of much greater intensity. He installed fglrx and his laptop is as cool as Siberia.
I have attempted to install Intel drivers for my card. I have the latest version of Xorg and xorg for Intel. It doesn't help.
The problem is independent of Distribution. I have tried Ubuntu, Debian, Fedora and FreeBSD.
The graph for temperature versus time after boot-up is fairly steady. There are no sudden jumps.
All temperatures are in Celsius and correspond to max(acpi -t)
Any solutions?
Edit: My CPU if scaled at 933MHz still doesn't help. I can't find Fan Control for my Laptop. There are few scripts for Acer Aspire One but I can't find one for 5740.
My /proc/acpi/fan folder is empty!
blah@blah-Laptop:/proc/acpi/fan$ ls -l
total 0
|
I'm running Arch Linux, and this is what I do to reduce heat emissions.
I use laptop-mode-tools to control CPU frequency scaling and spinning down of the hard disk. The hard disk can heat up quite a bit if you keep it running continuously. But take note: spinning the hard disk down too often will wear it out. Desktop hard drives are usually rated for only 40,000-50,000 spinups. Laptop hard drives are usually rated for around 300,000 spinups. Link.
I installed acpi_call. Visit here or follow this post for instructions to disable/activate your discrete card. For me, I disabled the discrete card and only make use of the integrated card.
If you are using the i915 driver for your Intel card, this will work. Check the output of lspci -mvknn | grep -B8 i915. If it returns non-empty, then you may add i915.i915_enable_rc6=1 i915.i915_enable_fbc=1 i915.lvds_downclock=1 to your boot parameters.
pcie_aspm=force can also be added if all PCIe hardware on the system supports Active State Power Management.
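For reference, boot parameters like these typically go into the bootloader configuration. A hypothetical GRUB excerpt follows (the exact file location and the i915 parameter names vary between distros and kernel versions; these names were valid around the kernel era discussed here and have since been renamed or removed):

```sh
# /etc/default/grub (hypothetical excerpt); after editing, regenerate the
# boot configuration, e.g. with: grub-mkconfig -o /boot/grub/grub.cfg
GRUB_CMDLINE_LINUX_DEFAULT="quiet i915.i915_enable_rc6=1 i915.i915_enable_fbc=1 i915.lvds_downclock=1 pcie_aspm=force"
```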
I use these steps mainly to reduce power consumption, but I also noticed a drop in temperature of more than 10°C as a side effect. I guess with less power used, less heat is emitted.
| Why does Linux heat up my computer? |
1,326,719,357,000 |
I have a Mini2440 ARM Board, and I have put a base Debian 6.0 system on it using multistrap.
I have used tmux to run several processes in different windows from /etc/rc.local. I connect to the board using its serial port and an inittab entry to run getty on that port. I use picocom as serial communicator.
When root logs in, ~/.bashrc attaches him to the already running tmux server, and processes can be easily monitored. The actual command is exec tmux attach-session -t "main". tmux runs with the default config.
Everything works, except one of the processes (a shell script around pppd) does not receive Ctrl-C from the terminal, while other processes do. Also Ctrl-\ works. Also kill -INT <pppd_pid> works, but kill -INT <shellscript_pid> does not.
I really need Ctrl-C to work. What is wrong with this setup?
Edit:
here is the output of stty -a in the shell script, right before pppd:
speed 38400 baud; rows 23; columns 80; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>;
eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts
-ignbrk brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff
-iuclc -ixany imaxbel -iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
echoctl echoke
Since it's just the pppd process that has this issue, I think it has something to do with it or its configuration, but when I run pppd outside of tmux, Ctrl-C works. pppd runs with the nodetach option, so it stays in the terminal foreground.
I also tested it on my dev machine (Debian 6.0 on amd64) with the same results.
|
Turned out it was a bug in the particular version of pppd that was being used in the distro. I checked, and earlier and later versions of pppd do not have this problem. The problem is also not specific to this architecture, platform, or tmux: if pppd is run inside a shell script, it does not handle Ctrl-C, while outside a shell script it has no problem.
| Ctrl-C does not work with pppd non-detached session |
1,326,719,357,000 |
I have a Linux gateway performing NAT for my home network. I have another network which I'd like to transparently forward packets to, but only to/from specific IP/ports (ie. not a VPN). Here's some example IP and ports to work with:
Source Router Remote Gateway Remote Target
192.168.1.10 -> 192.168.1.1 -> 1.2.3.4 -> 192.168.50.50:5000
I'd like the Source machine to be able to talk to specific ports on Remote Target as if it were directly routable from Router. On the Router, eth0 is the private network and eth1 is internet-facing. Remote Gateway is another Linux machine which I can ssh into and it can route directly to Remote Target.
My attempt at a simple solution is to set up ssh port forwarding on Router, such as:
ssh -L 5000:192.168.50.50:5000 1.2.3.4
This works fine for Router, which can now connect locally to port 5000. So "telnet localhost 5000" will be connected to 192.168.50.50:5000 as expected.
Now I want to redirect traffic from Source and funnel through the now-established ssh tunnel. I attempted a NAT rule for this:
iptables -t nat -A PREROUTING -i eth0 -p tcp -s 192.168.1.10 --dport 5000 -d 1.2.3.4 -j DNAT --to-destination 127.0.0.1:5000
and since the Router is already my NAT gateway, it already has the needed postrouting rule:
-A POSTROUTING -s 192.168.1.0/24 -o eth1 -j MASQUERADE
Most Q&A on this site or elsewhere seem to deal with forwarding server ports or hairpin NAT, both of which I have working fine elsewhere, neither of which apply to this situation. I certainly could DMZ forward Remote Target ports through Remote Gateway, but I don't want the ports internet-accessible, I want them accessible only through the secure SSH tunnel.
The best answer I can find relates to Martian packet rejection in the Linux kernel:
iptables, how to redirect port from loopback?
I've enabled logging of martians and confirmed that the kernel is rejecting these packets as martians. Except that they aren't: I know exactly what these packets are for, where they're from and where they're going (my ssh tunnel).
The "roundabout" solution presented there is applicable to that original question, but does not apply for my case.
However, while writing/researching this question, I have worked around my problem by using SSH source IP binding like so:
ssh -L 192.168.1.1:5000:192.168.50.50:5000 1.2.3.4
iptables -t nat -A PREROUTING -i eth0 -p tcp -s 192.168.1.10 --dport 5000 -d 1.2.3.4 -j DNAT --to-destination 192.168.1.1:5000
Since I'm not using loopback, this gets around Martian rejection.
I still post the question here for two reasons:
In hope that someone who is trying to do something similar in the future might find this in their searches and this workaround might help them.
I still prefer the idea of keeping my ssh port forwards connection bound only to loopback and being able to route to them through iptables. Since I know exactly what these packets are and where they are going, shouldn't there some way for me to flag them as such so that Linux martian filtering doesn't reject them? All my searching on this topic leads to rp_filter, which didn't help at all in my testing. And even if it did work, it isn't specific to the exact packets I am trying to allow.
I'm interested in contributing my question and workaround to general search to save someone else the hours of searching I did only to come up with dead ends, as well as hopefully having someone answer the loopback/martian part of my question that still remains open to me.
|
The issue with doing a DNAT to 127.0.0.1:5000 is that when the remote side responds, these packets return into the routing engine as if they were locally originated (from 127.0.0.1) but they have an outside destination address. SNAT/MASQUERADE matching the outside interface would have caught them and rewritten them, but the routing decisions that have to be made for the packets to arrive at that interface come first, and they disallow these packets which are bogus by default. The routing engine can't guarantee you'll remember to do that rewrite later.
The thing that you should be able to do instead is reject any outside connections to 192.168.1.1:5000 at iptables INPUT other than those coming from 192.168.1.10 using the ! argument before the -s source address specification. If you use TCP reset as the rejection mechanism (-j REJECT --reject-with tcp-reset, instead of the default ICMP destination unreachable), it will be largely identical to the situation where nothing was even listening on that address:port combination as far as the outside world is concerned.
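A sketch of the rule described above, using the addresses from the question (requires root; a sketch, not a complete firewall configuration):

```sh
# reject all connections to the forwarded port except those from the Source
# machine, answering with a TCP RST as if nothing were listening there
iptables -A INPUT -p tcp -d 192.168.1.1 --dport 5000 ! -s 192.168.1.10 \
         -j REJECT --reject-with tcp-reset
```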
| Linux iptables ssh port forwarding (martian rejection) |
1,326,719,357,000 |
I have a file of genomic data with tag counts, I want to know how many are represented once:
$ grep "^1" file |wc -l
includes all lines beginning with 1, so it includes tags represented 10 times, 11, times, 100 times, 1245 times, etc. How do I do this?
Current format
79 TGCAG.....
1 TGCAG.....
1257 TGCAG.....
1 TGCAG......
I only want the lines that are:
1 TGCAG.....
So it cannot include the lines beginning with 1257. NOTE: The file above is tab delimited.
|
With awk:
awk '$1 == "1" { print; x++ } END { print x, "total matches" }' inputfile
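If you only need the matching lines (or their count) rather than a running total, an anchored grep works too, provided the file really is tab-delimited. A sketch, where `inputfile` stands in for your data:

```sh
# build a small tab-delimited sample like the one in the question
printf '79\tTGCAG\n1\tTGCAG\n1257\tTGCAG\n1\tTGCAG\n' > inputfile

# match only lines whose first field is exactly "1" (a literal tab follows it)
tab=$(printf '\t')
grep -c "^1${tab}" inputfile    # prints 2 for the sample above
```

Anchoring on `^1` plus the tab delimiter is what keeps 10, 1257, etc. from matching.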
| Grep lines starting with 1, but not 10, 11, 100 etc [duplicate] |
1,326,719,357,000 |
Why do Linux people always say to read the manual when it would be so much easier to just give you an answer? There's no manual! It didn't come with one.
|
There is a manual, you just have to know where it is. It can be accessed with the man command. If you are unsure how to use it, type man man. The man command is very important; remember it even if you forget everything else.
The manual contains detailed information about a variety of topics, which are separated into several sections:
1. General commands
2. System calls
3. Library functions, covering in particular the C standard library
4. Special files (usually devices, those found in /dev) and drivers
5. File formats and conventions
6. Games and screensavers
7. Miscellaneous
8. System administration commands and daemons
The notation ls(1) refers to the ls page in section 1. To read it type man 1 ls or man ls.
To avoid being told to read the manual when you ask a question, try man command, apropos command, command -?, command --help, and a few Google searches. If you do not understand something in the manual, quote it in your question and try to explain what you don't understand. Usually when they ask you to read the manual, it is because they think it will be more beneficial to you than a simple, incomplete answer. If you don't know which man pages are relevant, ask.
| Why do Linux people always say to read the manual? [closed] |
1,326,719,357,000 |
I used Linux a bit in college, and am familiar with the terms. I develop in .NET languages regularly, so I'm not computer illiterate.
That said, I can't really say I understand the "compile it yourself" [CIY] mentality that exists in *nix circles. I know it's going away, but still hear it from time to time. As a developer, I know that setting up compilers and necessary dependencies is a pain in the butt, so I feel like CIY work flows have helped to make *nix a lot less accessible.
What social or technical factors led to the rise of the CIY mentality?
|
Very simply, for much of the history of *nix, there was no other choice. Programs were distributed as source tarballs and the only way you had of using them was to compile from source. So it isn't so much a mentality as a necessary evil.
That said, there are very good reasons to compile stuff yourself since they will then be compiled specifically for your hardware, you can choose what options to enable or not and you can therefore end up with a fine tuned executable, just the way you like it. That, however, is obviously only something that makes sense for expert users and not for people who just want a working machine to read their emails on.
Now, in the Linux world, the main distributions have all moved away from this many years ago. You very, very rarely need to compile anything yourself these days unless you are using a distribution that is specifically designed for people who like to do this like Gentoo. For the vast majority of distributions, however, your average user will never need to compile anything since pretty much everything they'll ever need is present and compiled in their distribution's repositories.
So this CIY mentality as you call it has essentially disappeared. It may well still be alive and kicking in the UNIX world, I have no experience there, but in Linux, if you're using a popular distribution with a decent repository, you will almost never need to compile anything yourself.
| What is the source of the "compile it yourself" mentality in linux [closed] |
1,326,719,357,000 |
I have a CSV file as
input.csv
"1_1_0_0_76"
"1_1_0_0_77"
"1_1_0_0_78"
"1_1_0_0_79"
"1_1_0_0_80"
"1_1_0_0_81"
"1_1_0_0_82"
"1_1_0_0_83"
"1_1_0_0_84"
"1_1_0_0_85"
............. and so on.
I need to convert this CSV file into
result.csv
1,1,0,0,76
1,1,0,0,77
1,1,0,0,78
1,1,0,0,79
1,1,0,0,80
1,1,0,0,81
1,1,0,0,82
1,1,0,0,83
1,1,0,0,84
1,1,0,0,85
|
Far simpler way is to use tr
$ tr '_' ',' < input.csv | tr -d '"'
1,1,0,0,76
1,1,0,0,77
1,1,0,0,78
The way this works is that tr takes two arguments - a set of characters to be replaced, and their replacement. In this case we only have sets of 1 character. We redirect input.csv into tr's stdin stream via the < shell operator, and pipe the resulting output to tr -d '"' to delete double quotes.
But awk can do it too.
$ cat input.csv
"1_1_0_0_76"
"1_1_0_0_77"
"1_1_0_0_78"
$ awk '{gsub(/_/,",");gsub(/"/,"")};1' input.csv
1,1,0,0,76
1,1,0,0,77
1,1,0,0,78
The way this works is slightly different: awk reads each file line by line, each in-line script being /Pattern match/{ codeblock}/Another pattern/{code block for this pattern}. Here we don't have a pattern, so it means to execute codeblock for each line. gsub() function is used for global substitution within a line, thus we use it to replace underscores with commas, and double quotes with a null string (effectively deleting the character). The 1 is in place of the pattern match with missing code block, which defaults simply to printing the line; in other words the codeblock with gsub() does the job and 1 prints the result.
Use the shell redirection (>) to send output to a new file:
awk '{gsub(/_/,",");gsub(/"/,"")};1' input.csv > output.csv
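A single sed invocation can also perform both substitutions in one pass, equivalent to the two tr calls above (shown on a two-line sample in place of the full input.csv):

```sh
# sample like the question's input
printf '"1_1_0_0_76"\n"1_1_0_0_77"\n' > input.csv

# replace every underscore with a comma, then delete the double quotes
sed 's/_/,/g; s/"//g' input.csv
# prints:
# 1,1,0,0,76
# 1,1,0,0,77
```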
| Replacing underscore by comma and removing double quotes in CSV |
1,326,719,357,000 |
I have thousands of unl files named something like this cbs_cdr_vou_20180624_603_126_239457.unl. I wanted to print all the lines from those files by using following command. but its giving me only file names. I don't need file names, I just need contents from those files.
find -type f -name 'cbs_cdr_vou_20180615*.unl' > /home/fifa/cbs/test.txt
Current Output:
./cbs_cdr_vou_20180615_603_129_152023.unl
./cbs_cdr_vou_20180615_603_128_219001.unl
./cbs_cdr_vou_20180615_602_113_215712.unl
./cbs_cdr_vou_20180615_602_120_160466.unl
./cbs_cdr_vou_20180615_603_125_174428.unl
./cbs_cdr_vou_20180615_601_101_152369.unl
./cbs_cdr_vou_20180615_603_133_193306.unl
Expected output:
8801865252020|200200|20180613100325|;
8801837463298|200200|20180613111209|;
8801845136955|200200|20180613133708|;
8801845205889|200200|20180613141140|;
8801837612072|200200|20180613141525|;
8801877103875|200200|20180613183008|;
8801877167964|200200|20180613191607|;
8801845437651|200200|20180613200415|;
8801845437651|200200|20180613221625|;
8801839460670|200200|20180613235936|;
Please note that with the cat command I'm getting an error like -bash: /bin/logger: Argument list too long; that's why I wanted to use find instead of the cat command.
|
The find utility deals with pathnames. If no specific action is mentioned in the find command for the found pathnames, the default action is to output them.
You may perform an action on the found pathnames, such as running cat, by adding -exec to the find command:
find . -type f -name 'cbs_cdr_vou_20180615*.unl' -exec cat {} + >/home/fifa/cbs/test.txt
This would find all regular files in or under the current directory, whose names match the given pattern. For as large batches of these as possible, cat would be called to concatenate the contents of the files.
The output would go to /home/fifa/cbs/test.txt.
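An equivalent approach pipes the found pathnames to xargs, which also batches them to avoid the "Argument list too long" problem; the NUL separators (`-print0`/`-0`) keep filenames with unusual characters safe. A sketch, writing to `test.txt` in the current directory rather than the full path from the question:

```sh
# concatenate every matching file's contents into test.txt
find . -type f -name 'cbs_cdr_vou_20180615*.unl' -print0 |
    xargs -0 cat > test.txt
```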
Related:
Understanding the -exec option of `find`
| How can I print contents instead of file name from using linux find command? |
1,494,519,046,000 |
I recently encountered the Linux null block device driver, null_blk, while benchmarking the I/O stack rather than a specific block device. I found the devices created under this driver (let's use the device name /dev/nullb0 as an example) quite intriguing, especially considering their similarity in name to the /dev/null device. Since I couldn't find any existing questions on this topic on Stack Overflow, I decided to reach out for clarification.
My main question is: what are the differences between the /dev/null and the block device created under the null_blk device driver?
To this point: I've already noticed some distinctions.
First, (as far as I understand), the null device /dev/null doesn't go through any driver. However, devices created under null_blk are true block drivers that the data must pass through. I also confirmed this by running fio on both devices; /dev/null performs much better in terms of random read IOPS and submission latency.
Second, we know that reading from /dev/null results in an EOF (for example, cat /dev/null), but when I attempt cat /dev/nullb0, it doesn't return an EOF and instead hangs.
Additionally, as a side note, the kernel documentation for null_blk mentions parameters for configuration, but I don't see any similar options for /dev/null to be configured.
It seems a large number of differences exist under these similar names. Can someone provide further, more formal insights or clarification on these differences? Thanks!
|
/dev/null is handled by a device driver, but it’s a very simple one.
The major difference between /dev/null and null_blk is that the former is a character device, the latter a block device. The former handles sequences of characters with direct implementations of the operations invoked by programs using it, without much indirection between a system call using the device and the code implementing the corresponding operation in the driver. The latter is much more complex, and operates on full sectors, with support for command queues etc. There is much more going on to handle any single request.
Of course there is also a major difference in purpose: /dev/null is intended for actual direct use, whereas null_blk is a support feature for kernel developers wanting to benchmark other parts of the block-device-handling stack.
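The character-vs-block distinction is visible directly in the device node's type. A quick check, assuming GNU coreutils stat:

```sh
stat -c '%F' /dev/null    # prints: character special file

# a node created by null_blk would instead report "block special file":
# stat -c '%F' /dev/nullb0
```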
| Differences Between `/dev/null` and Devices Under `null_blk` Driver |
1,494,519,046,000 |
I have a strings:
AddData
TestSomething
TellMeWhoYouAre
and so on. I want to add space before uppercase letters. How can I do it?
|
Using sed, and assuming you don't want a space in front of the word:
$ sed 's/\([^[:blank:]]\)\([[:upper:]]\)/\1 \2/g' file.in
Add Data
Test Something
Tell Me Who You Are
The substitution will look for an upper-case letter immediately following another non-whitespace character, and insert a space in-between the two.
For strings with more than one consecutive upper-case character, like WeAreATeam, this produces We Are ATeam. To sort this, run the substitution a second time:
$ sed -e 's/\([^[:blank:]]\)\([[:upper:]]\)/\1 \2/g' \
-e 's/\([^[:blank:]]\)\([[:upper:]]\)/\1 \2/g' file.in
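Instead of hard-coding a second pass, a sed label-and-branch loop (the `:a`/`ta` syntax is POSIX, though quoting details can vary between implementations) repeats the substitution until nothing is left to change:

```sh
# loop: branch back to :a as long as the substitution succeeded
printf 'WeAreATeam\n' |
    sed -e ':a' -e 's/\([^[:blank:]]\)\([[:upper:]]\)/\1 \2/' -e 'ta'
# prints: We Are A Team
```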
| Add space before uppercase letter |
1,494,519,046,000 |
When I use the grep command, all occurrences of a word are picked up, even if they are part of other words. For example, if I use grep to find occurrences of the word 'the' it will also highlight 'the' in 'theatre'
Is there a way to adapt the grep command so that it only picks up full words, not part of words?
|
-w, --word-regexp
Select only those lines containing matches that form whole
words. The test is that the matching substring must either be
at the beginning of the line, or preceded by a non-word
constituent character. Similarly, it must be either at the end
of the line or followed by a non-word constituent character.
Word-constituent characters are letters, digits, and the
underscore.
from man grep
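To illustrate with the example from the question ("theatre" contains "the" but not as a whole word):

```sh
printf 'the cat\ntheatre\nbreathe\n' > sample.txt
grep -w 'the' sample.txt    # prints only: the cat
```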
| Is it possible to use grep to pick up only full words? [duplicate] |
1,494,519,046,000 |
mknod /tmp/oracle.pipe p
sqlplus / as sysdba << _EOF
set escape on
host nohup gzip -c < /tmp/oracle.pipe > /tmp/out1.gz \&
spool /tmp/oracle.pipe
select * from employee;
spool off
_EOF
rm /tmp/oracle.pipe
I need to insert a trailer at the end of the zipped file out1.gz.
I can count the lines using
count=$(zcat out1.gz | wc -l)
How do I insert the trailer
T5 (assuming count=5)
At the end of out1.gz without unzipping it.
|
From man gzip you can read that gzipped files can simply be concatenated:
ADVANCED USAGE
Multiple compressed files can be concatenated. In this case, gunzip will extract all members at once. For example:
gzip -c file1 > foo.gz
gzip -c file2 >> foo.gz
Then
gunzip -c foo
is equivalent to
cat file1 file2
This could also be done using cat for the gzipped files, e.g.:
seq 1 4 > A && gzip A
echo 5 > B && gzip B
#now 1 to 4 is in A.gz and 5 in B.gz, we want 1 to 5 in C.gz:
cat A.gz B.gz > C.gz && zcat C.gz
1
2
3
4
5
#or for appending B.gz to A.gz:
cat B.gz >> A.gz
For doing it without external file for you line to be appended, do as follows:
echo "this is the new line" | gzip - >> original_file.gz
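Putting it together for the trailer from the question (a sketch using a five-line sample in place of the real out1.gz; the count must be taken before appending):

```sh
# build a sample five-line archive
printf '1\n2\n3\n4\n5\n' | gzip > out1.gz

# count the data lines, then append a "T<count>" trailer as a new gzip member
count=$(zcat out1.gz | wc -l)
echo "T${count}" | gzip >> out1.gz

zcat out1.gz | tail -n 1    # prints: T5
```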
| How to append a line in a zipped file without unzipping? |
1,494,519,046,000 |
Is there a way in linux to look through a directory tree for only those directories that are the ends of branches (I will call them leaves here), i.e., dircetories with no subdirectories in them? I looked at this question but it was never properly answered.
So if I have a directory tree
root/
├── branch1
│ ├── branch11
│ │ └── branch111 *
│ └── branch12 *
└── branch2
├── branch21 *
└── branch22
└── branch221 *
can I find only the directories that are the end of their branch (the ones marked with*), so looking only at the number of directories, not at the number of files? In my real case I am looking for the ones with files, but they're a subset of the 'leaves' that I want to find in this example.
|
To find only those leaf directories that contain non-directory files, you can combine an answer of the referenced question https://unix.stackexchange.com/a/203991/330217 or similar questions https://stackoverflow.com/a/4269862/10622916 or https://serverfault.com/a/530328 with find's ! -empty
find rootdir -type d -links 2 ! -empty
Checking the hard links with -links 2 should work for traditional UNIX file systems. The -empty condition is not part of the POSIX standard, but should be available on most Linux systems.
According to KamilMaciorowski's comment the traditional link count semantics for directories is not valid for Btrfs. This is confirmed in https://linux-btrfs.vger.kernel.narkive.com/oAoDX89D/btrfs-st-nlink-for-directories which also mentions Mac OS HFS+ as an exception from the traditional behavior. For these file systems a different method is necessary to check for leaf directories.
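For those file systems where the link count can't be trusted, a filesystem-agnostic (but slower) sketch simply asks, for every directory, whether it has any subdirectories:

```sh
# rebuild the example tree from the question
mkdir -p root/branch1/branch11/branch111 root/branch1/branch12 \
         root/branch2/branch21 root/branch2/branch22/branch221

# a directory is a leaf if it contains no subdirectories
find root -type d -exec sh -c '
    for d in "$@"; do
        find "$d" -mindepth 1 -maxdepth 1 -type d | grep -q . ||
            printf "%s\n" "$d"
    done' sh {} +
```

This prints exactly the four starred directories from the example tree (order may vary).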
| How to find only directories without subdirectories? [duplicate] |
1,494,519,046,000 |
I have to run top command on one computer being on another.
My targeted PC has IP 192.168.0.81
I was trying to do it: ssh 192.168.0.81 top
But I got this result: top: tcgetattr() failed: Invalid argument
Could anybody help me with this issue?
System info: Linux iRP-C-09 2.4.18-timesys-4.0.642
Top version: 2.0.7
|
top is a full screen interactive console application. It requires a tty to run. Try ssh -t or ssh -tt to force pseudo-tty allocation.
| How to properly run "top" command through SSH? |
1,494,519,046,000 |
I want to do research on the evolution of Linux.
Therefore it would be nice if I could download the sources of Linux at several moments in time (from 1991 till now).
Is there a site where one can find those sources?
Similar sites for other Unix based operating systems are also welcome.
|
I suggest these two:
http://www.oldlinux.org/
and a more straightforward one from the same site that contains Linux kernels 0.01, 0.10, 0.11, ..., 0.98:
http://www.oldlinux.org/Linux.old/
and the other:
http://www.codeforge.com/article/170371
| Where can I find the historical source code of the Linux sources |
1,494,519,046,000 |
If I subtract a time amount from the current date, GNU date works intuitively:
date '+%F %R'; date '+%F %R' --date='- 1 hour'
2021-04-19 15:35
2021-04-19 14:35
However, when I use a date as operand, the result is unexpected:
$ date '+%F %R' --date='2000/1/2 03:04:05 - 1 hour'
2000-01-02 06:04
$ date '+%F %R' --date='2000/1/2 03:04:05 + 1 hour ago'
2000-01-02 02:04
How is date intepreting the $date - 1 hour expression?
|
In short: The date you give with --date is taken in local time, unless you specify a time zone, and something like +/- NNN is taken as one. Only anything after that, even if it's just hour is taken as the relative modifier. So - 1 hour doesn't mean to subtract one hour from the given time, but to specify that the time is in the time zone UTC-01, and then to add one hour to it.
What I think should work for what you're trying, would be to either explicitly give the timezone before the offset, or put the offset first so it can't be confused with a timezone.
Here, using the Central European Summer Time timezone (CEST), and today's date, with %Z added to the output to show the timezone. (You could also use %z to output the numeric timezone, or +0200 here.)
$ date +'%F %T %Z' -d '2021-04-19 12:00:00 CEST + 5 hours'
2021-04-19 17:00:00 CEST
$ date +'%F %T %Z' -d '+ 5 hours 2021-04-19 12:00:00'
2021-04-19 17:00:00 CEST
Though of course for a January date like in the question, a summer-time time zone like CEST would not be a valid one. But rearranging the two still works, the time you give is just taken as the local time at that time.
$ date +'%F %T %Z' -d '+ 5 hours 2021-01-01 12:00:00'
2021-01-01 17:00:00 CET
(And for 2021-10-31 02:30:00 I get CET, even though that time also exists in CEST...)
(See older revisions of this answer for more examples on how it interprets various inputs.)
As per @muru's answer on another question, we can also use the --debug option to have the program actually tell us what it did. Note the second and third lines:
$ date --debug +'%F %T %Z' -d '2021-04-19 12:00:00 - 1 hour'
date: parsed date part: (Y-M-D) 2021-04-19
date: parsed time part: 12:00:00 TZ=-01:00
date: parsed relative part: +1 hour(s)
date: input timezone: -01:00 (set from parsed date/time string)
date: using specified time as starting value: '12:00:00'
date: starting date/time: '(Y-M-D) 2021-04-19 12:00:00 TZ=-01:00'
date: '(Y-M-D) 2021-04-19 12:00:00 TZ=-01:00' = 1618837200 epoch-seconds
date: after time adjustment (+1 hours, +0 minutes, +0 seconds, +0 ns),
date: new time = 1618840800 epoch-seconds
date: output timezone: +01:00 (set from TZ="Europe/Berlin" environment value)
date: final: 1618840800.000000000 (epoch-seconds)
date: final: (Y-M-D) 2021-04-19 14:00:00 (UTC0)
date: final: (Y-M-D) 2021-04-19 16:00:00 (output timezone TZ=+01:00)
2021-04-19 16:00:00 CEST
The man page says:
The date string format is more complex than is easily documented here [...]
Which indeed seems quite apt. The more comprehensive documentation is in the info pages, or online: https://www.gnu.org/software/coreutils/manual/html_node/Date-input-formats.html
| Why does the `-` (minus) interpretation of GNU date differs from the intuitive one, when a date is specified? |
1,494,519,046,000 |
I'm trying to determine which group(s) a running child process has inherited. I want to find all groups the process is in given its uid. Is there a way to determine this via the /proc filesystem?
|
The list of groups is given under Groups in /proc/<pid>/status; for example,
$ grep '^Groups' /proc/$$/status
Groups: 4 24 27 30 46 110 115 116 1000
The primary group is given under Gid:
$ grep '^Gid' /proc/$$/status
Gid: 1000 1000 1000 1000
ps is also capable of showing the groups of a process, as the other answers indicate.
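To resolve the numeric IDs from /proc into group names, the values can be fed to getent (a sketch, assuming glibc's getent is available; shown for the current shell's PID):

```sh
pid=$$

# primary group: second field of the Gid line
gid=$(awk '/^Gid:/ {print $2}' /proc/$pid/status)
getent group "$gid" | cut -d: -f1    # primary group name

# supplementary groups: every field after "Groups:"
awk '/^Groups:/ {for (i = 2; i <= NF; i++) print $i}' /proc/$pid/status |
    while read -r g; do getent group "$g" | cut -d: -f1; done
```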
| Determine which group(s) a running process is in? |
1,494,519,046,000 |
I'm having a little bit of confusion when trying to understand Linux-based OSes. When I download the newest version of Mint and Ubuntu, aren't they the "same" at their core (kernel)? It just seems that they have different GUIs? Isn't a GUI technically just a program that runs on startup of a computer? Same as with Windows (DOS is the core but explorer.exe is the GUI). Is anyone able to explain this?
With the sudo apt-get command, can't I install Ubuntu from a Mint terminal?
I know that this is a mess of questions, but hopefully someone can clarify the differences between multiple distros before the GUI appears, and then after the GUI appears.
|
First: Windows has not been a DOS GUI for quite a while; NT-based Windows (NT/2000/XP/Vista/7/8) are totally independent of DOS. explorer.exe isn't the GUI, either: it's just a shell (you can find shell replacements for Windows, too).
At heart, all distros are based on the Linux kernel; the main differences (from an end-user point of view - there are differences in e.g. init systems, files under /etc and other places) - between distributions are:
package management
Ubuntu, Mint and all other Debian-based distros use dpkg/APT as the packaging system. Other distros will use other systems (e.g. Red Hat, Fedora, SuSE will use RPM, Arch will use pacman).
selection of packages
Effectively, Mint is an Ubuntu with some extra packages (e.g. codecs, not included with Ubuntu for patent/copyright reasons) and a different theme (to create a custom identity and avoid trademark/plagiarism questions and user confusion).
Of course, you can install any other GUI in Mint: you could use Mint's desktop environment (Cinnamon) in Ubuntu and technically (reality is another story: you probably will bump into package conflicts) you should be able to install Unity and Ubuntu's visual identity (themes, icons) in Mint.
So, in theory you could turn your Ubuntu into a Mint-ish system but in practice this is quite difficult to do.
As per the comment about the difference between 'interface' and 'shell', which can raise some confusion:
In the UNIX world, 'shell' already has a specific, well-accepted meaning:
A Unix shell is a command-line interpreter or shell that provides a traditional user interface for the Unix operating system and for Unix-like systems.
Compare with the Windows shell, which is a different thing entirely:
The Windows shell is the main graphical user interface in Microsoft Windows. The Windows shell includes well-known Windows components such as the taskbar and the Start menu. The Windows shell is not the same as a "command-line shell", but the two concepts are related.
In our case we would call Cinnamon (or KDE, GNOME, Unity, XFCE) a desktop environment: a set of applications (window manager, panels, notification tray items etc...) that provide the user experience.
| Understanding different Linux Distros |
1,494,519,046,000 |
I am creating an empty file...
dd if=/dev/zero of=${SDCARD} bs=1 count=0 seek=$(expr 1024 \* ${SDCARD_SIZE})
...then turning it into an drive image...
parted -s ${SDCARD} mklabel msdos
...and creating partitions on it
parted -s ${SDCARD} unit KiB mkpart primary fat32 ${IMAGE_ROOTFS_ALIGNMENT} $(expr ${IMAGE_ROOTFS_ALIGNMENT} \+ ${BOOT_SPACE_ALIGNED})
parted -s ${SDCARD} unit KiB mkpart primary $(expr ${IMAGE_ROOTFS_ALIGNMENT} \+ ${BOOT_SPACE_ALIGNED}) $(expr ${IMAGE_ROOTFS_ALIGNMENT} \+ ${BOOT_SPACE_ALIGNED} \+ $ROOTFS_SIZE)
How do I use mkfs.ext and mkfs.vfat without mounting this image?
|
You want to format a partition in a disk-image file, rather than the entire image file. In that case, you need to use losetup to tell Linux to use the image file as a loopback device.
NOTE: losetup requires root privileges, so must be run as root or with sudo. The /dev/loop* devices it uses/creates also require root privs to access and use.
e.g. (as root)
# losetup /dev/loop0 ./sdcard.img
# fdisk -l /dev/loop0
Disk /dev/loop0: 1 MiB, 1048576 bytes, 2048 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x54c246ab
Device Boot Start End Sectors Size Id Type
/dev/loop0p1 1 1023 1023 511.5K c W95 FAT32 (LBA)
/dev/loop0p2 1024 2047 1024 512K 83 Linux
# file -s /dev/loop0p1
/dev/loop0p1: data
# mkfs.vfat /dev/loop0p1
mkfs.fat 3.0.28 (2015-05-16)
Loop device does not match a floppy size, using default hd params
# file -s /dev/loop0p1
/dev/loop0p1: DOS/MBR boot sector, code offset 0x3c+2, OEM-ID "mkfs.fat", sectors/cluster 4, root entries 512, sectors 1023 (volumes <=32 MB) , Media descriptor 0xf8, sectors/FAT 1, sectors/track 32, heads 64, serial number 0xfa9e3726, unlabeled, FAT (12 bit)
and, finally, detach the image from the loopback device:
# losetup -d /dev/loop0
See man losetup for more details.
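One caveat worth adding (an assumption about newer systems, not part of the original answer): a bare losetup does not always create the /dev/loop0p* partition nodes. Recent util-linux versions provide -P/--partscan to have the kernel scan the image's partition table, e.g. (a sketch, as root; sdcard.img is the hypothetical image name):

```shell
# -f picks the first free loop device, --show prints its name, and
# -P (--partscan) makes the kernel create the /dev/loopNpM partition nodes.
LOOPDEV=$(losetup --show -f -P ./sdcard.img)
mkfs.vfat "${LOOPDEV}p1"    # format the first partition
mkfs.ext4 "${LOOPDEV}p2"    # format the second partition
losetup -d "$LOOPDEV"       # detach when done
```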
| How to run mkfs on file image partitions without mounting? |
1,494,519,046,000 |
I am trying to create a symlink of a file on one linux workstation to another linux workstation, without having to 'mount' any network shares. Here's what I am trying to do, but can't get it to work.
ln -s /link/to/local/file.mov //10.0.1.103/sharedFolder/symlinkFile.mov
|
You can't:
A symlink is simply an extra inode (a structure that points to the file) and this inode consists of, amongst other things, a deviceId and an inode pointer. The deviceId effectively points to a device special file within the /dev directory and the inode pointer points to a block on that device.
Your network location of 10.0.1.103 does not and cannot have a deviceId (it's not in /dev), therefore you can't possibly have a symlink to a network location.
On the other hand, a mounted network share will have a deviceId which is why you can create a symlink to a mounted location.
| Symlink from one workstation to another without mount |
1,494,519,046,000 |
I noticed that I have a strange partition under sda3, with a size of 1K. I am about to reformat my hard drive and re-install my OS with Ubuntu 14.04 while creating separate partitions for / and /home.
What is this almost-empty partition, and should I do anything with it? Why is it in lsblk but not in blkid?
[lucas@lucas-ThinkPad-W520]~$ sudo blkid
/dev/sda1: LABEL="SYSTEM_DRV" UUID="30CA6C06CA6BC6A6" TYPE="ntfs"
/dev/sda2: LABEL="Windows7_OS" UUID="9426707E26706362" TYPE="ntfs"
/dev/sda4: LABEL="Lenovo_Recovery" UUID="E2CA772DCA76FD5B" TYPE="ntfs"
/dev/sda5: UUID="7d513625-85de-41b7-9c81-0d3fbc4e6a0f" TYPE="ext4"
/dev/sda6: UUID="602d2625-8ab9-44e5-b73a-d1f0181f5549" TYPE="swap"
[lucas@lucas-ThinkPad-W520]~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 1.5G 0 part /media/lucas/SYSTEM_DRV
├─sda2 8:2 0 262.1G 0 part /media/lucas/Windows7_OS
├─sda3 8:3 0 1K 0 part
├─sda4 8:4 0 15.6G 0 part /media/lucas/Lenovo_Recovery
├─sda5 8:5 0 178.7G 0 part /
└─sda6 8:6 0 7.9G 0 part [SWAP]
sr0 11:0 1 1024M 0 rom
|
That is almost certainly the extended partition that contains your logical ones. You should be able to confirm by running parted -l (or fdisk -l) as root. For example, on my system:
$ sudo parted -l
Model: ATA ST9500420AS (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 32.3kB 41.1MB 41.1MB primary fat16 diag
2 41.9MB 15.8GB 15.7GB primary ntfs boot
3 15.8GB 99.7GB 83.9GB primary ntfs
4 99.7GB 500GB 400GB extended lba
5 99.7GB 102GB 2147MB logical fat32 lba
7 102GB 176GB 73.8GB logical ext4
6 176GB 492GB 316GB logical ext4
8 492GB 500GB 8389MB logical linux-swap(v1)
Note that sda4 is listed as an extended partition with a size of 400GB. That is the sum of the sizes of the logical partitions it contains (5,7,6 and 8). In the lsblk output, it shows as a 1K partition (because it is not a real, bona fide partition that contains data but an extended one):
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 39.2M 0 part
├─sda2 8:2 0 14.7G 0 part
├─sda3 8:3 0 78.1G 0 part /windows
├─sda4 8:4 0 1K 0 part
├─sda5 8:5 0 2G 0 part
├─sda6 8:6 0 294.4G 0 part /home
├─sda7 8:7 0 68.7G 0 part /
└─sda8 8:8 0 7.8G 0 part [SWAP]
sr0 11:0 1 1024M 0 rom
It does not appear in the output of blkid for the same reason, it only lists "real" partitions by default. You can force it to mention the extended one by using the -p flag:
$ sudo blkid -p /dev/sda* | grep sda4
/dev/sda4: PTTYPE="dos" PART_ENTRY_SCHEME="dos" PART_ENTRY_TYPE="0xf" PART_ENTRY_NUMBER="4" PART_ENTRY_OFFSET="194643601" PART_ENTRY_SIZE="782129519" PART_ENTRY_DISK="8:0"
| What is this 1K logical partition? |
1,494,519,046,000 |
How do I log into my Linux laptop if I have forgotten both the username and password?
|
You can drop into single mode from Grub. During boot press Esc on the Grub boot screen when it prompts you to. It may just show you Grub with listings of each kernel - if that's the case don't press Esc.
From here select the first entry and press e to edit that entry. Page down to the line that starts with kernel and press e again.
This will allow you to edit the entire line. Scroll to the right until you reach the end and remove splash quiet from the line, replacing it with single. Press Enter to accept the changes and press b to boot into the modified kernel line. This will boot you into single user mode and should drop you into a root shell once the boot has completed.
From here you can add users to the system, change user passwords, etc.
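For instance (a sketch; the account name is whatever turns up in the first step), recovering a forgotten login from the single-user root shell might look like:

```shell
# List the account names on the system, in case the login itself is forgotten:
cut -d: -f1 /etc/passwd
# Then reset the forgotten password (interactive, so shown as a comment):
#   passwd <username>
```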
| How do I log into my Linux laptop if I have forgotten both the login and password? |
1,494,519,046,000 |
I would like to check, in a bash script, on what filesystem type a directory is.
The idea is something like
if [path] is on a [filesystem] filesystem then
filesystem specific command
end if
|
Use df. You can pass it a path, and it will give you the filesystem information for that path. If you need the filesystem type, use the -T switch, like so:
$ df -T test
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda2 ext4 182634676 32337180 141020160 19% /home
To extract the filesystem type, you can parse it (use the -P switch to avoid df breaking lines if the device part is too long):
$ df -PT test | awk 'NR==2 {print $2}'
ext4
So you can use that value in an if construct like so:
if [ "$(df -PT "$path" | awk 'NR==2 {print $2}')" = "ext4" ] ; then
it is an ext4 filesystem
fi
Beware that the device column can contain spaces (but it's rare), in which case the parsing will fail.
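If findmnt from util-linux is available, the parsing can be skipped entirely (a sketch; --target resolves a path to the mount that contains it):

```shell
# -n drops the header line, -o FSTYPE prints only the type column, and
# --target maps an arbitrary path to the filesystem it lives on.
path=/
fstype=$(findmnt -n -o FSTYPE --target "$path")
if [ "$fstype" = "ext4" ]; then
    echo "it is an ext4 filesystem"
fi
echo "$path is on a $fstype filesystem"
```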
| how to check the format of a filesystem |
1,494,519,046,000 |
I have an application which is communicating with workers via signals (particullary SIGUSR1/SIGUSR2/SIGSTOP).
Can I trust that, whatever happens, every signal will be delivered and processed by the handler?
What happens if signals are sent more quickly than the application can handle them (e.g. due to high host load at the moment)?
|
Aside from the "too many signals" problem, signals can be explicitly ignored. From man 2 signal:
If the signal signum is delivered to the process, then one of the
following happens:
* If the disposition is set to SIG_IGN, then the signal is ignored.
Signals can also be blocked. From man 7 signal;
A signal may be blocked, which means that it will not be delivered
until it is later unblocked. Between the time when it is generated
and when it is delivered a signal is said to be pending.
Both blocked and ignored sets of signals are inherited by child processes, so it may happen that the parent process of your application ignored or blocked one of these signals.
What happens when multiple signals are delivered before the process has finished handling previous ones? That depends on the OS. The signal(2) manpage linked above discusses it:
System V would reset the signal disposition to the default. Worse, rapid delivery of multiple signals would result in recursive (?) calls.
BSD would automatically block the signal until the handler is done.
On Linux, this depends on the compilation flags set for GNU libc, but I'd expect the BSD behaviour.
| Can signal be ignored (lost)? |
1,494,519,046,000 |
I'm trying to remove a string from .bash_profile. The string was added when my shell script ran:
The string in my .bash_profile is as follows:
# for Myapllication
export MYAPP_HOME=/opt/myapp
I want to remove the strings from .bash_profile when myapp is removed via rpm.
How can I remove a string from a file via a shell script (or by some alternative method)?
|
You can remove a string from a text file with sed (other tools exist).
For example:
sed -i -e '/myapp/d' .bash_profile
removes from .bash_profile every line containing the string myapp.
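To delete exactly the two lines quoted in the question rather than every line mentioning myapp (a sketch suitable for an RPM uninstall scriptlet; the patterns, including the question's spelling of the comment, are taken verbatim from it):

```shell
# Edit in place, keeping a .bak backup; each -e deletes one of the two
# lines that the install script appended to the file.
profile="$HOME/.bash_profile"
[ -f "$profile" ] || touch "$profile"   # guard so the sketch runs anywhere
sed -i.bak \
    -e '/^# for Myapllication$/d' \
    -e '/^export MYAPP_HOME=/d' \
    "$profile"
```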
| How to remove any string from a file via shell scripts? |
1,494,519,046,000 |
In Windows, if I wanted to find a string across all files in all subdirectories, I would do something like
findstr /C:"the string" /S *.h
However, in Linux (say, Ubuntu) I have found no other way than some piped command involving find, xargs, and grep (an example is at this page: How can I recursively grep through sub-directories?). However, my question is different: is there any single, built-in command that works through this magic, without having to write my shell script?
|
GNU grep allows searching recursively through subdirectories:
grep -r --include='*.h' 'the string' .
| Finding a substring in files across subdirectories with a single built-in command? |
1,494,519,046,000 |
When re-partitioning a USB flash drive on CentOS 6.x I got the following error.
Disk /dev/sdb: 31.5 GB, 31466323968 bytes
255 heads, 63 sectors/track, 3825 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0e693bd9
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 3826 30727808 c W95 FAT32 (LBA)
[root@csc ~]# fdisk /dev/sdb
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): d
Selected partition 1
Command (m for help): 1
1: unknown command
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)
Command (m for help): d
No partition is defined yet!
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-3825, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-3825, default 3825):
Using default value 3825
Command (m for help):
Command (m for help):
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 86
Changed system type of partition 1 to 86 (NTFS volume set)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
|
Looks like this device is mounted. Run umount /dev/sdb1 and try again.
| Re-reading the partition table failed with error 16: Device or resource busy |
1,494,519,046,000 |
In Linux, when a child process terminates and its parent has not yet waited on it, it becomes a zombie process. The child's exit code is stored in the process descriptor.
If a SIGKILL is sent to the child, there is not supposed to be any effect.
Does this mean that the exit code will not be modified by the SIGKILL or will the exit code be modified to indicate that the child exited because it received a SIGKILL?
|
To answer that question, you have to understand how signals are sent to a process and how a process exists in the kernel.
Each process is represented as a task_struct inside the kernel (the definition is in the sched.h header file). That struct holds information about the process, for instance the pid. The important information is the field where the associated signal is stored. This is set only if a signal is sent to the process.
A dead process or a zombie process still has a task_struct. The struct remains, until the parent process (natural or by adoption) has called wait() after receiving SIGCHLD to reap its child process. When a signal is sent, the signal_struct is set. It doesn't matter if the signal is a catchable one or not, in this case.
Signals are evaluated every time when the process runs. Or to be exact, before the process would run. The process is then in the TASK_RUNNING state. The kernel runs the schedule() routine which determines the next running process according to its scheduling algorithm. Assuming this process is the next running process, the value of the signal_struct is evaluated, whether there is a waiting signal to be handled or not. If a signal handler is manually defined (via signal() or sigaction()), the registered function is executed, if not the signal's default action is executed. The default action depends on the signal being sent.
For instance, the SIGSTOP signal's default handler will change the current process's state to TASK_STOPPED and then run schedule() to select a new process to run. Notice, SIGSTOP is not catchable (like SIGKILL), therefore there is no possibility to register a manual signal handler. In case of an uncatchable signal, the default action will always be executed.
To your question:
A defunct or dead process will never be determined by the scheduler to be in the TASK_RUNNING state again. Thus the kernel will never run the signal handler (default or defined) for the corresponding signal, whichever signal it was. Therefore the exit_signal will never be set again. The signal is "delivered" to the process by setting the signal_struct in the task_struct of the process, but nothing else will happen, because the process will never run again. There is no code to run; all that remains of the process is that process struct.
However, if the parent process reaps its children by wait(), the exit code it receives is the one when the process "initially" died. It doesn't matter if there is a signal waiting to be handled.
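This is easy to observe from a shell (a sketch assuming a POSIX shell; note the shell itself may reap the child early, but the reported status is the same either way): the child exits with status 42, a SIGKILL is then aimed at it, and wait still reaps the original 42 rather than 137 (128+9 for SIGKILL):

```shell
# Child exits immediately with status 42 and lingers until we wait() on it.
sh -c 'exit 42' &
pid=$!
sleep 1                          # give the child time to exit
kill -KILL "$pid" 2>/dev/null    # "deliver" SIGKILL to the dead child
wait "$pid"
echo "reaped exit status: $?"    # prints 42, not 137
```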
| What happens when sending SIGKILL to a Zombie Process in Linux? |
1,494,519,046,000 |
In bash, I use arguments that look like
paste <(cat file1 | sort) <(cat file2 | sort)
or
comm <(cat file1 | sort) <(cat file2 | sort)
When I check man comm or man paste, the documentation says the args are indeed FILES.
Question:
Do intermediate temporary files get created (in a temp filesystem or elsewhere on a slower disk) for <(cat file1 | sort) and <(cat file2 | sort)?
What is the name for this <( ) magic? (to lookup its documentation)
Is it specific to bash or does it work across other shells?
|
This is called process substitution.
3.5.6 Process Substitution
Process substitution allows a process’s input or output to be referred to using a filename.
The process list is run asynchronously, and its input or output appears as a filename. This filename is passed as an argument to the current command as the result of the expansion. If the >(list) form is used, writing to the file will provide input for list. If the <(list) form is used, the file passed as an argument should be read to obtain the output of list. Note that no space may appear between the < or > and the left parenthesis, otherwise the construct would be interpreted as a redirection. Process substitution is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files.
It is not just a bash thing, as it originally appeared in ksh, but it's not in the POSIX standard.
Under the hood, process substitution has two implementations. On systems which support /dev/fd (most Unix-like systems) it works by calling the pipe() system call, which returns a file descriptor $fd for a new anonymous pipe, then creating the string /dev/fd/$fd, and substitutes that on the command line. On systems without /dev/fd support, it calls mkfifo with a new temporary filename to create a named pipe, and substitutes this filename on the command line.
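Both points are easy to see from the command line (a sketch; process substitution is bash/ksh/zsh syntax, not POSIX sh, hence the explicit bash -c wrappers):

```shell
# echo receives the substituted filename as an ordinary argument,
# so it prints the name of the pipe rather than reading from it.
bash -c 'echo <(:)'     # on Linux, something like /dev/fd/63
# A typical use: compare two command outputs without managing temp files.
bash -c 'diff <(printf "x\n") <(printf "x\n") && echo identical'
```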
| How does the "<( cmd )" pattern work in bash? [duplicate] |
1,494,519,046,000 |
I am connected to my Debian 9 server with Virtualmin by SSH from my PC. If I go away for about 2 minutes, when I return the SSH session is disconnected...
I tried changing ssh config on server and on client... Nothing helped...
Where should I look for the problem? Could it be some networking setting, or maybe the router?
|
Some over-zealous routers like to drop TCP connections that are idle for too long (i.e. don't transmit any data). It might be because they assume the user only uses things like HTTP, where the connection is often closed after a single query is complete.
Assuming OpenSSH, use the ClientAliveInterval and ClientAliveCountMax directives in sshd_config, or equivalently ServerAliveInterval and ServerAliveCountMax in the client side config (~/.ssh/config or /etc/ssh/ssh_config) to enable protocol-level keepalive packets.
They're actually meant to detect if the remote host has gone away, but since they cause messages to be sent when the connection is otherwise idle, they also work to prevent the connection from being seen as idle by outside devices.
*AliveInterval sets the interval (in seconds) after which the client/server sends a query to the remote, and *AliveCountMax sets the number of unanswered queries after which the client/server drops the connection as inactive.
Something like these values should do:
ClientAliveInterval 15
ClientAliveCountMax 4
| SSH keeps disconnecting after few minutes of inactivity |
1,494,519,046,000 |
The CPU is an Intel Core i7-3770 @ 3.40GHz. It has 4 cores and each core has 2 threads. Here is the dmidecode output:
# dmidecode -t 4
# dmidecode 2.9
SMBIOS 2.7 present.
Handle 0x0042, DMI type 4, 42 bytes
Processor Information
Socket Designation: SOCKET 0
Type: Central Processor
Family: <OUT OF SPEC>
Manufacturer: Intel(R) Corporation
ID: A9 06 03 00 FF FB EB BF
Version: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
Voltage: 1.1 V
External Clock: 100 MHz
Max Speed: 3800 MHz
Current Speed: 3400 MHz
Status: Populated, Enabled
Upgrade: <OUT OF SPEC>
L1 Cache Handle: 0x003F
L2 Cache Handle: 0x003E
L3 Cache Handle: 0x0040
Serial Number: Not Specified
Asset Tag: Fill By OEM
Part Number: Fill By OEM
Core Count: 4
Core Enabled: 4
Thread Count: 8
Characteristics:
64-bit capable
That makes 8 logical cores in the system, as shown in /proc/cpuinfo. But can anyone tell me why the cpu MHz of a core is 1600MHz? I guess there are 2 threads in a core, so a hardware thread's frequency may be about half of the core's? How is this number calculated?
processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
stepping : 9
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 3
cpu cores : 4
apicid : 7
initial apicid : 7
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 x2apic popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 7013.49
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
Also, here is the output of lshw and lscpu command. There are also 1600MHz mentioned.
lshw info:
#lshw -class processor
*-cpu
description: CPU
product: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
vendor: Intel Corp.
physical id: 42
bus info: cpu@0
version: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
slot: SOCKET 0
size: 1600MHz
capacity: 3800MHz
width: 64 bits
clock: 100MHz
capabilities: fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp x86-64 constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 x2apic popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid cpufreq
lscpu info:
#lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
CPU(s): 8
Thread(s) per core: 2
Core(s) per socket: 4
CPU socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 58
Stepping: 9
CPU MHz: 1600.000
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
|
Modern CPUs can operate at several different frequencies, changing dynamically with the load requirements (see wikipedia). Intel calls this SpeedStep. When a CPU has little to do it will run at a lower frequency to reduce power (and therefore heat and fan noise).
So the 1600MHz you see is probably because all the CPUs are not doing much, but it can rise to some maximum like 3400MHz, determined by the CPU and motherboard architecture, and temperature.
I'm not sure where /proc/cpuinfo gets its single value from, but
you can see individual cpu info in files /sys/devices/system/cpu/cpu*/cpufreq/, eg for the current frequency:
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq
and read more about Linux cpu frequency scaling software in archlinux.
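For example, a small loop over those sysfs files shows each logical CPU's current value next to its allowed range (a sketch; the cpufreq directory only exists where frequency scaling is supported, so e.g. some VMs will print nothing):

```shell
for d in /sys/devices/system/cpu/cpu[0-9]*/cpufreq; do
    [ -d "$d" ] || continue            # no cpufreq support here
    cpu=${d%/cpufreq}; cpu=${cpu##*/}  # e.g. "cpu0"
    printf '%s: cur=%s kHz (min=%s, max=%s)\n' "$cpu" \
        "$(cat "$d/scaling_cur_freq")" \
        "$(cat "$d/scaling_min_freq")" \
        "$(cat "$d/scaling_max_freq")"
done
```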
| What does "cpu MHz" field mean in the /proc/cpuinfo of a hyper-threading cpu? |
1,494,519,046,000 |
In my iptables script I have been experimenting with writing as finely grained rules as possible. I limit which users are allowed to use which services, partly for security and partly as a learning exercise.
Using iptables v1.4.16.2 on Debian 6.0.6 running the 3.6.2 kernel.
However I've hit an issue I don't quite understand yet.. .
outgoing ports for all users
This works perfectly fine. I do not have any generic state tracking rules.
## Outgoing port 81
$IPTABLES -A OUTPUT -p tcp --dport 81 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
$IPTABLES -A INPUT -p tcp --sport 81 -s $MYIP -m conntrack --ctstate ESTABLISHED -j ACCEPT
outgoing ports with user matching
## outgoing port 80 for useraccount
$IPTABLES -A OUTPUT --match owner --uid-owner useraccount -p tcp --dport 80 -m conntrack --ctstate NEW,ESTABLISHED --sport 1024:65535 -j ACCEPT
$IPTABLES -A INPUT -p tcp --sport 80 --dport 1024:65535 -d $MYIP -m conntrack --ctstate ESTABLISHED -j ACCEPT
This allows port 80 out only for the account "useraccount", but rules like this for TCP traffic have issues.
## Default outgoing log + block rules
$IPTABLES -A OUTPUT -j LOG --log-prefix "BAD OUTGOING " --log-ip-options --log-tcp-options --log-uid
$IPTABLES -A OUTPUT -j DROP
The Issue
The above works, the user "useraccount" can get files perfectly fine. No other users on the system can make outgoing connections to port 80.
useraccount@host:$ wget http://cachefly.cachefly.net/10mb.test
But the wget above leaves x7 dropped entries in my syslog:
Oct 18 02:00:35 xxxx kernel: BAD OUTGOING IN= OUT=eth0 SRC=xx.xx.xx.xx DST=205.234.175.175 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=12170 DF PROTO=TCP SPT=37792 DPT=80 SEQ=164520678 ACK=3997126942 WINDOW=979 RES=0x00 ACK URGP=0
I don't get these drops for similar rules with UDP traffic. I already have rules in place that limit which users can make DNS requests.
The dropped outgoing ACK packets seem to be coming from the root account (URGP=0) which I don't understand. Even when I swap useraccount for root.
I believe that ACK packets are categorised as new because conntrack starts tracking connections after the 3rd step of the 3-way handshake, but why are they being dropped?
Can these drops be safely ignored?
Edit
So I often see rules like these, which work fine for me:
$IPTABLES -A OUTPUT -s $MYIP -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPTABLES -A INPUT -p tcp -m tcp --sport 80 -d $MYIP -m state --state ESTABLISHED -j ACCEPT
I swapped "-m state --state" for "-m conntrack --ctstate" as state match is apparently obsolete.
Is it best practice to have generic state tracking rules? Are the rules above not considered correct?
For tight control over outgoing users connections would something like this be better?
$IPTABLES -A INPUT -m conntrack --ctstate ESTABLISHED -j ACCEPT
$IPTABLES -A OUTPUT -m conntrack --ctstate ESTABLISHED -j ACCEPT
$IPTABLES -A OUTPUT -p tcp --dport 80 -s $SERVER_IP_TUNNEL -m conntrack --ctstate NEW -m owner --uid-owner useraccount -j ACCEPT
$IPTABLES -A OUTPUT -p tcp --dport 80 -s $SERVER_IP_TUNNEL -m conntrack --ctstate NEW -m owner --uid-owner otheraccount -j ACCEPT
|
To cut a long story short, that ACK was sent when the socket didn't belong to anybody. Instead of allowing packets that pertain to a socket that belongs to user x, allow packets that pertain to a connection that was initiated by a socket from user x.
The longer story.
To understand the issue, it helps to understand how wget and HTTP requests work in general.
In
wget http://cachefly.cachefly.net/10mb.test
wget establishes a TCP connection to cachefly.cachefly.net, and once established sends a request in the HTTP protocol that says: "Please send me the content of /10mb.test (GET /10mb.test HTTP/1.1) and by the way, could you please not close the connection after you're done (Connection: Keep-alive). The reason it does that is because in case the server replies with a redirection for a URL on the same IP address, it can reuse the connection.
Now the server can reply with either, "here comes the data you requested, beware it's 10MB large (Content-Length: 10485760), and yes OK, I'll leave the connection open". Or if it doesn't know the size of the data, "Here's the data, sorry I can't leave the connection open but I'll tell when you can stop downloading the data by closing my end of the connection".
In the URL above, we're in the first case.
So, as soon as wget has obtained the headers for the response, it knows its job is done once it has downloaded 10MB of data.
Basically, what wget does is read the data until 10MB have been received and exit. But at that point, there's more to be done. What about the server? It's been told to leave the connection open.
Before exiting, wget closes (via the close system call) the file descriptor for the socket. Upon the close, the system finishes acknowledging the data sent by the server and sends a FIN to say: "I won't be sending any more data". At that point close returns and wget exits. There is no socket associated with the TCP connection anymore (at least not one owned by any user). However it's not finished yet. Upon receiving that FIN, the HTTP server sees end-of-file when reading the next request from the client. In HTTP, that means "no more requests, I'll close my end". So it sends its FIN as well, to say, "I won't be sending anything either, that connection is going away".
Upon receiving that FIN, the client sends a "ACK". But, at that point, wget is long gone, so that ACK is not from any user. Which is why it is blocked by your firewall. Because the server doesn't receive the ACK, it's going to send the FIN over and over until it gives up and you'll see more dropped ACKs. That also means that by dropping those ACKs, you're needlessly using resources of the server (which needs to maintain a socket in the LAST-ACK state) for quite some time.
The behavior would have been different if the client had not requested "Keep-alive" or the server had not replied with "Keep-alive".
As already mentioned, if you're using the connection tracker, what you want to do is let every packet in the ESTABLISHED and RELATED states through and only worry about NEW packets.
If you allow NEW packets from user x but not packets from user y, then other packets for established connections by user x will go through, and because there can't be established connections by user y (since we're blocking the NEW packets that would establish the connection), there will not be any packet for user y connections going through.
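Putting that together, the question's tight-control variant boils down to a rule set like the following (a dry-run sketch: IPTABLES=echo just prints the rules; on the real box it would be the iptables binary, run as root, with the source-IP options from the original script):

```shell
IPTABLES=echo   # dry run; use IPTABLES=/sbin/iptables (as root) for real
# Let conntrack wave through everything already established, both ways;
# this also covers the late ACK sent after wget has exited.
$IPTABLES -A INPUT  -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
$IPTABLES -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Gate only the NEW packets, per user.
$IPTABLES -A OUTPUT -p tcp --dport 80 -m conntrack --ctstate NEW \
    -m owner --uid-owner useraccount -j ACCEPT
$IPTABLES -A OUTPUT -j LOG --log-prefix "BAD OUTGOING " --log-uid
$IPTABLES -A OUTPUT -j DROP
```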
| Iptables: matching outgoing traffic with conntrack and owner. Works with strange drops |
1,494,519,046,000 |
I have a computer with a battery power supply that allows running the computer for approximately one minute after power loss. I want to trigger suspend-to-disk immediately after power loss so it can be resumed later. The initrd (default Devuan initrd) looks for a suspend signature in the swap partition and resumes from it when the signature is found. I am not sure what happens when power is completely interrupted while writing data to the swap partition. That could happen when the battery fails or the system hangs while suspending. Will the system resume from the corrupted swap partition or will it just ignore the swap partition? I consider the second option better – it is better to have an incorrectly unmounted filesystem than a corrupted system state.
Is the signature written to the swap partition after or before the other data? Does it use checksums?
|
If power is lost prior to explicitly entering S4 or S5 state (hereafter just referred to as "hibernation state" for simplicity), then the partially written data in the swap partition will be ignored completely, because there's no hibernation state persisted. Swap partitions and files are also volatile, and the data in them will be ignored after a reboot without a hibernation state.
In the kernel, restoration from hibernation is requested by the configured platform_hibernation_ops->leave, which is only called on resumption from hibernation state. For example, on most modern platforms where S5 is supported, we configure a reboot notifier.
Losing power prior to hibernation state being entered (and thus the hibernation file being completely written) won't have configured any hibernation to resume from, so there's no chance it will try to thaw using the partially-filled swap space. As such, you don't have to worry about the kernel trying to restore from a partially complete hibernation.
| Is it safe to boot computer that lost power while suspending to disk? |
1,494,519,046,000 |
I need to find out how many services are listening on my interfaces (IPv4 only, not localhost).
$ ifconfig
ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.129.56.137 netmask 255.255.0.0 broadcast 10.129.255.255
inet6 dead:beef::250:56ff:feb9:8c07 prefixlen 64 scopeid 0x0<global>
inet6 fe80::250:56ff:feb9:8c07 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:b9:8c:07 txqueuelen 1000 (Ethernet)
RX packets 3644 bytes 330312 (330.3 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3198 bytes 679711 (679.7 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 15304 bytes 895847 (895.8 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 15304 bytes 895847 (895.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
$ nmap 10.129.56.137
Starting Nmap 7.60 ( https://nmap.org ) at 2020-12-05 05:23 UTC
Nmap scan report for 10.129.56.137
Host is up (0.000086s latency).
Not shown: 991 closed ports
PORT STATE SERVICE
21/tcp open ftp
22/tcp open ssh
80/tcp open http
110/tcp open pop3
139/tcp open netbios-ssn
143/tcp open imap
445/tcp open microsoft-ds
993/tcp open imaps
995/tcp open pop3s
Nmap done: 1 IP address (1 host up) scanned in 10.57 seconds
I thought the answer was 9 but there must be a way to find the correct answer.
Cheers in advance!
|
netstat -tunleep4 | grep -v "127\.0\.0"
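A roughly equivalent count can be obtained with ss from iproute2 (a sketch, assuming you only care about IPv4 TCP/UDP listening sockets and want to drop loopback):

```shell
# -t TCP, -u UDP, -l listening only, -n numeric, -4 IPv4 only;
# tail drops the header line, grep -vc counts non-loopback listeners
ss -tuln4 | tail -n +2 | grep -vc "127\.0\.0"
```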
| How many services are listening on the target system on all interfaces? (Not on localhost and IPv4 only) |
1,494,519,046,000 |
I have an Alienware Aurora R7, running Arch Linux. On shutdown, the kernel panics, with something like this in the panic message (omitting timestamps):
BUG: Unable to handle kernel NULL pointer dereference at (null)
IP: i2c_dw_isr+0x3ef/0x6d0
PGD 0 P4D 0
Oops: 0000 [#1] PREEMPT SMP PTI
From various sources (1, 2), this seems to be related to the i2c-designware-core module, and the workaround is blacklisting it. However, with recent kernels (seems to be 4.10 and above), this doesn't seem to be built as a module:
# uname -srv
Linux 4.15.2-2-ARCH #1 SMP PREEMPT Thu Feb 8 18:54:52 UTC 2018
# zgrep DESIGNWARE /proc/config.gz
CONFIG_I2C_DESIGNWARE_CORE=y
CONFIG_I2C_DESIGNWARE_PLATFORM=y
CONFIG_I2C_DESIGNWARE_SLAVE=y
CONFIG_I2C_DESIGNWARE_PCI=m
CONFIG_I2C_DESIGNWARE_BAYTRAIL=y
CONFIG_SPI_DESIGNWARE=m
CONFIG_SND_DESIGNWARE_I2S=m
CONFIG_SND_DESIGNWARE_PCM=y
So I have resorted to making the kernel reboot on panic:
# cat /proc/cmdline
root=UUID=e5018f7e-5838-4a47-b146-fc1614673356 rw initrd=/intel-ucode.img initrd=/initramfs-linux.img panic=10 sysrq_always_enabled=1 printk.devkmsg=on
(The odd paths in the /proc/cmdline are because I boot directly from UEFI, with entries created using efibootmgr. The paths are rooted at /boot, where my ESP is mounted.)
This seems to be something for touchpads, but I don't have a touchpad and won't get one. What can I do to disable this thing? Do I have to build a custom kernel?
Since linux-lts is also newer than 4.10, (4.14, currently), there doesn't seem to be an easy way to install an older kernel either, where blacklisting might presumably work.
Using nolapic as a kernel parameter solves the shutdown panic problem, but it causes the system to freeze a few minutes after boot, so I can't use it.
|
After reading kernel sources, I found a function we need to blacklist!
Thanks to Stephen Kitt for the hint about initcall_blacklist.
Add initcall_blacklist=dw_i2c_init_driver to the kernel command line. This works for me on kernel 4.15.0.
For anyone else who'll find this answer. You can do it by editing /etc/default/grub:
Run in the terminal: sudo -H gedit /etc/default/grub.
Append blacklist string to the GRUB_CMDLINE_LINUX_DEFAULT:
GRUB_CMDLINE_LINUX_DEFAULT="… initcall_blacklist=dw_i2c_init_driver".
Save the file, close the editor.
Run in the terminal: sudo update-grub.
Reboot and test!
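After rebooting, a quick way to confirm the parameter actually made it onto the kernel command line (a hedged check, not part of the original answer):

```shell
# Should print "initcall_blacklist=dw_i2c_init_driver" if the boot entry took effect
grep -o 'initcall_blacklist=[^ ]*' /proc/cmdline
```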
| How do I disable I2C Designware support when it's not built as a module? |
1,494,519,046,000 |
I know that I can interrupt a make process anytime without having to recompile the entire source tree again. As I know, make only compiles a target if it's not compiled yet, or the source code is modified after the last compilation.
But if I interrupt make, there will surely be one or more (depending on the concurrency level) half-ready binaries. What does it do with them the next time I run make? Or does it finish the current target when I press Ctrl+C to avoid partly compiled binaries?
|
In simple terms, you can think of make as having a (possibly large) number of steps, where each step takes a number of files as input and creates one file as output.
A step might be "compile file.c to file.o" or "use ld to link main.o and file.o into program". If you interrupt make with Ctrl+C, then the currently executing step will be terminated, which will (or should) remove the output file it was working on. There are usually no "half-ready binaries" left behind.
When you restart make, it will look at the timestamps of all the input and output files and rerun the steps where:
an input file has a newer timestamp than the output file
the output file does not exist
This generally means that if a step takes a long time to run (it's rare on modern computers, but the ld step for large programs could easily take many minutes when make was designed), then stopping and restarting make will start that step over from the beginning.
The reality of your average Makefile is considerably more complicated than the above description, but the fundamentals are the same.
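The timestamp logic can be seen with a toy Makefile, where a cp command stands in for a compiler:

```shell
cd "$(mktemp -d)"
printf 'out: in\n\tcp in out\n' > Makefile
echo v1 > in
make        # out is missing, so the step runs
make        # nothing to do: out is not older than in
touch in    # make the input newer again
make        # the step reruns from the beginning
```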
| How does make continue compilation? |
1,494,519,046,000 |
I am porting C/pro*c code from UNIX to Linux. The code is:
#define __NFDBITS (8 * sizeof(unsigned long))
#define __FD_SETSIZE 1024
#define __FDSET_LONGS (__FD_SETSIZE/__NFDBITS)
typedef struct {
unsigned long fds_bits [__FDSET_LONGS];
} __kernel_fd_set;
typedef __kernel_fd_set fd_set_1;
int main()
{
fd_set_1 listen_set;
int listen_sd;
int socket_id;
FD_ZERO(&listen_set);
socket_id = t_open("/dev/tcp", O_RDWR|O_NONBLOCK, (struct t_info *) 0);
if ( socket_id <0 )
{
exit(FAILURE);
}
return 0;
}
In UNIX the value of socket_id is > 0; in Linux it is -1. The reason is that in UNIX there is a /dev/tcp, which is not present on Linux. Also, in UNIX this tcp file is a character special file, which is different from a normal file.
Is there any way to create same character special file in Linux as in UNIX or how to proceed this further?
|
t_open() and its associated /dev/tcp and such are part of the TLI/XTI interface, which lost the battle for TCP/IP APIs to BSD sockets.
On Linux, there is a /dev/tcp of sorts. It isn't a real file or kernel device. It's something specially provided by Bash, and it exists only for redirections. This means that even if one were to create an in-kernel /dev/tcp facility, it would be masked in interactive use 99%[*] of the time by the shell.
The best solution really is to switch to BSD sockets. Sorry.
You might be able to get the strxnet XTI emulation layer to work, but you're better off putting your time into getting off XTI. It's a dead API, unsupported not just on Linux, but also on the BSDs, including OS X.
(By the way, the strxnet library won't even build on the BSDs, because it depends on LiS, a component of the Linux kernel. It won't even configure on a stock BSD or OS X system, apparently because it also depends on GNU sed.)
[*] I base this wild guess on the fact that Bash is the default shell for non-root users in all Linux distros I've used. You therefore have to go out of your way on Linux, as a rule, to get something other than Bash.
| /dev/tcp not present in Linux |
1,494,519,046,000 |
Is there a way to mount multiple hard drives to a single mount point? Let's say I run out of space on /home and decide to add an extra hard drive to the computer. How do I scale the space on a mount point? If I use RAID, can I add drives on the fly to increase space as I run out of them? Is there an alternative to using RAID if I am not interested in maintaining a high level of redundancy?
|
You can use LVM for this. It was designed to separate physical drives from logical volumes.
With lvm, you can :
Add a fresh new physical drive to a pool (named Volume Group in LVM terminology)
pvcreate /dev/sdb
vgextend my_vg /dev/sdb
Extend space of a logical volume
lvextend ...
And finish with an online resize of your filesystem
resize2fs /dev/my_vg/my_lv
But beware, it's not a magic bullet: it is far harder to shrink a filesystem, even with LVM.
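For completeness, the whole sequence might look like this, assuming a new disk /dev/sdb, an existing volume group my_vg, and a logical volume home carrying an ext4 filesystem (all names are placeholders; do not run this blindly):

```shell
pvcreate /dev/sdb                      # initialise the new disk as a physical volume
vgextend my_vg /dev/sdb                # add it to the existing volume group
lvextend -l +100%FREE /dev/my_vg/home  # grow the logical volume into the new space
resize2fs /dev/my_vg/home              # grow the ext4 filesystem, online
```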
| Mounting multiple devices at a single mount point on Linux |
1,494,519,046,000 |
How can I use grep to find a string in files, but only search in the first line of these files?
|
I have implemented the comment of @Rob and succeeded in getting the desired result.
Replace string by your string.
grep -Rin "string" . | grep ":1:.*string" > result.txt
This does a recursive, case-insensitive search for string in the current directory, printing line numbers. The second grep then keeps only the matches that occur on line 1 of a file, and the output is saved to a file called result.txt.
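An alternative that only ever reads the first line of each file (a sketch; adjust the glob, and drop -i for a case-sensitive match):

```shell
for f in ./*; do
  # print the file name if its first line matches "string"
  [ -f "$f" ] && head -n 1 -- "$f" | grep -qi "string" && printf '%s\n' "$f"
done
```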
| How can I use grep to search only on the first line of files for a specific string? |
1,494,519,046,000 |
in the cp manpage, it lists the -f/--force option as:
if an existing destination file cannot be opened, remove it and try again
for the --remove-destination option it says:
remove each existing destination file before attempting to open it (contrast with --force)
So, the former first checks if it can be opened, if not, it deletes in anyway, while the latter just bypasses that step. I combined each with the -i option, and in both cases, it indicates what the permissions of the files are if it's write-protected.
The latter would seem to be more efficient, especially when recursively copying/overwriting large directories, so why maintain both options? What's the advantage of checking something it's going to overwrite anyway?
|
There's a distinction between the two (emphasis mine):
if an existing destination file cannot be opened, remove it and try again
remove each existing destination file before attempting to open it
In the first case, if the file can be opened, cp will attempt to replace only the contents. cp is not going to remove the file unnecessarily. This will retain the permissions and ownerships of the original file unless you specify that they're to be copied too.
The second case is useful when the contents can't be read (such as dangling symlinks).
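The difference is easy to see with a symlinked destination: plain cp (and -f, when the open succeeds) writes through the link, while --remove-destination replaces the link itself. A small sketch:

```shell
cd "$(mktemp -d)"
echo original > target
ln -s target link
echo new > src

cp src link                       # follows the symlink: target now contains "new"
cat target                        # -> new

echo original > target
cp --remove-destination src link  # unlinks the symlink first; target is untouched
cat target                        # -> original
ls -l link                        # link is now a regular file, not a symlink
```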
| how is cp -f different from cp --remove-destination? |
1,494,519,046,000 |
Is there any way to overload or wrap the ls command so that it will highlight / underline / otherwise make obvious the last three modified files?
I know that I can simply ls -rtl to order by reverse modification time, but I usually do need an alphabetical list of files despite the fact that I would like to quickly identify the last file that myself or another dev modified.
|
The following seems to work for me
grep --color -E -- "$(ls -rtl | tail -n3)|$" <(ls -l)
It uses grep with highlighting on the input ls -l, with a regular expression that matches any of the three most recently modified files. It also matches the end-of-line $ in order to print every line (highlighting only the matching ones).
You can also put it in a function, such that you can use lll * with multiple arguments, just as you would use ls
function lll ()
{
command grep --color -E -- "$(ls -rtl $@ | tail -n3)|$" <(ls -l $@)
}
| Highlight the three last updated files in ls output |
1,494,519,046,000 |
I'm trying to add a user to a group wireshark as explained here.
I have already executed multiple different commands and was under the impression that the user was successfully added.
~$ sudo adduser $USER wireshark
The user `user' is already a member of `wireshark'.
And have re logged into the system.
~$ groups
user adm cdrom sudo dip plugdev lpadmin sambashare
but it seems as if the user hasn't been added to the group (which is in contrast with the first command). Also the assumption that it wasn't added is supported by Wireshark not working correctly.
Which should I consider correct?
|
It started to show the appropriate groups only after a system restart. Logging out and back in wasn't enough.
Don't know what to make of it.
| Can't add user to group without a restart? |
1,494,519,046,000 |
I want to log in as a member of a non-primary group, so that the files I create are group-owned by that non-primary group of the current user.
-rwxr-xr-x 2 gowtham gowtham 4096 Sep 5 14:48 defaultNewFile
drwxr-xr-x 2 gowtham specificgrp 4096 Sep 5 14:50 requiredNewFile
I don't want to change the group ownership with chown after the file is created.
I am more interested in logging in as a member of a non-primary group.
|
Depending on why you want to do this, there may be another way.
If the setgid bit of a directory's permissions is set, then all files created in that directory will be owned by the same group that owns the directory.
$ chgrp specificgrp .
$ chmod g+s .
$ touch newfile
$ ls -l newfile
-rw-r--r-- 1 gowtham specificgrp 0 Sep 5 14:48 newfile
$ ls -ld .
drwxr-sr-x 2 gowtham specificgrp 4096 Sep 5 14:48 .
| how to login as a member of specific group |
1,494,519,046,000 |
My /proc/cpuinfo says my processor is 800 MHz, when I know the thing is actually 2.8 GHz. This is due to idle throttling, where the CPU clock is slowed when idle to save power.
Is there a way in Linux to find the true cpu speed?
|
The file /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq contains the maximum frequency in KHz (that directory, /sys/devices/system/cpu/cpu0/cpufreq, also contains a bunch of other cpu-frequency related information). It contains just a single ASCII number, so is much easier to parse than the stuff in /proc/cpuinfo or the dmesg output.
Note that this info is per-cpu, but of course maximum frequency will be the same for all cpus on most systems, so I just used cpu0.
BTW, on my system, the maximum frequency can be read by any user, but the current frequency (.../cpuinfo_cur_freq) can only be read by root; I don't know if this is true on all systems...
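For example, to print the value in MHz (this just divides the KHz figure from sysfs by 1000):

```shell
# Max frequency of cpu0 in MHz; the sysfs file holds a single KHz number
awk '{ printf "%.0f MHz\n", $1 / 1000 }' \
    /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
```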
| How to find processor speed on Linux w/throttling |
1,494,519,046,000 |
As probably most of you I have been long using Ubuntu. I'm not an expert, but I have been using different distros until I settled with Ubuntu.
I started using SuSE 5.x, Conectiva (that later became Mandriva, so it seems), RedHat, Mac OS X (yeah, I know, not Linux) and Ubuntu running mostly as a VM in the last couple of years.
But ever since SUSE released the SUSE Studio I was tempted to switch back to it. It is way too convenient to keep your installation in the cloud and download your system ready to go.
Here is my question. What to expect from the switch. I know that SUSE uses RPM as its package manager, and I have no idea of the completeness of its repository compared to Ubuntu.
When trying openSUSE on a VM I also miss the sudo command, but I am sure that it must have been some lack of configuration on my part.
So, what else would be different? My main use for Linux is as a desktop and a bit of Java and Ruby programming.
|
I have use openSUSE for several years and have dabbled in Ubuntu and other distributions.
What to expect:
Centralised configuration is possible using Yast. You may or may not like this - it seems to generate quite strong opinions in a lot of people but I don't care about it much.
Different desktops which work. The openSUSE DVD includes several desktops, and each one seems to work properly. I have seen people having problems with programs which work in Ubuntu but not in Kubuntu etc. This may be relevant if you are using virtual machines over the cloud and want a lighter desktop.
sudo works differently (as you seem to have noticed). The most obvious point is that root has a password in openSUSE, and you use that rather than the user password (although the root password is usually the same as the first user). A less obvious point is that the path (or permissions or something?) is not changed to be root's rather than the user's. (If you want to run ifconfig for example you have to su then ifconfig rather than sudo ifconfig.)
There seems to be less stuff in the repositories; but there is everything I want, so I don't know what isn't there. Perhaps there are only 50 text editors rather than 100.
| What should I expect if I switch from Ubuntu to openSuse [closed] |
1,494,519,046,000 |
I have put a CD into my drive. How can I find the rainbow book color on Linux (Red book/Yellow book/Blue book/...)?
|
You can use cd-info from the libcdio project. This will list all your CD’s tracks, and for each one, give you information about its contents: CD-DA (red book), Photo CD (beige), Video CD (white), etc.
| How can I determine the rainbow book color of a CD on Linux? |
1,494,519,046,000 |
When I apply default ACL in a directory I see default:mask or just mask in the following two scenario.
Scenario 1
-bash-4.2$ ls -ld test/
drwxr-x---. 2 test test 4096 Oct 15 19:12 test/
-bash-4.2$ setfacl -d -m u:arif:rwx test/
-bash-4.2$ getfacl --omit-header test
user::rwx
group::r-x
other::---
default:user::rwx
default:user:arif:rwx
default:group::r-x
default:mask::rwx
default:other::---
Scenario 2
-bash-4.2$ ls -dl dir/
drwxr-x---. 2 test test 4096 Oct 15 18:17 dir/
-bash-4.2$ getfacl dir
# file: dir
# owner: test
# group: test
user::rwx
group::r-x
other::---
-bash-4.2$ setfacl -m user:arif:rwx dir
-bash-4.2$ getfacl --omit-header dir
user::rwx
user:arif:rwx
group::r-x
mask::rwx
other::---
So what is the purpose of mask here?
|
What
This 3-bit ACL system has its roots in TRUSIX. Other ACL systems, such as the NFS4-style ones in FreeBSD, MacOS, AIX, Illumos, and Solaris, work differently and this concept of a mask access control entry is not present.
The mask is, as the name says, a mask that is applied to mask out permissions granted by access control entries for users and groups. It is the maximum permission that may be granted by any access control entry, other than by a file owner or an "other" entry. Its 3 bits are anded with the 3 bits of these other entries.
So, for example, if a user is granted rw- by an access control entry, but the mask is r--, the user will only actually have r-- access. Conversely, if a user is only granted --x by an access control entry, a mask of rwx does not grant extra permissions and the user has just --x access.
The default mask on a parent directory is the mask setting that is applied to things that are created within it. It is a form of inheritance.
Why
It's a shame that IEEE 1003.1e never became a standard and was withdrawn in 1998. In practice, nineteen years on, it's a standard that a wide range of operating systems — from Linux through FreeBSD to Solaris (alongside the NFS4-style ACLs in the latter cases) — actually implement.
IEEE 1003.1e working draft #17 makes for interesting reading, and I recommend it. In appendix B § 23.3 the working group provides a detailed, eight page, rationale for the somewhat complex way that POSIX ACLs work with respect to the old S_IRWXG group permission flags. (It's worth noting that the TRUSIX people provided much the same analysis ten years earlier.) This covers the raison d'être for the mask, which I will only précis here.
Traditional Unix applications expect to be able to deny all access to a file, named pipe, device, or directory with chmod(…,000). In the presence of ACLs, this only turns off all user and group permissions if there is a mask and the old S_IRWXG maps to it. Without this, setting the old file permissions to 000 wouldn't affect any non-owner user or group entries and other users would, surprisingly, still have access to the object. Temporarily changing a file's permission bits to no access with chmod 000 and then changing them back again was an old file locking mechanism, used before Unixes gained advisory locking mechanisms, that — as you can see — people still use even in the 21st century. (Advisory locking has been easily usable from scripts with portable well-known tools such as setlock since the late 1990s.)
Traditional Unix scripts expect to be able to run chmod go-rwx and end up with only the object's owner able to access the object. Again, this doesn't work unless there is a mask and the old S_IRWXG permissions map to it; because otherwise that chmod command wouldn't turn off any non-owner user or group access control entries, leading to users other than the owner and non-owning groups retaining access to something that is expected to be accessible only to the owner. And again — as you can see — this sort of chmod command was still the received wisdom twelve years later. The rationale still holds.
Other approaches without a mask mechanism have flaws.
An alternative system where the permission bits were otherwise separate from and anded with the ACLs would require file permission flags to be rwxrwxrwx in most cases, which would confuse the heck out of the many Unix applications that complain when they see what they think to be world-writable stuff.
An alternative system where the permission bits were otherwise separate from and ored with the ACLs would have the chmod(…,000) problem mentioned before.
Hence an ACL system with a mask.
Further reading
Craig Rubin (1989-08-18). Rationale for Selecting Access Control List Features for the Unix System. NCSC-TG-020-A. DIANE Publishing. ISBN 9780788105548.
Portable Applications Standards Committee of the IEEE Computer Society (October 1997).
Draft Standard for Information Technology—Portable Operating System Interface (POSIX)—Part 1: System Application Program Interface (API)— Amendment #: Protection, Audit and Control Interfaces [C Language] IEEE 1003.1e. Draft 17.
Winfried Trümper (1999-02-28). Summary about Posix.1e
https://unix.stackexchange.com/a/406545/5132
https://unix.stackexchange.com/a/235284/5132
How can I grant owning group permissions when POSIX ACLs are applied?
Performing atomic write operations in a file in bash
| What is the exact purpose of `mask` in file system ACL? |
1,375,634,929,000 |
My Confusion
a.out is the output of the programs which I execute in my Ubuntu 12.10.
In Red Hat system when I execute a.out in the terminal it executes. While in Ubuntu I have to execute ./a.out to get the output. 'a.out' doesn't work.
Can somebody explain what is the difference between the commands?
|
The behaviour you experience most likely depends on differences in the environment variable $PATH. $PATH is essentially a colon-separated list of directories, which are searched in order for a particular executable when a program is invoked using an exec operating system call. $PATH can contain relative path components, typically . or an empty string, which both refer to the current working directory. If the current directory is part of $PATH, files in the current working directory can be executed by just their name, e.g. a.out. If the current directory is not in $PATH, one must specify a relative or absolute path to the executable, e.g. ./a.out.
Having relative path components in $PATH has potential security implications, as executables in directories earlier in $PATH overshadow executables in directories later in the list. Consider for example an attack on a system where the current working directory path . precedes /bin in $PATH. If an attacker manages to place a malicious script sharing a name with a commonly used system utility, for instance ls, in the current directory (which is typically far easier than replacing binaries in root-owned /bin), the user will inadvertently invoke the malicious script when the intention is to invoke the system ls. Even if . is only appended at the end of $PATH, a user could be tricked into inadvertently invoking an executable in the current directory which shares a name with a common utility not found on that particular system. This is why it is common not to have relative path components in the default $PATH.
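The effect is easy to demonstrate (a sketch; prepending . to $PATH is shown only for illustration, and is exactly the risky configuration described above):

```shell
cd "$(mktemp -d)"
printf '#!/bin/sh\necho hello\n' > a.out
chmod +x a.out
./a.out              # always works: explicit relative path, no $PATH search
PATH=".:$PATH"       # add the current directory to the search path
export PATH
a.out                # now found via the $PATH search as well
```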
| Difference between a.out and ./a.out |
1,375,634,929,000 |
export LD_PRELOAD=/usr/lib/libtsocks.so
It's ok that I can export in this way, but how can I make it permanent? I want LD_PRELOAD to still be changed after a reboot. I'm using Ubuntu and Fedora
|
Ordinarily, you'd put your "export" line into whatever shell startup file is appropriate: .profile, .bash_profile, .zprofile, whatever, in your $HOME directory.
If you want to make it permanent for every user, the various shells usually have system-wide config files in /etc/: /etc/profile exists on this linux box, but do read the man page to figure out which user-specific and which system-wide file to put it in.
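For example, for a single user with a Bourne-style login shell (which startup file is actually read depends on your shell and distribution):

```shell
# Append the export to the per-user login-shell startup file
echo 'export LD_PRELOAD=/usr/lib/libtsocks.so' >> ~/.profile
```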
| How to make exported shell variables permanent? |
1,375,634,929,000 |
I'm trying to identify an embedded Linux distribution. Here are the commands I have typed so far:
$ uname -a
Linux LIN-SRV-EMB01 3.10.105 #25556 SMP Sat Aug 28 02:14:22 CST 2021 x86_64 GNU/Linux synology_bromolow_rs3412rpxs
$ lsb_release
-sh: lsb_release: command not found
$ ls /usr/lib/os-release
ls: cannot access /usr/lib/os-release: No such file or directory
$ cat /proc/version
Linux version 3.10.105 (root@build1) (gcc version 4.9.3 20150311 (prerelease) (crosstool-NG 1.20.0) ) #25556 SMP Sat Aug 28 02:14:22 CST 2021
$ cat /proc/cmdline
root=/dev/md0 netif_seq=2130 ahci=0 SataPortMap=34443 DiskIdxMap=03060e0a00 SataLedSpecial=1 ihd_num=0 netif_num=4 syno_hw_version=RS3412rpxs macs=001132109b1e,001132109b1f,001132109b20,001132109b21 sn=LDKKN90098
$ dmesg | grep "Linux version"
[ 0.000000] Linux version 3.10.105 (root@build1) (gcc version 4.9.3 20150311 (prerelease) (crosstool-NG 1.20.0) ) #25556 SMP Sat Aug 28 02:14:22 CST 2021
[ 342.396803] Loading modules backported from Linux version v3.18.1-0-g39ca484
$ python -m platform
Linux-3.10.105-x86_64-with-glibc2.2.5
$ which python2 && python2 -c "import platform;print platform.linux_distribution()[0]"
/bin/python2
$ which python3 && python3 -c "import distro;print(distro.name())"
$ more /etc/issue /etc/*release /etc/*version /boot/config*
more: stat of /etc/issue failed: No such file or directory
more: stat of /etc/*release failed: No such file or directory
more: stat of /etc/*version failed: No such file or directory
more: stat of /boot/config* failed: No such file or directory
$ zcat /proc/config.gz /usr/src/linux/config.gz | more
gzip: /proc/config.gz: No such file or directory
gzip: /usr/src/linux/config.gz: No such file or directory
$ which dpkg apt apt-get rpm urpmi yum dnf zypper
/bin/dpkg
$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/md0 2.3G 1.1G 1.1G 50% /
$ sudo parted /dev/md0 print
Password:
Model: Linux Software RAID Array (md)
Disk /dev/md0: 2550MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 2550MB 2550MB ext4
$ sudo mdadm -Q /dev/md0
/dev/md0: 2.37GiB raid1 10 devices, 0 spares. Use mdadm --detail for more detail.
$ which lsblk lscsci lshw lspci dmidecode
/bin/lspci
/sbin/dmidecode
EDIT0: Tried two more commands :
$ strings $(ps -p 1 -o cmd= | cut -d" " -f1) | egrep -i "ubuntu|debian|centos|redhat" -o | sort -u
-sh: strings: command not found
[remoteserver] $ ssh embedded-linux 'cat $(ps -p 1 -o cmd= | cut -d" " -f1)' | strings | egrep -i "ubuntu|debian|centos|redhat" -o | sort -u
ubuntu
EDIT1: Tried three more commands :
$ which initctl && initctl --version
/sbin/initctl
initctl (upstart 1.13.2)
Copyright (C) 2006-2014 Canonical Ltd., 2011 Scott James Remnant
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ which systemctl && systemctl --version
$ cat /sys/class/dmi/id/product_name
To be filled by O.E.M.
$
EDIT2: Tried one more command (specific to Synology) :
$ grep productversion /etc/VERSION
productversion="6.2.4"
EDIT3: Just in case one wants to identify the hardware :
$ uname -u # Specific to Synology ?
synology_bromolow_rs3412rpxs
$ sudo dmidecode -t system | grep Product
Product Name: To be filled by O.E.M.
$
$ cat /sys/devices/virtual/dmi/id/product_name
To be filled by O.E.M.
$
EDIT4 : On another Synology, I get :
$ uname -u
synology_broadwell_rs3618xs
I guess it's based on Ubuntu+upstart.
EDIT5 : Distrib can be identified using this Observium script or this LibreNMS script.
What other commands can I use to look a little deeper?
|
The uname -a output identifies this as a Synology device. Such devices run Synology DiskStation Manager. This is Linux-based, but it is not managed like a typical Linux system running a “traditional” Linux distribution. It has its own package manager, synopkg, for which third-party packages are made available by SynoCommunity. The DiskStation CLI guide describes a few administration tools available in DSM.
If you’re interested in automating administrative tasks on such devices, you might find Synology’s Central Management System useful.
| How can I identify an embedded Linux distribution? |
1,375,634,929,000 |
I'm using htop and looking at a process (rg) which launched multiple threads to search for text in files, here's the tree view in htop:
PID Command
1019 |- rg 'search this'
1021 |- rg 'search this'
1022 |- rg 'search this'
1023 |- rg 'search this'
Why am I seeing PIDs for the process' threads? I thought threads didn't have a PID and they just shared their parent's PID.
|
In Linux, each thread has a pid, and that’s what htop shows. The “process” to which all the threads belong is the thread whose pid matches its thread group id.
In your case, grep Tgid /proc/1021/status would show the value 1019 (and this would be true for all the rg identifiers shown by htop).
See Are threads implemented as processes on Linux? for details.
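You can check this for any thread via /proc (here the process inspects itself, so Pid and Tgid coincide; for a worker thread of rg, Tgid would show the main thread's pid):

```shell
# Print the thread's own pid and its thread-group id
grep -E '^(Pid|Tgid):' /proc/self/status
```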
| Why do threads have their own PID? |
1,375,634,929,000 |
I am learning the top command: I know how to change colours and column modes, and how to switch from one mode to another. But after closing top and running it again, everything returns to the default configuration (the 4 default modes of columns and colours). Is there any way to save my changes before closing top?
|
Once you have your configuration set the way you want, type W (that is a capital W) and your configuration will be saved.
From the top manpage:
´W´ :Write_the_Configuration_File
This will save all of your options and toggles plus the current display mode and delay time. By issuing this command just before quitting top, you will be able to restart later in exactly that same state.
| linux command top: saving configuration |
1,375,634,929,000 |
I'm trying to find files whose names contain specific strings, but I don't know how to sort the output so that I get file names only.
I've tried
OLDDATA=`find . -regex ".*/[0-9.]+" | ls -t`
But ls -t is not working on find result but on whole directory
edit: The result of this statement should be directories sorted by modification date. The regex is supposed to match directories whose names contain only numbers and dots.
|
get file names only ... sorted by modification day
find + sort + cut approach:
find . -regex ".*/[0-9.]+" -printf "%T@ %f\n" | sort | cut -d' ' -f2
%T@ - File's last modification time, where @ is seconds since Jan. 1, 1970, 00:00 GMT, with fractional part
%f - File's name with any leading directories removed (only the last element)
To sort in descending order:
find . -regex ".*/[0-9.]+" -printf "%T@ %f\n" | sort -k1,1r | cut -d' ' -f2
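One caveat with cut -d' ' -f2: it keeps only the first space-separated field, so a name containing spaces would be truncated. The directories matched here (digits and dots only) cannot contain spaces, but a slightly more defensive sketch sorts numerically on the timestamp field and keeps everything after the first space:

```shell
#!/bin/sh
# Sketch: sort numerically on the epoch timestamp (-n), then strip
# it with -f2- so the rest of the line survives intact even if the
# name itself contained a space.
find . -regex ".*/[0-9.]+" -printf "%T@ %f\n" | sort -n | cut -d' ' -f2-
```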
| How to sort results of Find statement by date? |
1,375,634,929,000 |
How can I open 8080 port for listening?
In normal situation, I have tomcat7 that listens on port 8080.
sudo netstat -tanpu | grep ":8080"
tcp6 0 0 :::8080 :::* LISTEN 7519/java
After that, I stop tomcat7 with sudo service tomcat7 stop. So, now 8080 port is closed.
I did sudo iptables -A INPUT -p tcp --dport 8080 -j ACCEPT to open it, but the port is not listening.
sudo netstat -tanpu | grep ":8080"
tcp6 0 0 127.0.0.1:8080 127.0.0.1:37064 TIME_WAIT -
How can I open this port (8080) for listening for another application ( not tomcat)?
|
You are confusing two concepts. Iptables handles access control for your networking. When you accept input traffic with a destination of TCP port 8080, you are just letting the internet send traffic to that port. It has no effect on what, if anything, is listening on the port.
To listen on a port you need a program set up to do that. In your original case, tomcat was that program. You stopped it so now nothing is listening on that port. To open it back up as a listener you need to start tomcat, or any other program that you want, to listen on that port. What program you select to listen on that port is entirely dependent on what service you want to provide on that port.
The iptables commands don't affect whether or not your program is listening; they only affect whether traffic from the internet is allowed to reach that program.
If you just want to open up a network port that dumps whatever is sent to it, the program you want is netcat. Run the following command:
nc -l -p 8080
This will cause netcat to listen on port 8080 and dump whatever is sent to that port to standard output. You can redirect its output to a file if you want to save the data sent to that port. If you want anything more sophisticated than a raw data dump, you will need to determine what specific program(s) are capable of handling your data and start one of those instead.
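A quick way to convince yourself the listener is up is to connect to it from a second process. This sketch assumes a traditional netcat that accepts -l -p together (some OpenBSD-style builds want just nc -l 8080), and bounds the client with timeout(1) because netcat variants differ in when they exit after stdin closes:

```shell
#!/bin/sh
# Sketch: background a listener, send it a line from the "client"
# side, then inspect what arrived. The -l -p combination is the
# traditional-netcat syntax; some builds expect just 'nc -l 8080'.
nc -l -p 8080 > /tmp/port8080.dump &
listener=$!
sleep 1
printf 'hello\n' | timeout 2 nc 127.0.0.1 8080
kill "$listener" 2>/dev/null
cat /tmp/port8080.dump
```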
| open port 8080 for listening |