1,433,997,497,000
I copied /etc/DIR_COLORS to ~/.dir_colors because /etc/DIR_COLORS.xterm was being used and the colours in ls --color=auto were too dark. Now, how do I get this file to take effect immediately (i.e. without restarting the shell)? Is there something like what Ctrl-X Ctrl-R does for ~/.inputrc?
From man dir_colors: The program ls(1) uses the environment variable LS_COLORS to determine the colors in which the filenames are to be displayed. This environment variable is usually set by a command like eval `dircolors some_path/dir_colors` So you need to run eval "$(dircolors ~/.dir_colors)" now, and every time you launch a shell. The simplest way to do that is put the command in ~/.profile
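A minimal sketch of both steps follows. Regenerating ~/.dir_colors from the compiled-in defaults with dircolors -p stands in for the copy of /etc/DIR_COLORS the question made; dircolors is part of GNU coreutils.

```shell
# Stand-in for the question's copy of /etc/DIR_COLORS:
# -p prints the compiled-in default colour database.
dircolors -p > ~/.dir_colors

# Load it into LS_COLORS for the current shell -- this takes
# effect immediately, no restart needed:
eval "$(dircolors -b ~/.dir_colors)"

# Make future shells pick it up too:
echo 'eval "$(dircolors -b ~/.dir_colors)"' >> ~/.profile
```

After the eval, the next ls --color=auto in the same shell already uses the new colours.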
How do I reload ~/.dir_colors?
I have a 3G/GPS device that creates 5 tty nodes, although it's only one physical USB connection. Basically, a multi port usb-serial adapter. I'm trying to create some udev rules to make sure those nodes always have the same name, or at least a symlink to them. I can indeed find the device at /sys/devices/platform/pxa27x-ohci/usb1/1-2/1-2.2/. Inside are 1-2.2:1.0/ to 1-2.2:1.4/, for the 5 nodes it creates. I can also find it at /sys/bus/usb/devices/1-2.2 . The udev info for the device is as follows: udevadm info -a -p /sys/bus/usb/devices/1-2.2/1-2.2\:1.0 looking at device '/bus/usb/devices/1-2.2/1-2.2:1.0': KERNEL=="1-2.2:1.0" SUBSYSTEM=="usb" DRIVER=="option" ATTR{bInterfaceNumber}=="00" ATTR{bAlternateSetting}==" 0" ATTR{bNumEndpoints}=="03" ATTR{bInterfaceClass}=="ff" ATTR{bInterfaceSubClass}=="01" ATTR{bInterfaceProtocol}=="01" ATTR{modalias}=="usb:v12D1p1506d0000dc00dsc00dp00icFFisc01ip01" ATTR{supports_autosuspend}=="0" From this point on, all the nodes have the same info. And the only thing varying between nodes is the bInterfaceNumber property, and the device path. So, I thought of writing a rule by dev path. Now, for some reason, the following rule gets matched by all those nodes. ACTION=="add", DEV="/devices/platform/pxa27x-ohci/usb1/1-2/1-2.2/1-2.2:1.0" SYMLINK+="huawey0" So basically, huawey0 points to the last node enumerated. The device created nodes from ttyUSB2 to 6, and this link points to USB6. So, I tried by kernel node: ACTION=="add", KERNEL=="1-2.2:1.0" SYMLINK+="huawey0" Now, nothing appears on /dev. After this, I tried using the bInterfaceNumber to separate them. I used the following rule ACTION=="add", DEV="/devices/platform/pxa27x-ohci/usb1/1-2/1-2.2/1-2.2:1.[0-4]" ATTR{bInterfaceNumber}=="00" SYMLINK+="huawey0" And still, nothing happens. I even tried a trimmed down version of the rule.. ACTION=="add", ATTR{bInterfaceNumber}=="00" SYMLINK+="huawey0" And still nothing happens. Why is it not matching?
Your rules all have syntax errors in them: = is for assignment, == is for comparison, so you were not actually testing what DEV equaled, you were assigning it. You also need , between all the statements; there were none before SYMLINK+=. First rule: ACTION=="add", DEV=="/devices/platform/pxa27x-ohci/usb1/1-2/1-2.2/1-2.2:1.0", SYMLINK+="huawey0" Second rule: ACTION=="add", KERNEL=="1-2.2:1.0", SYMLINK+="huawey0" Third rule: ACTION=="add", DEV=="/devices/platform/pxa27x-ohci/usb1/1-2/1-2.2/1-2.2:1.[0-4]", ATTR{bInterfaceNumber}=="00", SYMLINK+="huawey0" Fourth rule: ACTION=="add", ATTR{bInterfaceNumber}=="00", SYMLINK+="huawey0" All these rules should do what you want now (I would use the first one personally).
Udev rule to match multiple node USB device
I'd like to know how I can check whether my cronjob will run at the time I set it for. Is there any way I can test this without having to wait for that time? Here are my crontab -l results: root@work:~$ crontab -l 3 */23 * * * /opt/lampp/bin/php /opt/lampp/htdocs/site/cron/my_script.php > /dev/null If I got the values right, will my cronjob run at exactly 11pm every night and log any output to /dev/null for cleanliness? Thanks.
The only way to be sure is to let it run and inspect the results. You can modify your command to log the output somewhere and inspect that, or let it email you the output. You could add another identical line which runs the command within 5 minutes or so, for debugging. E.g. if it's 3:13 pm right now, I might add this line to test the command 3 minutes from now: # Run at 15:16 16 15 * * * /opt/lampp/bin/php /opt/lampp/htdocs/site/cron/my_script.php > /dev/null By the way, your hour field */23 matches every hour in 0-23 that is divisible by 23, i.e. both 0 and 23, so your line would run at 00:03 and 23:03. To run at 11 PM every night, you probably want this instead; I have also redirected stderr to stdout (2>&1) to ensure that all output goes to /dev/null: # At minute 0 of hour 23 on every day, every month, every day of the week: 0 23 * * * /opt/lampp/bin/php /opt/lampp/htdocs/site/cron/my_script.php > /dev/null 2>&1
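To convince yourself what the hour field */23 matches, you can expand it by hand: cron steps through the allowed range 0-23 in increments of 23, which is every hour h with h % 23 == 0.

```shell
# "*/23" in the hour field matches any hour h in 0..23 with h % 23 == 0:
for h in $(seq 0 23); do
  if [ $((h % 23)) -eq 0 ]; then
    echo "matches hour $h"
  fi
done
# prints: matches hour 0, matches hour 23
```

So the original line fires twice a day (at 00:03 and 23:03), not only at 11 PM.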
How can I ensure my cronjob will run at specified time?
I never thought about this before, but I was wondering why they chose runlevel 2. Every other distro and OS I've used default to runlevel 3 (with the exception of AIX which also defaults to runlevel 2).
The Debian distribution (and hence Ubuntu, which is derived from it) does not define any differences between runlevels 2-5 as a matter of policy. It is up to the local system administrator to make use of runlevels as they see fit. Since there is no difference between runlevels 2-5, a default runlevel 2 was chosen.
Why do Debian and Ubuntu default to runlevel 2?
I need to randomize the time before a given command starts. I realize it's relatively trivial to do this within a script, or to write a "wrapper" script to do this, but I'm wondering if anyone knows a lightweight binary program that's already out there that will accomplish this without requiring an interpreter to be loaded. EDIT: More specifically, I do not want to involve bash in any way. Assume for the sake of argument that no shell is available and I'm invoking this from a non-interactive program.
If you want finer-grained control than maxschlepzig's nice bash incantations, it's a reasonably easy thing to just code up:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>

int main(int argc, char **argv) {
    useconds_t mdelay = 0, delay;
    if (argc < 3) {
        fprintf(stderr, "%s <delay (in milliseconds)> <command> <args>*:\n"
                        "\trun commands with a random delay\n", argv[0]);
        exit(1);
    }
    mdelay = atol(argv[1]);
    /* seed random number generator with the time */
    srand(((unsigned int)time(NULL)) % RAND_MAX);
    delay = mdelay * (rand() / (1.0 + RAND_MAX));
    usleep(delay * 1000);
    /* on macOS/BSD you could use execvP(argv[2], getenv("PATH"), argv+2);
       execvp searches PATH portably */
    execvp(argv[2], argv + 2);
    return 0;
}

Compile with something like gcc randomdelay.c -o randomdelay and invoke it like $ randomdelay 10000 echo Hi! If you are doing this in a programming context you might be better off just grabbing the part of the code that picks the random delay for you and calls an exec family function (which one you want depends on exactly how you have it specified internally). Issues here: This assumes your system's rand/srand are sane (not good PRNGs, mind you, just that they do what the man page says they do). I've been having some trouble with it on my Mac OS X box, where those functions are deprecated in favor of random/srandom. Historically, many implementations of rand have had poor numeric characteristics. That shouldn't be a problem in this application, but if it is, replace it with a better PRNG. The command-line argument handling in this toy is a bit primitive. The delay is chosen uniformly from some range. For some applications you might prefer an unbounded distribution like an exponential. Stack Overflow has many questions on how to get non-uniform distributions from a uniform PRNG.
Lightweight utility/program to run a command after a random delay
What criteria distinguishes various distributions of Linux, such as Debian, Ubuntu, Fedora, OpenSUSE? In other words, given a release of a Linux OS, what features mean it is classified into one distribution not the other? I heard that different distributions are grouped differently, for example, Debian-based, Gentoo-based, RPM-based, Slackware-based? I was wondering what criteria are used for the grouping? Within a distribution, what distinguishes different releases? For example, within Ubuntu, Ubuntu 10.04 and 10.10. As far as the concepts of release and distribution are concerned, is Windows 7 more of a counterpart of Ubuntu distribution or of Ubuntu 10.10? Is Windows NT family more of a counterpart of Ubuntu or of Debian-based Linux OSes? Thanks and regards!
From the Linux distributions Wikipedia entry: A Linux distribution is a member of the family of Unix-like operating systems built on top of the Linux kernel. Such distributions (often called distros for short) are operating systems including a large collection of software applications such as word processors, spreadsheets, media players, and database applications. What distinguishes them is the hardware they support, packaging, kernel patches, what set and versions of applications they ship, their documentation, install methods etc. Other "classifications" are whether they are more oriented towards end users or servers. Some of the distributions (Debian, Gentoo, Fedora and others) are used as a "starting point" for other distributions (Ubuntu is derived from Debian, for instance). That means that the creators of, for instance, Sabayon Linux used a Gentoo distribution to start their development effort, and keep track of Gentoo's evolution to some extent. You can look at the Distrowatch search page for examples of this kind. "RPM-based" distributions is a different classification. RPM is a package management system, not a distribution. Some distributions use it (RedHat and Suse come to mind) directly or via one of its frontends. Others use different systems (pacman for Arch, portage for Gentoo). The package management system is one of the important differences between distributions. Regarding versions, there are no strict criteria. The distribution developers/managers decide on what versions/patches/new software they want to include in a new version, polish it, and when it's ready, they ship it. There isn't a consistent versioning scheme across distributions. For your last question I'm not sure I understand, but you could say that Windows NT, 2000, XP, 2003/Vista, 2008 and Windows 7 are "versions" of the Windows "distribution". And they are all in the Windows NT family of Windows releases.
So if you want to draw a parallel with Linux distributions, yes, each Windows "release" is closer to a version of a Linux distribution. And the "Windows NT" lineage is equivalent to the RedHat or Suse lineage, for instance. (One of the similarities of these "lineages" is that there usually is a major revision of the kernel between Windows releases, and that's also the case for a lot of Linux distros.)
Classification of Linux distributions
What is the convention for numbering the Linux kernels? AFAIK, the numbers never seem to decrease. However, I think I've seen three kinds of schemes: 2.6.32-29 2.6.32-29.58 2.6.11.10 Can anybody explain the interpretations of these numbers and formats?
2.6.32-29: 2.6.32 is the base kernel, -29 the final release by Ubuntu.
2.6.32-29.58: 2.6.32 is the base kernel, -29.58 an ongoing release (of -29) by Ubuntu.
2.6.11.10: 2.6.11 is the base kernel, .10 the tenth patch release of it (2.6.11 was chosen by volunteers, read Greg KH, to be a "long term maintenance" release).
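The split can be done with plain shell parameter expansion; the field names below follow the Ubuntu interpretation given above.

```shell
v="2.6.32-29.58"

base=${v%%-*}      # upstream base kernel: 2.6.32
pkg=${v#*-}        # distribution package part: 29.58
abi=${pkg%%.*}     # Ubuntu release number: 29
upload=${pkg#*.}   # ongoing upload within that release: 58

echo "base=$base release=$abi upload=$upload"
# prints: base=2.6.32 release=29 upload=58
```

The same expansions work on the live value of uname -r, as long as it follows the base-release pattern.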
Numbering convention for the Linux kernel?
What exactly are shmpages in the grand scheme of kernel and memory terminology? If I'm hitting a shmpages limit, what does that mean? I'm also curious whether this applies to more than Linux.
User mode processes can use Interprocess Communication (IPC) to communicate with each other; the fastest method of achieving this is by using shared memory pages (shmpages). This happens for example if banshee plays music and vlc plays a video: both processes have to access pulseaudio to output some sound. Try to find out more about shared memory configuration and usage with some of the following commands: Display the shared memory configuration: sysctl kernel.shm{max,all,mni} By default (Linux 2.6) this should output: kernel.shmmax = 33554432 kernel.shmall = 2097152 kernel.shmmni = 4096 shmmni is the maximum number of allowed shared memory segments, shmmax is the allowed size of a shared memory segment (32 MB) and shmall is the maximum total size of all segments (displayed as pages, which translates to 8 GB). The currently used shared memory: grep Shmem /proc/meminfo If enabled by the distribution: ls -l /dev/shm ipcs is a great tool to find out more about IPC usage: ipcs -m will output the shared memory usage; you can see the allocated segments with the corresponding sizes. ipcs -m -i <shmid> shows more information about a specified segment, including the PID of the process creating it (cpid) and the last one (lpid) using it. ipcrm can remove shared memory segments (but be aware that those only get removed if no other processes are attached to them, see the nattch column in ipcs -m): ipcrm -m <shmid> Running out of shared memory could be caused by a program heavily using a lot of shared memory, a program which does not detach its allocated segments properly, modified sysctl values, ... This is not Linux-specific and also applies to (most) UNIX systems (shared memory first appeared in CB UNIX).
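The arithmetic behind the "32 MB" and "8 GB" figures above, assuming the usual 4 KiB page size:

```shell
# shmmax is counted in bytes:
echo "$((33554432 / 1024 / 1024)) MiB max per segment"
# prints: 32 MiB max per segment

# shmall is counted in pages, so multiply by the page size:
echo "$((2097152 * 4096)) bytes total shared memory"
# prints: 8589934592 bytes total shared memory (= 8 GiB)
```

On a machine with a different page size (getconf PAGE_SIZE), the shmall total scales accordingly.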
What are shmpages in layman's terms?
Can the initramfs image be compressed by a method other than gzip, such as lzma?
Yes. I use an in-kernel initrd and it offers at least the following methods: None (as it is compressed with the kernel) GZip BZip2 LZMA (possibly zen-only) You can use it on an external file and with LZMA (at least on Ubuntu). Wikipedia states that the Linux kernel supports gzip, bzip2 and lzma (depending, of course, on which algorithms are compiled in).
Can the initramfs image use a compression format other than gzip?
After a comment from OP, I discovered that /dev/stdout gives blocks of 10 KiB even after disabling buffering, but - does not. Why is this? I could not find anything regarding this in man tar nor in man stdout. Note that /dev/stdout goes to 00002800, not 000000a1. The output is correct, except that it is padded with null bytes.

> mkdir -p /tmp/747613
> cd /tmp/747613
> echo 747613 > file.txt
> tar czf /tmp/archive_tgz .
> hd /tmp/archive_tgz
00000000  1f 8b 08 00 00 00 00 00  00 03 ed d1 41 0a c2 30  |............A..0|
00000010  10 85 e1 ac 3d 45 4e 90  66 9a 49 72 9e 2e 22 08  |....=EN.f.Ir..".|
00000020  ea a2 46 f0 f8 a6 8b a2  9b 76 21 04 11 ff 6f f3  |..F......v!...o.|
00000030  20 33 90 81 e7 06 d3 9d  6f 72 8e 4b 4a 8e fe 3d  | 3......or.KJ..=|
00000040  57 46 54 43 6c 8f 69 6c  7b 22 da c6 36 f6 3f cd  |WFTCl.il{"..6.?.|
00000050  98 fb ad 4e b3 b5 2d cb  7c 9d 2e 65 7b 6f 7f fe  |...N..-.|..e{o..|
00000060  a3 dc 70 3c 9d 8b ab 8f  da ef 8f a5 e0 94 74 a7  |..p<..........t.|
00000070  ff bc f6 2f 41 a5 f5 1f  64 14 63 7d bf 93 5e fe  |.../A...d.c}..^.|
00000080  bc ff ac 39 49 38 7c fb  0c 00 00 00 00 00 00 00  |...9I8|.........|
00000090  00 00 00 00 00 00 1f 78  02 88 2a 27 ac 00 28 00  |.......x..*'..(.|
*
000000a1
> tar czf /dev/stdout . | hd
00000000  1f 8b 08 00 00 00 00 00  00 03 ed d1 41 0a c2 30  |............A..0|
00000010  10 85 e1 ac 3d 45 4e 90  66 9a 49 72 9e 2e 22 08  |....=EN.f.Ir..".|
00000020  ea a2 46 f0 f8 a6 8b a2  9b 76 21 04 11 ff 6f f3  |..F......v!...o.|
00000030  20 33 90 81 e7 06 d3 9d  6f 72 8e 4b 4a 8e fe 3d  | 3......or.KJ..=|
00000040  57 46 54 43 6c 8f 69 6c  7b 22 da c6 36 f6 3f cd  |WFTCl.il{"..6.?.|
00000050  98 fb ad 4e b3 b5 2d cb  7c 9d 2e 65 7b 6f 7f fe  |...N..-.|..e{o..|
00000060  a3 dc 70 3c 9d 8b ab 8f  da ef 8f a5 e0 94 74 a7  |..p<..........t.|
00000070  ff bc f6 2f 41 a5 f5 1f  64 14 63 7d bf 93 5e fe  |.../A...d.c}..^.|
00000080  bc ff ac 39 49 38 7c fb  0c 00 00 00 00 00 00 00  |...9I8|.........|
00000090  00 00 00 00 00 00 1f 78  02 88 2a 27 ac 00 28 00  |.......x..*'..(.|
000000a0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00002800
> # Even with buffering disabled
> stdbuf -i0 -o0 -e0 tar czf /dev/stdout . | stdbuf -i0 -o0 -e0 hd
00000000  1f 8b 08 00 00 00 00 00  00 03 ed d1 41 0a c2 30  |............A..0|
00000010  10 85 e1 ac 3d 45 4e 90  66 9a 49 72 9e 2e 22 08  |....=EN.f.Ir..".|
00000020  ea a2 46 f0 f8 a6 8b a2  9b 76 21 04 11 ff 6f f3  |..F......v!...o.|
00000030  20 33 90 81 e7 06 d3 9d  6f 72 8e 4b 4a 8e fe 3d  | 3......or.KJ..=|
00000040  57 46 54 43 6c 8f 69 6c  7b 22 da c6 36 f6 3f cd  |WFTCl.il{"..6.?.|
00000050  98 fb ad 4e b3 b5 2d cb  7c 9d 2e 65 7b 6f 7f fe  |...N..-.|..e{o..|
00000060  a3 dc 70 3c 9d 8b ab 8f  da ef 8f a5 e0 94 74 a7  |..p<..........t.|
00000070  ff bc f6 2f 41 a5 f5 1f  64 14 63 7d bf 93 5e fe  |.../A...d.c}..^.|
00000080  bc ff ac 39 49 38 7c fb  0c 00 00 00 00 00 00 00  |...9I8|.........|
00000090  00 00 00 00 00 00 1f 78  02 88 2a 27 ac 00 28 00  |.......x..*'..(.|
000000a0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00002800
> # Works fine (- means stdout)
> tar czf - . | hd
00000000  1f 8b 08 00 00 00 00 00  00 03 ed d1 41 0a c2 30  |............A..0|
00000010  10 85 e1 ac 3d 45 4e 90  66 9a 49 72 9e 2e 22 08  |....=EN.f.Ir..".|
00000020  ea a2 46 f0 f8 a6 8b a2  9b 76 21 04 11 ff 6f f3  |..F......v!...o.|
00000030  20 33 90 81 e7 06 d3 9d  6f 72 8e 4b 4a 8e fe 3d  | 3......or.KJ..=|
00000040  57 46 54 43 6c 8f 69 6c  7b 22 da c6 36 f6 3f cd  |WFTCl.il{"..6.?.|
00000050  98 fb ad 4e b3 b5 2d cb  7c 9d 2e 65 7b 6f 7f fe  |...N..-.|..e{o..|
00000060  a3 dc 70 3c 9d 8b ab 8f  da ef 8f a5 e0 94 74 a7  |..p<..........t.|
00000070  ff bc f6 2f 41 a5 f5 1f  64 14 63 7d bf 93 5e fe  |.../A...d.c}..^.|
00000080  bc ff ac 39 49 38 7c fb  0c 00 00 00 00 00 00 00  |...9I8|.........|
00000090  00 00 00 00 00 00 1f 78  02 88 2a 27 ac 00 28 00  |.......x..*'..(.|
*
000000a1
The difference in behaviour comes from tar: when writing, it applies a “blocking factor”, which by default uses 10240-byte records (that’s 2800 in hexadecimal). This happens even when compressing, which is why some tarballs result in error messages from gzip when they are extracted. The padding at the end of the archive is disabled when it is written to a regular file, or to standard output with - (although the manual says it isn’t). When writing to /dev/stdout, tar believes it’s writing to a device, and applies its blocking factor. You can verify this by changing the blocking factor:

$ tar czfb /dev/stdout 1 . | hd
00000000  1f 8b 08 00 00 00 00 00  00 03 ed d1 31 0e 83 30  |............1..0|
00000010  0c 05 50 cf 9c 22 27 08  76 b0 e3 f3 74 48 25 24  |..P.."'.v...tH%$|
00000020  26 08 88 e3 03 43 55 c4  00 53 8a 2a fc 16 0f b6  |&....CU..S.*....|
00000030  e4 2f 7d 5f 43 71 b8 52  91 6d 92 0a ee e7 07 10  |./}_Cq.R.m......|
00000040  73 13 31 10 62 04 24 0c  2c e0 a4 7c 34 80 71 c8  |s.1.b.$.,..|4.q.|
00000050  af de 39 18 72 9a d2 c9  dd d5 fe 4f f9 fa dd 76  |..9.r......O...v|
00000060  c9 e7 39 97 fb b1 15 1c  99 4f fa d7 43 ff a4 21  |..9......O..C..!|
00000070  80 c3 72 91 be 1e de bf  b2 46 6a aa bb 63 18 63  |..r......Fj..c.c|
00000080  8c f9 b1 05 02 0c 89 df  00 0a 00 00 00 00 00 00  |................|
00000090  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000200
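A quick way to reproduce the difference is to compare output sizes directly (exact numbers depend on your tar version; with GNU tar the /dev/stdout variant comes out padded to a multiple of the 10240-byte record):

```shell
tmp=$(mktemp -d)
cd "$tmp"
echo 747613 > file.txt

# Writing to "-": the gzip stream ends where it ends, no padding.
dash_size=$(tar czf - . | wc -c)

# Writing to /dev/stdout: tar may pad out to the record size.
dev_size=$(tar czf /dev/stdout . | wc -c)

echo "via -: $dash_size bytes, via /dev/stdout: $dev_size bytes"
```

If your tar shows the behaviour described above, the second size will be 10240 while the first stays at the raw size of the compressed stream.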
Why does tar's handling of stdout and - differ?
The system clock is maintained by the kernel, whereas the hardware clock is maintained by the Real Time Clock (RTC). Do both clocks run at the same frequency? Are they independent of each other? What happens when the real-time clock fails? Does it affect the system clock? Can anyone explain the difference between the two clocks?
"Do both clocks run at the same frequency?" Usually there are two clocks inside a computer/device/system. One is powered from a battery (usually a CR2032; it could be the main battery or even a supercap in an embedded system) and runs from a dedicated chip. The other one is driven by the CPU clock source (with its own quartz crystal). One usually runs from a 32.768 kHz crystal, the other one from a CPU crystal in the MHz or GHz range. There is a lot of variation as there are a lot of CPU models. "Are both independent of each other?" Yes, most of the time. But one could adjust the other (on embedded Linux you typically have the hwclock command with options -r or -w). The CPU clock is set from the clock chip on boot (the CPU has no idea what time it is when booting). For a system in a network, the CPU clock might find a better time value from the network via NTP (Network Time Protocol) and then adjust or correct the value inside the clock chip. "What happens when the real-time clock fails? Does it affect the system clock?" Yes, sure: if the battery runs out, for example, the computer boots up with a completely out-of-whack idea of the real time. But nowadays most systems have some network connectivity and update their concept of real time pretty soon after boot via the NTP protocol. "Can anyone let me know the difference between both clocks?" As said above, one clock source is a chip, the other is the CPU. Note that I have avoided calling the chip clock the RTC clock, as there are internal values on the CPU also called RTC. But yes, that is the common name for it. Related: the kernel's Real-Time Clock documentation and Red Hat's reference.
System Clock vs. Hardware Clock (RTC) in embedded systems
I am new to Linux. I know that there are ways to convert between file formats in Linux through the terminal. But is there any way I can achieve the following? I have 3 folders; inside those folders, there are 15 more folders. Each of those folders has 12 files in .xls format, and some other .R files and PDF files. Is there a way to create an exact duplicate of these 3 folders, inside which all the .xls files are converted to .xlsx format? If that seems too ambitious, what is the way to just convert all the .xls files in one folder into .xlsx format? Then I can replicate that for the rest myself. Note: I don't have LibreOffice, I use WPS. If this requires LibreOffice I can install it if necessary.
It will require LibreOffice. From this Stack Overflow question, Unix command to convert xls file into xlsx file?: libreoffice --convert-to xlsx my.xls --headless Note that the --convert-to option implies the --headless option (it did not in LibreOffice version 4.2.3 when the SO answer above was written in 2014, but has done so since at least LibreOffice 6 in 2018, and probably much longer than that). It doesn't hurt to add it to the command line, but it isn't required. Then you can integrate it into a find command: find . -type f -name "*.xls" -exec libreoffice --convert-to xlsx {} \; This command will look for files, in the current directory, with a name ending in .xls and execute the command, replacing {} by the filename. man libreoffice states: --convert-to output_file_extension[:output_filter_name] [--outdir output_dir] file... Batch converts files. If --outdir is not specified then the current working directory is used as the output directory for the converted files. It implies --headless Examples: --convert-to pdf *.doc Converts all .doc files to PDFs. --convert-to pdf:writer_pdf_Export --outdir /home/user *.doc Converts all .doc files to PDFs using the settings in the Writer PDF export dialog and saving them in /home/user. From @cas in the comments: libreoffice can take multiple filename arguments (the examples from the man page show this), so it would be faster to run find . -type f -name '*.xls' -exec libreoffice --convert-to xlsx {} + instead, only needing to run libreoffice once instead of once per file.
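To also reproduce the original directory layout in a converted copy (the first part of the question), you can combine find with --outdir. The folder names below are made up for the demo, and the libreoffice call is only echoed as a dry run; drop the echo to convert for real.

```shell
work=$(mktemp -d)
cd "$work"

# A stand-in for the question's folder tree:
mkdir -p xls_tree/a xls_tree/b
touch xls_tree/a/one.xls xls_tree/b/two.xls

# Mirror each file's parent directory under xlsx_tree and convert into it:
find xls_tree -type f -name '*.xls' | while IFS= read -r f; do
  out="xlsx_tree/$(dirname "${f#xls_tree/}")"
  mkdir -p "$out"
  echo libreoffice --headless --convert-to xlsx --outdir "$out" "$f"
done
```

This leaves the source tree untouched and builds a parallel xlsx_tree with the same sub-folder structure.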
Converting multiple .xls format files to .xlsx format
A common scenario for setting up a container/sandbox is wanting to create a minimal set of device nodes in a new tmpfs (rather than exposing the host /dev), and the only (unprivileged) way I know to do this is by bind-mounting the desired ones into it. The commands I'm using (inside unshare -mc --keep-caps) are: mkdir /tmp/x mount -t tmpfs none /tmp/x touch /tmp/x/null mount -o bind /dev/null /tmp/x/null with the intent of moving the mount on top of /dev. However, even before doing the move, running echo > /tmp/x/null produces a "Permission denied" error (EACCES). Yet if I additionally perform: mkdir /tmp/x/y touch /tmp/x/y/null mount -o bind /dev/null /tmp/x/y/null echo > /tmp/x/y/null the write succeeds as it should. I've played around with this quite a bit, but can't find a root cause or reason this should be happening. It's possible to work around it by putting the bind-mounted nodes in a subdirectory and symlinks to them in the top-level of the filesystem that will become the new /dev, but it seems like this shouldn't be necessary. What's going on? Is there a reasonable explanation for this? Or is it some access control logic gone wrong?
Well, this seems to be a very interesting effect, which is a consequence of three mechanisms combined together. The first (trivial) point is that when you redirect something to the file, the shell opens the target file with the O_CREAT option to be sure that the file will be created if it does not yet exist. The second thing to consider is the fact that /tmp/x is a tmpfs mountpoint, while /tmp/x/y is an ordinary directory. Given that you mount tmpfs with no options, the mountpoint's permissions automagically change so that it becomes world-writable and has a sticky bit (1777, which is a usual set of permissions for /tmp, so this feels like a sane default), while the permissions for /tmp/x/y are probably 0755 (depends on your umask). Finally, the third part of the puzzle is the way you set up the user namespace: you instruct unshare(1) to map UID/GID of your host user to the same UID/GID in the new namespace. This is the only mapping in new namespace, so trying to translate any other UID between the parent/child namespaces will result in so-called overflow UID, which by default is 65534 — a nobody user (see user_namespaces(7), section Unmapped user and group IDs). This makes /dev/null (and its bind-mounts) be owned by nobody inside the child user namespace (as there is no mapping for host's root user in the child user namespace): $ ls -l /dev/null crw-rw-rw- 1 nobody nobody 1, 3 Nov 25 21:54 /dev/null Combining all the facts together we come to the following: echo > /tmp/x/null tries to open an existing file with O_CREAT option, while this file resides inside the world-writable sticky directory and is owned by nobody, who is not the owner of the directory containing it. 
Now, read openat(2) carefully, word by word: EACCES Where O_CREAT is specified, the protected_fifos or protected_regular sysctl is enabled, the file already exists and is a FIFO or regular file, the owner of the file is neither the current user nor the owner of the containing directory, and the containing directory is both world- or group-writable and sticky. For details, see the descriptions of /proc/sys/fs/protected_fifos and /proc/sys/fs/protected_regular in proc(5). Isn't this brilliant? This seems almost like our case... Except the fact that the man page tells only about ordinary files and FIFOs and tells nothing about device nodes. Well, let's take a look at the code which actually implements this. We can see that, essentially, it first checks for exceptional cases which must succeed (the first if), and then it just denies the access for any other case if the sticky directory is world-writable (the second if, first condition):

static int may_create_in_sticky(umode_t dir_mode, kuid_t dir_uid,
				struct inode * const inode)
{
	if ((!sysctl_protected_fifos && S_ISFIFO(inode->i_mode)) ||
	    (!sysctl_protected_regular && S_ISREG(inode->i_mode)) ||
	    likely(!(dir_mode & S_ISVTX)) ||
	    uid_eq(inode->i_uid, dir_uid) ||
	    uid_eq(current_fsuid(), inode->i_uid))
		return 0;

	if (likely(dir_mode & 0002) ||
	    (dir_mode & 0020 &&
	     ((sysctl_protected_fifos >= 2 && S_ISFIFO(inode->i_mode)) ||
	      (sysctl_protected_regular >= 2 && S_ISREG(inode->i_mode))))) {
		const char *operation = S_ISFIFO(inode->i_mode) ?
			"sticky_create_fifo" : "sticky_create_regular";
		audit_log_path_denied(AUDIT_ANOM_CREAT, operation);
		return -EACCES;
	}
	return 0;
}

So, if the target file is a char device (not a regular file or a FIFO), the kernel still denies opening it with O_CREAT when this file is in the world-writable sticky directory.
To prove that I found the reason correctly, we may check that the problem disappears in any of the following cases: mount tmpfs with -o mode=777 — this will not make the mountpoint have a sticky bit; open /tmp/x/null as O_WRONLY, but without O_CREAT option (to test this, write a program calling open("/tmp/x/null", O_WRONLY | O_CREAT) and open("/tmp/x/null", O_WRONLY), then compile and run it under strace -e trace=openat to see the returned values for each call). I'm not sure whether this behavior should be considered a kernel bug or not, but the documentation for openat(2) clearly does not cover all the cases when this syscall actually fails with EACCES.
Why do bind mounts of device nodes break with EACCES in root of a tmpfs?
I tried this: xprop -id $(gedit & echo $!) -f MY_VAR1 8s -set MY_VAR1 MyCustomVar Then I ran xprop and clicked on the gedit window - MY_VAR1 was not present there. So I thought maybe I should put a sleep in there, and tried: xprop -id $(gedit & sleep 5 & echo $!) -f MY_VAR1 8s -set MY_VAR1 MyCustomVar I waited 5 seconds, ran xprop and clicked on the new window... still nothing. Thanks
As Jeff noted, PID and Window ID are different things, and there isn't always an easy way to map one to another — some processes have no window, some processes share a window, and others still have many windows (at least they do at the X level, even if you only see a single window). When I start gedit I have one visible window, but 3 discrete X Windows (xwininfo -root -tree -all) with name or class "gedit", one of which is a window manager window (I use fvwm2, yours may differ), and one of which is the "client leader", along with up to 20 other anonymous "windows" which are really parts of the user interface (depending on gedit version, number of tabs, and GTK+). To partly solve that coordination problem you can use properties _NET_WM_PID and WM_CLIENT_LEADER, these should hold the PID of the owning process, and leader ID where there are multiple windows (though the latter is really for session management, it might be helpful here). Now, there may some problems with _NET_WM_PID, it requires that processes and the window manager behave correctly, but in general, on a modern desktop, this should be reliable (with the exception of a few old programs like rxvt). Think of properties like environment variables, it should be set to the PID, but nothing enforces this, though some WMs are more proactive than others about this I believe. Ordinarily, for this type of problem, you would write a short script that would enumerate the windows for gedit, query the _NET_WM_PID property in a loop for the PID of the process you just started, then set the property. However, everything will conspire against you: there is no X property with the Window ID in it xprop oddly lacks the ability to output the ID of a window that you query the window name changes depending on what gedit opens, xprop doesn't support wildcard/patterns, and won't match by window class both xwininfo and xprop only output the first window that matches (e.g. 
by -name) rather than all of them, and neither make it easy to parse the output the number of X "windows" can exceed the number of visible windows by a factor of 50 gedit runs by default as a single process, so if you start a second gedit that process exits as soon as it has made contact with the main process. However, on recent versions, you can use gedit -s to run independent processes/windows. This is the reason that utilities like xdotool, xwit and wmctl exist ;-) Unfortunately, not even any of those do exactly this without help. If you are running standalone instances, this will do the trick, as a shell script so it's understandable (and supports filename arguments): #!/bin/bash gedit -s "$@" & _pid=$! _wid=$(xdotool search --sync --onlyvisible --pid $_pid) xprop -f MY_VAR1 8s -set MY_VAR1 MyCustomVar -id $_wid # xprop -id $_wid MY_VAR1 ## for testing This uses xdotool to do the heavy lifting, in "sync" mode to give the window time to start up and set properties, and with gedit -s so the process is standalone and long-lived and doesn't just hand over to an existing instance and then disappear (leaving xdotool hanging around). Or an equivalent one-liner: gedit -s & xdotool search --sync --onlyvisible --pid $! | xargs -r xprop -f MY_VAR1 8s -set MY_VAR1 MyCustomVar -id Noting: xdotool can search by PID, it can also set a few properties by name, but cannot set arbitrary property names as required xprop has poor search and output options xdotool outputs decimal windows IDs, xprop accepts either decimal or hex there's not much error handling You could do this without xdotool, but you'd likely end up with a convoluted mess that needs to list every window on the system and process each one in turn. I tried, it's just too ugly to paste here :-) For an alternative approach: a standard GTK+ client allows you to set properties via command-line options, even if the application doesn't document them (gedit --help-gtk). 
Sadly not arbitrary properties, but you can set the "Class" to any arbitrary string. Since the class is a multi-valued property each window will still have the "gedit" class (so settings/resources will still apply to it, if selected that way, but it can prevent "Gedit" settings being applied, though that can be an advantage too). $ gedit --class MyCustomVar $ xprop -notype -name gedit WM_CLASS _NET_WM_PID WM_CLASS = "gedit", "MyCustomVar" _NET_WM_PID = 1517 WM_NAME = "gedit" There are a couple of other options for window/process mapping (ferreting in /proc/PID/environ for WINDOWID, though this only works for processes started by terminal emulator that observes that convention; or possibly write a gedit plugin ) but neither is appealing. See also https://stackoverflow.com/questions/151407/how-to-get-an-x11-window-from-a-process-id - one of the more interesting answers there has a link for an LD_PRELOAD hack to wrap XCreateWindow() and a couple of other API functions to set arbitrary properties.
How to set custom property with xprop and open that program in one line?
1,433,997,497,000
Is there any way to read the total running time of a Linux system from the BIOS or CPU? I've looked at the BIOS information from dmidecode, but it only gives the release date, which isn't what I'm after. Then I checked /proc, but it only holds uptime values since the last reboot. Maybe writing down these uptime values on every boot could be an option. Then I checked dumpe2fs. It gives the total running time of a particular hard drive, which is useless for me because the HDD could be changed while my application is running. Apart from the above, how can I read or calculate the total runtime of my system? Where can I read it from?
This isn’t something the firmware tracks, as far as I’m aware. Even BMCs don’t measure total uptime. This won’t help with past uptime from previous boots, but you can start recording uptimes now, by installing a tool such as uptimed and setting it up so that it never discards values (set LOG_MAXIMUM_ENTRIES to 0 in uptimed.conf). That will measure operating system uptime, not total CPU “on” time, but it should be close enough... Once you’ve got uptimed running, you can run uprecords to view the totals, for example up 1492 days, 02:57:18 | since Sat Sep 7 00:50:06 2013 down 61 days, 08:11:24 | since Sat Sep 7 00:50:06 2013 %up 96.051 | since Sat Sep 7 00:50:06 2013 As pointed out by quixotic, you’ll be able to get some idea of historical uptime by looking at your logs. If you’re running systemd, you can view the boots which have been logged using journalctl --list-boots. Log rotation means that this is likely to miss quite a lot of uptime though. As pointed out by JdeBP, last reboot might give you a longer list of boots with the associated uptime.
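If your wtmp does reach back far enough, you can also total up the duration fields that last reboot prints yourself. The sketch below is my own addition (not part of uptimed): an awk filter that sums the (days+HH:MM) durations, demonstrated here on canned sample lines so the expected result is fixed; in practice you would pipe real last reboot output into it. Lines without a duration field, such as the "still running" current boot, are deliberately skipped.

```sh
# Sum the "(days+HH:MM)" / "(HH:MM)" duration fields from `last reboot` output.
sum_uptime() {
  awk '
    match($0, /\(([0-9]+\+)?[0-9]+:[0-9]+\)$/) {
      d = substr($0, RSTART + 1, RLENGTH - 2)      # e.g. "10+04:30" or "08:28"
      days = 0
      if (split(d, a, "+") == 2) { days = a[1]; d = a[2] }
      split(d, t, ":")
      total += days * 1440 + t[1] * 60 + t[2]      # accumulate minutes
    }
    END { printf "total uptime: %d days, %02d:%02d\n", \
          total / 1440, (total % 1440) / 60, total % 60 }
  '
}

# Canned sample input standing in for real `last reboot` output:
printf '%s\n' \
  'reboot   system boot  3.2.0-4-amd64  Sat Sep  7 00:50   still running' \
  'reboot   system boot  3.2.0-4-amd64  Mon Aug  5 09:12 - 17:40  (08:28)' \
  'reboot   system boot  3.2.0-4-amd64  Wed Jul 10 08:00 - 12:30 (10+04:30)' |
  sum_uptime
```

For the sample data this prints total uptime: 10 days, 12:58.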
Total runtime of machine
1,433,997,497,000
I have a script that asks for the user's password and I want to check if the given password is wrong or not. #A little fragment of my script read -sp 'Your password: ' password; if [[ $password -ne WHAT GOES HERE? ]]; then MORE CODE HERE else MORE CODE HERE fi
There's no fully portable way to check the user's password. This requires a privileged executable, so this isn't something you can whip up from scratch. PAM, which is used on most non-embedded Linux systems as well as many embedded systems and most other Unix variants, comes with a setuid binary but it doesn't have a direct shell interface, you need to go via the PAM stack. You can use a PAM binding in Perl, Python or Ruby. You can install one of several checkpassword implementations. If the user is allowed to run sudo for anything, then sudo -kv will prompt for authentication (unless this has been disabled in the sudo configuration). But that doesn't work if there's no sudoers rule concerning the user. You can run su. This works on most implementations. This is probably the best bet in your case. if su -c true "$USER"; then echo "Correct password" fi
Check user's password with a shell script
1,433,997,497,000
I'm currently trying to scp a file from one server to another, using an ssh key on my local computer. this is the command I'm currently using: sudo scp -r -o "ForwardAgent yes" <new_folder> <second-server-path> and I've followed this github doc to verify that my ssh agent is being forwarded to the second server's terminal. -o "ForwardAgent yes" comes from this reference, but does not appear on my man scp reference. However, after all this, the command still asks for a password (which we are trying to avoid). Any ideas on how to use the ssh forwarding?
scp does not support forwarding your agent (it is hardcoded to be disabled in the code), so what you are trying is not possible that way. The real problem is in sudo. The connection to the ssh-agent is stored in the environment variable SSH_AUTH_SOCK (echo $SSH_AUTH_SOCK) and this variable is not preserved across sudo, so there are two possibilities: Do not use sudo with scp. Run plain scp to some sane location and then sudo cp the file to the desired location. Force sudo to preserve environment variables using the -E switch: sudo -E scp -r <new_folder> <second-server-path> When you want to copy the file between two servers, use the -3 switch, which performs both authentications from your host, where you have access to your agent.
Using scp with a forwarded ssh agent
1,433,997,497,000
I want to make a very minimal linux os which only have a terminal interface and basic commands/applications (busybox is my choice for commands/apps). I don't want the installation option on my os. I just want it to be booted and run completely from RAM. I'm planning to use ISO-Linux as bootloader. No networking, no virtualization support, no unnecessary drivers, etc. I want it to be very very basic os. I've downloaded the latest stable kernel (v4.5) source code from kernel.org and the build environment ready. My one more confusion is that does a kernel by default has any user interface (shell, terminal, ...) where i can type commands and see output?
Technically you can achieve this, though the kernel does not have any built-in user interface. You need to follow steps such as: 1. Create an initramfs with a static busybox and nothing else. This initramfs will have a few necessary directories: like proc, sys, tmp, bin, usr, etc 2. Write a "/init" script, whose main job will be: a. mount the procfs, tmpfs and sysfs. b. Call busybox's udev replacement, i.e. mdev c. Install the busybox commands onto the virtual system by executing busybox --install -s d. Call /bin/sh 3. Source the initramfs directory while compiling the kernel. You can do so with the flag: CONFIG_INITRAMFS_SOURCE 4. Compile your kernel. 5. Boot off this kernel and you will get a shell prompt with minimal things. Though I wrote the above notes in a very schematic way, you can fine-tune them the way you desire. UPDATE: Follow this link for some guidelines.
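As an illustration of step 2, here is a sketch of what that /init script might look like (the exact mount points and the mdev call are assumptions based on a typical static-busybox initramfs layout). Since it only makes sense running as PID 1 inside the initramfs, it is written to a temporary file and merely syntax-checked here rather than executed:

```sh
cat > /tmp/init-sketch <<'EOF'
#!/bin/sh
# Mount the pseudo filesystems the tools expect.
mount -t proc none /proc
mount -t sysfs none /sys
mount -t tmpfs none /tmp

# Populate /dev using busybox's udev replacement.
mdev -s

# Create symlinks for every applet busybox was compiled with.
busybox --install -s

# Hand over to an interactive shell.
exec /bin/sh
EOF
sh -n /tmp/init-sketch && echo "init sketch parses OK"
```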
How to make a minimal bootable linux (only with terminal) from kernel source code? [duplicate]
1,433,997,497,000
The following command produced the result of the following image using convert, where an overlay box containing the letter "A" was layered over the PDF: convert online_gauss.pdf -fill white -undercolor '#00000080' -pointsize 40 -gravity South -annotate +0+5 ' A ' online_gauss_annot.pdf However, convert rasterizes the source. Since I would like to keep the original PDF format (vectorial) for publishing, is there a simple way of achieving this type of annotation via command line over a single PDF image? I would be happy just with the letter, even in the bottom left corner. I've seen some examples using Ghostscript, pdftk (stamp) but they involve several intermediate steps that are difficult to get right for different sized PDF images.
Well, I've come up with a solution using TikZ within a crafted LaTex document. The result is not exactly the same, but I think it is even nicer: This required having a tex document with placeholders that will be replaced by the arguments to a sh script. % file: add_legend.tex \documentclass{standalone} \usepackage{graphicx} \usepackage{tikz} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % LaTeX Overlay Generator - Annotated Figures v0.0.1 % Created with (omitted http) ff.cx/latex-overlay-generator/ % If this generator saves you time, consider donating 5,- EUR! :-) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\annotatedFigureBoxCustom{bottom-left}{top-right}{label}{label-position}{box-color}{label-color}{border-color}{text-color} \newcommand*\annotatedFigureBoxCustom[8]{\draw[#5,thick,rounded corners] (#1) rectangle (#2);\node at (#4) [fill=#6,thick,shape=circle,draw=#7,inner sep=4pt,font=\huge\sffamily,text=#8] {\textbf{#3}};} %\annotatedFigureBox{bottom-left}{top-right}{label}{label-position} \newcommand*\annotatedFigureBox[4]{\annotatedFigureBoxCustom{#1}{#2}{#3}{#4}{white}{white}{black}{black}} \newcommand*\annotatedFigureText[4]{\node[draw=none, anchor=south west, text=#2, inner sep=0, text width=#3\linewidth,font=\sffamily] at (#1){#4};} \newenvironment {annotatedFigure}[1]{\centering\begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (-0.75,-0.75) { #1};\begin{scope}[x={(image.south east)},y={(image.north west)}]}{\end{scope}\end{tikzpicture}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \begin{annotatedFigure} {\includegraphics[width=1.0\linewidth]{_image_}} \annotatedFigureBox{0,0}{0.000,0.0}{_letter_}{0,0}%bl \end{annotatedFigure} \end{document} And the sh script: #!/bin/sh # Call this script with at least 2 parameters, for example # sh scriptname <image_file> <letter_of_legend> cat add_legend.tex | sed "s/_image_/$1/g" | sed "s/_letter_/$2/g" | 
pdflatex #rename output to match <letter_of_legend>_<image_file> format mv texput.pdf $2_$1 #clean up rm texput.* exit 0 Finally, by calling: $> ./legend.sh online_gauss.pdf A the output is written to "A_online_gauss.pdf"!
Overlay text in PDF via command line
1,433,997,497,000
I tried to copy /dev/Storage/Storage (an LV under LVM) to an image file using a dd | pv | dd pipeline. dd reported an error, and I want to know whether dd finished copying my disk or stopped due to the error. I'm not sure since it gave me two different outputs: one with an error at the top and one without. I'd take a guess and say it didn't because between the two there's only an extra 0.1 seconds and no extra data, but I'm not sure if it did or not. /dev/Storage/Storage is a 1 TB disk (terabyte = 10^12 = 1000^4 bytes) or 931.51 GiB (gibibyte = 2^30 = 1024^3 bytes) or 1953513472 sectors. The filesystem on the disk is messed up and doesn't work properly. $ sudo dd if=/dev/Storage/Storage | pv | dd of=Storage.img dd: error reading ‘/dev/Storage/Storage’: Input/output error ] 1627672400+0 records ins] [ <=> ] 1627672400+0 records out 833368268800 bytes (833 GB) copied, 75181 s, 11.1 MB/s 776GB 20:53:01 [10.6MB/s] [ <=> ] 1627672400+0 records in 1627672400+0 records out 833368268800 bytes (833 GB) copied, 75181.1 s, 11.1 MB/s
You are using the default 512-byte dd block size. You would significantly improve performance by using a larger block size, say 128k or even 1M. There are two outputs because you are running two dd commands; the first one is the device reader, and it is the one that shows an I/O error. (Note that the 833 GB copied is well short of the ~1 TB source, and dd without conv=noerror stops at the first read error, so the copy did not complete.) You are likely using LVM, given the device name you use: /dev/Storage/Storage. Are you sure this is the whole disk and not a subset? Use lvdisplay to figure out what is behind this device name.
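To make the block-size point concrete, here is a harmless demonstration on a throwaway file. On the failing device you would also add conv=noerror,sync so that dd skips unreadable blocks (padding with zeros) instead of aborting at the first error; for genuinely dying media, GNU ddrescue is usually the better tool.

```sh
# Create a 1 MiB source file, then copy it with a 1M block size.
dd if=/dev/zero of=/tmp/dd-demo.img bs=128k count=8 2>/dev/null
dd if=/tmp/dd-demo.img of=/tmp/dd-copy.img bs=1M conv=noerror,sync 2>/dev/null

# Both files are 1048576 bytes.
wc -c /tmp/dd-demo.img /tmp/dd-copy.img
```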
Interpreting dd Input/Output error
1,433,997,497,000
I have a situation where I need to find the files having world-write (WW) permission 666 and I need to remediate such files to 664. For this I have used this command find /dir/stuct/path -perm -0002 -type f -print > /tmp/deepu/permissions.txt When I execute the command I get the files which have WW permissions. Now my requirement is something like find /dir/stuct/path -perm -0002 -type f chmod 664 Is my syntax correct?
Think about your requirement for a moment.  Do you (might you possibly) have any executable files (scripts or binaries) in your directory tree?  If so, do you want to remove execute permission (even from yourself), or do you want to leave execute permission untouched?  If you want to leave execute permission untouched, you should use chmod o-w to remove (subtract) write permission from the others field only. Also, as Anthon points out, the find command given in the other answer executes the chmod program once for each world-writable file it finds.  It is slightly more efficient to say find top-level_directory -perm -2 -type f -exec chmod o-w {} + This executes chmod with many files at once, minimizing the number of execs. P.S. You don’t need the leading zeros on the 2.
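A cautious pattern is to preview the matches with -print first, run the chmod, then re-run the find to confirm nothing world-writable remains. Here is a self-contained demonstration in a scratch directory (the file names are made up):

```sh
mkdir -p /tmp/permdemo
touch /tmp/permdemo/safe /tmp/permdemo/loose
chmod 644 /tmp/permdemo/safe      # not world-writable
chmod 666 /tmp/permdemo/loose     # world-writable

find /tmp/permdemo -perm -2 -type f -print            # lists only "loose"
find /tmp/permdemo -perm -2 -type f -exec chmod o-w {} +
find /tmp/permdemo -perm -2 -type f -print            # prints nothing now
```

Afterwards loose is 664: only the others-write bit was removed, so any execute bits on real files would survive, which is exactly why chmod o-w is preferable to chmod 664 here.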
How to chmod the files based on the results from find command
1,433,997,497,000
I am trying to install ap-hotspot on Ubuntu 14.04. When I enter the command: sudo add-apt-repository ppa:nilarimogard/webupd8 It gives me this message "Cannot add PPA: 'ppa:nilarimogard/webupd8'. Please check that the PPA name and format is correct" How do I proceed? Since I am using a college proxy to access the Internet, I tried sudo -E add-apt-repository ppa:nilarimogard/webupd8 but it didn't help. But I am able to run sudo apt-get update so there is no problem with the internet connection. I also tried reinstalling ca-certificates by using the command sudo apt-get install --reinstall ca-certificates but it also didn't solve the problem. I also tried from the Ubuntu Software Center, but there I am also unable to add a PPA repository. Please help me in resolving this problem...
Since you can't add the repository, you can always add it from a terminal using the command line. Browse to the list of the repositories at the WebUpd8 Website. Copy down the address of the master repository, which is Master Repository. You want to add this one because it contains all the others. sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup sudo gedit /etc/apt/sources.list Visit Master Repository in a Web Browser. Find the Dropdown Arrow that reads: Technical Details about this PPA Click Your Ubuntu Version in the List Labeled Choose Your Version Add the resulting output into the file in Step 2. Save the File sudo apt-get update. This update should now pick up the new repository. If you receive the error as you stated in your comment, then this repository has no published signing key. You may want to contact the PPA maintainer at that point, who will either give you the key or tell you to ignore the warning.
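For reference, Launchpad PPA entries all follow the same pattern, so the lines you end up pasting into sources.list should look like the following (the trusty codename is an assumption here; use whichever release you picked in the dropdown):

```
deb http://ppa.launchpad.net/nilarimogard/webupd8/ubuntu trusty main
deb-src http://ppa.launchpad.net/nilarimogard/webupd8/ubuntu trusty main
```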
Unable to add PPA repository from terminal
1,433,997,497,000
Moved from Stack Overflow, where I realize it was off-topic since it was asking for sources - far as I can tell, the rules forbid that there but not here. I know that the kernel in Android is now mostly the Linux kernel with a few exceptions like wakelocks (as described by John Stultz.) But is it close enough to be compliant with the Linux Standard Base? (Or for that matter with POSIX and/or the Single Unix Specification?) I'm writing about this in an academic term paper, so as well as the answer itself it would be great to have a relatively reliable source I can cite for it: a peer-reviewed article or book would be ideal, but something from Google's developer docs or a person with established cred (Torvalds, Andrew Josey, etc.) would be fine.
The LSB, POSIX, and the Single UNIX Specification all significantly involve userland. Simply using a kernel that is also used as the basis of a "unix-like", "mostly POSIX compliant" operating system -- GNU/Linux -- is not sufficient to make Android such as well. There are, however, some *nix-ish elements, such as the shell, which is a "largely compatible" Korn shell implementation (on pre-4.0, it may actually be the ash shell, which is used on embedded GNU/Linux systems via busybox) and various POSIX-y command line utilities to go along with it. There is not the complete set most people would recognize from the "unix-like" world, however. is it close enough to be compliant with the Linux Standard Base? A centrepiece of the LSB is the filesystem hierarchy, and Android does not use this. LSB really adds stuff to POSIX, and since Android is not nearly that, it is even further from being LSB compliant. This is pretty explicitly not a goal for the platform, I believe. The linux kernel was used for its own properties, and not because it could be used as the core of a POSIX system; it was taken up by GNU originally for both reasons. To clarify this distinction regarding a user space oriented specification -- such as POSIX, Unix, or the LSB extensions -- consider some of the things POSIX has to say about the native C library. This is where we run into platform specific things such as networking and most system calls, such as read() -- read() isn't, in fact, standard C. It's a Unix thing, historically. POSIX does define these as interfaces but they are implemented in the userland C library, then everything else uses this library as its foundation. The C library on GNU/Linux is the GNU C Library, a completely separate work from the kernel. Although these two things work together as the core of the OS, none of the standards under discussion here say anything about how this must happen, and so in effect, they don't say anything about what the kernel is or must do. 
They say a lot of things about what the C library is and must do, meaning, if you wrote a C library to work with a given kernel -- any kernel, regardless of form or characteristics -- and that library provides a user land API that satisfies the POSIX spec, you have a POSIX compliant OS. LSB does, I think, have some things to say about /proc, which linux provides as a kernel interface. However, the fact that this (for example) is provided directly by the kernel does not mean that the LSB says it has to be -- it just says this should/could be available, and if so what the nature of the information is.
Is Android compatible with the Linux Standard Base?
1,433,997,497,000
Out of curiosity, is it possible to find out which bootloader was used to start a given system? Was the system booted by GRUB, LILO or any other boot loader? I guess there must exist some /sys or /proc variable for the same? EDIT: Boot Info Summary: => Lilo is installed in the MBR of /dev/sda sda1: ___________________________________________________________________ File system: Boot sector type: Unknown Boot sector info: Mounting failed: mount: unknown filesystem type '' /dev/sda is the only device I have to boot with. I wonder, if there is no known file system on the only available single partition, then how did it manage to boot?
I don't believe this info is tracked in a meaningful way under either /sys or /proc. About the only way I can fathom this would be accessible to you after a boot is by interrogating the system, either by looking to see if a GRUB or Lilo configuration file is present, or by making use of a script such as bootinfoscript.
Example - check boot device
If you know which device your system was booted with, you can use dd to dump the contents of the boot loader and then grep for either GRUB or LILO.
You can use these commands to determine whether you're using GRUB or LILO:
$ sudo dd if=/dev/sda bs=512 count=1 2>&1 | grep GRUB
$ sudo dd if=/dev/sda bs=512 count=1 2>&1 | grep LILO
Whichever returns this string is the boot loader you're using:
Binary file (standard input) matches
Example - using bootinfoscript
$ sudo ./bootinfoscript --stdout
Boot Info Script 0.61 [1 April 2012]
============================= Boot Info Summary: ===============================
 => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of
    the same hard drive for core.img, but core.img can not be found at this
    location.
sda1: __________________________________________________________________________
    File system:       ext4
    Boot sector type:  -
    Boot sector info:
    Operating System:
    Boot files:        /grub2/grub.cfg
...
Neither of these approaches is "conclusive", however, since multiple boot loaders can coexist, but at least it gives you a rough idea of the boot loaders that "might" be in use.
References
How do I find out which boot loader I have?
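The dd | grep check can be demonstrated harmlessly against a throwaway image file standing in for the MBR (point if= at your real boot device, e.g. /dev/sda, for the actual test):

```sh
# Fake a 512-byte "MBR" containing a GRUB signature (conv=sync pads with NULs).
printf 'GRUB' | dd of=/tmp/mbr-demo.bin bs=512 count=1 conv=sync 2>/dev/null

for loader in GRUB LILO; do
  if dd if=/tmp/mbr-demo.bin bs=512 count=1 2>/dev/null | grep -q "$loader"; then
    echo "found $loader signature"
  fi
done
# -> found GRUB signature
```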
Find bootloader
1,433,997,497,000
I really like hexdump, especially because you can define a custom format; say: $ echo -e '\x00\x01\x02\x03' | hexdump -v -e '1/1 "%_ad: "' -e '4/1 "%02X "' -e '1/1 " : "' -e '4/1 "%_p"' -e '1/1 "\n"' 0: 00 01 02 03 : .... 4: 0A : . So, I can choose to have, say, 4 bytes per line, written as hexadecimal first, then as characters. But, what I'm missing here, is a "binary string" (or "bit string") formatting character; e.g. I'd like to write something like -e '4/1 "%08b "' somewhere in that command line, and get, e.g.: 0: 00 01 02 03 : 00000000 00000001 00000010 00000011 : .... 4: 0A : 00001010 : . Of course, then probably one would have to specify endianness (if groups of more than one byte should be formatted) etc... But in any case, this kind of formatting doesn't exist, as far as I can see in the hexdump manual. So my question is - what alternatives do I have on a Linux command line, so that I could obtain a formatted dump that includes binary strings as above, and yet to the greatest extent possible preserves the customizability of the hexdump program (in terms of byte grouping) when using its -e option?
Failing a dump program with suitable dump options, you can always cobble something together by using both hexdump and xxd and then joining the output with paste. It's not pretty, but using a shell that supports process substitution (bash will do): mkfifo fifo echo -e '\x00\x01\x02\x03' | tee fifo | paste -d' ' \ <(hexdump -v -e '1/1 "%_ad: "' -e '4/1 "%02X "' -e '1/1 " :\n"') \ <(xxd -b -c 4 fifo | cut -d' ' -f 2-) Output: 0: 00 01 02 03 : 00000000 00000001 00000010 00000011 .... 4: 0A : 00001010 .
Achieving hexdump-like format, that includes binary strings, on the command line?
1,433,997,497,000
What is ALSA? I read the Wikipedia page but couldn't understand it. All I understand is that it is a module in the kernel that has something to do with sound. What happens if you don't include it in the kernel? Does it mean that the speakers won't work, or something? I am asking because I am trying to install a version of Linux but I don't know if the kernel should include ALSA.
ALSA stands for Advanced Linux Sound Architecture. I'd encourage you to poke around their project website if you're truly curious. Specifically, I'd take a look at the "I'm new to ALSA" pages & tutorials. The ArchLinux wiki probably describes it the best. The Advanced Linux Sound Architecture (ALSA) is a Linux kernel component which replaced the original Open Sound System (OSSv3) for providing device drivers for sound cards. Besides the sound device drivers, ALSA also bundles a user space library for application developers who want to use driver features with a higher level API than direct interaction with the kernel drivers. This diagram is also helpful in understanding where the various components, ALSA, JACK, etc. fit with respect to each other & the kernel. And finally one more excerpt - How it works: Linux audio explained: When it comes to modern Linux audio, the beginning is the Advanced Linux Sound Architecture, or ALSA. This connects to the Linux kernel and provides audio functionality to the rest of the system. But it's also far more ambitious than a normal kernel driver; it can mix, provide compatibility with other layers, create an API for programmers and work at such a low and stable latency that it can compete with the ASIO and CoreAudio equivalents on the Windows and OS X platforms. So the bottom line is that ALSA is the layer that provides other audio software components access to the kernel, so to answer your question, yes you need it.
What is ALSA in the Linux kernel?
1,433,997,497,000
I have an LDAP user who accesses a server based on having the appropriate LDAP host attribute via sssd. This user does not show up in /etc/passwd because he is not local. How do I modify his home dir location if he has already logged in and it was created in the default location? RHEL 6 Is it just usermod -d /new/location -m?
This is actually shockingly easy. If your nsswitch is files ldap, just add an entry for them in /etc/passwd and modify whatever parameter you want. If they don't already exist in /etc/passwd, you could do getent passwd <username> | sed 's|/home/<username>|/home/remoteusers/<username>|g' >> /etc/passwd for instance, to change their home directory from the root of /home to a subfolder of /home called remoteusers. The caveat is that you cannot use useradd or usermod; you must edit the file with an editor.
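To see what the sed rewrite does before letting it loose on /etc/passwd, you can run it on a canned getent-style line (the user jdoe and its uid/gid are hypothetical):

```sh
echo 'jdoe:x:10001:10001:John Doe:/home/jdoe:/bin/bash' |
  sed 's|/home/jdoe|/home/remoteusers/jdoe|g'
# -> jdoe:x:10001:10001:John Doe:/home/remoteusers/jdoe:/bin/bash
```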
Edit home directory for an LDAP user in Linux
1,433,997,497,000
I would like to get the list of files that are used during the process of Linux boot. We are developing the protected enterprise system based on the RHEL 6.4. The integrity of specified files will be checked by a special hardware. So the question is - how to get the list of these files (with resolved dependencies coming from different booting services and daemons).
Thanks to RHEL support, a clear solution has been discovered. It is based on using a SystemTap kernel module. Quoted from here to avoid link rot. And thank you again for all of your advice :) I could not even imagine that systemtap is able to start even before the init script and track the booting process. I really appreciate the Red Hat Support and personally Pushpendra Chavan for help with this perfect tool (unfortunately I don't know exactly which developers this method originally belongs to - otherwise I'd credit them in the first place). So we need to create two simple scripts: bootinit.sh: #!/bin/sh # Use tmpfs to collect data /bin/echo "Mounting tmpfs to /tmp/stap/data" /bin/mount -n -t tmpfs -o size=40M none /tmp/stap/data # Start systemtap daemon & probe /bin/echo "Loading bootprobe2.ko in the background. Pid is :" /usr/bin/staprun \ /root/bootprobe2.ko \ -o /root/bootprobe2.log -D # Give daemon time to start collecting... /bin/echo "Sleeping a bit.." sleep 5 # Hand off to real init /bin/echo "Starting." 
exec /sbin/init 3

and bootprobe2.1.stp, written in the SystemTap scripting language:

global ident

function get_usertime:long() {
    return task_utime() + @cast(task_current(), "task_struct", "kernel<linux/sched.h>")->signal->utime;
}

function get_systime:long() {
    return task_stime() + @cast(task_current(), "task_struct", "kernel<linux/sched.h>")->signal->stime;
}

function timestamp() {
    return sprintf("%d %s", gettimeofday_s(), ident[pid()])
}

function proc() {
    return sprintf("%d \(%s\)", pid(), execname())
}

function push(pid, ppid) {
    ident[ppid] = indent(1)
    ident[pid] = sprintf("%s", ident[ppid])
}

function pop(pid) {
    delete ident[pid]
}

probe syscall.fork.return {
    ret = $return
    printf("%s %s forks %d \n", timestamp(), proc(), ret)
    push(ret, pid())
}

probe syscall.execve {
    printf("%s %s execs %s \n", timestamp(), proc(), filename)
}

probe syscall.open {
    if ($flags & 1) {
        printf("%s %s writes %s \n", timestamp(), proc(), filename)
    } else {
        printf("%s %s reads %s \n", timestamp(), proc(), filename)
    }
}

probe syscall.exit {
    printf("%s %s exit with user %d sys %d \n", timestamp(), proc(), get_usertime(), get_systime())
    pop(pid())
}

In order to receive the list of files accessed during the booting process in systemtap log format, we should do the following: Download and install the PROPERLY named versions of the systemtap and kernel debuginfo packages (I have been given this link, but you'd better use this if you're on CentOS); Create /tmp/stap and /tmp/stap/data mkdir -p /tmp/stap/data Place bootprobe2.1.stp and bootinit.sh into /root and make them executable: chmod +x /root/boot* Edit bootinit.sh and change 'exec /sbin/init 3' to 'exec /sbin/init 5' if 5 is your default runlevel. Create the .ko module from bootprobe2.1.stp cd /root stap bootprobe2.1.stp -m bootprobe2 -p4 Reboot. Halt grub (press Esc or Shift) and press 'a' on the default kernel. 
At the end of the kernel line enter the following and press enter: init=/root/bootinit.sh Normal boot will resume. After logging in, kill the stapio process, copy bootprobe2.log out of the tmpfs /tmp/stap/data directory and unmount it. killall stapio cp /tmp/stap/data/bootprobe2.log /tmp/stap/ umount /tmp/stap/data Now check the /tmp/stap/bootprobe2.log file for the list of all files which are read during boot.
How to list all the files used during Linux boot
1,433,997,497,000
I'm trying to measure the disk IO latencies of a running process to make a histogram. I could do this with DTrace in operating systems that provide it (e.g. as in this Joyent paper), but my application is running in Linux. My first thought was to try perf, and I can get counters but I can't find any way to get time deltas. I can get time deltas with strace (e.g. strace -e read -T), but I'm not sure if I can restrict the trace to disk IO (this system has a busy network interface as well). Is there any way to do this in Linux?
This is actually complicated, but there are hints: Learn about SystemTap; this is the Linux analog of DTrace. I think they may even have an example script for a similar task. Learn blktrace. You may be able to parse its output, in theory. This will be more device latency (service time) than the response time a program gets on read(). Yes, strace may not be appropriate, since it will trace everything (all syscalls, even when you use -e filter) and will load the server and slow the process down considerably. Perf is a very obscure tool; you may have moments when you think you understand its output, but you actually don't, and its feature set depends highly on the kernel version. Basically and currently, perf is suitable for measuring CPU time (cycles), and [yet] unsuitable for measuring response times (which you actually need). I heard they wanted to implement something to ease that, so very recent development kernels may have something. (Look also into perf scripts (perf script -l) if you investigate further.) Maybe you will be able to get something from ftrace. Read this article http://lwn.net/Articles/370423/ (And this for the intro.) As I can see, you can limit ftracing by pid and function, then trace something like sys_read. I tried this as an example for you:
# mount -t debugfs debugfs /sys/kernel/debug # if it's not already mounted
# cd /sys/kernel/debug/tracing
# echo $$ > set_ftrace_pid # pid of process to trace
# echo sys_read sys_write > set_ftrace_filter
# echo function_graph > current_tracer
# head trace
# tracer: function_graph
#
# CPU  DURATION                  FUNCTION CALLS
# |     |   |                     |   |   |   |
 0)   8.235 us    |  sys_write();
 0)   3.393 us    |  sys_write();
 0) ! 459859.3 us |  sys_read();
 0)   6.289 us    |  sys_write();
 0)   8.773 us    |  sys_write();
 0) ! 1576469 us  |  sys_read();
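If you do end up with strace -T style output despite its overhead, turning the trailing <seconds> annotations into a crude histogram is a small awk job. The sketch below is my own (not part of any of the tools above), demonstrated on canned strace-like lines; the bucket boundaries are arbitrary:

```sh
# Bucket the "<0.000123>" latency suffixes that strace -T appends per syscall.
latency_histogram() {
  awk '
    match($0, /<[0-9]+\.[0-9]+>$/) {
      t = substr($0, RSTART + 1, RLENGTH - 2) * 1000   # milliseconds
      if      (t < 1)   bucket = "<1ms"
      else if (t < 10)  bucket = "1-10ms"
      else if (t < 100) bucket = "10-100ms"
      else              bucket = ">=100ms"
      count[bucket]++
    }
    END { for (b in count) printf "%-10s %d\n", b, count[b] }
  '
}

# Canned sample standing in for real `strace -T -e read` output:
printf '%s\n' \
  'read(3, "...", 4096) = 4096 <0.000213>' \
  'read(3, "...", 4096) = 4096 <0.004170>' \
  'read(3, "...", 4096) = 4096 <0.250000>' |
  latency_histogram
```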
Measure disk IO latencies of a running process
1,433,997,497,000
In most Linux systems HZ defaults to 250, making a jiffy 4 ms. The question is: what happens when a program calls usleep() for less than 4 ms? Of course it works as it should while it is scheduled. But what happens when the Linux scheduler takes this program out to wait, because another program has to run? How does preemption work in this case? Should I avoid custom programs with such small waits? They couldn't be accurate, could they?
See time(7), and the manpages it references. An excerpt: High-Resolution Timers Before Linux 2.6.21, the accuracy of timer and sleep system calls (see below) was also limited by the size of the jiffy. Since Linux 2.6.21, Linux supports high-resolution timers (HRTs), optionally configurable via CONFIG_HIGH_RES_TIMERS. On a system that supports HRTs, the accuracy of sleep and timer system calls is no longer constrained by the jiffy, but instead can be as accurate as the hardware allows (microsecond accuracy is typical of modern hardware). You can determine whether high-resolution timers are supported by checking the resolution returned by a call to clock_getres(2) or looking at the "resolution" entries in /proc/timer_list. HRTs are not supported on all hardware architectures. (Support is provided on x86, arm, and powerpc, among others.) A comment suggests that you can't sleep less than a jiffy. That is incorrect; with HRTs, you can. Try this program: /* test_hrt.c */ #include <time.h> int main(void) { struct timespec ts; int i; ts.tv_sec = 0; ts.tv_nsec = 500000; /* 0.5 milliseconds */ for (i = 0; i < 1000; i++) { clock_nanosleep(CLOCK_MONOTONIC, 0, &ts, NULL); } return 0; } Compile it: $ gcc -o test_hrt test_hrt.c -lrt Run it: $ time ./test_hrt real 0m0.598s user 0m0.008s sys 0m0.016s As you can see, 1000 iterations of a 0.5 millisecond delay took just a little over 0.5 seconds, as expected. If clock_nanosleep were truly waiting until the next jiffy before returning, it would have taken at least 4 seconds. Now the original question was, what happens if your program was scheduled out during that time? And the answer is that it depends on the priority. Even if another program gets scheduled while your program is running, if your program is higher priority, or the scheduler decides that it's your program's time to run, it will start executing again after the clock_nanosleep timeout returns. It does not need to wait until the next jiffy for that to happen. 
You can try running the test program above while other CPU-intensive software is running, and you'll see that it still executes in the same amount of time, especially if you increase its priority, e.g.:

$ time sudo schedtool -R -p 99 -e ./test_hrt
How does preemption work on Linux when a program has a timer shorter than 4 ms?
1,433,997,497,000
/usr/src/linux-3.2.1 # make install
scripts/kconfig/conf --silentoldconfig Kconfig
sh /usr/src/linux-3.2.1/arch/x86/boot/install.sh 3.2.1-12-desktop arch/x86/boot/bzImage \
System.map "/boot"
You may need to create an initial ramdisk now.
--
/boot # mkinitrd initrd-3.2.1-12-desktop.img 3.2.1-12-desktop
Kernel image:   /boot/vmlinuz-2.6.34-12-desktop
Initrd image:   /boot/initrd-2.6.34-12-desktop
Kernel Modules: <not available>
Could not find map initrd-3.2.1-12-desktop.img/boot/System.map, please specify a correct file with -M.
There was an error generating the initrd (9)

Note the error from the mkinitrd command. What am I missing? And what does this mean?

Kernel Modules: <not available>

OpenSuse 11.3 64 bit

EDIT 1: I ran "make modules". I copied the System.map file from the /usr/src/linux-3.2.1 directory to /boot; now running the mkinitrd command gives the following error:

linux-dopx:/boot # mkinitrd initrd-3.2.1.img 3.2.1-desktop
Kernel image:   /boot/vmlinuz-2.6.34-12-desktop
Initrd image:   /boot/initrd-2.6.34-12-desktop
Kernel Modules: <not available>
Could not find map initrd-3.2.1.img/boot/System.map, please specify a correct file with -M.
Kernel image:   /boot/vmlinuz-3.2.1-12-desktop
Initrd image:   /boot/initrd-3.2.1-12-desktop
Kernel Modules: <not available>
Could not find map initrd-3.2.1.img/boot/System.map, please specify a correct file with -M.
Kernel image:   /boot/vmlinuz-3.2.1-12-desktop.old
Initrd image:   /boot/initrd-3.2.1-12-desktop.old
Kernel Modules: <not available>
Could not find map initrd-3.2.1.img/boot/System.map, please specify a correct file with -M.
There was an error generating the initrd (9)
You should be using mkinitramfs, not mkinitrd. The actual initrd format is obsolete; initramfs is used instead these days, even though the image is often still called an initrd. Better yet, just use update-initramfs. Also, you need to run make modules_install to install the modules for the new kernel (which is likely why mkinitrd reports Kernel Modules: <not available>).
How to create an initrd image on OpenSuSE linux?
1,433,997,497,000
I have a question about giving a shell account to somebody: how safe is it? He could read /etc, for example. How can I give out a secured shell account that restricts the user to a few binaries and his own home directory? Is a chroot jail the only way?
One of the easiest and most efficient ways to control what a user can do is lshell. lshell is a shell coded in Python that lets you restrict a user's environment to a limited set of commands, choose to enable/disable any command over SSH (e.g. SCP, SFTP, rsync, etc.), log the user's commands, implement timing restrictions, and more.
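As an illustration, here is a minimal /etc/lshell.conf sketch. The key names follow lshell's sample configuration; the command list and paths are made up for this example and should be adapted to your users:

```
[default]
# commands the user may run
allowed         : ['ls','cat','less','cd','pwd','scp']
# characters/strings that are rejected outright
forbidden       : [';','&','|','`','>','<','$(','${']
# kick the user after 2 warnings about forbidden input
warning_counter : 2
# restrict navigation to these paths
path            : ['/home/restricted']
home_path       : '/home/restricted'
```

You then set the user's login shell to lshell (e.g. chsh -s /usr/bin/lshell username; the exact path depends on how your distribution packages it).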
How can I safely give a shell to somebody?
1,433,997,497,000
Following on from topics raised in other questions, I believe that when the Linux kernel is trying to free up physical RAM, it makes some kind of decision between discarding pages from its disk cache and flushing other pages out to swap. I'm not certain of this; perhaps I've completely misunderstood the mechanism. In any case, there is synergy between:

- the kernel keeping its own internal disk cache
- programs memory-mapping files
- "regular" application memory being placed in a swap file

All three may be used to free up physical RAM; all three rely on either writing "dirty" pages to disk, or simply knowing the pages are already cleanly duplicated on disk and discarding them from the physical RAM allocation.

With swap space, individual swap partitions and files can be given a priority so that faster devices are used before slower devices. I'm not aware of any such configuration for other, non-swap devices.

My question is: how does the kernel decide which RAM to free up?

- Is it purely based on last access?
- Does the kernel use all three above equally?
- Is there any prioritisation between them (and individual disks)?
- Is it configurable?

This question is driven by experiences similar to this one, but I want this question to focus on the underlying mechanism, NOT on troubleshooting a particular problem.
The name given to the overall task of freeing physical pages of memory is reclaim, and it covers a number of tasks. Reclaim is mostly driven by page allocations, with various levels of urgency. On an unloaded system, page allocations can be satisfied without effort and don't trigger any reclaim. On moderately loaded systems, page allocations can still be satisfied immediately, but they will also cause kswapd to be woken up to perform background reclaim. On loaded systems where page allocations can no longer be satisfied immediately, reclaim is performed synchronously.

Reclaimable pages are pages which store content which can be found or be made available elsewhere. This is where the typical balancing act comes into play: memory whose contents are also in files (or are supposed to end up in files), v. memory whose contents are not (and need to be swapped out). The former are stored in the page cache, the latter not, which is why balancing explanations usually talk about page cache v. swap.

The decision to favour one over the other is made in a single place in the kernel, get_scan_count, controlled by settings in struct scan_control. This function's purpose is described as follows:

    Determine how aggressively the anon and file LRU lists should be scanned.
    The relative value of each set of LRU lists is determined by looking at
    the fraction of the pages scanned we did rotate back onto the active list
    instead of evict.

Perhaps surprisingly for a function named get_..., this doesn't use a return value; instead, it fills in the array pointed at by the unsigned long *nr pointer, with four entries corresponding to anonymous inactive pages (un-backed, not-recently-used pages), anonymous active pages (un-backed, recently-used pages), file inactive pages (not-recently-used pages in the page cache), and file active pages (recently-used pages in the page cache). get_scan_count starts by retrieving the appropriate "swappiness" value, from mem_cgroup_swappiness.
If the current memory cgroup is an enabled, non-root v1 cgroup, its swappiness setting is used; otherwise, it's the infamous /proc/sys/vm/swappiness. Both settings share the same purpose; they tell the kernel the relative IO cost of bringing back a swapped-out anonymous page vs reloading a filesystem page.

Before it actually uses this value though, get_scan_count determines the overall strategy it should apply:

- if there's no swap, or anonymous pages can't be reclaimed in the current context, it will go after file-backed pages only;
- if the memory cgroup disables swapping entirely, it will go after file-backed pages only;
- if swappiness isn't disabled (set to 0), and the system is nearly out of memory, it will go after all pages equally;
- if the system is nearly out of file pages, it will go after anonymous pages only;
- if there is enough inactive page cache, it will go after file-backed pages only;
- in all other cases, it adjusts the "weight" given to the various LRUs depending on the corresponding I/O cost.

Once it has determined the strategy, it iterates over all the evictable LRUs (inactive anonymous, active anonymous, inactive file-backed, active file-backed, in that order) to determine how many pages from each should be scanned; I'll ignore v1 cgroups:

- if the strategy is "go after all pages equally", all pages in all LRUs are liable to be scanned, up to the size determined by scan_control's priority shift factor;
- if the strategy is "go after file-backed pages only" or "go after anonymous pages only", all pages in the corresponding LRUs are candidates (again, shifted by priority), none in the other LRUs;
- otherwise, the values are adjusted according to swappiness.

The actual page scanning is driven by shrink_lruvec, which uses the scan lengths determined above, and repeatedly shrinks the LRUs until the targets have been reached (adjusting the targets as it goes in various ways). Once this is done, the active/inactive LRUs are rebalanced.
Getting back to your questions:

- the page cache and memory-mapped files are treated equally;
- page reclaim isn't based purely on last access (I haven't explained how the LRUs are used, or how rebalancing works; read the corresponding chapter in Mel Gorman's Understanding the Linux Virtual Memory Manager for details);
- the kernel doesn't use them equally; they are prioritised differently based on circumstances, and can be configured via a number of controls (swappiness, cgroups, low watermark thresholds...). Swap priority only determines where a page goes once it's been decided to swap it out.

(Incidentally, the swappiness documentation and the explanations above should make it clear that there isn't enough granularity in I/O cost to handle mixed ZRAM/disk swap setups nicely...)

There is a lot more to explain, including how scan_control is set up, but I suspect this is already too long! If you want to track reclaim cost, you can see it in task delay accounting (see also struct taskstats).
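The knobs and counters discussed above can be inspected from userspace; a minimal sketch using the Linux /proc interface (nothing here is specific to a particular kernel version):

```shell
#!/bin/sh
# Global swappiness: the relative IO cost of swap-in vs page-cache reload.
echo "vm.swappiness = $(cat /proc/sys/vm/swappiness)"

# Sizes of the four evictable LRU lists that get_scan_count balances:
grep -E '^(Active|Inactive)\((anon|file)\):' /proc/meminfo

# Overall committed memory, for context:
grep '^Committed_AS:' /proc/meminfo
```

On cgroup v1 systems, the per-cgroup setting mentioned above lives in memory.swappiness under the cgroup's directory (e.g. /sys/fs/cgroup/memory/<group>/memory.swappiness).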
How does the kernel decide between disk-cache vs swap?
1,433,997,497,000
Is there any limit on the maximum nesting of directories in the ext4 filesystem? For example, the ISO-9660 filesystem AFAIK cannot have more than 7 levels of sub-directories.
There isn’t any limit inherent in the file system design itself, and experimentation (thanks ilkkachu) shows that directories can be nested to a depth exceeding limits one might naïvely expect (PATH_MAX, 4096 on Linux, although that limits the length of paths passed to system calls and can be worked around with relative paths). Part of the implementation apparently assumes that the overall path length, inside a given file system, never goes above PATH_MAX; see the directory hashing functions which allocate PATH_MAX bytes. The only directory-related limit which seems to be checked in the file system implementation is the length of an individual path component, which is limited to 255 bytes; but that doesn’t have any bearing on the nested depth.
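The relative-path workaround mentioned above is easy to demonstrate. A sketch (this probes the PATH_MAX behaviour, not anything ext4-specific, so it works on any Linux filesystem; each component is the single byte "d"):

```shell
#!/bin/sh
# Nest directories until the logical path is far longer than PATH_MAX (4096
# on Linux), descending one relative component at a time.
base=$(mktemp -d) || exit 1
cd "$base" || exit 1
depth=0
while [ "$depth" -lt 3000 ]; do
    mkdir d && cd d || exit 1
    depth=$((depth + 1))
done
# The shell maintains $PWD logically, so we can measure the path length even
# though a single absolute path this long can't be passed to a system call:
len=$(printf '%s' "$PWD" | wc -c)
cd / && rm -rf "$base"
echo "nested $depth levels; logical path length: $len bytes (PATH_MAX is 4096)"
```

Tools that rely on getcwd() or absolute paths will fail inside such a tree, but relative operations (mkdir, cd, openat-based traversal) keep working.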
Nested directory depth limit in ext4
1,433,997,497,000
There are three spin_lock functions in the kernel I am currently busy with:

- spin_lock
- spin_lock_irq
- spin_lock_irqsave

I only find write-ups covering two of them at a time (including the Linux documentation). The answers and explanations are formulated ambiguously, contradict each other, or even carry comments saying the explanation is wrong. This makes it hard to get an overview. Some basics are clear to me; for example, in interrupt context a plain spin_lock() can result in a deadlock. But I'd really appreciate a complete picture of this subject. I need to understand:

- When should we use which version, and when shouldn't we?
- When is it unnecessary to use a safer version, but doesn't hurt (except for performance)?
- What is the reason to use a particular version in a particular situation?
A brief description is given in Chapter 5, "Concurrency and Race Conditions", of Linux Device Drivers, Third Edition:

    void spin_lock(spinlock_t *lock);
    void spin_lock_irqsave(spinlock_t *lock, unsigned long flags);
    void spin_lock_irq(spinlock_t *lock);

    spin_lock_irqsave disables interrupts (on the local processor only) before
    taking the spinlock; the previous interrupt state is stored in flags. If
    you are absolutely sure nothing else might have already disabled
    interrupts on your processor (or, in other words, you are sure that you
    should enable interrupts when you release your spinlock), you can use
    spin_lock_irq instead and not have to keep track of the flags.

The spin_lock_irq* functions are important if you expect that the spinlock could be taken in interrupt context. The reason is that if the spinlock is held by the local CPU, and the local CPU then services an interrupt which also attempts to take the spinlock, you have a deadlock: the interrupt handler spins forever waiting for a lock that the interrupted code can never release.
spin_lock vs. spin_lock_irq vs. spin_lock_irqsave
1,433,997,497,000
From Understanding The Linux Kernel Unix is a multiprocessing operating system with preemptable processes. Even when no user is logged in and no application is running, several system processes monitor the peripheral devices. In particular, several processes listen at the system terminals waiting for user logins. When a user inputs a login name, the listening process runs a program that validates the user password. If the user identity is acknowledged, the process creates another process that runs a shell into which commands are entered. When a graphical display is activated, one process runs the window manager, and each window on the display is usually run by a separate process. When a user creates a graphics shell, one process runs the graphics windows and a second process runs the shell into which the user can enter the commands. For each user command, the shell process creates another process that executes the corresponding program. What does "graphics shell" mean here? Is gnome shell a graphics shell? Is my earlier question Where does "graphical shell" stand in the hierarchy of "windowing system, window manager, desktop environment"? related to the one here? The question links to https://en.wikipedia.org/wiki/Shell_(computing)#GUI, which says Graphical shells provide means for manipulating programs based on graphical user interface (GUI), by allowing for operations such as opening, closing, moving and resizing windows, as well as switching focus between windows. Graphical shells may be included with desktop environments or come separately, even as a set of loosely coupled utilities. Does "the shell" near the end mean a "graphics shell"? Is it a command line shell running in a terminal emulator?
The term graphics shell can refer both to a graphical shell and to a command line shell running under it: meaning, the graphical user interface (GUI) itself, or the command line that controls the GUI's functions.

First, let's begin with what "shell" means: a shell is a program, or even a group of programs working together, that controls the operating system and the hardware; the shell is really the software that gives you direct control over the computer.

A graphical shell is a shell that presents output as 2D or 3D graphics, as opposed to plain text. In other words, it is the graphical user interface (GUI), including windows, menus, etc., that provides more flexible interaction between the user and the system than the plain text offered by a terminal interface. However, given that the core of the GUI is built as a shell, all of its functions can be controlled from the command line.

For example, the command gnome-shell is the graphical shell for the GNOME desktop; it provides the core user interface functions of the GNOME desktop, which can be adjusted from a command line. Another example is nautilus, the main GUI file explorer in GNOME; this interface is available as a command called nautilus, with the following options:

$ nautilus --help
Usage:
  nautilus [OPTION...] [URI...]

Help Options:
  -h, --help                  Show help options
  --help-all                  Show all help options
  --help-gapplication         Show GApplication options
  --help-gtk                  Show GTK+ Options

Application Options:
  -c, --check                 Perform a quick set of self-check tests.
  --version                   Show the version of the program.
  -w, --new-window            Always open a new window for browsing specified URIs
  -n, --no-default-window     Only create windows for explicitly specified URIs.
  -q, --quit                  Quit Nautilus.
  -s, --select                Select specified URI in parent folder.
  --display=DISPLAY           X display to use

Meaning, you can control the GUI's functions through the command line.
In Linux, a graphical shell is usually made of a couple of layers of software. The operating system provides graphics drivers, along with keyboard and mouse drivers. On top of the drivers sits a windowing system like X11 or Wayland. It creates higher-level wrappers around input (like providing keyboard layouts), manages the memory that stores the 2D images transmitted to the display driver, and provides apps with the means to paint to these in-memory images.

Above the windowing system you have a window manager, which translates keyboard and mouse events into operations that manipulate the windows the apps are painting to. This includes tasks such as launching, pausing, hiding, showing, and closing apps, and detecting when an app has failed and cleaning up after it. There are dozens of popular window managers, including Unity, Gnome Shell, Xfwm, OpenBox, i3, Xmonad and many others.

Apps can draw graphics as they see fit; however, app developers usually prefer to make use of a common set of drawing tools, so their app looks consistent with all the other apps running on the system. These are software libraries that you import into your app. You then call their functions to draw menus, buttons and text inputs, and to display images such as PNG and JPG files. These common drawing tools are called "widget toolkits." The two most popular widget toolkits on Linux are Gtk+ and Qt. You can use both Gtk+ and Qt at the same time, and this is often why different apps on Linux can have inconsistencies in their look and feel.

These layers are pretty specific to the Linux software ecosystem. Mac OS, Windows, and Android all do things differently, but they all tend to integrate each of these layers into a single monolithic graphical shell. It simplifies things, but also prevents a lot of customization. The reason Linux complicates things is that people prefer to have choices, and they enjoy customizing their shells.
If you are managing your own Linux distribution, it is a good idea to put some effort into choosing your default set of apps so that they all use the same widget toolkit and provide a consistent look and feel. On top of the graphical shell, you can build graphical apps such as file system browsers, app launchers, notification and system-status apps, and system configuration ("control panel") apps. Taken collectively, these apps make up what we call the "desktop environment."
graphics shell vs graphical shell
1,433,997,497,000
Does anyone have documentation on ext4-rsv-conver?

$ pgrep -a -f ext4-rsv-conver
153 ext4-rsv-conver
161 ext4-rsv-conver
7451 ext4-rsv-conver
$ dpkg -S ext4-rsv-conver
dpkg-query: no path found matching pattern *ext4-rsv-conver*

I can't find anything about ext4-rsv-conver in Google. My system is Debian 9.
These processes are kernel threads, used by the ext4 implementation to handle conversion work from writeback, i.e. “completed IOs that need unwritten extents handling and have transaction reserved”. That probably doesn’t explain much, but it does mean they’re nothing to worry about. Basically the kernel ends up with work which needs to be dealt with “out of band”, and uses a work queue with dedicated threads to handle it (instead of blocking the calling process or interrupt).
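One quick way to confirm from userspace that a given PID is a kernel thread: kernel threads have no command line, so /proc/<pid>/cmdline reads back empty (and ps shows their names in square brackets). A small sketch, with a helper function invented for this illustration:

```shell
#!/bin/sh
# Hypothetical helper: true if the pid looks like a kernel thread.
is_kernel_thread() {
    # We must actually read the file: /proc entries stat as size 0,
    # so "test -s" would be misleading. (Zombies also read back empty.)
    [ -z "$(tr -d '\0' < "/proc/$1/cmdline" 2>/dev/null)" ]
}

# Our own shell is an ordinary userspace process:
if is_kernel_thread $$; then
    echo "pid $$ looks like a kernel thread"
else
    echo "pid $$ is a userspace process"
fi
```

Run against the PIDs from the question (e.g. is_kernel_thread 153), the ext4-rsv-conver entries would read back empty, consistent with them being kernel worker threads.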
What is "ext4-rsv-conver" process?
1,516,899,074,000
I am running a syslog server (rsyslog 8) on my CentOS machine. I want to configure the other devices in my network to send their logs to this syslog server. If this is set up correctly, where exactly will the logs be stored on the CentOS machine? In the /var/log/messages folder?
Syslog is a standard logging facility. It collects messages from various programs and services, including the kernel, and stores them (depending on setup) in a bunch of log files, typically under /var/log. In some datacenter setups there are hundreds of devices, each with its own log; syslog comes in handy here too: one just sets up a dedicated syslog server which collects all the individual device logs over the network. Syslog can also save logs to databases, among other things.

According to my /etc/syslog.conf, by default /var/log/kern.log captures only the kernel's messages of any loglevel, i.e. the output of dmesg. /var/log/messages instead aims at storing valuable, non-debug and non-critical messages; this log should be considered the "general system activity" log. /var/log/syslog in turn logs everything except auth-related messages. Other interesting standard logs managed by syslog are /var/log/auth.log and /var/log/mail.log.

Regarding your question: /var/log/messages is a file, not a folder.
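To actually receive those device logs over the network, the rsyslog server must load a network input module. A sketch in rsyslog 8's RainerScript (module and parameter names per rsyslog's documentation; the port and the per-host template are illustrative, and you'll need to open the firewall accordingly):

```
# /etc/rsyslog.d/remote.conf on the CentOS server: listen for UDP syslog
module(load="imudp")
input(type="imudp" port="514")

# Optional: file each remote host's messages separately instead of letting
# them fall through to the ordinary rules (and thus /var/log/messages)
template(name="PerHostFile" type="string" string="/var/log/remote/%HOSTNAME%.log")
if $fromhost-ip != '127.0.0.1' then action(type="omfile" dynaFile="PerHostFile")
```

On each client device, a single legacy-format line such as *.* @192.168.1.10:514 (one @ for UDP; @@ for TCP, which requires imtcp on the server) forwards everything. Without a template like the one above, remote messages land wherever the server's normal rules put them, typically including /var/log/messages.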
File location for Syslogs in Centos machine
1,516,899,074,000
I read about a developer saying it doesn't work, "and probably never will". Why is this? The two OSes aren't that radically different, I think.
Wine on macOS has been able to run 64-bit applications since version 2.0. From the release notes for 2.0:

    The main highlights are the support for Microsoft Office 2013, and the 64-bit support on macOS.
Why can wine run 64bit programs on Linux but not on Mac?
1,516,899,074,000
So on my Linux desktop, I'm writing some large file either to a local disk or an NFS mount. There is some kind of system buffer that the to-be-written data is cached in (something in the range of 0.5-2 GB on my system, I think?). If the buffer is full, all file access blocks, effectively freezing the system until the write is done. (I'm pretty sure even read access is blocked.)

What do I need to configure to make sure that never happens? What I want is: if a process can't write data to disk (or a network mount, etc.) fast enough, that process can block until the disk catches up, but other processes can still read/write data at a reasonable rate and latency without any interruption.

Ideally, I'd be able to set how much of the total read/write rate of the disk is available to a certain type of program (cp, git, mplayer, firefox, etc.), like "all mplayer processes together get at least 10 MB/s, no matter what the rest of the system is doing". But "all mplayer instances together get at least 50% of the total rate, no matter what" is fine too. (I.e., I don't care much whether I can set absolute rates or proportions of the total rate.)

More importantly (because the most important reads/writes are small), I want a similar setup for latency. Again, I'd like a guarantee that a single process's reads/writes can't block the rest of the system for more than, say, 10 ms (or whatever), no matter what. Ideally, I'd have a guarantee like "mplayer never has to wait more than 10 ms for a read/write to get handled, no matter what the system is doing".

This must work no matter how the offending process got started (including what user it's running under, etc.), so "wrap a big cp in ionice" or whatever is only barely useful. It would only prevent some tasks from predictably freezing everything if I remember to ionice them, but what about a cron job, an exec call from some running daemon, etc.?
(I guess I could wrap the worst offenders with a shell script that always ionices them, but even then, looking through ionice's man page, it seems to be somewhat vague about what exact guarantees it gives me, so I'd prefer a more systematic and maintainable alternative.)
Typically, Linux uses a cache to write data to the disk asynchronously. However, it may happen that the time span between the write request and the actual write, or the amount of unwritten (dirty) data, becomes very large. In this situation a crash would result in a huge data loss, and for this reason Linux switches to synchronous writes if the dirty cache becomes too large or too old. As the write order has to be respected as well, you cannot just let a small IO bypass the queue without guaranteeing that the small IO is completely independent of all earlier queued writes. Thus, dependent writes may cause a huge delay. (This kind of dependency may also arise at the file system level: see https://ext4.wiki.kernel.org/index.php/Ext3_Data%3DOrdered_vs_Data%3DWriteback_mode).

My guess is that you are experiencing some kind of buffer bloat in combination with dependent writes. If you write a large file and have a large disk cache, you end up in situations where a huge amount of data has to be written before a synchronous write can be done. There is a good article on LWN describing the problem: https://lwn.net/Articles/682582/

Work on schedulers is still going on and the situation may get better with new kernel versions. However, until then: there are a few switches that can influence the caching behavior on Linux (there are more; see https://www.kernel.org/doc/Documentation/sysctl/vm.txt):

dirty_ratio: Contains, as a percentage of total available memory that contains free pages and reclaimable pages, the number of pages at which a process which is generating disk writes will itself start writing out dirty data. The total available memory is not equal to total system memory.

dirty_background_ratio: Contains, as a percentage of total available memory that contains free pages and reclaimable pages, the number of pages at which the background kernel flusher threads will start writing out dirty data.
dirty_writeback_centisecs: The kernel flusher threads will periodically wake up and write `old' data out to disk. This tunable expresses the interval between those wakeups, in 100'ths of a second. Setting this to zero disables periodic writeback altogether.

dirty_expire_centisecs: This tunable is used to define when dirty data is old enough to be eligible for writeout by the kernel flusher threads. It is expressed in 100'ths of a second. Data which has been dirty in-memory for longer than this interval will be written out next time a flusher thread wakes up.

The easiest way to reduce the maximum latency in such situations is to reduce the maximum amount of dirty disk cache and make the background job do early writes. Of course this may result in performance degradation in situations where an otherwise large cache would prevent synchronous writes entirely. For example, you can configure the following in /etc/sysctl.conf:

vm.dirty_background_ratio = 1
vm.dirty_ratio = 5

Please note that the values suitable for your system depend on the amount of available RAM and the disk speed. In extreme conditions, the above dirty ratios might still be too large. E.g., if you have 100 GiB of available RAM and your disk writes at a speed of about 100 MiB/s, the above settings would allow a maximum of 5 GiB of dirty cache, which may take about 50 seconds to write out. With dirty_bytes and dirty_background_bytes you can also set the values for the cache in an absolute manner.

Another thing you can try is to switch the IO scheduler. In current kernel releases there are noop, deadline, and cfq. If you are using an older kernel, you might experience a better reaction time with the deadline scheduler compared to cfq. However, you have to test it. Noop should be avoided in your situation. There is also the non-mainline BFQ scheduler, which claims to reduce latency compared to CFQ (http://algo.ing.unimo.it/people/paolo/disk_sched/). However, it is not included in all distributions.
You can check and switch the scheduler at runtime with:

cat /sys/block/sdX/queue/scheduler
echo <SCHEDULER_NAME> > /sys/block/sdX/queue/scheduler

The first command will also give you a summary of the available schedulers and their exact names. Please note: the setting is lost after a reboot. To choose the scheduler permanently, you can add a kernel parameter:

elevator=<SCHEDULER_NAME>

The situation for NFS is similar, but includes other problems. The following two bug reports may give some insight into the handling of stat on NFS and why a large file write can cause stat to be very slow:

https://bugzilla.redhat.com/show_bug.cgi?id=688232
https://bugzilla.redhat.com/show_bug.cgi?id=469848

Update (14.08.2017): With kernel 4.10 the new kernel option CONFIG_BLK_WBT and its sub-options CONFIG_BLK_WBT_SQ and CONFIG_BLK_WBT_MQ were introduced. They prevent buffer bloat caused by hardware buffers, whose sizes and prioritization cannot be controlled by the kernel:

    Enabling this option enables the block layer to throttle buffered
    background writeback from the VM, making it more smooth and having less
    impact on foreground operations. The throttling is done dynamically on
    an algorithm loosely based on CoDel, factoring in the realtime
    performance of the disk

Furthermore, the BFQ scheduler is mainlined as of kernel 4.12.
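To make the absolute-valued variants mentioned above concrete, a sketch for /etc/sysctl.conf (the byte values are illustrative, sized for a disk that writes roughly 100 MiB/s; note that setting a *_bytes knob zeroes the corresponding *_ratio knob, and vice versa):

```
# start background writeback once ~64 MiB is dirty (well under a second of IO)
vm.dirty_background_bytes = 67108864
# throttle writers synchronously above ~256 MiB of dirty data (a few seconds
# of flushing, worst case)
vm.dirty_bytes = 268435456
```

Unlike the percentage knobs, these keep the worst-case flush time bounded regardless of how much RAM the machine has.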
Prevent large file write from freezing the system
1,516,899,074,000
We have a Linux server running Debian 4.0.5 (Kernel 4.0.0-2) with 32G RAM installed and 16G swap configured. The system uses lxc containers for compartmentalisation, but that shouldn't matter here; the issue exists inside and outside of different containers.

Here's a typical free -h:

              total        used        free      shared  buff/cache   available
Mem:            28G        2.1G         25G         15M        936M         26G
Swap:           15G        1.4G         14G

/proc/meminfo has:

Committed_AS:   12951172 kB

So there's plenty of free memory, even if everything allocated were actually used at once. However, the system is instantly paging even running processes. This is most notable with Gitlab, a Rails application using Unicorn: newly forked Unicorn workers are instantly swapped out, and when a request comes in they need to be read back from disk at ~1400 kB/s (data from iotop) and run into timeouts (30s for now, to get them restarted in time; no normal request should take more than 5s) before they get loaded into memory completely, thus getting instantly killed. Note that this is just an example; I have seen this happen to redis, amavis, postgres, mysql, java (openjdk) and others.

The system is otherwise in a low-load situation, with about 5% CPU utilization and a loadavg around 2 (on 8 cores).

What we tried (in no particular order):

- swapoff -a: fails with about 800M still swapped
- Reducing swappiness (in steps) using sysctl vm.swappiness=NN. This seems to have no impact at all; we went down to 0 and exactly the same behaviour persists
- Stopping non-essential services (Gitlab, a Jetty-based webapp...), freeing ca. 8G of committed-but-not-mapped memory and bringing Committed_AS down to about 5G. No change at all.
- Clearing system caches using sync && echo 3 > /proc/sys/vm/drop_caches. This frees up memory, but does nothing to the swap situation.
- Combinations of the above

Restarting the machine to completely disable swap via fstab as a test is not really an option, as some services have availability issues and need planned downtimes, not "poking around"...
And we also don't really want to disable swap, since it's our fallback. I don't see why any swapping is occurring here at all. Any ideas what may be going on? This problem has existed for a while now, but it first showed up during a period of high IO load (a long background data-processing task), so I can't pinpoint a specific event. That task has been done for some days and the problem persists, hence this question.
Remember how I said:

    The system uses lxc containers for compartmentalisation, but that shouldn't matter here.

Well, it turns out it did matter. Or rather, the cgroups at the heart of lxc matter.

The host machine only sees reboots for kernel upgrades. So, what were the last kernels used? 3.19, replaced by 4.0.5 two months ago, and yesterday by 4.1.3. And what happened yesterday? Processes getting memkilled left, right and center. Checking /var/log/kern.log, the affected processes were in cgroups with a 512M memory limit. Wait, 512M? That can't be right (when the expected requirement is around 4G!). As it turns out, this is exactly what we configured in the lxc configs when setting this all up months ago.

So what happened is that 3.19 completely ignored the memory limit for cgroups; 4.0.5 always paged if the cgroup required more than allowed (this is the core issue of this question); and only 4.1.3 does a full memkiller sweep. The swappiness of the host system had no influence on this, since it was never anywhere near being out of physical memory.

The solution: for a temporary change, you can directly modify the cgroup. For example, for an lxc container named box1 the cgroup is called lxc/box1, and you may execute (as root on the host machine):

$ echo 8G > /sys/fs/cgroup/memory/lxc/box1/memory.limit_in_bytes

The permanent solution is to correctly configure the container in /var/lib/lxc/...

lxc.cgroup.memory.limit_in_bytes = 8G

Moral of the story: always check your configuration. Even if you think it can't possibly be the issue (and it takes a different bug/inconsistency in the kernel to actually fail).
Permanent swapping with lots of free memory
1,516,899,074,000
I wonder whether, for the average Linux user, it is considered bad - from a security point of view or any other relevant viewpoint - to have no or almost no entropy left in /dev/random.

Edit: I don't need to generate random numbers (I would use /dev/urandom for that, even for password generation and disk encryption). Just for the fun of it, I have a bash script that generates strings of random characters out of /dev/random and of course, after playing a bit with it, I am left without entropy in /dev/random and it blocks. On IRC I was told it's "bad" to do so, but I wasn't given any reason. Is it bad because the average Linux user automatically generates random things using /dev/random? If so, which program(s) is/are involved? I also understand that having no entropy left in /dev/random makes the generation of numbers deterministic. But again, is my computer (the average Linux user) in need of truly random numbers?

Edit 2: I've just monitored the entropy level of /dev/random every second for about 3 minutes, launching my bash script (which uses entropy to generate a string of random characters) around the beginning of the monitoring. I've made a plot. We can see that the entropy level does oscillate, so some program(s) on my computer are using /dev/random to generate stuff. Is there a way I can list all programs using the file /dev/random? We can also see that it takes less than a minute to reach "acceptable levels" of entropy once the entropy pool has been emptied.
Entropy is fed into /dev/random at a rather slow rate, so if you use any program that uses /dev/random, it's pretty common for the entropy to be low. Even if you believe in Linux's definition of entropy, low entropy isn't a security problem. /dev/random blocks until it's satisfied that it has enough entropy. With low entropy, you'll get applications sitting around waiting for you to wiggle the mouse, but not a loss of randomness. In fact Linux's definition of entropy is flawed: it's an extremely conservative definition which strives to achieve a theoretical level of randomness that's useless in practice. In fact, entropy does not wear out — once you have enough, you have enough. Unfortunately, Linux only has two interfaces to get random numbers: /dev/random, which blocks when it shouldn't, and /dev/urandom, which never blocks. Fortunately, in practice, /dev/urandom is almost always correct, because a system quickly gathers enough entropy, after which point /dev/urandom is ok forever (including uses such as generating cryptographic keys). The only time when /dev/urandom is problematic is when a system doesn't have enough entropy yet, for example on the first boot of a fresh installation, after booting a live CD, or after cloning a virtual machine. In such situations, wait until /proc/sys/kernel/random/entropy_avail reaches 200 or so. After that, you can use /dev/urandom as much as you like.
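A small sketch for watching the estimate mentioned above; the 200 threshold follows the advice in the last paragraph, and on kernels 5.18 and later the pool is reported as fully seeded from early boot (typically 256), so the loop exits immediately:

```sh
# Poll the kernel's entropy estimate until it reaches 200, with a cap
# on iterations so the loop cannot hang forever on a quiet old system.
ENTROPY=/proc/sys/kernel/random/entropy_avail
tries=0
until [ "$(cat "$ENTROPY")" -ge 200 ] || [ "$tries" -ge 60 ]; do
    sleep 1
    tries=$((tries + 1))
done
echo "entropy_avail is now $(cat "$ENTROPY")"
```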
Is it bad to have a low entropy in /dev/random?
1,516,899,074,000
Is it possible to have a vanilla installation of Ubuntu 14.04 (Trusty) and run inside it containerized older Ubuntu versions that originally came with older kernels? For example for 12.04 I'd assume the answer is yes as it has linux-image packages for subsequent Ubuntu releases, such as linux-image-generic-lts-saucy and linux-image-generic-lts-quantal. For 10.04 that isn't the case, though, so I'm unsure. But is there documentation available that I can use to deduce what's okay to run? The reason I am asking is because the kernel interface undergoes updates every now and then. However, it's sometimes beneficial to run newer versions of the distro and at the same time keeping a build environment based on a predecessor.
You can run older Linux programs on newer kernels. Linux maintains backward compatibility (at least for all documented interfaces), for the benefit of people who are running old binaries for one reason or another (because they don't want to bother recompiling, because they've lost the source, because this is commercial software for which they don't have the source, etc.). If you want to have a build environment with older development tools, or even a test environment for anything that doesn't dive deeply into kernel interfaces, then you don't need to run an older kernel, just an older userland environment. For this, you don't need anything complex: a chroot will do. Something more advanced like LXC, Docker, … can be useful if you want the older (or newer, for that matter) distribution to have its own network configuration. If you don't want that, you can use what Debian uses precisely to build software in a known environment (e.g. build software for Debian stable on a machine with a testing installation): schroot. See How do I run 32-bit programs on a 64-bit Debian/Ubuntu? for a guide on setting up an alternate installation of Debian or a derivative in a chroot. If you want to run the older distribution's kernel, you'll need an actual virtual machine for that, such as KVM or VirtualBox. Linux-on-Linux virtualization with LXC or the like runs the same kernel throughout.
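For illustration, a minimal schroot entry could look like the following sketch — the chroot name, directory and user names here are assumptions to adapt to your setup, not anything mandated by schroot:

```
# /etc/schroot/schroot.conf
[lucid]
description=Ubuntu 10.04 build environment
type=directory
directory=/srv/chroot/lucid
users=builder
root-users=builder
```

With an entry like that in place, schroot -c lucid would drop you into the older userland while still running the host's kernel.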
Is it possible to run a 10.04 or 12.04 or earlier LTS containerized under LXC or Docker on Trusty?
1,516,899,074,000
I have been using sendmail to send out mails using internally available mail server. But currently port 25 is blocked for security reasons. I would like to know if there is a way to specify port number in the sendmail utility. I am trying to make use of the secure SMTP-MSA port 587 as an alternative assuming I could get that port opened up. I was not able to find anything in the man pages for sendmail. Is there any alternate utility that could do this?
Unless explicitly configured otherwise, mail will be transmitted over port 25. You can route mail using other ports, or even other protocols than SMTP, but that will typically only work within your own network. The mail servers of your intended recipients will most likely only accept incoming email via SMTP on port 25. For instance, when I configure sendmail to listen on port 587 it will typically only accept incoming e-mail over that port when the user has authenticated.

DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')

Most networks that restrict incoming and/or outgoing SMTP traffic (a good and common practice for both consumer ISPs and corporate networks to prevent open mail relays, spam and other abuse) provide relay servers, allowing you to send mail, but not unrestricted. Relay servers may check content (viruses, spam) or enforce policies (adding the standard disclaimer, archiving messages for compliance, restricting recipients) etc. If you're provided with a relay server, in sendmail that is called a smarthost and configured in:

# sendmail.mc
define(`SMART_HOST', `relay.example.com')dnl

If your relay server is listening on port 587, that becomes:

# sendmail.mc
define(`SMART_HOST', `relay.example.com')dnl
define(`RELAY_MAILER', `esmtp')dnl
define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl

The assumption is that sendmail forwards all your email traffic to the relay, which transports the messages to the intended recipients, with the relay server not requiring authentication. You can fine-tune your email routing with the mailertable. Routing some email domains to one remote TCP port and mail for other domains to another requires some editing in the sendmail.cf to set up a new mailer.
Copy the settings from the existing esmtp mailer and add a port number:

# sendmail.cf
# <snip>
Mesmtp587, P=[IPC], F=mDFMuXa, S=EnvFromSMTP/HdrFromSMTP, R=EnvToSMTP, E=\r\n, L=990, T=DNS/RFC822/SMTP, A=TCP $h 587
Mesmtp2525, P=[IPC], F=mDFMuXa, S=EnvFromSMTP/HdrFromSMTP, R=EnvToSMTP, E=\r\n, L=990, T=DNS/RFC822/SMTP, A=TCP $h 2525

The transport channel esmtp587 will now deliver to port 587 instead of the default 25, and esmtp2525 similarly to port 2525, or whatever alternative port you specify. Then in your mailertable:

example.com    esmtp587:example.com
example2.com   esmtp2525:example2.com

The lines above allow sendmail to look up the MX records for example.com; if only a single (relay) smtp server for example.com supports the non-default port, the syntax becomes:

example.com    esmtp587:[smtp.example.com]

The brackets tell sendmail to ignore possible MX records for smtp.example.com and to route all mail for @example.com to smtp.example.com:587.
Using port 587 with sendmail
1,516,899,074,000
I am using iptables to mark packets and want to route based on the marks. First I added the ip rule:

sudo ip rule add fwmark 1 prohibit

(The "prohibit" is just for testing; I will change it to some route table later.) Then I began to mark the packets:

sudo iptables -A OUTPUT -d 192.168.1.0/24 -j MARK --set-mark 1

But the computer can still access the 192.168.1.0/24 networks. After a long time of googling and struggling, I tried:

sudo iptables -t mangle -A OUTPUT -d 192.168.1.0/24 -j MARK --set-mark 1

It works and the connection was blocked. In the first case, the default table, filter, is used. So my question is: what is the difference between the mangle table and the filter table? Which one should be used in which cases? To my understanding, all these tables are consulted before the routing policy, so why doesn't the filter table work properly?
mangle is for mangling (modifying) packets, while filter is intended to just filter packets. A consequence of this is that in LOCAL_OUT, after traversing the tables and getting the filtering decision, mangle may try to redo the routing decision by calling ip_route_me_harder (assuming the filtering decision is not to drop or otherwise take control of the packet), while filter just returns the filtering decision. Details are in net/ipv4/netfilter/iptable_mangle.c and net/ipv4/netfilter/iptable_filter.c.
iptables: what is the difference between filter and mangle
1,516,899,074,000
I have acquired a new wireless keyboard, and I've tested it out on both a Windows and a Linux box. It worked on both, but with an initial difference - Windows took a minute or two to look up the keyboard's (Logitech's) drivers on the Internet and install them. It visually notified me of doing so and displayed its progress. However, when I plugged it into my Debian computer - I did not notice such a process. Also, I was almost immediately able to use it, and I'm not sure how it got working so fast. Is Linux using a combination of a generic Bluetooth dongle driver and a generic keyboard driver?
Linux hardware drivers are kernel modules. Because of the open source model and licensing of the kernel, very few of these are written by hardware manufacturers; most of them are reverse engineered or based on standardized public protocols. Bluetooth is pretty surely in the latter realm, and things like mice and keyboards are in most cases totally generic. The modules are part and parcel of the kernel source tree; i.e., if you download the Linux kernel source, it comes with the code for all the available modules. You do not have to include all of them when you build it, of course. Linux distros (generally) are a collection of pre-built binaries, and this includes the kernel. The kernel itself is one binary; modules may either be built into this, or be separate binaries which the kernel can load and unload. Since building all the available modules into the one binary would result in a massive and ridiculous kernel, and the distros want to cover as much hardware as possible, distro kernel packages include a broad array of individual binary modules. You can see these in /lib/modules. Driver modules are registered with the kernel and built at the same time; the kernel is aware of what is available on the system. When you plug in some new hardware, it identifies itself to the system and the kernel chooses an appropriate driver from /lib/modules to load. You can see all your currently loaded modules with lsmod.
How are drivers for peripheral hardware installed in Linux?
1,516,899,074,000
I am learning how to create kernel modules and it was all working fine: I compiled, inserted the .ko with sudo insmod cheat.ko, and the printk messages inside the init function (set by module_init) appeared correctly in /var/log/syslog. Then I made changes to the module, removed it with sudo rmmod cheat.ko, reinserted it, and the printk messages were good again. Then, when I tried a new feature, the screen became like a tty with error messages all over; I did ctrl-alt-f2, ctrl-alt-f7 (I'm on Ubuntu), and I got back to the X server. I undid the most recent modifications to the source file and recompiled, but the problem now is that I am unable to reinsert the module to test things out again, unless I reboot, which is too annoying for testing. How can I reinsert the modified module without rebooting? What I tried and info I got:

cat /var/log/syslog: The only relevant information to me was: BUG: unable to handle kernel NULL pointer dereference at 00000003 so it seems that was the cause of the problem, and then I got an oops: Oops: 0002 [#1] SMP Horrid debug information follows that, but nothing that seems to help me on how to reinsert the module.

sudo insmod cheat.ko: the command just hangs, outputs nothing, and the only way I can get on using that terminal emulator is killing it with c-c

sudo rmmod cheat: Error: Module cheat is not currently loaded

sudo modprobe -r cheat.ko: FATAL: Module cheat.ko not found.

lsmod | grep cheat: cheat 19009 -2 which has a very suspicious -2 usage count...

cat /proc/modules | grep cheat: cheat 19009 1 - Loading 0x00000000 (OF+) interesting, so the module is still loading...

Edit: As others have said, use a VM. And I strongly recommend you use Vagrant to manage it.

Edit 2: Nah, Vagrant is for newbs, use QEMU + Buildroot instead: https://github.com/cirosantilli/linux-kernel-module-cheat
The Linux kernel is only willing to unload modules if their module_exit function returns successfully. If some function from the module crashes, the kernel may be able to recover, but the module is locked in memory. It may be possible to rummage through the kernel data structures and forcibly mark the module as unloadable (try patching the module_exit function to do nothing), but that's risky. Your best bet is to reboot. The normal way to test a kernel module is in a virtual machine. Don't test the module on your development machine. A VM has the advantage over a physical machine that you can save the VM state in a ready-for-testing configuration and restore it as many times as you like, which saves the startup time between tests.
Cannot remove or reinsert kernel module after error while inserting it without rebooting
1,516,899,074,000
I'm intending to replace a NAS's Custom Linux with Arch Linux (details at the bottom), of course wishing to preserve all user data and (due to the SSH-only headless access) attempting to be fool-proof since a mistake may require firmware reinstallation (or even brick the device). So instead of running the installation (downloading the appropriate Arch Linux ARM release and probably using the chroot into LiveCD approach) without appropriate preparations, what must I keep in mind? (Please feel free to answer without restriction to Arch Linux) More precisely: Do I somehow have to bother with figuring out how a specific device's boot process works (e.g. which parts of the bootloader reside on the flash memory and which ones are on the harddisk) or can I rely on the distribution's installer to handle this correctly? How can I determine whether some (possibly proprietary) drivers are used and how can I migrate them into the new setup? Is the RAID configuration safe from accidental deletion? Is there a way to fake the booting process so I can check for correct installation while the original system remains accessible by simply rebooting? E.g. using chroot and kexec somehow? What else should I be aware of? The specific case is that I want to replace the custom Linux from a Buffalo LinkStation Pro Duo (armv5tel architecture, the nas-central description is a bit more helpful here and also provides instructions on how to gain SSH root access) with Arch Linux ARM. But a more general answer may be more helpful for others as well.
With the required skill and especially knowledge about the installed Linux, it is not worthwhile anymore to replace it. And whatever you do, you probably never want to replace the already installed kernel. However, you can have your Arch Linux relatively easily and fool-proof!

The concept: you install Arch Linux into some directory on your NAS and chroot (man chroot) into it. That way you don't need to replace the NAS Linux. You install and configure your Arch Linux and replace the native Linux's services by Arch Linux services step by step. As your Arch Linux installation gets more complete and powerful, you automate the chrooting procedure and turn off the services provided by the native Linux one by one, while automating the starting of services within the chrooted Arch Linux. When you're done, the boot procedure of your NAS works like this: load the kernel and mount the hdds, chroot into Arch Linux, exec /sbin/init in your chrooted environment.

You need to work out the precise doing yourself, b/c I know neither Arch Linux nor your NAS and its OS. You need to create the target directory into which you want to install Arch Linux; it needs to be on a device with sufficient available writable space (mkdir /suitable/path/archlinux). Then you need to bootstrap your Arch Linux:

cd /suitable/path/archlinux
wget -nd https://raw.githubusercontent.com/tokland/arch-bootstrap/master/arch-bootstrap.sh
bash arch-bootstrap.sh yournassarchitecture

Now you have a basic Arch Linux in that path. You can chroot into it along the lines of:

cp /etc/resolv.conf etc/resolv.conf
cp -a /lib/modules/$(uname -r) lib/modules
mount -t proc archproc proc
mount -t sysfs archsys sys
mount -o bind /dev dev
mount -t devpts archdevpts dev/pts
chroot . bin/bash

Then you should source /etc/profile. Now your current shell is in your Arch Linux and you can use it as if you had replaced your native Linux ... which you have, for the scope of your current process.
Obviously you want to install stuff and configure your Arch Linux. When you use your current shell to execute /etc/init.d/ssh start, you are actually starting the ssh daemon of your Arch Linux installation. When you're done and you really want to entirely replace your native Linux (services) with Arch Linux, your NAS's native Linux doesn't start any services anymore but executes the chroot procedure above, with the difference that the last line is exec chroot . sbin/init. This is not as complete as a real replacement, but as fool-proof as it gets. And as stated initially, with the knowledge and skill required for this, IMHO (!), a complete replacement is not necessary or worthwhile.
How to safely replace one Linux distribution with another one via SSH?
1,516,899,074,000
I have a Linux (Ubuntu 12.04) PC connected to the internet with a Greenpacket WiMax USB modem. I want to share the Internet connection with another computer running Windows 7 Home Premium, connected to the Linux PC over a LAN. Is this possible? How? Is the reverse possible instead (connecting the internet to the Windows computer and sharing it with Linux)?
You need to set up NAT on the Linux box. There are numerous howtos on the Net when you search for NAT and iptables, maybe including the distro you use. Here is a howto for Debian which should work on other distros as well: http://debianclusters.org/index.php/NAT_with_IPTables

Here are some lines that come from a German Ubuntu howto:

sysctl -w net.ipv4.ip_forward=1
iptables -A FORWARD -o eth0 -s 192.168.0.0/16 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Put them somewhere where they are executed at startup (/etc/rc.local, or you put "up" in front of every line and put the whole thing into /etc/network/interfaces) and replace eth0 by the network device that connects to the Internet and eth1 by the one that goes to your LAN. You might also have to tell your Windows box some name servers (DNS) manually if you don't want to set up bind on your Linux box. And I trust you don't need or already have a DHCP server in your LAN.
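As a side note, on distros that read /etc/sysctl.d, a drop-in file is an alternative to rc.local for making the forwarding flag persistent; the file name below is illustrative:

```
# /etc/sysctl.d/99-ipforward.conf
net.ipv4.ip_forward=1
```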
How do I share internet with Windows from my Linux box?
1,516,899,074,000
I am looking for a way to fragment an existing file in order to evaluate the performance of some tools. I found a solution for the NTFS file system called MyFragmenter, as described in this thread. However I can't find anything for ext2/3/4... I guess I can develop my own file fragmenter, but due to time constraints I would like to find a faster solution. I found some tools like HJ-Split which split a file into smaller bits, but I doubt this will simulate file fragmentation. Is there any solution available for my problem?
If you want to ensure fragmentation but not prevent it (so you only have partial control over what happens), and you don't care about the specifics of the fragmentation, here's a quick & dirty way of doing things.

To create a file of n blocks in at least two fragments:

1. Open the file with synchronous writes, write m < n blocks.
2. Open another file. Add to it until there are at most n - m blocks free on disk. Don't make it sparse by mistake!
3. Write the remaining n - m blocks to the first file.
4. Close and unlink the second file.

You can fragment in more pieces by interlacing more files. This assumes the filesystem is available for this sort of torture, i.e. not in a multi-user or mission-critical environment. It also assumes the filesystem has no reserved blocks, or the reserved blocks are reserved for your UID, or you're root.

There's no direct way to ensure fragmentation, because Unix systems employ filesystem abstraction, so you never talk to the raw filesystem. Also, ensuring filesystem-level fragmentation tells you nothing about what happens at lower levels. LVM, software and hardware RAID, hardware-level sector remapping and other abstraction layers can play havoc with your expectations (and measurements).
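The steps above can be sketched in shell with GNU dd — file names and sizes are arbitrary, and whether the two writes really land in separate extents is ultimately up to the filesystem's allocator:

```sh
# Write the first m blocks of A synchronously, drop a spacer file in
# between to occupy the adjacent free space, then append the remaining
# n - m blocks of A and remove the spacer.
bs=4096 m=4 n=8
dd if=/dev/zero of=/tmp/fragA bs=$bs count=$m oflag=sync 2>/dev/null
dd if=/dev/zero of=/tmp/fragB bs=$bs count=64 oflag=sync 2>/dev/null   # spacer
dd if=/dev/zero of=/tmp/fragA bs=$bs count=$((n - m)) seek=$m \
   oflag=sync conv=notrunc 2>/dev/null
rm -f /tmp/fragB
# filefrag -v /tmp/fragA   # on ext4, shows how many extents A ended up in
```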
How to deliberately fragment a file
1,516,899,074,000
Let's say I have two identical disks, and I set one LVM logical volume on them (no mirroring). Question: What will happen when one of the disks fails?

1. I will lose all the data from all the disks
2. I will lose the data from the broken disk, but get the data from the still-working one
3. something else (what)?

Thank you in advance for clarification. From what I've read, the mention of RAID in LVM articles suggests I will lose everything; on the other hand, the ability to shrink an LV suggests the opposite.

Update: Good read: http://www.redhat.com/magazine/009jul05/features/lvm2/ According to this article, with linear mapping (the default, which is my case) and no mirroring, in case of failure you should lose data only from the broken disk. I hope it is true, and eventually I will find out :-/
The worst and most likely case is that you will lose everything. If you have a single logical volume spanning both drives, and you lose one drive with no mirroring, you've essentially wiped out half the file system. From this point, it gets mildly better depending on what file system you are running on your volume. Assuming that you are not using striping, which will kill any chance you have, you may be able to get some of your data back by running recovery software on the second drive. I don't have personal experience with that case, but it should be theoretically possible to get some or most of the files that were exclusively on the still functional drive if you are using one of the more 'robust' file systems (i.e. ext3 or ext4). Your mileage will vary depending on what filesystem you are using on top of LVM and how your files are arranged on the disks though. If they are fragmented across both disks then you will still lose those files too.
Does LVM increase the risk of data loss?
1,516,899,074,000
I want to configure my Linux so that it will be used as a network router (gateway). Can anybody give me some hints on this? (links are welcome!)
For a simple router, there are really only two steps that need to be done.

Enable routing

The first step is to enable routing in the kernel. By default, the kernel drops packets that it doesn't recognize; once you enable routing, it'll forward them. You need to issue either of these two commands when the computer boots:

sysctl -w net.ipv4.ip_forward=1
echo 1 >/proc/sys/net/ipv4/ip_forward

Many distributions have a file called /etc/sysctl.conf, where you can put the line net.ipv4.ip_forward=1 to execute that command when the computer boots. If there's a directory /etc/sysctl.d, you can add a file in that directory instead of editing /etc/sysctl.conf; call the file something.conf. For IPv6, the corresponding setting is net.ipv6.conf.all.forwarding or /proc/sys/net/ipv6/conf/all/forwarding. You can also use net.ipv4.conf.all.forwarding or /proc/sys/net/ipv4/conf/all/forwarding for IPv4.

Set routing tables

The second step is to set routing tables. This can be simple or complicated depending on how much you need to do. For simple uses, configure each of your network interfaces' address and netmask, and add any needed extra routes with the route command.

Going beyond simple routing

If you need to rewrite packets, the basic command is iptables (ip6tables for IPv6). ("Netfilter" is the name of the kernel packet handling facility, and "iptables" is the name of the program that controls it.) This is where to look for filtering, NAT and more. For complex setups, look at the ip command from the iproute2 package.
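As a quick, read-only sanity check of the first step, you can ask /proc whether forwarding is currently on (a sketch):

```sh
# Report the current forwarding state; 1 means the box forwards packets.
echo "IPv4 forwarding: $(cat /proc/sys/net/ipv4/ip_forward)"
if [ -r /proc/sys/net/ipv6/conf/all/forwarding ]; then
    echo "IPv6 forwarding: $(cat /proc/sys/net/ipv6/conf/all/forwarding)"
fi
```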
How to build a gateway from my Linux OS
1,516,899,074,000
I was forced to shutdown (using the power button) and reboot after my laptop froze with just a black screen. After such an incident, where should I look for error messages etc. that might indicate what caused the freeze? I am running Xubuntu (Lucid) with Fluxbox as my window manager. Any suggestions are welcome but I generally prefer to use the CLI.
If the screen and input devices (keyboard and mouse or trackpad) froze, the first place to start by looking would be in /var/log/Xorg.0.log (assuming that Xorg is running on the first display server). If that doesn't yield any immediate clues, the next logs to check would be /var/log/messages.log and /var/log/dmesg.log. If you are unable to find anything in the logs, and the freeze is happening with any frequency, you might be advised to check your memory with a utility like memtest86+.
Where should I look for error messages after a freeze-up and reboot in Linux?
1,516,899,074,000
After apt-get install chromium and running it on Debian 12, ps alx | grep -e ^F -e ^5.*chromium returns:

F   UID     PID    PPID PRI  NI      VSZ   RSS WCHAN  STAT TTY   TIME COMMAND
5  1000 3452315 3452313  20   0 33884428 16712 do_sys S    ?     0:00 /usr/lib/chromium/chromium --type=zygote --crashpad-handler-pid=3452306 --enable-crash-reporter=,built on Debian 12.4, running on Debian 12.4 --change-stack-guard-on-fork=enable

This executed on LUbuntu 18 after apt-get install chromium-browser (which does snap install chromium in its /var/lib/dpkg/info/chromium-browser.preinst):

F   UID    PID   PPID PRI  NI      VSZ  RSS WCHAN  STAT TTY   TIME COMMAND
5  1000 197953 197951  20   0 33909972 1228 do_sys S    ?     0:00 /snap/chromium/2729/usr/lib/chromium-browser/chrome --type=zygote --crashpad-handler-pid=197944 --enable-crash-reporter=,snap --change-stack-guard-on-fork=enable

where the flag F value 5 means "used super-user privileges" according to man ps. Why does the Chromium browser need and get super-user privileges when installed by the regular package management and run by a non-privileged user? ChatGPT says this would be for installation or updating, but I don't believe that, because I did the installation using regular apt-get and updates would be done by unattended-upgrades on Debian or snapd on Ubuntu.
BTW: "where the flag F value 5 means used super-user privileges according to man ps." Indeed, but not only: 5 = 1 + 4. From man ps:

PROCESS FLAGS
    The sum of these values is displayed in the "F" column, which is provided by the flags output specifier:
    1    forked but didn't exec
    4    used super-user privileges

The "forked but didn't exec" flag confirms that the --type=zygote parameter set in the chrome process you list has actually been taken into account, successfully making it a zygote process. In addition, please note the use of the past tense "used", meaning that the report does not reflect current status regarding capabilities that might well have been dropped right after process initialization. (It is considered best practice for a process to drop privileges as soon as they are no longer needed.)

For obvious security reasons, Chromium will resort to sandboxing. Except in the case of now very old kernels, it will be using the user namespaces sandboxing technique. A zygote process is responsible for setting it up. Security-wise, the Zygote is responsible for setting up and bookkeeping the namespace sandbox. Even though this technique is based on unprivileged namespaces, the process responsible for setting it up actually needs the privileged CAP_SYS_CHROOT capability, but only until the sandbox is fully engaged.

So, in short: That process actually needs super-user capabilities in order to… enable everything running unprivileged… ;-) and will just drop them as soon as the appropriate environment is set.
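The decomposition can be sketched in shell — decode_flags is a hypothetical helper, not part of ps:

```sh
# Decode the ps(1) F bitmask: 1 = forked but didn't exec,
# 4 = used super-user privileges; F=5 therefore means both.
decode_flags() {
    f=$1
    [ $(( f & 1 )) -ne 0 ] && echo "forked but didn't exec"
    [ $(( f & 4 )) -ne 0 ] && echo "used super-user privileges"
    return 0
}

decode_flags 5
# decode_flags "$(ps -o flags= -p SOMEPID)"   # flags of a live process
```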
Super-user privileges for Chromium browser
1,516,899,074,000
I am testing PAM scenarios on RedHat 7.5. The pam module pam_succeed_if.so looks like the most basic level of conditional testing that PAM has to offer, and it is not meeting my needs. You can only create tests on the user, uid, gid, shell, home, ruser, rhost, tty, and service fields. In my situation, I do want to test based off the 'rhost' field; however, after putting the module into debug, I saw that the rhost field was not set.

My goal is to only run a PAM module in /etc/pam.d/sudo if the user is logged in locally to the machine. If we can detect that the user is logged in through SSH, then I want to skip the module. I had actually come up with 3 different ideas that I thought would work, but all ended up failing. I'll share a couple of the solutions that ended up failing.

Conditional entry using pam_exec.so

I wanted to add the following pam entry on top of the pam module that I want to conditionally skip:

auth [success=ok default=1] pam_exec.so /etc/security/deny-ssh-user.sh

The contents of /etc/security/deny-ssh-user.sh:

#!/bin/bash
# Returns 1 if the user is logged in through SSH
# Returns 0 if the user is not logged in through SSH
SSH_SESSION=false
if [ -n "${SSH_CLIENT}" ] || [ -n "${SSH_TTY}" ] || [ -n "${SSH_CONNECTION}" ]; then
    SSH_SESSION=true
else
    case $(ps -o comm= -p $PPID) in
        sshd|*/sshd) SSH_SESSION=true;;
    esac
fi
if "${SSH_SESSION}"; then
    exit 1
else
    exit 0
fi

I've gone through the source code for pam_exec.so at https://github.com/linux-pam/linux-pam/blob/master/modules/pam_exec/pam_exec.c and astonishingly, it will ALWAYS return PAM_SUCCESS, regardless of the exit code of the script. And I can't get the script to cause the pam_exec module to return PAM_SERVICE_ERR, PAM_SYSTEM_ERR, or PAM_IGNORE.
Conditional entry using pam_access.so

Again, I add the following pam entry on top of the pam module that I want to conditionally skip:

auth [success=ok perm_denied=1] pam_access.so accessfile=/etc/security/ssh-sudo-access.conf noaudit

The contents of /etc/security/ssh-sudo-access.conf:

+:ALL:localhost 127.0.0.1
-:ALL:ALL

Wow, super clean right? It will return success if you are logged in locally, and deny everything else. Well, no. It turns out when the pam_access.so module is put into debug, it has no knowledge of remote hosts, only the terminal that is being used. So pam_access can't actually block access by remote hosts. It has been an infuriating day of literally reading source code just to find out what black magic I have to cast just so I can skip a PAM module.
Well it turns out I'm actually an idiot; the pam_exec.so module is perfectly fine for creating PAM conditionals. Tim Smith was correct in assessing that both tests in my /etc/security/deny-ssh-user.sh script were NEVER setting the variable SSH_SESSION to true. I didn't take that into consideration because the script works in a normal shell, but the environment context is stripped when executed by pam_exec.so.

I ended up rewriting the script to use the last utility, just like his example; however, I had to change some of it because the switches for last differ from Arch Linux to RedHat. Here is the revised script at /etc/security/deny-ssh-user.sh:

    #!/bin/bash
    # Returns 1 if the user is logged in through SSH
    # Returns 0 if the user is not logged in through SSH
    SSH_SESSION=false

    function isSshSession {
        local terminal="${1}"
        # A session is remote if 'last' shows a still-logged-in entry on this
        # terminal whose host field is anything other than 0.0.0.0
        if /usr/bin/last -i | /usr/bin/grep "${terminal}" \
                | /usr/bin/grep 'still logged in' \
                | /usr/bin/awk '{print $3}' \
                | /usr/bin/grep -q --invert-match '0\.0\.0\.0'; then
            echo true
        else
            echo false
        fi
    }

    function stripTerminal {
        local terminal="${1}"
        # PAM_TTY is in the form /dev/pts/X
        # The last utility displays the TTY in the form pts/X,
        # so drop the first five characters ("/dev/") from TTY
        echo "${terminal:5}"
    }

    lastTerminal=$(stripTerminal "${PAM_TTY}")
    SSH_SESSION=$(isSshSession "${lastTerminal}")

    if "${SSH_SESSION}"; then
        exit 1
    else
        exit 0
    fi

Contents of /etc/pam.d/sudo:

    ....
    auth [success=ok default=1] pam_exec.so /etc/security/deny-ssh-user.sh
    auth sufficient pam_module_to_skip.so
    ....
How to create a conditional PAM entry
1,516,899,074,000
We have a process that in recent weeks had a once-off memory leak that resulted in it consuming all memory on our RHEL 7 box. We now wish to set limits around this such that it will never take any more than a certain amount. We are using the ulimit -v setting to set this amount (as the -m setting does not work). Therefore, I'm wondering if this is sufficient, or do we also need a way to limit physical memory as well? If so, what is the best way to go about this? If virtual memory always grows with physical memory, then perhaps -v by itself is sufficient.
Some description of how ulimit works: ulimit deals with the setrlimit and getrlimit system calls. It's easy to confirm this by strace-ing the bash process (ulimit is a builtin of bash). I set 1024 kB of max memory size:

    $ ulimit -m 1024

In another console:

    $ strace -p <my_bash_pid>
    ...
    getrlimit(RLIMIT_RSS, {rlim_cur=1024*1024, rlim_max=1024*1024}) = 0
    setrlimit(RLIMIT_RSS, {rlim_cur=1024*1024, rlim_max=1024*1024}) = 0
    ...

The setrlimit man page says the following about RLIMIT_RSS:

    RLIMIT_RSS
        Specifies the limit (in pages) of the process's resident set (the
        number of virtual pages resident in RAM). This limit only has
        effect in Linux 2.4.x, x < 30, and there only affects calls to
        madvise(2) specifying MADV_WILLNEED.

The madvise syscall is just advice to the kernel, and the kernel may ignore this advice. Even the bash man page says the following about ulimit:

    -m  The maximum resident set size (many systems do not honor this limit)

That is the reason why -m doesn't work.

About the -v option: I set 1024 kB of virtual memory:

    $ ulimit -v 1024

In another console:

    $ strace -p <my_bash_pid>
    ...
    getrlimit(RLIMIT_AS, {rlim_cur=RLIM64_INFINITY, rlim_max=RLIM64_INFINITY}) = 0
    setrlimit(RLIMIT_AS, {rlim_cur=1024*1024, rlim_max=1024*1024}) = 0
    ...

The setrlimit man page says the following about RLIMIT_AS:

    RLIMIT_AS
        The maximum size of the process's virtual memory (address space)
        in bytes. This limit affects calls to brk(2), mmap(2) and
        mremap(2), which fail with the error ENOMEM upon exceeding this
        limit. Also automatic stack expansion will fail (and generate a
        SIGSEGV that kills the process if no alternate stack has been made
        available via sigaltstack(2)). Since the value is a long, on
        machines with a 32-bit long either this limit is at most 2 GiB, or
        this resource is unlimited.

A program consists of 3 segments (data, code, stack) which compose the virtual memory space of the program. The code segment is constant and contains the program instructions. The data segment is controlled by the following:

- The brk syscall adjusts the size of the data segment (part of the virtual memory) of the program.
- The mmap syscall maps a file or device into the virtual memory of the process.

Many programs allocate memory (directly or indirectly) by calling the standard C library function malloc, which allocates memory from the heap (part of the data segment). malloc adjusts the size of the data segment by calling the brk syscall. The stack stores function variables (a variable takes memory during allocation from the stack). So, that's why the -v option works for you.

If -v is sufficient for your task, then there is no reason to do anything else; it's sufficient. If you want control over a large number of specific memory features for a process (memory pressure, swap usage, RSS limit, OOM and so on), I suggest you use the cgroups memory capabilities. If your application is a service, I suggest you use the systemd slice features, as the most convenient way of controlling and limiting the resources of a service or group of services (they are also easier to configure than configuring cgroups directly), which is managed by systemd.
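To see the RLIMIT_AS behaviour described above in action, here is a minimal sketch (assuming python3 is available; the 50 MB cap and 200 MB allocation are arbitrary illustration values, not anything from the original setup):

```shell
# Cap virtual memory at ~50 MB in a subshell, then try to allocate 200 MB.
# The allocation fails with ENOMEM under the cap, so the process exits nonzero.
( ulimit -v 51200; python3 -c 'bytearray(200 * 1024 * 1024)' ) 2>/dev/null \
    && echo "allocation succeeded" \
    || echo "allocation refused"
```

Because ulimit is applied inside a subshell, the parent shell's limits are untouched; this is also a handy way to contain a single leaky process without setting up cgroups.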
Is setting ulimit -v sufficient to avoid memory leak
1,516,899,074,000
I'm dual-booting Linux Mint 18.2 and Windows 10. I've synchronized OneDrive from Windows, but I can't seem to access the OneDrive folder from Linux. Terminal shows that I have a OneDrive folder, but ls -all gives me the following error on the OneDrive folder:

    unsupported reparse point

I've done a bit of Googling and the problem might have something to do with the fact that it's on an NTFS partition and Microsoft possibly compressing the OneDrive contents, but I haven't been able to verify conclusively. Anyone else have this problem? For context, I'm not needing to sync OneDrive from Linux; I'm just trying to access the OneDrive contents saved on my Windows partition from Linux.
I found it! Michael's WSL link provided the answer. I just need to delete the reparse point for OneDrive before I shut down Windows. Here's my code:

    fsutil reparsepoint delete "C:\Path\To\OneDrive\Folder"
Accessing OneDrive folder on Windows partition
1,516,899,074,000
I have read that the /dev directory contains device files that point to device drivers. Now my question is: when I do ls -l, I get output something like this. What do the 5th and 6th column values represent, and what is their significance?
These are major and minor numbers. You can find more info on them here: http://www.makelinux.net/ldd3/chp-3-sect-2.shtml

    Traditionally, the major number identifies the driver associated with
    the device. For example, /dev/null and /dev/zero are both managed by
    driver 1, whereas virtual consoles and serial terminals are managed
    by driver 4; similarly, both vcs1 and vcsa1 devices are managed by
    driver 7. Modern Linux kernels allow multiple drivers to share major
    numbers, but most devices that you will see are still organized on
    the one-major-one-driver principle.

    The minor number is used by the kernel to determine exactly which
    device is being referred to. Depending on how your driver is written
    (as we will see below), you can either get a direct pointer to your
    device from the kernel, or you can use the minor number yourself as
    an index into a local array of devices. Either way, the kernel itself
    knows almost nothing about minor numbers beyond the fact that they
    refer to devices implemented by your driver.
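As a quick check (a sketch; nothing driver-specific is assumed), stat can print a device node's major and minor numbers directly, matching the two columns that ls -l shows:

```shell
# %t = major device number, %T = minor device number (both in hex)
stat -c 'major=%t minor=%T  %n' /dev/null /dev/zero
ls -l /dev/null /dev/zero
```

/dev/null should report major 1, minor 3, and /dev/zero major 1, minor 5, matching the driver-1 examples in the quote above.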
ls -l output in /dev directory of Unix/Linux system [duplicate]
1,516,899,074,000
I recently purchased an external USB hard drive and wanted to use it as a portable boot drive. I installed Linux Mint 18.1 on it and got everything working. Then I started to think about using that drive to install Linux on other machines. I assumed that whatever a live boot USB does should be possible from a full-blown Linux installation. I looked around and the only option I found was from Ubuntu: Installation/From Linux. Their solution is to create a partition, fill it with the ISO contents and then boot from that to launch the installer. I did follow those instructions and got it working as expected; however, I still feel there must be a way to install Linux from Linux without booting into an ISO. I just found a related question: Installing without booting. There is an answer there that suggests there is some sequence of operations that could be run to install Linux on another partition, but I would need more detail than provided there. Is that process documented somewhere? Honestly, I would be more comfortable if I could just run the installers that are included in the live boot images of each distro. Or some kind of semi-authoritative script that would do the same thing. Is there a package in the repos that would provide such a thing (e.g. a Linux Mint installer package that could be installed using apt-get or yum)?
Here is an example of installing Debian from a Linux Mint live USB (or any Debian-based distro). If you have a Debian-based distribution already installed on your hdd, you can install another Debian-based distro using chroot and debootstrap from the existing OS.

Boot from the live USB. Use gparted to create your root, swap, /home... partitions. If you prefer the command line (fdisk, parted...), here is how to activate the swap partition:

    mkswap /dev/sdaY
    sync
    swapon /dev/sdaY

Let's say you need to install Debian bullseye. Install the debootstrap package:

    sudo apt-get install debootstrap

Create /mnt/stable, then mount your root partition (sdaX):

    sudo mkdir /mnt/stable
    sudo mount /dev/sdaX /mnt/stable

Install the base system:

    sudo debootstrap --arch amd64 bullseye /mnt/stable http://ftp.fr.debian.org/debian
    sudo mount -t proc none /mnt/stable/proc
    sudo mount -o bind /dev /mnt/stable/dev
    sudo chroot /mnt/stable /bin/bash

Set up your root password:

    passwd

Add a new user:

    adduser your-username

Set up the hostname:

    echo your_hostname > /etc/hostname

Configure /etc/fstab by adding the following lines:

    /dev/sdaX   /      ext4   defaults   0 1
    /dev/sdaY   none   swap   sw         0 0
    proc        /proc  proc   defaults   0 0

Use the Debian documentation to edit your /etc/apt/sources.list.

Configure locales:

    apt install locales
    dpkg-reconfigure locales

Configure your keyboard:

    apt install console-data
    dpkg-reconfigure console-data

Install the kernel:

    apt-cache search linux-image

Then:

    apt install linux-image-5.10.0-2-amd64

Configure the network:

    editor /etc/network/interfaces

and paste the following:

    auto lo
    iface lo inet loopback

    allow-hotplug eth0    # replace eth0 with your interface
    iface eth0 inet dhcp

    allow-hotplug wlan0   # replace wlan0 with your interface
    iface wlan0 inet dhcp

To manage the wifi network, install the following packages:

    apt install iproute2 network-manager iw

Install grub:

    apt install grub2
    grub-install /dev/sda
    update-grub

You can install a desktop environment through the tasksel command:

    apt install aptitude tasksel

Run the following command and install your favourite GUI:

    tasksel

Finally, exit the chroot and reboot your system.

Documentation:

- D.3. Installing Debian GNU/Linux from a Unix/Linux System
- Debian wiki: chroot
- debootstrap
Install Linux from Linux
1,516,899,074,000
This might be a bit of a confused question... I've recently started playing around with Docker and am trying to set up a basic LAMP server. I have a Docker image of CentOS with httpd, php, and mysql. However, in a Docker container I can't start services in the way I would usually do via systemd / service. I can get httpd running directly via /usr/sbin/httpd. So then, what if any is the difference in running httpd via /usr/sbin/httpd rather than via systemctl start httpd? Is there a 'proper' way to stop or restart httpd? I thought I could just kill the process, but it appears to launch about 10 apache processes. I appreciate this isn't a particularly well focused question, but any pointer to relevant material would be gratefully received.
You cannot use systemctl if your PID 1 is not systemd. You can find your PID 1 with ps -q 1. Being able to start and stop services the normal way is one advantage mentioned in this article about Running systemd in a non-privileged container. Others are logging or tracking of child processes, as described in Andrei's answer.
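A minimal sketch of that check (nothing Docker-specific assumed; it just reads the kernel's name for PID 1 from /proc):

```shell
# /proc/1/comm holds the executable name of PID 1
read -r init_comm < /proc/1/comm
if [ "$init_comm" = "systemd" ]; then
    echo "PID 1 is systemd - systemctl should work"
else
    echo "PID 1 is $init_comm - start daemons directly instead"
fi
```

Inside a typical container, PID 1 is whatever the image's entrypoint runs, which is why the second branch is the common case there.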
Running a program as a service or directly, what's the difference
1,516,899,074,000
man bash says:

    backward-delete-char (Rubout)
        Delete the character behind the cursor. When given a numeric
        argument, save the deleted text on the kill ring.

Is Rubout just the Delete key on the keyboard? Because it has the same function as bash describes for backward-delete-char. But then I try:

    backward-kill-line (C-x Rubout)
        Kill backward to the beginning of the line.

Consider the following case:

    $ testa testb testc testd

Assume the point is on testc's t. Now I press Control+x, then press the Delete key on the keyboard. The result is:

    $ testa testb [3~testc testd

I just can't understand it; am I missing something?
There are three concepts to be clarified in the simple description of:

    backward-delete-char (Rubout)

Keys

There is a key called Delete, which you are using in your examples. That key erases "the next character". If the line contains test1 and the cursor (the blinking indicator) is over the letter s, Delete will erase the s. In contrast, there is a key called Backspace, which, in exactly the same conditions, will erase the letter e; that is, the letter which precedes the cursor. That Backspace key is what is being described by "backward-delete-char (Rubout)" in the bash manual. That key, obviously, "Delete[s] the character behind the cursor".

Numeric argument

To give it a "numeric argument" you need to press Alt-2, for example, which will place a 2 as an argument to the next command (or key). Again, if the word test is written on the line and the cursor is at the s, press Alt-2 and then Backspace. That will backward-erase two characters, the te in the word test.

The kill ring

When something is erased, in most cases, it is placed in a kill ring. To get what is inside the "kill ring", use Ctrl-y. If you erase several characters with Alt-3-Backspace, those characters will reappear with Ctrl-y.

In detail:

If you use an argument to the Backspace command, you will erase as many characters as the argument says "before" the current position of the cursor. If there is this string at the command prompt:

    $ testa testb testc

and the cursor is under the letter "b", an Alt-3-Backspace will remove the characters "est":

    $ testa tb testc

Those characters will be printed back with Ctrl-y.

Now, the:

    backward-kill-line (C-x Rubout)

means to press:

    Ctrl-x Backspace

which will place the whole line "before the cursor" in the kill ring. And the keys:

    Ctrl-x Delete

have no action defined for them, which makes the equivalent ANSI code:

    [3~

be printed in your terminal. That could be changed in the ~/.inputrc for the readline library which bash uses. But that is outside the scope of this answer, I believe.
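For completeness, a sketch of such a binding (the escape sequence \e[3~ that Delete sends is terminal-dependent; check what your terminal emits with Ctrl-v Delete before relying on it):

```
# ~/.inputrc: make Ctrl-x Delete behave like Ctrl-x Backspace
"\C-x\e[3~": backward-kill-line
```

Run bind -f ~/.inputrc, or start a new shell, for it to take effect.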
what is Readline backward-delete-char (Rubout)
1,516,899,074,000
How can I get a list of which users are authorized for a folder and which permissions they have? I have already tried the most common ones, like ls, namei and getfacl.
When you run ls -ld */ you get a list (-l) of the directories (-d) in the current path. You can see the access rights of the owner, the group and others. For more details regarding the access rights you may check: This link

When you check the output from the ls command, you can see the owner of the file or directory and, next to it, the group owner of the file or directory. If, for example, the group is called "logistics", you can view the members of this group with the following command:

    grep 'logistics' /etc/group
List of user permissions for a specific folder
1,516,899,074,000
So I have a desktop with a fast SSD and large HDD. I am trying to get a well configured large, fast zpool out of it. I have read that I can carve separate partitions into the SSD for the ZIL and L2ARC which would seem to do what I want, except I have to manually configure how big each partition should be. What I don't like about it is that it's somewhat involved, potentially hard to reconfigure if I need to change the partitions, and it sounds like the maximum filesystem size is limited by the HDD alone since the intent is that everything on the ZIL and L2ARC has to also make it to disk, at least eventually. Also it's not clear if the L2ARC is retained after system reboot or if it has to be populated again. It also seems inefficient to have to copy data from ZIL to L2ARC if they are both on the same SSD, or even to HDD if there is currently no pressure on how much hot data I need on SSD. Alternatively, it seems I can also just have 1 partition on SSD and 1 on HDD and add them to a zpool directly with no redundancy. I have tried this, and noticed sustained read/write speeds greater than what HDD alone can muster. But I don't know if everything is just going to the SSD for now, and everything will go to HDD later once SSD is all filled up. Ideally, I would like to have ZFS transparently shuffle the data around behind the scenes to try to always keep the hot data on the SSD similarly to what L2ARC, and have a sensible amount of empty space on SSD for new writes. The ZIL should be automatically managed to be the right size and preferably live on the SSD as much as possible. If I go the manually configured ZIL + L2ARC route, it seems like the ZIL only needs to be about (10 sec * HDD write speed) big. Doing this maximizes the size of L2ARC which is good. But what happens if I add a striped disk which effectively doubles my HDD speed (and capacity)? 
Summary of questions if using SSD for ZIL + L2ARC:

- If I set up the SSD for ZIL + L2ARC, how hard is it to re-set it up with different partition sizes?
- If I use the SSD for L2ARC, is its capacity included in the total available pool capacity, or is the pool capacity limited by the HDD alone?
- Is the L2ARC retained after a system reboot, or does it have to be re-populated?
- Does data have to be copied from ZIL to L2ARC even if both are on the same physical SSD?
- If the ZIL is on the SSD and there is still plenty of room for more intents to be logged, does the ZIL still automatically get flushed? If so, when/under what circumstances?

Summary of questions if using SSD + HDD in a single zpool:

- ZFS obviously notices the difference in size between the SSD and HDD partitions, but does ZFS automatically recognize the relative performance of the SSD and HDD partitions? In particular, how are writes distributed across the SSD and HDD when both are relatively empty?
- Does ZFS try to do anything smart with data shuffling once the SSD part of the zpool fills up? In particular:
  - If the SSD part of the zpool is filled up, does ZFS ever anticipate that I will have more writes soon and try to move data from SSD to HDD in the background?
  - If the SSD part of the zpool is filled up and I start accessing a bunch of data off the HDD, and not so much off the SSD, does ZFS make any effort to swap the hot data to the SSD?

Finally, the most important question: is it a good idea to set up SSD + HDD in the same pool, or is there a better way to optimize my pair of drives for both speed and capacity?
While Marco's answer explained all the details correctly, I just want to focus on your last question/summary:

    Is it a good idea to set up SSD + HDD in same pool, or is there a
    better way to optimize my pair of drives for both speed and capacity?

ZFS is a file system designed for large arrays with many smaller disks. Although it is quite flexible, I think it is suboptimal for your current situation and goal, for the following reasons:

- ZFS does no reshuffling of already written data. What you are looking for is called a hybrid drive; for example, Apple's Fusion Drive allows fusing multiple disks together and automatically selects the storage location for every block based on access history (moving data is done when there is no load on the system, or on rewrite). With ZFS, you have none of that, neither automatically nor manually; your data stays where it was written initially (or is already marked for deletion).
- With just a single disk, you give up on redundancy and self-healing. You still detect errors, but you do not use the full capabilities of the system.
- Both disks in the same pool mean an even higher chance of data loss (this is RAID0, after all) or corruption; additionally, your performance will be sub-par because of the different drive sizes and drive speeds.
- HDD+SLOG+L2ARC is a bit better, but you need a very good SSD (better two different ones, like Marco said, but a NVMe SSD is a good and expensive compromise), and most of the space on it is wasted: 2 to 4 GB for the ZIL are enough, and a large L2ARC only helps if your RAM is full, but needs higher amounts of RAM itself. This leads to a sort of catch-22: if you want to use L2ARC, you need more RAM, but then you can just use the RAM itself, because it is enough. Remember, only blocks are stored, so you do not need as much as you would assume by looking at plain files.

Now, what are the alternatives?

- You could split by having two pools: one for system, one for data. This way, you have no automatic rebalance and no redundancy, but a clean system which can be extended easily and which has no RAID0 problems.
- Buy a second large HDD, make a mirror, and use the SSD like you outlined: this removes the problem of differently sized disks and disk speeds, gives you redundancy, and keeps the SSD flexible.
- Buy n SSDs and do RAIDZ1/2/3. Smaller SSDs are pretty cheap nowadays and do not suffer slow rebuild times, making RAIDZ1 interesting again.
- Use another file system or volume manager with hybrid capabilities, with ZFS on top if needed. This is not seen as optimal, but neither is working with two single-disk vdevs in a pool... at least you get exactly what you want, and some nice things of ZFS (snapshots etc.) on top, but I wouldn't count on stellar performance.
Combining SSD + HDD into single fast, large partition?
1,516,899,074,000
I'm new to dbus, and saw different ways to log out from the terminal depending on the desktop environment. But I'm curious: is there any way to log out from any desktop environment using dbus messages?

On GNOME:

    dbus-send --session --type=method_call --print-reply \
        --dest=org.gnome.SessionManager /org/gnome/SessionManager \
        org.gnome.SessionManager.Logout uint32:1

On KDE:

    dbus-send --print-reply --dest=org.kde.ksmserver /KSMServer \
        org.kde.KSMServerInterface.logout int32:0 int32:0 int32:0

Is there any command that would work on every desktop environment (like using the system dbus)?
On systemd setups you should be able to forcibly terminate a session via the logind dbus interface:

    busctl call org.freedesktop.login1 /org/freedesktop/login1 \
        org.freedesktop.login1.Manager TerminateSession s \
        $(loginctl show-user $UID --property=Sessions --value)

Note that busctl was introduced in systemd v. 221; alternatively, on all setups you could run:

    dbus-send --system --print-reply --dest=org.freedesktop.login1 \
        /org/freedesktop/login1 'org.freedesktop.login1.Manager.TerminateSession' \
        string:c2

where c2 is the session ID; you can get it via:

    dbus-send --system --print-reply --dest=org.freedesktop.login1 \
        /org/freedesktop/login1 'org.freedesktop.login1.Manager.ListSessions'

which returns something like this:

    array [
        struct {
            string "c1"
            uint32 120
            string "gdm"
            string "seat0"
            object path "/org/freedesktop/login1/session/c1"
        }
        struct {
            string "c2"
            uint32 1000
            string "don"
            string "seat0"
            object path "/org/freedesktop/login1/session/c2"
        }
    ]
Universal way to logout from terminal via dbus
1,516,899,074,000
I need to loop over the network interfaces available in Linux. I'm interested in all kinds of interfaces (loopback, ethernet, vlan, bridge) - whatever shows up in ifconfig -a. Is there a way to enumerate the interfaces in Linux? By any command or by reading a file?
You can get a list of these interfaces on most systems from the following: ls -A /sys/class/net But beware of parsing the output from ls in your script. Edit To get a total number of network interfaces pipe the output of this command into wc as recommended in Nikolay's comment as in: ls -A /sys/class/net | wc -l
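A sketch combining both steps, without parsing ls (the loopback interface lo is assumed to exist, as it does on virtually every Linux system):

```shell
# List every interface the kernel knows about, one per line
for dev in /sys/class/net/*; do
    printf '%s\n' "${dev##*/}"
done
# Total count of interfaces
printf 'total: %s\n' "$(ls -A /sys/class/net | wc -l)"
```

Iterating the glob directly sidesteps the usual pitfalls of parsing ls output in a script.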
How to find out the number of network interfaces available in a linux system?
1,516,899,074,000
I'm using Linux Mint 13 MATE 32-bit, and I'm trying to build the kernel (primarily for experience and for fun). For now, I'd like to build it with the same configuration as the precompiled kernel, so first I installed the precompiled kernel 3.16.0-031600rc6 from kernel.ubuntu.com and booted into it successfully. Then I downloaded the 3.16-rc6 kernel from kernel.org, unpacked it, and configured it to use the config from the existing precompiled kernel:

    $ make oldconfig

It didn't ask me anything, so the precompiled kernel contains all the necessary information. Then I built it (it took about 6 hours):

    $ make

And then installed it:

    $ sudo make modules_install install

Then I booted into my manually-compiled kernel, and it works, though the boot process is somewhat slower. But then I found out that all the binaries (/boot/initrd.img-3.16.0-rc6 and all the *.ko modules in /lib/modules/3.16.0-rc6/kernel) are about 10 times larger than the precompiled versions! Say, initrd.img-3.16.0-rc6 is 160 658 665 bytes, but the precompiled initrd.img-3.16.0-031600rc6-generic is 16 819 611 bytes. Each *.ko module is similarly larger. Why is this? I haven't specified any special options for the build (I typed exactly the same commands as mentioned above). How do I build it "correctly"?
Despite what file says, it turns out to be debugging symbols after all. A thread about this on the LKML led me to try:

    make INSTALL_MOD_STRIP=1 modules_install

And lo and behold, a comparison from within the /lib/modules/x.x.x directory; before:

    > ls -hs kernel/crypto/anubis.ko
    112K kernel/crypto/anubis.ko

And after:

    > ls -hs kernel/crypto/anubis.ko
    16K kernel/crypto/anubis.ko

Moreover, the total size of the directory (using the same .config) as reported by du -h went from 185 MB to 13 MB.

Keep in mind that beyond the use of disk space, this is not as significant as it may appear. Debugging symbols are not loaded during normal runtime, so the actual size of each module in memory is probably identical regardless of the size of the .ko file. I think the only significant difference it will make is in the size of the initramfs file, and the only difference it will make there is in the time needed to uncompress the fs. I.e., if you use an uncompressed initramfs, it won't matter.

strip --strip-all also works, and file reports them correctly as stripped either way. Why it says not stripped for the distro ones remains a mystery.
Linux kernel manual build: resulting binary is 10 times larger than precompiled binaries
1,516,899,074,000
Having the following directory structure:

    [sr@server directory]$ tree
    .
    ├── folder1
    │   ├── fileA
    │   └── fileB
    └── folder2
        └── fileC

    2 directories, 3 files

I want to set a default facl on folder1 and folder2 such that the user jim has the following permissions:

    .
    ├── folder1   --x
    │   ├── fileA   r--
    │   └── fileB   r--
    └── folder2   --x
        └── fileC   r--

I.e. all files have r-- and all folders have --x. Any files created under folder1 or folder2 should be given the r-- permission for user jim; any folders should be given the --x permission for user jim.

I can set the permissions so that created folders have r-x and files have r--, but I can't figure out a way to set the default permissions so folders don't get the read permission. While I can manually set the permissions for the currently existing files, I want those permissions to apply as defaults to all newly created files and folders.

setfacl version 2.2.49 on RHEL 6.4.
What you request is not supported by Linux's ACLs. setfacl -m u:jim:r-X (capital X) gives Jim permission to read all files including directories, and to execute only directories and files that are executable by their owner; adding -d (or the d: entry prefix) installs the same rule as a default ACL, so newly created files and folders inherit it. Making directories non-readable has very limited usefulness. If you tell us what you're trying to accomplish, we might be able to offer a better solution.
setfacl default --x on directories and r-- on files for user
1,516,899,074,000
I am trying to perform a loadkeys operation. For a normal user, I am getting a permission denied error. The error is as follows:

    <tim@testps>~% loadkeys mykeys
    Loading /usr/tim/mykeys
    Keymap 0: Permission denied
    Keymap 1: Permission denied
    Keymap 2: Permission denied
    KDSKBENT: Operation not permitted
    loadkeys: could not deallocate keymap 3
You need root capabilities to use loadkeys. It is common to set the setuid permission bit on loadkeys. Setting this bit will cause any processes spawned by executing the loadkeys file to run as the owner of the file (usually root). For added security, you should change loadkeys's permissions to 750, make a group for it, and add any users that need to use loadkeys to that group.

    $ groupadd loadkeys              # you can use any group name
    $ chgrp loadkeys /bin/loadkeys
    $ chmod 4750 /bin/loadkeys       # setuid, group- and user-only read and execution
    $ gpasswd -a user loadkeys       # add user to the group
Loadkeys gives permission denied for normal user
1,373,495,376,000
I'm almost to the point where I can post the solution I wound up with for my complex port bonding question. However, in reading the bonding.txt file, I see this option text:

    ad_select

        Specifies the 802.3ad aggregation selection logic to use. The
        possible values and their effects are:

        stable or 0

            The active aggregator is chosen by largest aggregate
            bandwidth.

            Reselection of the active aggregator occurs only when all
            slaves of the active aggregator are down or the active
            aggregator has no slaves.

            This is the default value.

        bandwidth or 1

            The active aggregator is chosen by largest aggregate
            bandwidth. Reselection occurs if:

            - A slave is added to or removed from the bond
            - Any slave's link state changes
            - Any slave's 802.3ad association state changes
            - The bond's administrative state changes to up

        count or 2

            The active aggregator is chosen by the largest number of
            ports (slaves). Reselection occurs as described under the
            "bandwidth" setting, above.

The way this is written, I can't tell if a single bond can contain more than one aggregator, or not! If the bonding module is smart enough to sort out more than one aggregation within a bond, I'm golden! Let me simplify my drawing from over there:

     ____________  eth1   ________  eth2   ____________
    | switch 1  |========|  host  |--------| switch 2 |
     ------------  eth3   --------          ------------

These switches do not do 802.3ad across switches. So, if I put all three interfaces into a single 802.3ad bond, do I get two aggregators? One containing eth1 & eth3, the other just holding eth2? Conceivably, the LACP signals between the host and the switches would be enough to do that. I just don't know if that capability is actually built in. Anyone? Anyone? Can I get two aggregators out of a single interface bond?
Yes, given the following config: .-----------. .-----------. | Switch1 | | Switch2 | '-=-------=-' '-=-------=-' | | | | | | | | .-=----.--=---.---=--.----=-. | eth0 | eth1 | eth2 | eth3 | |---------------------------| | bond0 | '---------------------------' Where each switch has its two ports configured in a PortChannel, the Linux end with the LACP bond will negotiate two Aggregator IDs: Aggregator ID 1 - eth0 and eth1 Aggregator ID 2 - eth2 and eth3 And the switches will have a view completely separate of each other. Switch 1 will think: Switch 1 PortChannel 1 - port X - port Y Switch 2 will think: Switch 2 PortChannel 1 - port X - port Y From the Linux system with the bond, only one Aggregator will be used at a given time, and will fail over depending on ad_select. So assuming Aggregator ID 1 is in use, and you pull eth0's cable out, the default behaviour is to stay on Aggregator ID 1. However, Aggregator ID 1 only has 1 cable, and there's a spare Aggregator ID 2 with 2 cables - twice the bandwidth! If you use ad_select=count or ad_select=bandwidth, the active Aggregator ID fails over to an Aggregator with the most cables or the most bandwidth. Note that LACP mandates an Aggregator's ports must all be the same speed and duplex, so I believe you could configure one Aggregator with 1Gbps ports and one Aggregator with 10Gbps ports, and have intelligent selection depending on whether you have 20/10/2/1Gbps available. If this doesn't make sense, let me know, I'd love to improve this answer. LACP is a fantastic protocol which can do a lot of things people don't know about, this is one of the common ones. People always want to "bond bonds" which cannot be done, but LACP allows the same setup with even more advantages and smart link selection. Note on VPC Some switches can be configured to "logically join" an Aggregator, so then the two switches act as one Aggregator ID. This is commonly called "Virtual Port Channel" or "Multi Chassis Link Aggregation" (MLAG). 
This is possible, but is not what we're talking about here. In this answer, we're talking about two discrete switches which have no knowledge of each other.
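As a sketch of how you might pick the failover policy described above (the bond0 name and the modprobe.d file path are assumptions, and depending on kernel version the sysfs write may require the bond to be down):

```shell
# Option 1: set the policy when the bonding module loads,
# e.g. in a hypothetical /etc/modprobe.d/bonding.conf:
#   options bonding mode=802.3ad ad_select=bandwidth

# Option 2: change it at runtime through sysfs
ip link set bond0 down
echo bandwidth > /sys/class/net/bond0/bonding/ad_select
ip link set bond0 up

# Inspect the negotiated aggregators and which one is active
grep -A2 'Aggregator ID' /proc/net/bonding/bond0
```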
Bonds vs. Aggregators
1,373,495,376,000
I'm running on 4G RAM with an extra 6G swap partition, SSD is a pretty decent SAMSUNG MZMPA128HMFU model. System responds very well to workloads when things stay in RAM, but as soon as things reach the swap partition in any meaningful quantity (let's say 1GB+ swap used), responsiveness goes completely down the drain during swapping episodes. SSD light stays on for several seconds while apparently loads of stuff gets paged in or out, during this all other IO is blocked. I've seen system load jump from 0.8 to 10 in a few seconds, then drop back down as IO gets going again. When swap is in active use (I keep a bunch of big apps open) these gagging swap episodes happen more and more often as uptime increases (at 26 days now). I am looking at latencytop, but it isn't telling me much I could go on. There seems to be no other solution at this point than stop enough apps to be able to do swapoff -a and just stop using swap. Not sure how this affects my usage patterns, I'm almost certain it's going to be enough for the set of apps I regularly run. Turning vm.swappiness down to 1 doesn't help things. At least not by itself. Is this some well known thing? What are my options to have decent desktop responsiveness while using virtual memory?
I'd strongly suggest getting more memory installed so that you are not swapping. Any swapping just KILLS the performance of a Linux or UNIX(tm) system. So install enough memory to stop the swap!
Linux (3.4) SSD swap partition usage causes extreme latency - how to eliminate?
1,373,495,376,000
I am setting up a machine to run a number of virtual machines. I am using a single HDD with a boot partition and an LVM partition. I read on the Arch wiki that logcial volumes which will be used for swap should be setup with -C y To create a contiguous partition, which means that your swap space does not get partitioned over one or more disks nor over non-contiguous physical extents. How important is this? If i have a dozen VMs each with a contiguous logical volume for swap and a non-contiguous logical volume for /, how will having so many contiguous logical volumes affect my ability to resize logical volumes (either contiguous or non-contiguous)?
I am not aware of any requirement that swap space be contiguous on disk under Linux, with LVM or otherwise. I have never arranged for my swap LVs to be contiguous and have never run into any problem (it's possible that all my swap LVs just happened to be contiguous, I've never looked). Linux supports non-contiguous swap files, it would be odd to have such a restriction on LVM volumes. I can't find any reference to this in any official documentation or anyone explaning why swap LVs should be contiguous. This has all the hallmarks of an urban legend. The origin may lie in HP-UX, which Linux's LVM is partly inspired from, and which did (does?) require swap space to be contiguous. I don't know that this has ever been the case on Linux. There may be a perceived performance benefit, but with 4MB extents, I very much doubt there is any performance benefit, and I can't find any benchmark. If you have volume groups that span over multiple disks, you may want to constrain what PV the swap LV is on. But I wouldn't require a contiguous volume.
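If you still want a contiguous swap LV anyway, the allocation policy is requested at creation time; a minimal sketch, assuming a volume group named vg0 exists:

```shell
# -C y asks LVM for contiguous allocation; plain lvcreate does not
lvcreate -C y -L 4G -n swap_contig vg0
lvcreate      -L 4G -n swap_plain  vg0

# See how the extents were actually laid out on the PVs
lvs -o lv_name,seg_start_pe,seg_size_pe,devices vg0

# Either LV works fine as swap
mkswap /dev/vg0/swap_plain
swapon /dev/vg0/swap_plain
```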
Does swap need to be on a contiguous LVM logical volume
1,373,495,376,000
I'm interested in running an rsync script whenever any new volume is mounted on my Debian box. What are some potential triggers / strategies for listening for a new volume mount?
You can create a new rule in /etc/udev/rules.d/. First read the file /etc/udev/rules.d/README. In the new rule file, add something like

    KERNEL=="sd?1", ACTION=="mount", RUN+="/path/to/script.sh"

(I did not try the above line, try your own rules.) Note that the script will be run as root. You might want to use su to change that. Using ACTION=="add" would require script.sh first to mount the volume.
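Putting the pieces together, a hedged sketch of a rule plus script (file names, the mount-point handling, and the someuser account are all assumptions; note that udev's documented actions are add/remove/change and so on, so the rule reacts to the device appearing and the script must do the mounting itself):

```shell
# Hypothetical /etc/udev/rules.d/99-usb-backup.rules:
#   KERNEL=="sd?1", ACTION=="add", RUN+="/usr/local/bin/backup.sh %k"

# Hypothetical /usr/local/bin/backup.sh:
#!/bin/sh
dev="/dev/$1"
mnt=$(mktemp -d)
mount "$dev" "$mnt" || exit 1
# Drop root for the actual sync, as the answer suggests
su - someuser -c "rsync -a '$mnt/' /srv/backup/"
umount "$mnt" && rmdir "$mnt"
```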
Run shell script when new volume mounted
1,373,495,376,000
I still have two old ATA drives (one is only 8GB, one is somewhat defect) and I've been thinking about putting them into my PC and activating swap on them, too. What will be effect of this? Will my system spread data across the drives evenly so that swapping becomes faster?
Firstly, using a slow or defective hard drive for swap is not a good idea. It's like having really slow or buggy memory in a way. How your system spreads data across your swap partitions depends on the priority you give them in your /etc/fstab. As an example,

    /dev/hda5    none    swap    sw,pri=2    0   0
    /dev/hdb5    none    swap    sw,pri=1    0   0
    /dev/hdc6    none    swap    sw,pri=3    0   0

Your system will use the partition with the highest priority first (in this case /dev/hdc6). Priorities go from 0 to 32767. You can assign the same priority to the different partitions and that will make your system use them equally (or spread the load across different drives). The main reason for this is that you want to use a faster (or less used but still fast) drive first, as it can have a major impact on your system. You can change your system's tendency to write to swap by setting swappiness. More info here.
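For example, the same equal-priority setup can be applied without editing fstab (device names are the ones from the example above; needs root):

```shell
swapoff -a            # deactivate current swap areas first
swapon -p 5 /dev/hda5 # equal priorities -> pages striped across both
swapon -p 5 /dev/hdb5
cat /proc/swaps       # the Priority column shows what's in effect
sysctl vm.swappiness  # current tendency to swap
```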
Can I accelerate swap by using multiple harddrives?
1,373,495,376,000
Is there a GNU Linux built-in for getting a list of the filenames of shared libraries? I have considered parsing the output of ldconfig -p. However, I'm not sure about consistency in output from system to system. I am already aware that I could use find but given that the linker / loader has a cache of libraries and locations, it seems more efficient to use it in some way. Additionally, the paths that the linker / loader searches are the only ones I'm concerned with--i.e libraries that have been processed by ldconfig.
If you want to catch them all, there is no other choice but to do full filesystem traversals. ldconfig knows only about the libraries in the standard paths and the extra paths it is configured to look in (usually defined in /etc/ld.so.conf*). The usual suspect places where other libraries can be found are $HOME and /opt, though there is no limit, especially when people build applications themselves. If you don't trust the output of ldconfig -p, you can parse its cache directly. It's binary, but using strings removes all the garbage (-n5 sets the minimum string length to 5, so paths like /lib/… match):

    strings -n5 /etc/ld.so.cache

On my system, they both give the same results, which I verified with this quick check:

    a=$(ldconfig -p | awk -F'> ' '{print $2}' | sort);  # get them to the same format
    b=$(strings -n5 /etc/ld.so.cache | sort -u);
    echo -e "$a\n$b\n$b" | sort | uniq -u

which checked whether any library on the ldconfig -p list was not mentioned in the cache. Nothing was returned.
How do I get a list of shared library filenames under Linux?
1,373,495,376,000
I formatted an external hard drive (sdc) to ntfs using parted, creating one primary partition (sdc1). Before formatting the device SystemRescueCd was installed on the external hard drive using the command dd in order to be used as a bootable USB. However when listing devices with lsblk -f I am still getting the old FSTYPE (iso9660) and LABEL (sysrcd-5.2.2) for the formatted device (sdc): NAME FSTYPE LABEL UUID MOUNTPOINT sda ├─sda1 ntfs System Reserved ├─sda2 ntfs ├─sda3 ntfs ├─sda4 sdc iso9660 sysrcd-5.2.2 └─sdc1 ntfs sysrcd-5.2.2 /run/media/user/sysrcd-5.2.2 As shown in the output of lsblk -f only the FSTYPE of the partition sdc1 is correct, the sdc1 partition LABEL, sdc block device FSTYPE and LABEL are wrong. The nautilus GUI app is also showing the old device label (sysrcd-5.2.2). After creating a new partition table, parted suggested I reboot the system before formatting the device to ntfs, but I decided to unmount sdc instead of rebooting. Could it be that the kernel is still using the old FSTYPE and LABEL because I haven't rebooted the system? Do I have to reboot the system to get rid of the old FSTYPE and LABEL? As an alternative to rebooting is there a way to rename the FSTYPE and LABEL of a block device manually so that I can change them to the original FSTYPE and LABEL that shipped with the external hard drive?
From the output of lsblk -f in the original post I suspected that the signature of the installed SystemRescueCd was still present in the external hard drive. So I ran the command wipefs /dev/sdc and wipefs /dev/sdc1 which printed information about sdc and all partitions on sdc: [root@fedora user]# wipefs /dev/sdc DEVICE OFFSET TYPE UUID LABEL sdc 0x8001 iso9660 sysrcd-5.2.2 sdc 0x1fe dos [root@fedora user]# wipefs /dev/sdc1 DEVICE OFFSET TYPE UUID LABEL sdc1 0x3 ntfs sdc1 0x1fe dos The above printout confirmed that the iso9660 partition table created by SystemRescueCd was still present. lsblk was using the TYPE and LABEL of the iso9660 partition table instead of the dos (Master Boot Record) partition table. To get lsblk to display the correct partition table the iso9660 partition table had to be deleted. Note that dd can also be used to wipe out a partition-table signature from a block (disk) device but dd could also wipe out other partition tables. Because we want to target only a particular partition-table signature for wiping, wipefs was preferred since unlike dd, with wipefs we would not have to recreate the partition table again. The -a option of the command wipefs erases all available signatures on the device but the -t option of the command wipefs when used together with the -a option restricts the erasing of signatures to only a certain type of partition table. Below we wipe the iso9660 partition table. The -f (--force) option is required when erasing a partition-table signature on a block device. 
[root@fedora user]# wipefs -a -t iso9660 -f /dev/sdc /dev/sdc: 5 bytes were erased at offset 0x00008001 (iso9660): 43 44 30 30 31 After erasing the iso9660 partition table we check the partition table again to confirm that the partition table iso9660 was erased: [root@fedora user]# wipefs /dev/sdc DEVICE OFFSET TYPE UUID LABEL sdc 0x1fe dos [root@fedora user]# wipefs /dev/sdc1 DEVICE OFFSET TYPE UUID LABEL sdc1 0x3 ntfs 34435675G36Y4776 sdc1 0x1fe dos But now that the problematic iso9660 partition table has been erased lsblk is now using the UUID of the partition as the mountpoint directory name since the previously used label of the iso9660 partition-table no longer exists: NAME FSTYPE LABEL UUID MOUNTPOINT sda ├─sda1 ntfs System Reserved ├─sda2 ntfs ├─sda3 ntfs ├─sda4 sdc └─sdc1 ntfs 34435675G36Y4776 /run/media/user/34435675G36Y4776 we can check which volumes (i.e. partitions) have labels in the directory /dev/disk/by-label which lists all the partitions that have a label: [root@fedora user]# ls -l /dev/disk/by-label total 0 lrwxrwxrwx. 1 root root 10 Apr 30 19:47 'System\x20Reserved' -> ../../sda1 The ntfs file system on the partition sda1 is the only partition that has a label To make the directory name of the mountpoint more human readable we change the label for the ntfs file system on the partition sdc1 from nothing (empty string) to a "new label". The commands for changing the label for a file system depend on the file system 12. For an ntfs file system changing the label is done with the command ntfslabel: ntfslabel /dev/sdc1 "new-label" Now after changing the label on the ntfs file system lsblk uses the "new-label" as the name of the directory of the mountpoint: NAME FSTYPE LABEL UUID MOUNTPOINT sda ├─sda1 ntfs System Reserved ├─sda2 ntfs ├─sda3 ntfs ├─sda4 sdc └─sdc1 ntfs new-label /run/media/user/new-label Notice: also that the device sdc no longer has a file system type and label just like all the other block devices (e.g. sda). 
Only partitions should have a file system type, since the file system lives on the partition, not on the device; likewise only partitions should have a label, since the LABEL column shows the file system label, not a device label.
Why is lsblk showing the old FSTYPE and LABEL of a device that was formatted?
1,373,495,376,000
Did some googling and looked into the man pages, but didn't find a specific answer for these numbers. For example:

    # ip -d -stat ne show dev eth1 | column -t | sort -V
    192.168.200.41                            used  1034/4635/1032     probes  6  FAILED
    192.168.200.44  lladdr  00:c0:b7:xx:xx:xx used  1037/1032/266      probes  1  STALE
    192.168.20.5    lladdr  00:40:9d:xx:xx:xx used  25080/25050/25021  probes  1  STALE
    192.168.20.6    lladdr  00:40:9d:xx:xx:xx used  25076/25047/25018  probes  4  STALE
After a look at the iproute2 source code, the fifth field gives timer information on ARP cache entries: X/./. : Number of seconds since the ARP entry was last used ./X/. : Number of seconds since the ARP entry was last confirmed ././X : Number of seconds since the ARP entry was last updated Those timers are notably used to manage stale ARP entries and decide when a new ARP request should be issued. Refer to this insightful answer for more info about ARP age timeout.
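With that layout, a small helper can label the three timers; a sketch (the field positions assume an entry that has an lladdr, like the STALE lines in the question, so FAILED entries would need different handling):

```shell
# Split the used/confirmed/updated triple from an `ip -stat neigh` line
parse_neigh() {
  printf '%s\n' "$1" | awk '{
    split($5, t, "/")
    printf "%s used=%ss confirmed=%ss updated=%ss state=%s\n", $1, t[1], t[2], t[3], $NF
  }'
}

parse_neigh '192.168.200.44 lladdr 00:c0:b7:xx:xx:xx used 1037/1032/266 probes 1 STALE'
# -> 192.168.200.44 used=1037s confirmed=1032s updated=266s state=STALE
```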
What does the fifth column in the output of "ip -stat neighbour show" stand for?
1,373,495,376,000
head -num is the same as head -n num instead of head -n -num (where num is any number) Example: $ echo -e 'a\nb\nc\nd'|head -1 a $ echo -e 'a\nb\nc\nd'|head -n 1 a $ echo -e 'a\nb\nc\nd'|head -n -1 a b c This head -1 doesn't seem to be documented anywhere. $ head --help Usage: head [OPTION]... [FILE]... Print the first 10 lines of each FILE to standard output. With more than one FILE, precede each with a header giving the file name. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -c, --bytes=[-]NUM print the first NUM bytes of each file; with the leading '-', print all but the last NUM bytes of each file -n, --lines=[-]NUM print the first NUM lines instead of the first 10; with the leading '-', print all but the last NUM lines of each file -q, --quiet, --silent never print headers giving file names -v, --verbose always print headers giving file names -z, --zero-terminated line delimiter is NUL, not newline --help display this help and exit --version output version information and exit NUM may have a multiplier suffix: b 512, kB 1000, K 1024, MB 1000*1000, M 1024*1024, GB 1000*1000*1000, G 1024*1024*1024, and so on for T, P, E, Z, Y. GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Full documentation at: <https://www.gnu.org/software/coreutils/head> or available locally via: info '(coreutils) head invocation' Man page for head (on Fedora 28): HEAD(1) User Commands HEAD(1) NAME head - output the first part of files SYNOPSIS head [OPTION]... [FILE]... DESCRIPTION Print the first 10 lines of each FILE to standard output. With more than one FILE, precede each with a header giving the file name. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. 
-c, --bytes=[-]NUM print the first NUM bytes of each file; with the leading '-', print all but the last NUM bytes of each file -n, --lines=[-]NUM print the first NUM lines instead of the first 10; with the leading '-', print all but the last NUM lines of each file -q, --quiet, --silent never print headers giving file names -v, --verbose always print headers giving file names -z, --zero-terminated line delimiter is NUL, not newline --help display this help and exit --version output version information and exit NUM may have a multiplier suffix: b 512, kB 1000, K 1024, MB 1000*1000, M 1024*1024, GB 1000*1000*1000, G 1024*1024*1024, and so on for T, P, E, Z, Y. AUTHOR Written by David MacKenzie and Jim Meyering. REPORTING BUGS GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report head translation bugs to <https://translationproject.org/team/> COPYRIGHT Copyright © 2017 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO tail(1) Full documentation at: <https://www.gnu.org/software/coreutils/head> or available locally via: info '(coreutils) head invocation' GNU coreutils 8.29 December 2017 HEAD(1)
The info page and the online manual for GNU head contain this part: For compatibility head also supports an obsolete option syntax -[NUM][bkm][cqv], which is recognized only if it is specified first. The idea that head -1 is the same as head -n 1 is that the dash is not a minus sign, but a marker for a command line option. That's the usual custom: things that start with dashes are options controlling how to do processing, other stuff in the command line are file names or other actual targets to process. In this case, it's not a single-character option, but a shorthand for -n, but it's still basically an option, and not a filename. In head +1 or head 1, the +1 or 1 would be taken as file names, however. A double dash -- or --something also has a distinct meaning, by itself (--) it stops option processing, and when followed by something else, it marks a GNU style long option. So having head --1 for head -n -1 wouldn't match the custom. If I were to guess, I'd assume the quaint shortcut for -n i exists for positive i but not for negative i since the former case is useful more often and easier to implement. (Besides, the standard head is only defined for a positive value of lines.)
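The equivalence is easy to check interactively, using the question's 4-line input (head -n -1 is a GNU extension):

```shell
sample() { printf 'a\nb\nc\nd\n'; }

sample | head -1      # obsolete shorthand for -n 1: prints "a"
sample | head -n 1    # prints "a"
sample | head -n -1   # everything but the LAST line: prints a, b, c
```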
Why isn't "head -1" equivalent to "head -n -1" but instead the same as "head -n 1"?
1,373,495,376,000
I'm working in an environment where I pretty much only have access to busybox tools, and trying to convert a date in the format Mon Jan 1 23:59:59 2018 GMT to a unix timestamp, in a shell script. I can't change the format of the input time I am parsing. It seems that busybox date cannot understand this date format, or any other format with a named month. I have a really ugly script that can do it, but does anyone know of anything nicer? Edit: the date -D option doesn't work for me, I get:

    date: invalid option -- 'D'
    BusyBox v1.24.1 (2018-01-11 16:07:45 PST) multi-call binary.

    Usage: date [OPTIONS] [+FMT] [TIME]
The busybox date2 is fully capable of parsing the date in the given string with some help1 (except for the GMT time zone). $ gdate='Mon Jan 1 23:59:59 2018 GMT' $ TZ=GMT0 busybox date -d "$gdate" -D '%a %b %d %T %Y %Z' Mon Jan 1 23:59:59 GMT 2018 The help is given with the -D option: a description of the source format. To get a UNIX timestamp, just add the output format expected +'%s': $ TZ=GMT0 busybox date -d "$gdate" -D '%a %b %d %T %Y %Z' +'%s' 1514851199 1 The busybox date has most of the GNU date command's capabilities and one that the GNU date command doesn't: the -D option. Get the busybox help as follows: $ busybox date --help BusyBox v1.27.2 (Debian 1:1.27.2-2) multi-call binary. Usage: date [OPTIONS] [+FMT] [TIME] Display time (using +FMT), or set time [-s,--set] TIME Set time to TIME -u,--utc Work in UTC (don't convert to local time) -R,--rfc-2822 Output RFC-2822 compliant date string -I[SPEC] Output ISO-8601 compliant date string SPEC='date' (default) for date only, 'hours', 'minutes', or 'seconds' for date and time to the indicated precision -r,--reference FILE Display last modification time of FILE -d,--date TIME Display TIME, not 'now' -D FMT Use FMT for -d TIME conversion Note the -D FMT option. 2 Note that you may be able to call busybox date in two ways: $ busybox date Or, if a link to busybox with the name date has been installed in the correct PATH directory: $ date To verify, just ask for --version or --help to find out which date you have installed. With GNU date: $ date --version date (GNU coreutils) 8.28 Or (busybox date): $ date --help BusyBox v1.27.2 (Debian 1:1.27.2-2) multi-call binary. … …
How can I convert a date with a named month to a unix timestamp with only Busybox tools?
1,373,495,376,000
As you could notice from the topic it needs to be able get cpu's stepping code properly. As Wikipedia says there're stepping codes like A0, A2, B0 etc. So, commands in linux (ubuntu 16.04) give: # dmidecode -t 4 | grep Stepping | awk '{ printf $8": "$9"\n" }' # Stepping: 2 # lscpu | grep Stepping # Stepping: 2 # cpuid | grep stepping # stepping id = 0x2 (2) # cat /proc/cpuinfo | grep stepping # stepping: 2 The whole outputs: cat /proc/cpuinfo (one core): processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 44 model name : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz stepping : 2 microcode : 0x13 cpu MHz : 2400.208 cache size : 12288 KB physical id : 0 siblings : 4 core id : 0 cpu cores : 4 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 11 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt lahf_lm epb kaiser tpr_shadow vnmi flexpriority ept vpid dtherm ida arat bugs : bogomips : 4800.41 clflush size : 64 cache_alignment : 64 address sizes : 40 bits physical, 48 bits virtual power management: ... cpuid (part): ... family = Intel Pentium Pro/II/III/Celeron/Core/Core 2/Atom, AMD Athlon/Duron, Cyrix M2, VIA C3 (6) ... (simple synth) = Intel Core i7-900 (Gulftown B1) / Core i7-980X (Gulftown B1) / Xeon Processor 3600 (Westmere-EP B1) / Xeon Processor 5600 (Westmere-EP B1), 32nm ... dmidecode -t 4 (part): ... Signature: Type 0, Family 6, Model 44, Stepping 2 ... Some screenshot from an internet of CPU-Z program: Some screenshot from an internet of CPU-G program: So what't the 0x2 or 2 ? Why not A0 or B1 as in Wikipedia mentioned ? How to get this letter before stepping number ? Best regards, V7
There’s no way to map stepping numbers to stepping names using only information from the CPU. You need to look at specification updates from Intel; these contain descriptions of the errata fixed in various revisions of CPUs, and also contain identification information allowing the various steppings to be identified (where appropriate). For example, for your E8500, the specification update lists two revisions, C0 and E0; C0 corresponds to processor signature 10676h, E0 to processor signature 1067Ah (see table 1 on page 16). The last four bits in these signatures are the stepping values given in /proc/cpuinfo, lscpu etc., and in CPU-Z’s “stepping” field; as you can see, there’s no obvious correlation between the numeric values and the stepping names (6 for the E8500 stepping C0, A for the E8500 stepping E0). Tools such as CPU-Z contain all this identification information and use it to provide stepping names in their GUI.
Get CPU stepping in Linux
1,373,495,376,000
When we perform this (on linux redhat 7.x):

    umount /grop/sdc
    umount: /grop/sdc: target is busy.
            (In some cases useful info about processes that use
             the device is found by lsof(8) or fuser(1))

we can see that umount failed because the target is busy. But when we do a remount instead, it succeeds:

    mount -o rw,remount /grop/sdc
    echo $?
    0

Very interesting. Does remount use something like the lazy option (umount -l)? What is the difference between remount and umount/mount?
man mount:

    remount
        Attempt to remount an already-mounted filesystem. This is
        commonly used to change the mount flags for a filesystem,
        especially to make a readonly filesystem writeable. It does
        not change device or mount point.

The remount functionality follows the standard way the mount command works with options from fstab: the mount command doesn't read fstab (or mtab) only when a device and dir are fully specified. Unlike umount, remount does not need the file system to be unused - it just changes the mount flags in place (for example from ro to rw) without detaching anything.

target is busy.

If the file system is already in use you can't umount it properly; you need to find the processes which are accessing your files (fuser -mu /path/), kill those processes, and then unmount the file system.
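In practice, a sketch of the busy-umount workflow, using the question's mount point:

```shell
fuser -vm /grop/sdc    # list processes using the filesystem (and owners)
lsof +f -- /grop/sdc   # same information via lsof
# after stopping or killing those processes:
umount /grop/sdc
# or detach now and clean up once the last user exits:
umount -l /grop/sdc
```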
What is the difference between remount and umount/mount?
1,373,495,376,000
Sometimes I forget to run systemctl daemon-reload after editing some unit file, and I get a warning about it when doing systemctl restart ***.service. Why it was decided to only issue a warning instead of reloading the unit automatically? (the warning happens, so change was detected and so it is possible to simply auto-reload the file).
It appears to be a specific design choice, based around race conditions and the complexity of reading the entire conf tree. However, one approach is to use systemctl edit, which triggers a unit reload after it exits. Some discussion of the issue here, and here.
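For comparison, a sketch of the two workflows (myapp.service is a placeholder name):

```shell
# systemctl edit triggers the unit reload itself when the editor exits
systemctl edit myapp.service         # creates a drop-in override.conf
systemctl edit --full myapp.service  # edit a copy of the whole unit file

# equivalent manual sequence after editing the file directly:
systemctl daemon-reload
systemctl restart myapp.service
```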
Why can't systemd reload unit files automatically?
1,373,495,376,000
os: centos7
test file: a.txt, 1.2G
monitor command: iostat -xdm 1

The first scenario:

    cp a.txt b.txt   # b.txt does not exist

The second scenario:

    cp a.txt b.txt   # b.txt exists

Why does the first scenario not consume I/O, while the second one does?
It could well be that the data had not been flushed to disk during the first cp operation, but was during the second. Try setting vm.dirty_background_bytes to something small, like 1048576 (1 MiB) to see if this is the case; run sysctl -w vm.dirty_background_bytes=1048576, and then your first cp scenario should show I/O.

What's going on here?

Except in cases of synchronous and/or direct I/O, writes to disk get buffered in memory until a threshold is hit, at which point they begin to be flushed to disk in the background. This threshold doesn't have an official name, but it's controlled by vm.dirty_background_bytes and vm.dirty_background_ratio, so I'll call it the "Dirty Background Threshold." From the kernel docs:

    vm.dirty_background_bytes

    Contains the amount of dirty memory at which the background kernel
    flusher threads will start writeback.

    Note: dirty_background_bytes is the counterpart of
    dirty_background_ratio. Only one of them may be specified at a time.
    When one sysctl is written it is immediately taken into account to
    evaluate the dirty memory limits and the other appears as 0 when read.

    dirty_background_ratio

    Contains, as a percentage of total available memory that contains
    free pages and reclaimable pages, the number of pages at which the
    background kernel flusher threads will start writing out dirty data.
    The total available memory is not equal to total system memory.

vm.dirty_bytes and vm.dirty_ratio

There's a second threshold, beyond this one. Well, more a limit than a threshold, and it's controlled by vm.dirty_bytes and vm.dirty_ratio. Again, it doesn't have an official name, so we'll call it the "Dirty Limit". Once enough data has been "written", but not committed to the underlying block device, further attempts to write will have to wait for write I/O to complete. (The precise details of what data they'll have to wait on is unclear to me, and may be a function of the I/O scheduler. I don't know.)

Why?

Disks are slow.
Spinning rust especially so, so while the R/W head on a disk is moving to satisfy a read request, no write requests can be serviced until the read request completes and the write request can be started. (And vice versa.)

Efficiency

This is why we buffer write requests in memory and cache data we've read; we move work from the slow disk to faster memory. When we eventually go to commit the data to disk, we've got a good quantity of data to work with, and we can try to write it in a way that minimizes seek time. (If you're using an SSD, replace the concept of disk seek time with reflashing of SSD blocks; reflashing consumes SSD life and is a slow operation, which SSDs attempt--to varying degrees of success--to hide with their own write caching.) We can tune how much data gets buffered before the kernel attempts to write it to disk using vm.dirty_background_bytes and vm.dirty_background_ratio.

Too much write data buffered!

If the amount of data you're writing is too great for how quickly it's reaching disk, you'll eventually consume all your system memory. First, your read cache will go away, meaning fewer read requests will be serviced from memory and have to be serviced from disk, slowing down your writes even further! If your write pressure still doesn't let up, eventually even memory allocations will have to wait on your write cache getting freed up some, and that'll be even more disruptive. So we have vm.dirty_bytes (and vm.dirty_ratio); it lets us say, "hey, wait up a minute, it's really time we got data to the disk, before this gets any worse."

Still too much data

Putting a hard stop on I/O is very disruptive, though; disk is already slow from the perspective of reading processes, and it can take several seconds to several minutes for that data to flush; consider vm.dirty_ratio's default of 20. If you have a system with 16GiB of RAM and no swap, you might find your I/O blocked while you wait for 3.4GiB of data to get flushed to disk. On a server with 128GiB of RAM?
You're going to have services timing out while you wait on 27.5GiB of data! So it's helpful to keep vm.dirty_bytes (or vm.dirty_ratio, if you prefer) fairly low, so that if you hit this hard threshold, it will only be minimally disruptive to your services.

What are good values?

With these tunables, you're always trading between throughput and latency. Buffer too much, and you'll have great throughput but terrible latency. Buffer too little, and you'll have terrible throughput but great latency.

On workstations and laptops with only single disks, I like to set vm.dirty_background_bytes to around 1MiB, and vm.dirty_bytes to between 8MiB and 16MiB. I very rarely find a throughput benefit beyond 16MiB for single-user systems, but the latency hangups can get pretty bad for any synchronous workloads like web browser data stores.

On anything with a striped parity array, I find some multiple of the array's stripe width to be a good starting value for vm.dirty_background_bytes; it reduces the likelihood of needing to perform a read/update/write sequence while updating parity, improving array throughput. For vm.dirty_bytes, it depends on how much latency your services can suffer. Myself, I like to calculate the theoretical throughput of the block device, use that to work out how much data it could move in 100ms or so, and set vm.dirty_bytes accordingly. A 100ms delay is huge, but it's not catastrophic (in my environment.)

All of this depends on your environment, though; these are only a starting point for finding what works well for you.
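The 100ms sizing rule from the last paragraph can be sketched as simple arithmetic (the 500 MB/s throughput figure is a made-up example):

```shell
# ~100 ms worth of the device's sequential write throughput
throughput_mb_s=500
dirty_bytes=$(( throughput_mb_s * 1000 * 1000 / 10 ))
echo "$dirty_bytes"   # 50000000

# Apply it (root required); keep the background threshold much lower:
# sysctl -w vm.dirty_background_bytes=1048576
# sysctl -w vm.dirty_bytes=$dirty_bytes
```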
Why does the Linux cp command not consume disk I/O?
1,373,495,376,000
It seems that every process has private memory mappings that are neither readable nor writeable nor executable (whose flags are "---p"): grep -- --- /proc/self/maps 7f2bd9bf7000-7f2bd9df6000 ---p 001be000 fc:00 3733 /lib/x86_64-linux-gnu/libc-2.19.so 7f2bd9e04000-7f2bda003000 ---p 00003000 fc:00 3743 /lib/x86_64-linux-gnu/libdl-2.19.so 7f2bda042000-7f2bda241000 ---p 0003d000 fc:00 36067 /lib/x86_64-linux-gnu/libpcre.so.3.13.1 returns some in shared libraries and doing that for java (JVM) processes even returns a dozen of anonymous mappings with hundreds of megabytes. Edit: If these mappings are placeholders, who will be using these kept places at which events and from which other activity are they protected - in other words: What is the wrong behavior that could happen if these placeholders were not there ? 2nd edit: Given these holes in shared libraries are in fact helpful for some purpose of the compiler and/or dynamic linker, there must be some other purpose for these holes in anonymous mappings visible in JVM processes. 
Sorted the anonymous mappings of a tomcat JVM process by size: 20 MB 00007FA0AAB52000-00007FA0AC000000 ---p 00000000 00:00 0 41 MB 00007FA0B1603000-00007FA0B4000000 ---p 00000000 00:00 0 50 MB 00007FA090D04000-00007FA094000000 ---p 00000000 00:00 0 53 MB 00007FA0F8A40000-00007FA0FC000000 ---p 00000000 00:00 0 61 MB 00007FA0C42C5000-00007FA0C8000000 ---p 00000000 00:00 0 61 MB 00007FA0CC29A000-00007FA0D0000000 ---p 00000000 00:00 0 61 MB 00007FA0D0293000-00007FA0D4000000 ---p 00000000 00:00 0 62 MB 00007FA0D814C000-00007FA0DC000000 ---p 00000000 00:00 0 62 MB 00007FA0E017E000-00007FA0E4000000 ---p 00000000 00:00 0 63 MB 00007FA0B803B000-00007FA0BC000000 ---p 00000000 00:00 0 63 MB 00007FA0BC021000-00007FA0C0000000 ---p 00000000 00:00 0 63 MB 00007FA0C0021000-00007FA0C4000000 ---p 00000000 00:00 0 63 MB 00007FA0D4075000-00007FA0D8000000 ---p 00000000 00:00 0 63 MB 00007FA0DC040000-00007FA0E0000000 ---p 00000000 00:00 0 63 MB 00007FA0E4067000-00007FA0E8000000 ---p 00000000 00:00 0 189 MB 00007FA0EC300000-00007FA0F8000000 ---p 00000000 00:00 0 1008 MB 0000000100FF5000-0000000140000000 ---p 00000000 00:00 0
Note that there are two memory regions of 2044KB with null permissions. As mentioned earlier, the ELF's 'execution view' is concerned with how to load an executable binary into memory. When ld.so brings in the dynamic libraries, it looks at the segments labelled as LOAD (look at "Program Headers" and "Section to Segment mapping" in the output of readelf -a xxx.so). Usually there are two LOAD segments, and there is a "hole" between the two segments (compare the VirtAddr and MemSiz of these two segments), so ld.so will deliberately make this hole inaccessible: look for the PROT_NONE symbol in _dl_map_object_from_fd in elf/dl-load.c.

http://www.cs.stevens.edu/~jschauma/810/elf.html

It's also easy to see this happening as an mprotect call, using strace, e.g. strace -f grep -- . /proc/self/maps 2>&1 | less:

    open("/lib64/libpcre.so.1", O_RDONLY|O_CLOEXEC) = 3
    read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\300\25\0\0\0\0\0\0"..., 832) = 832
    fstat(3, {st_mode=S_IFREG|0755, st_size=471728, ...}) = 0
    mmap(NULL, 2564360, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f0e3ad2a000
    mprotect(0x7f0e3ad9c000, 2093056, PROT_NONE) = 0
    mmap(0x7f0e3af9b000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x71000) = 0x7f0e3af9b000
    close(3) = 0

There are mirrors of the glibc repo on GitHub, so it wasn't hard to search for PROT_NONE...

    /* This implementation assumes (as does the corresponding implementation
       of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects
       are always laid out with all segments contiguous (or with gaps
       between them small enough that it's preferable to reserve all whole
       pages inside the gaps with PROT_NONE mappings rather than permitting
       other use of those parts of the address space).  */

https://github.com/bminor/glibc/blob/73dfd088936b9237599e4ab737c7ae2ea7d710e1/elf/dl-map-segments.h#L21

    /* _dl_map_segments ensures that any whole pages in gaps between
       segments are filled in with PROT_NONE mappings.  So we can just
       unmap the whole range in one fell swoop.  */

https://github.com/bminor/glibc/blob/73dfd088936b9237599e4ab737c7ae2ea7d710e1/elf/dl-unmap-segments.h#L25

Java

OpenJDK ... uses PROT_NONE mappings to reserve uncommitted address space (which is then committed with mprotect calls, as needed). The natural assumption is that it wishes to have contiguous heap memory for some reason. It uses PROT_NONE to reserve space until it actually needs it. The original context of this comment is a discussion about Linux VM overcommit: using inaccessible mappings avoids requiring any commitment from the kernel (until the mapping is needed and made accessible), in case the kernel is configured for strict commit mode. If you're wondering why it needs to make this reservation in advance in the context of the JVM, consider that native code linked in using JNI or equivalent might also be using mmap.

https://lwn.net/Articles/627728/
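The reserve-then-commit pattern described above for the JVM can be sketched from any language that can reach mmap/mprotect. Below is a minimal Python/ctypes illustration, not taken from either glibc or OpenJDK; the constant values are Linux x86-64 assumptions from <sys/mman.h>:

```python
import ctypes

# Constants from <sys/mman.h> on Linux x86-64 (assumed values)
PROT_NONE, PROT_READ, PROT_WRITE = 0x0, 0x1, 0x2
MAP_PRIVATE, MAP_ANONYMOUS = 0x02, 0x20
MAP_FAILED = ctypes.c_void_p(-1).value

libc = ctypes.CDLL(None, use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
libc.mprotect.restype = ctypes.c_int
libc.mprotect.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int]

# Reserve 16 MiB of contiguous address space without committing it.
# The PROT_NONE pages show up as "---p" in /proc/self/maps and need
# no commit charge under strict overcommit.
length = 16 * 1024 * 1024
addr = libc.mmap(None, length, PROT_NONE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)
assert addr != MAP_FAILED

# Later, "commit" the first page by making it accessible
assert libc.mprotect(addr, 4096, PROT_READ | PROT_WRITE) == 0
ctypes.memmove(addr, b"hi", 2)  # the page is now writable
print("committed one page at", hex(addr))
```

Touching any of the still-PROT_NONE pages would fault, which is exactly the point: the range is held, not used.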
What is the purpose of seemingly unusable memory mappings in linux?
1,373,495,376,000
I have a SSD drive with LUKS encrypted partition. How to discard all data with one command? Or damage it to non-recoverable state? Even if partition is in use.
If your SSD is encrypted with LUKS, erasing the header is good enough, e.g.

    dd if=/dev/urandom of=/dev/sda1 bs=512 count=20480

See the following link for details:
https://wiki.archlinux.org/index.php/Dm-crypt/Drive_preparation#Wipe_LUKS_header
Emergency wipe SSD
1,373,495,376,000
What do the terms CC, LD and SHIPPED refer to during the Kernel Source compilation process? Am I correct to assume that [M] indicates that it is being compiled as a module?
CC means that the file listed is being compiled from C by the C compiler. LD means that the file listed is being linked from a number of object files by the linker (ld); in this case, aacraid is built from a number of files including src.o. SHIPPED means that the file listed was shipped in the kernel source and is being copied as-is rather than rebuilt; it can be rebuilt if you really want to but doing that may require extra work (e.g. cross-compilation toolchains for firmware). As you surmised, [M] means that the process is building a kernel module.
What do the terms CC, LD and SHIPPED refer to during the Kernel Source compilation process?
1,373,495,376,000
This question asks for the best way to create a directory when using mv if it doesn't exist. My question is why isn't this an inbuilt feature of mv? Is there some fundamental reason due to which this would not be a good idea?
Keep in mind that there is more than one implementation of mv. The mv you use on Linux is not from the exact same source as the one on OS X or Solaris, etc. But it is desirable for them all to behave in the same way; this is the point of standards.

It's conceivable that an mv implementation could add an option for this purpose, although since it is so simple to deal with, it would probably not be worthwhile: the very minor benefit is outweighed by a more significant negative consequence. Code written to exploit such a non-standard option would not be portable to, or behave consistently on, another system using a standard implementation.

mv is standardized by POSIX, and this explicitly ties its behavior to the rename() system call. In ISO C the behavior of rename() is not very specific and much is left up to the implementation, but under POSIX you'll note the potential ENOENT error, indicating "a component of the path prefix of new does not exist", describing the behavior to be expected in explicit terms. This is better than ambiguity and leaving such details up to the implementation, because the latter hurts portability.

In defense of the design, in a scripting context it's probably better to fail by default on an invalid target path than to assume it just needs to be created. The path itself may often come from user input or configuration and may include a typo; in that case the script should fail at that point and indicate to the user that they've entered an invalid path. There is of course the option for whoever wrote the code to implement different behavior and create directories that don't exist, but it is better that you are responsible for doing that than the opposite (being responsible for ensuring an mv call won't create previously non-existent directories).
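Implementing the different behavior yourself really is simple; a minimal wrapper might look like the following (the function name mvp is made up for this sketch):

```shell
# Hypothetical "mv with parents" wrapper: create the destination's
# parent directory first, then move. Name "mvp" is invented here.
mvp() {
    dest=$2
    mkdir -p -- "$(dirname -- "$dest")" && mv -- "$1" "$dest"
}

# demo in a scratch directory
tmp=$(mktemp -d)
touch "$tmp/file"
mvp "$tmp/file" "$tmp/new/sub/file"
ls "$tmp/new/sub"
```

Because the wrapper is your own code, a typo in the destination silently creates a directory, which is exactly the trade-off the answer describes.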
Will `mv` ever have the ability to create directories?
1,373,495,376,000
I've noticed that some procs, such as bash, have their entire /proc/<pid>/ resources readable by the user who created that proc. However other procs, such as chrome or gnome-keyring-daemon, have most of their /proc/<pid>/ resources only accessible by root, despite the process itself being owned by the normal user and no suid being called. I dug through the kernel a bit and found that the /proc/ stuff gets limited if a task lacks a 'dumpable' flag, however I'm having a hard time understanding under what scenarios a task becomes undumpable (other than the setuid case, which doesn't apply to chrome or gnome-keyring): https://github.com/torvalds/linux/blob/164c09978cebebd8b5fc198e9243777dbaecdfa0/fs/proc/base.c#L1532 Anyone care to help me understand the underlying mechanism and the reasons for it? Thanks! Edit: Found a good doc on why you wouldn't want to have your SSH agent (such as gnome-keyring-daemon) dumpable by your user. Still not sure how gnome-keyring-daemon is making itself undumpable. https://github.com/torvalds/linux/blob/164c09978cebebd8b5fc198e9243777dbaecdfa0/Documentation/security/Yama.txt#L30
Linux has a system call which will change the dumpable flag. Here is some example code which I wrote several years ago:

    #include <sys/prctl.h>
    ...
    /* The last three arguments are just padding, because the
     * system call requires five arguments. */
    prctl(PR_SET_DUMPABLE, 1, 42, 42, 42);

It may be that gnome-keyring-daemon deliberately sets the dumpable flag to zero for security reasons.
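You can observe the same flag from any language that can reach libc. A small Python/ctypes sketch (the PR_* constant values are taken from <linux/prctl.h>):

```python
import ctypes

libc = ctypes.CDLL(None, use_errno=True)
PR_GET_DUMPABLE = 3  # constants from <linux/prctl.h>
PR_SET_DUMPABLE = 4

# Mark this process non-dumpable: its /proc/<pid>/* entries
# become owned by root, as seen with gnome-keyring-daemon.
assert libc.prctl(PR_SET_DUMPABLE, 0, 0, 0, 0) == 0
assert libc.prctl(PR_GET_DUMPABLE, 0, 0, 0, 0) == 0

# Flip it back; /proc/<pid>/* ownership reverts to the real user.
assert libc.prctl(PR_SET_DUMPABLE, 1, 0, 0, 0) == 0
assert libc.prctl(PR_GET_DUMPABLE, 0, 0, 0, 0) == 1
```

While the flag is 0 you can verify the effect from another terminal with ls -l /proc/<pid>.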
What causes /proc/<pid>/* resources to become owned by root, despite the procs being launched as a normal user?
1,373,495,376,000
I'm running a local Vagrant VM, Ubuntu 13.10, with nginx reverse proxying to uwsgi.

Running sudo /etc/init.d/nginx status returns

    * nginx is running

However running sudo /etc/init.d/uwsgi status returns

    * which one?

If I take a look at the log file for the wsgi app I can see that uwsgi is running, worker processes have been created etc... so is there a hidden instance of uwsgi running somewhere that's confusing the service restart command?

I installed uwsgi using:

    apt-get install uwsgi

Here's the app.ini file:

    [uwsgi]
    plugin = python
    socket = /run/uwsgi/app/myapp/socket
    chdir = /var/www/myapp
    module = project.wsgi:application
    pidfile = /tmp/myapp-master.pid
    master = True
    daemonize = /var/log/uwsgi/myapp.log

Any ideas?
Changing the pidfile option to pidfile2 seems to fix this issue:

    pidfile2 = /tmp/myapp-master.pid

Interestingly, service uwsgi stop returns [OK] but service uwsgi start returns [fail].

So I'm assuming the error happens when a non-privileged user (i.e. www-data) tries to write to a pidfile that was created by a privileged user (e.g. root). pidfile2 creates the pidfile after privileges drop, so www-data can happily write to it. If someone else can shed light on whether this is the case, that would be great.
"sudo service uwsgi restart" returns [fail] however uwsgi logs show processes have been created and python app is running
1,373,495,376,000
I found two ways to change the TTY font:

- dpkg-reconfigure console-setup
- editing /etc/default/console-setup

Either way I'm limited to the fonts in /usr/share/consolefonts (I'm using Debian). How do I install new console fonts? What are the requirements for this kind of font? In particular: would it be possible to use Anonymous Pro as a console font?
Installing FbTerm was what I went with to get nice fonts in my CLI environment. It's a frame buffer terminal emulator (so no need for X org) that supports nice rendering of the same kinds of fonts you would use in a GUI.
Changing TTY font to a non-default font
1,373,495,376,000
For ten years I have been generating billings for me and my wife in this way:

- Create a TeX template with LyX or by hand
- Use placeholder text in this source

Then:

    cat template | sed s/placeholder1/somevalue1/ | sed s/placeholder2/somevalue2/ ... > output.tex
    pdflatex output.tex

Sadly, in the last years this process has become more and more difficult because TeX packages are becoming larger and larger (I don't know why, but a full TeX installation on Ubuntu is almost 3 GB), and I always get lots of errors during generation.

So, since I can control all the processes involved, is there a better (easier / more portable) way to create the template and then fill it with fields and create a PDF?
I have seen PDF files generated by using Python and ReportLab. In the open source version you have to do some programming to get to the output, but there are several examples and this should be straightforward for someone who can handle TeX.

What I have not used, but what seems even simpler, is rst2pdf. Generating/editing the .rst markup format is much the same as using LaTeX.
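Whichever renderer you pick, the sed placeholder pipeline from the question can also be replaced with Python's standard string.Template, which avoids the escaping problems sed has when a value contains a slash. A small sketch (the template content and values are made up):

```python
from string import Template

# A template in any text format (TeX, rst, ...) using $placeholders
# instead of sed-style literal markers.
template = Template(r"""\documentclass{article}
\begin{document}
Invoice $number for $customer: total $total EUR.
\end{document}
""")

# safe_substitute leaves unknown placeholders untouched instead of
# raising, which is convenient for partially filled templates.
filled = template.safe_substitute(number="2013-07", customer="ACME",
                                  total="1/2")  # a '/' would break sed
print(filled)
```

The filled text can then be fed to pdflatex or rst2pdf exactly as before.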
Easiest method to create pdf from template using command line (without pdflatex)?
1,373,495,376,000
I have made some changes to the /etc/ssh/sshd_config file and want them to take effect. I know I can run the command below for the changes to take effect:

    /etc/init.d/sshd reload

But on my box I could not find /etc/init.d/sshd itself. So is there any other command I can run that is equivalent to /etc/init.d/sshd reload?

Edit: I am on Linux kernel 2.6.28 running on an embedded development board.
Try:

    $ sudo /etc/init.d/sshd restart

systemd

If that doesn't work and you're using a distro such as Fedora/CentOS/RHEL that uses systemd, then try this:

    $ systemctl reload sshd.service

You can get all the commands that systemctl will accept by hitting the Tab key after typing the following:

    $ systemctl <Tab><Tab>
    cancel          emergency      is-enabled         list-unit-files  reload-or-restart      start
    condreload      enable         is-failed          list-units       reload-or-try-restart  status
    condrestart     exit           isolate            load             rescue                 stop
    condstop        force-reload   kexec              mask             reset-failed           suspend
    daemon-reexec   halt           kill               poweroff         restart                try-restart
    daemon-reload   help           link               preset           set-environment        unmask
    default         hibernate      list-dependencies  reboot           show                   unset-environment
    delete          hybrid-sleep   list-jobs          reenable         show-environment
    disable         is-active      list-sockets       reload           snapshot

If it's a Debian/Ubuntu system they use upstart to manage services, at least with the newer versions. See my answer to this Q&A titled: How to “close” open ports?. I discuss your options for using upstart & systemd further in that answer.

Neither?

You could use kill to send the SIGHUP signal to the running process if none of the above is appropriate for your particular distro:

    $ pkill -1 sshd

This will send signal 1 (SIGHUP) to the sshd process. If you don't have the pkill or pgrep family of commands you can use ps to find the PID:

    $ ps -eaf | grep sshd
    ... 1234 ...

Then take that process ID and send it a signal using the kill command:

    $ kill -1 1234

Signals

If you ever forget which signals are which, you can use the kill command to find out via the -l switch.

Example

    $ kill -l
     1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
     6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
    11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
    16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
    21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
    26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
    31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
    38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
    43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
    48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
    53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
    58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
    63) SIGRTMAX-1  64) SIGRTMAX

References

F.9. Administering services with systemd
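You can try the kill-by-PID approach safely against a throwaway process rather than sshd itself. This sketch uses sleep as a stand-in daemon; the PID file path /tmp/demo.pid is made up:

```shell
# Use a dummy process as a stand-in for sshd; /tmp/demo.pid is made up.
sleep 300 &
pid=$!
echo "$pid" > /tmp/demo.pid

# Equivalent of "pkill -1 sshd", driven from the PID file instead
kill -HUP "$(cat /tmp/demo.pid)"

rc=0
wait "$pid" || rc=$?
echo "exit status: $rc"   # 128 + 1 (SIGHUP) = 129
rm -f /tmp/demo.pid
```

Note that sleep, unlike sshd, has no SIGHUP handler, so it simply terminates; sshd catches the signal and re-reads its configuration instead.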
What are the commands to apply changes made to /etc/ssh/sshd_config?
1,373,495,376,000
192.168.25.1 = router
192.168.10.1 = gateway/modem
192.168.25.144 = this pc (which is 'linuxpc' running Fedora 17)

What happened during these log entry events below? Specifically, what do the last two log entries mean?

    Oct 10 13:24:22 linuxpc dhclient[5779]: DHCPREQUEST on p14p1 to 192.168.25.1 port 67 (xid=0x466a6633)
    Oct 10 13:24:22 linuxpc dhclient[5779]: DHCPACK from 192.168.25.1 (xid=0x466a6633)
    Oct 10 13:24:22 linuxpc dhclient[5779]: bound to 192.168.25.144 -- renewal in 32701 seconds.
    Oct 10 13:24:22 linuxpc NetworkManager[846]: <info> (p14p1): DHCPv4 state changed renew -> renew
    Oct 10 13:24:22 linuxpc NetworkManager[846]: <info>   address 192.168.25.144
    Oct 10 13:24:22 linuxpc NetworkManager[846]: <info>   prefix 24 (255.255.255.0)
    Oct 10 13:24:22 linuxpc NetworkManager[846]: <info>   gateway 192.168.25.1
    Oct 10 13:24:22 linuxpc NetworkManager[846]: <info>   hostname 'linuxpc'
    Oct 10 13:24:22 linuxpc NetworkManager[846]: <info>   nameserver '192.168.10.1'
    Oct 10 13:24:22 linuxpc dbus[910]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper)
    Oct 10 13:24:22 linuxpc dbus-daemon[910]: dbus[910]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper)
    Oct 10 13:24:22 linuxpc dbus-daemon[910]: dbus[910]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
    Oct 10 13:24:22 linuxpc dbus[910]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
    Oct 10 13:30:29 linuxpc dbus-daemon[910]: dbus[910]: [system] Rejected send message, 2 matched rules; type="method_return", sender=":1.2" (uid=0 pid=865 comm="/usr/lib/systemd/systemd-logind ") interface="(unset)" member="(unset)" error name="(unset)" requested_reply="0" destination=":1.36" (uid=1000 pid=1182 comm="gnome-session ")
    Oct 10 13:30:29 linuxpc dbus[910]: [system] Rejected send message, 2 matched rules; type="method_return", sender=":1.2" (uid=0 pid=865 comm="/usr/lib/systemd/systemd-logind ") interface="(unset)" member="(unset)" error name="(unset)" requested_reply="0" destination=":1.36" (uid=1000 pid=1182 comm="gnome-session ")
The first 3 lines are messages from dhclient, which:

- sent a DHCP request to the router
- received a DHCP lease
- set up the interface

The next 6 lines are from NetworkManager, which basically restates the above in more detail. The reason probably is that the dhclient instance was spawned by NM (rather silly if you ask me, but that's NetworkManager).

The last 6 lines are from the D-Bus daemon. The first 4 of these are the imprint of successful activation of the nm_dispatcher service (nm stands for NetworkManager again), which activates services on connection to a network. The 2 remaining lines are (to the best of my understanding) a log of a rejected message from systemd's logind to the GNOME session manager. I don't speak D-Bus and systemd well enough to even guess what could have triggered these, but judging by the man page, I would expect it to be either another login/VT switch or "Device access management for users". It would certainly help if you knew what happened around that time.
Understanding /var/log/messages entries
1,373,495,376,000
Sometimes when using VoIP I experience disruptions. I would like to check if the problems could be caused by my internet provider. How could I best test the quality of my bandwidth (throughput and latency)? Until now, I have used a script which sends 3600 pings per hour and saves min/max/avg., but I am not sure how representative ICMP packets are.
MTR is probably the tool you're looking for. I've been using it for a long time and it's helped me troubleshoot a lot of network connectivity problems. It's like traceroute, but it runs continuously and shows you detailed info of every hop along the way. From the wiki: MTR relies on ICMP Time Exceeded (type 11, code 0) packets coming back from routers, or ICMP Echo Reply packets when the packets have hit their destination host. Good luck! VoIP can be a headache to troubleshoot.
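If you do stick with ping-based sampling alongside MTR, keep in mind that average latency alone is not the whole picture for VoIP; jitter (the variation between consecutive round-trip times) matters at least as much. A small sketch of summarizing RTT samples, with made-up values:

```python
# Summarize ping RTT samples: mean latency plus jitter, here taken as
# the mean absolute difference between consecutive samples.
def summarize(rtts):
    mean = sum(rtts) / len(rtts)
    diffs = [abs(b - a) for a, b in zip(rtts, rtts[1:])]
    jitter = sum(diffs) / len(diffs)
    return mean, jitter

# Hypothetical RTTs in milliseconds; the 45.0 spike is the kind of
# outlier that disrupts a call without moving the average much.
samples = [20.1, 21.3, 19.8, 45.0, 20.5]
mean, jitter = summarize(samples)
print(f"mean={mean:.1f} ms  jitter={jitter:.1f} ms")
```

Tracking min/max/avg, as your script already does, hides exactly this kind of spike; logging jitter (and packet loss) per interval makes provider-side problems much easier to demonstrate.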
How to test the quality of my network connection for VoIP services
1,373,495,376,000
I am repeating tens of thousands of similar operations in /dev/shm, each with a directory created, files written, and then removed. My assumption used to be that I was actually creating directories and removing them in place, so memory consumption had to be quite low. However it turned out the usage was rather high, and finally caused memory overflow.

So my question is: with operations like

    mkdir /dev/shm/foo
    touch /dev/shm/foo/bar
    [edit] /dev/shm/foo/bar
    ....
    rm -rf /dev/shm/foo

will it finally cause memory overflow? And if it does, why, since it seems to be removing them in place?

Note: this is tens of thousands of similar operations.
Curious: as you're running this application, what does df -h /dev/shm show your RAM usage to be?

tmpfs

By default it's typically set up with 50% of whatever amount of RAM the system physically has. This is documented here on kernel.org, under the filesystem documentation for tmpfs. It's also mentioned in the mount man page.

excerpt from mount man page

    The maximum number of inodes for this instance. The default is half of the number of your physical RAM pages, or (on a machine with highmem) the number of lowmem RAM pages, whichever is the lower.

confirmation

On my laptop with 8GB RAM I have the following setup for /dev/shm:

    $ df -h /dev/shm
    Filesystem      Size  Used Avail Use% Mounted on
    tmpfs           3.9G  4.4M  3.9G   1% /dev/shm

What's going on?

I think what's happening is that in addition to being allocated 50% of your RAM to start, you're essentially consuming the entire 50% over time and are pushing your /dev/shm space into swap, along with the other 50% of RAM. Note that one other characteristic of tmpfs vs. ramfs is that tmpfs can be pushed into swap if needed:

excerpt from geekstuff.com

    Table: Comparison of ramfs and tmpfs

    Experimentation                          Tmpfs               Ramfs
    ---------------                          -----               -----
    Fill maximum space and continue writing  Will display error  Will continue writing
    Fixed Size                               Yes                 No
    Uses Swap                                Yes                 No
    Volatile Storage                         Yes                 Yes

At the end of the day it's a filesystem implemented in RAM, so I would expect it to act a little like both. What I mean by this is that as files/directories are deleted you're using some of the physical pages of memory for the inode table, and some for the actual space consumed by these files/directories. Typically when you use space on a HDD, you don't actually free up the physical space, just the entries in the inode table, saying that the space consumed by a specific file is now available. So from the RAM's perspective the space consumed by the files is just dirty pages in memory. So it will dutifully swap them out over time.

It's unclear if tmpfs does anything special to clean up the actual RAM used by the filesystem that it's providing. I saw mention in several forums that people saw it taking upwards of 15 minutes for their system to "reclaim" space for files that they had deleted in /dev/shm. Perhaps this paper I found on tmpfs, titled tmpfs: A Virtual Memory File System, will shed more light on how it is implemented at the lower level and how it functions with respect to the VMM. The paper was written specifically for SunOS but might hold some clues.

experimentation

The following contrived tests seem to indicate /dev/shm is able to clean itself up.

experiment #1

Create a directory with a single file inside it, and then delete the directory 1000 times.

initial state of /dev/shm

    $ df -k /dev/shm
    Filesystem     1K-blocks  Used Available Use% Mounted on
    tmpfs            3993744  5500   3988244   1% /dev/shm

fill it with files

    $ for i in `seq 1 1000`;do mkdir /dev/shm/sam; echo "$i" \
        > /dev/shm/sam/file$i; rm -fr /dev/shm/sam;done

final state of /dev/shm

    $ df -k /dev/shm
    Filesystem     1K-blocks  Used Available Use% Mounted on
    tmpfs            3993744  5528   3988216   1% /dev/shm

experiment #2

Create a directory with a single 50MB file inside it, and then delete the directory 300 times.

fill it with 50MB files of random garbage

    $ start_time=`date +%s`
    $ for i in `seq 1 300`;do mkdir /dev/shm/sam; \
        dd if=/dev/random of=/dev/shm/sam/file$i bs=52428800 count=1 > \
        /dev/shm/sam/file$i.log; rm -fr /dev/shm/sam;done \
        && echo run time is $(expr `date +%s` - $start_time) s
    ...
    8 bytes (8 B) copied, 0.247272 s, 0.0 kB/s
    0+1 records in
    0+1 records out
    9 bytes (9 B) copied, 1.49836 s, 0.0 kB/s
    run time is 213 s

final state of /dev/shm

Again there was no noticeable increase in the space consumed by /dev/shm.

    $ df -k /dev/shm
    Filesystem     1K-blocks  Used Available Use% Mounted on
    tmpfs            3993744  5500   3988244   1% /dev/shm

conclusion

I didn't notice any discernible effects from adding files and directories in my /dev/shm. Running the above multiple times didn't seem to have any effect on it either. So I don't see any issue with using /dev/shm in the manner you've described.
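You can run the same kind of check programmatically by comparing os.statvfs figures before and after your create/write/delete cycle. A sketch, assuming /dev/shm exists and is writable and using a made-up directory name:

```python
import os
import shutil

def used_kb(path="/dev/shm"):
    """Space currently consumed on the filesystem holding `path`, in KB."""
    st = os.statvfs(path)
    return (st.f_blocks - st.f_bfree) * st.f_frsize // 1024

before = used_kb()
for _ in range(100):
    d = "/dev/shm/shm_demo"          # hypothetical scratch directory
    os.makedirs(d, exist_ok=True)
    with open(os.path.join(d, "bar"), "wb") as f:
        f.write(b"x" * (1 << 20))    # 1 MiB per iteration
    shutil.rmtree(d)
after = used_kb()

# After each rm, the space should have been handed back to tmpfs,
# so the net growth across 100 MiB of churn should be near zero.
print(f"before={before} KB  after={after} KB")
```

If `after` grows roughly linearly with the number of iterations, something is holding the files open (a leaked file descriptor will keep tmpfs pages pinned even after the unlink).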
operation in /dev/shm causes overflow
1,373,495,376,000
I am using initramfs to boot Centos via PXE. The initramfs used memory is listed within the "cached" value in /proc/meminfo or via free. Since I need to calculate performance data, I need to know whether the memory used by the initramfs is reclaimable (i.e. can be swapped out to disk) or not. Typically only a very small part of the / filesystem tree is actually in use, so a majority of the initramfs could be swapped out. Reading on this I got conflicting information. Some sources claimed that initramfs behaves like initrd and is based on ramfs, which means claimed memory cannot be paged out to swap. Other sources claim that initramfs is essentially tmpfs which in turn would imply that it can be paged out to swap. Which is true? Can the unused parts of the initramfs filesystem be paged out to swap space?
EDIT: Answer updated/corrected.

Although the kernel documentation on this topic says that "Rootfs is a special instance of ramfs (or tmpfs, if that's enabled) [...]", it is in reality still a ramfs, as a short look at the code shows (rootfs is not mentioned in mm/shmem.c).

Some patches (see e.g. here and here) were sent to the Linux kernel mailing list (lkml) to change this, but they were not accepted. One reason was that you normally do not have swap enabled during the initramfs phase or on embedded systems.

The initramfs image is extracted into the rootfs. Before user space (usually switch_root, called from /init) switches to the new root, it deletes the content of the rootfs, so that only the minimal memory footprint of an empty ramfs remains. So after this you can basically ignore its memory usage, and the question of whether it can be swapped out is nearly irrelevant.
Can initramfs be paged out to swap disk?
1,373,495,376,000
I am attempting to extract a tarball (*.tgz, to be exact) and receiving terminal errors on extracted symlinks. Unfortunately, I cannot simply recreate the archive, as this is a legacy archive for a system that no longer exists, which was created before I was even out of high school (have to love working for a big company).

I have consulted the almighty Google; however, all I can seem to find is information for excluding / following symlinks at creation time.

The exact error I am receiving is something of a misnomer (error: read-only filesystem) and comes from the fact that a very large portion of the data payload is contained within numerous squash / cram / loop filesystems. The symlinks are referencing data within them which, obviously, cannot be mounted due to errors while extracting said tarball. Chicken; meet egg.

So, in short: How can I extract a *.tgz archive to completion while either ignoring symlinks or ignoring resultant symlink errors?

For reference:

    $ tar --version
    tar (GNU tar) 1.26
    Copyright (C) 2011 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
    Written by John Gilmore and Jay Fenlason.

    $ uname -a
    Linux localhost.localdomain 3.7.9-205.fc18.x86_64 #1 SMP Sun Feb 24 20:10:02 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
-h (or --dereference) only dereferences links at archive creation time.

Reference: http://www.gnu.org/software/tar/manual/tar.html#SEC138

According to a similar answer here: How do I dereference links when extracting from a tar file?, you can mount the archive and then copy from it, though I have not tested this myself.
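If GNU tar alone won't cooperate, another option is to filter the archive members yourself: Python's tarfile module can list entries and skip symlinks at extraction time. This is a sketch rather than a run against the real legacy archive; the tiny test archive below is fabricated to demonstrate the behavior:

```python
import io
import os
import tarfile
import tempfile

def extract_skip_symlinks(tgz_path, dest):
    """Extract a .tgz, silently skipping symlink and hardlink entries."""
    with tarfile.open(tgz_path, "r:*") as tf:
        members = [m for m in tf.getmembers()
                   if not (m.issym() or m.islnk())]
        tf.extractall(dest, members=members)

# Build a tiny archive containing a regular file and a dangling symlink
tmp = tempfile.mkdtemp()
arc = os.path.join(tmp, "a.tgz")
with tarfile.open(arc, "w:gz") as tf:
    data = b"hello"
    info = tarfile.TarInfo("file.txt")
    info.size = len(data)
    tf.addfile(info, io.BytesIO(data))
    link = tarfile.TarInfo("link")
    link.type = tarfile.SYMTYPE
    link.linkname = "/nonexistent/target"   # would error on extraction
    tf.addfile(link)

out = os.path.join(tmp, "out")
extract_skip_symlinks(arc, out)
print(sorted(os.listdir(out)))
```

Once the squash/cram images inside have been extracted and mounted, a second pass over the skipped members could recreate the symlinks.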
Ignore Symlinks when Extracting Tarball
1,373,495,376,000
I have a computer with two WD 1TB drives and I want to configure disk mirroring on it. I tried setting up RAID during the installation by creating RAID partitions but that does not work for me. Is there a convenient software that I can install to do the job for me? If no, what shell commands can be used? Once it is set up how should I verify that it is working?
You can do that. You need to be a bit careful, but this is not dangerous¹ if you are very careful not to mistype anything, and it doesn't leave any gotchas in the setup.

I highly recommend not doing any of the manipulations on a live system. It's possible in some cases but requires extra care. Boot from a liveCD/liveUSB such as Parted or SystemRescueCD.

Assumption: you have a block device that contains something Linux recognizes, for example:

- a disk containing one or more partitions;
- a partition containing a filesystem;
- a partition containing an LVM physical volume.

Objective: make that block device a component of an mdraid (Linux software RAID) RAID-1 (mirroring) volume. The RAID volume will initially be in a degraded state, with all but one component missing.

First, you need to shrink the volume a bit, to make room for the mdraid metadata (the superblock). There are several metadata formats; you must use one that puts the metadata at the end of the disk. (In some setups you may have enough space to put the superblock at the beginning, but that's more complicated and risk-prone, so I won't go into that.) You must ensure that the last 128kB of the block device are unused, to make room for the superblock.

- If the block device is a disk containing partitions, shrink the partition that comes last (this may not be the partition with the highest number). You'll need to shrink whatever the partition contains as well.
- If the block device contains a filesystem, shrink that filesystem.
- If the block device contains an LVM physical volume, call pvresize to reduce the size of the physical volume. This may or may not reduce the usable size, since physical volumes have a granularity of one extent (4MB is the rarely-changed default extent size).

Parted can handle filesystems and partitions. If you need to shrink an ext4 filesystem, you'll need to unmount it first; a btrfs filesystem can be shrunk live. If you've modified the partition table on a disk where some partitions are in use, reboot.

Once you have ensured that the last 128kB of the block device are free, call mdadm --create to create a RAID-1 volume. This doesn't touch any part of the volume aside from the superblock. Initially, the volume will have a single component: all the others are set as failed. You must pass --level=1 (or equivalently -l 1) (this approach only works for RAID-1) and --metadata=0.9 or --metadata=1.0 (the default superblock format 1.2 puts the superblock near the beginning of the device, which may overwrite data). The argument to --raid-devices (-n) is the number of components (including missing ones) in the RAID volume. Replace /dev/sdz99 by the designation of the block device (e.g. /dev/sda for a whole disk or /dev/sda1 for a partition):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sdz99 missing

You can now activate the array and add the other components:

    mdadm --add /dev/md0 /dev/sdy98

Grub2 understands Linux RAID-1 and can boot from it. Bootloaders such as Grub1 that don't understand RAID read transparently from mirror volumes, but your system won't boot if the drive the bootloader is reading from fails. If the RAID volume is on a partition, be sure to install Grub's boot sector on both drives.

¹ Be sure to have backups. “Not dangerous” means “you probably won't need them”, not “gamble your data”.
1,373,495,376,000
I've got hard system configuration there and I don't want to re-install it again but I need more space now. Current system space is 30GB. Seems like it's not possible to re-size disk so what I need is to re-init my system on new disk and here I have some questions. I will copy all the data to host machine first (windows), is it safe, can I lost data (break symlinks) by doing this? I just make same partitions with same filesystems and move data there - is it enough? grub:2 is installed on special partition (EF02 GPT code) - is it movable? Is there another ways to make such transfer or virtual drive re-size?
Virtualbox images can be resized from outside Virtualbox. Run this command on the VDI: VBoxManage modifyhd SLACK.vdi --resize 100000 That last number is the size in MiB.
How to re-size virtual disk with installed Linux system?
1,306,122,705,000
We have many unused PC machines and we would like to use them to set up an educational lab for high-performance computing applications. Which Linux distribution is the most convenient to set up and the easiest to manage in an educational environment? I would be thankful if someone could provide me with a list of advantages and disadvantages of different Linux clustering distributions.
There's the Rocks linux distro, which is made for clustering and is based on CentOS/RHEL.

The strong points of Rocks:

- It'll for the most part manage and do a lot of the minutiae for you.
- It'll do automatic installation and reinstallation, and if your computers can boot via PXE, the initial install will consist of PXE booting your nodes.
- If you have a large number of compute nodes, it uses BitTorrent internally for distributing packages, which removes a significant bottleneck for (re)installing the entire thing.
- It'll give you a very homogeneous compute environment by default.
- By default it'll set up and use NFS internally, and there are options for using PVFS2 (which I haven't tried).
- As for queueing/batch systems, it should set this up and manage it for you; by default I think it uses SGE, and there's also a roll (their software bundling format) for Torque.
- It'll ensure consistency in users/groups/etc. across your cluster.
- It'll graph resource utilization through Ganglia.

If I were to dig up downsides:

- Adding/removing software from the compute nodes involves reinstalling them (although this does ensure homogeneity).
- Adding/removing software involves either adding a roll (their way of bundling rpms/appliances) or editing XML files. However, it's fairly well documented, so if you're willing to put some effort into reading the documentation you should be OK. Plus there's a mailing list if you get stuck.
- It's based on CentOS/RHEL, which is a little behind the bleeding edge.
- It'll (mostly) force you to do things "their way"; minor changes you might get away with by modifying some of the XML config files, major changes might have to be implemented by making, adding or modifying rolls (their software/addon format).
Straightforward Linux Clustering
1,306,122,705,000
How can I associate the Start menu with the Windows key on the keyboard?
In KDE 4, the sequence to find the shortcut key is System Settings -> Keyboard and Mouse -> Global Keyboard Shortcuts -> Plasma Workspace; then modify the shortcut key for "Activate Application Launcher Widget". An alternative way to get to the shortcut is to simply right-click on the Kickoff menu and choose "Application Launcher Settings".

Additionally, unless you configure the Win key, it acts as a Meta modifier by default and cannot be used as a shortcut. To change this, create a file ~/.xmodmap and put the following in the file:

    keycode 115 = F14

This makes your system think the Win key is actually the F14 key. Note: different keyboards sometimes have different key maps for the Win key (i.e. on my ThinkPad, the keycode for the Win key is 133). You can use the application xev from the terminal to determine which keycode applies to your keyboard.

You can modify the file /etc/kde/kdm/Xsession and add the command

    if [ -f $HOME/.xmodmap ]; then
        /usr/bin/xmodmap $HOME/.xmodmap
    fi

to the bottom of the file, which should load your .xmodmap on startup.

Note: you should do the .xmodmap creation and the Xsession modification and then log out and back in (or run /usr/bin/xmodmap $HOME/.xmodmap) before attempting to modify the shortcut key.
How to associate the Fedora Start menu with the Windows keyboard?