1,301,414,593,000
How can I pipe any data to audio output? For example, I want to listen to a file -- an archive, a drive backup, a program. Or I want to listen to my HDD -- I vaguely remember reading something about this being possible about 7 years ago, but can't find anything now. So, files, disk reads, even network connections -- I want to be able to listen to anything. I know that it's definitely possible with Linux. How can I do it? I'm using Lubuntu 20.04.
I find piping things into aplay works well. journalctl | aplay doesn't sound pretty but does work surprisingly well. Here's an example from aplay(1):

aplay -c 1 -t raw -r 22050 -f mu_law foobar

will play the raw file "foobar" as a 22050-Hz, mono, 8-bit, Mu-Law .au file. aplay can be found as part of the alsa-utils package on Debian/Ubuntu. Here's a one-liner that I like, which echoes a small C program into gcc and runs the compiled version, piping it to aplay. The result is a surprisingly nice 15-minute repeating song.

echo "g(i,x,t,o){return((3&x&(i*((3&i>>16?\"BY}6YB6$\":\"Qj}6jQ6%\")[t%8]+51)>>o))<<4);};main(i,n,s){for(i=0;;i++)putchar(g(i,1,n=i>>14,12)+g(i,s=i>>17,n^i>>13,10)+g(i,s/3,n+((i>>11)%3),10)+g(i,s/5,8+n-((i>>10)%3),9));}"|gcc -xc -&&./a.out|aplay
How to pipe anything to the audio output?
I'm basically asking about user management from the command line (e.g. on a system where there are no graphical tools available). In the past I've used several different programs to add or delete users and groups or to modify their attributes: adduser, useradd, usermod, gpasswd, and perhaps others I've forgotten. I've also heard a couple of times that some of these programs are low-level and should be avoided for general use, but I can never remember which. So I'd like to get a definitive answer for which programs are the recommended ones for at least the following tasks:

- Create a new user
- Add a user to a group
- Remove a user from a group
- Change a user's primary group
- Change a user's login shell or home directory
- Delete a user

I'm looking for standard tools which I can expect to be available on pretty much any Linux system (of any distribution).
Sadly, none of those operations were ever standardized. Some operating systems offer this functionality as part of the OS, Linux included, but even when your Linux system includes these tools, their names and behaviour have changed over time and across distributions, so you cannot really depend on a standard set of tools to do those tasks. You need a per-operating-system set of tools.
What is/are the standard CLI program(s) to manage users and groups?
I have 2 servers, Server1 and Server2. On Server1 I have a user named user1; on Server2 I have a user named user2. I need to be able to write a script that runs on Server1 and scps some files to user2@Server2. Is there any way to do this without being prompted for a password? I can put the password in a config file or something if necessary. By the way, I am not able to create a user2 account on Server1.
What you want are SSH key pairs; these allow for password-less authentication. On your client (server1):

[user@server1]# ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):   # Hit Enter
Enter passphrase (empty for no passphrase):   # Hit Enter
Enter same passphrase again:   # Hit Enter
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.

Now copy your public key to your remote server (server2):

ssh-copy-id user2@server2

or:

cat ~/.ssh/id_rsa.pub | ssh user2@server2 "mkdir -p ~/.ssh \
  && cat >> ~/.ssh/authorized_keys"

Now when you run scp (or any other ssh command) you shouldn't be prompted for a password:

scp file user2@server2:/drop/location
SCP without password prompt using different username [duplicate]
I use sleep 900; <command> in my shell. I just wanted to know if there is some alternate/better way that you use?
You are searching for at (at@wikipedia):

usr@srv % at now + 15 min
at> YOUR COMMAND HERE

You can define multiple commands that should be executed in 15 minutes; separate them with a newline. Confirm all commands with Ctrl+D.
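When at isn't installed, a backgrounded subshell with sleep is a rough stand-in. This is my own sketch, not part of the original answer; 2 seconds stands in for the 15 minutes, and 'echo later' is a placeholder command:

```shell
# Run the delayed command in a backgrounded subshell so the
# prompt returns immediately; replace 'echo later' with the real command.
(sleep 2 && echo later) &
wait    # only needed in a script, so it doesn't exit before the job prints
```

Unlike at, this approach does not survive logout unless you also use nohup or disown.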
What is a good way of saying "run this after 15 minutes" on a shell?
When installing Ubuntu for the first time, I separated / and /home into different partitions. Thinking back on it, how is this possible? Isn't /home "in" /?
You might want to read the manual page entry for the mount command: https://www.man7.org/linux/man-pages/man8/mount.8.html All files accessible in a Unix system are arranged in one big tree, the file hierarchy, rooted at /. These files can be spread out over several devices. The file hierarchy is a way of logically organizing the files on your system but is not really representative of how the files are physically stored.
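You can observe the device-to-tree mapping directly with df; a minimal sketch (the device names it prints vary per system):

```shell
# Print the block device backing the root of the tree.
# On a box with a separate /home partition, running the same
# command on /home would show a different source device.
df --output=source,target /
```

The point is that two directories in the same tree can report different source devices, which is exactly the / vs /home split the question asks about.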
How is it possible to have / and /home on different partitions
To copy the contents of a folder, I've read the usage is:

cp -rfva ../foldersource/. ./

but this works too:

cp -rfva ../foldersource/* ./

Is there any difference? For example, if I want to delete the contents of a folder using .:

rm -rf ../foldersource/.

I get the error:

rm: refusing to remove '.' or '..' directory

but with the asterisk it's OK:

rm -rf ../foldersource/*

So, is the asterisk the better option, one that works anywhere?
There is a fundamental difference between these two argument forms, and it's an important one for understanding what is happening. With ../foldersource/. the argument is passed unchanged to the command, whether it's cp or rm or something else. It's up to the command whether that trailing dot has special semantics beyond the standard Unix convention of simply pointing to the directory it resides in; both rm and cp treat it as a special case (rm, for instance, refuses to remove '.' or '..'). With ../foldersource/* the argument is first expanded by the shell before the command is ever even executed and passed any arguments. Thus, rm never sees ../foldersource/*; it sees the expanded version: ../foldersource/file1.ext ../foldersource/file2.ext ../foldersource/childfolder1 and so on. This is important because operating systems limit the total size of the argument list passed to a command (ARG_MAX, typically on the order of a couple of megabytes on modern Linux), so a glob that expands to very many files can fail with "Argument list too long". Note also that an unquoted * skips hidden files (dotfiles) by default, whereas the /. form includes them.
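The expansion step is easy to observe directly; here is a throwaway demonstration (the directory and file names are made up for illustration):

```shell
# Two files in a scratch directory; compare an unquoted glob,
# which the shell expands before the command runs, with a quoted one.
dir=$(mktemp -d)
touch "$dir/a.txt" "$dir/b.txt"
cd "$dir"
echo *     # prints: a.txt b.txt  (echo receives two arguments)
echo '*'   # prints: *            (echo receives the literal glob)
```

The same mechanism is why rm -rf ../foldersource/* works: rm only ever sees a list of concrete file names.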
Difference when copying folder contents between /. and /* in Linux
How does one find out the true number of processes that is running on your system? A number of articles mention using ps in order to count the number of processes. But recently I looked at cat /proc/stat, and it outputted: cpu 972 0 1894 189609 236 26 490 0 0 0 cpu0 972 0 1894 189609 236 26 490 0 0 0 intr 101595 157 10 0 0 0 0 0 0 3 0 0 0 136 0 0 0 1406 0 0 14936 934 19133 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ctxt 244344 btime 1405754990 processes 3912 procs_running 3 procs_blocked 0 softirq 122778 0 48263 439 15599 19037 0 1 0 7405 32034 It shows the number of processes as 3912. Using something like ps -A --no-headers | wc -l only shows 173 processes. Why does /proc/stat show so many more processes (an increase of 3739 processes)? Who is giving the right number?
Look at the documentation for proc(5), and you'll see this for the processes field: "Number of forks since boot." So it's simply not the number you're looking for. ps will give you that number, as you already know; counting the directories with all-numeric names under /proc is another approach.
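The /proc-counting approach mentioned above can be sketched in one line (the number changes from run to run, and the pipeline's own processes are briefly included):

```shell
# Every live process has an all-numeric directory under /proc;
# counting them approximates the 'ps -A' count.
ls /proc | grep -Ec '^[0-9]+$'
```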
Using /proc/stat for the number of processes running on the system
I'm trying to run a command, write that to a file, and then I'm using that file for something else. The gist of what I need is: myAPICommand.exe parameters > myFile.txt The problem is that myAPICommand.exe fails a lot. I attempt to fix some of the problems and rerun, but I get hit with "cannot overwrite existing file". I have to run a separate rm command to cleanup the blank myFile.txt and then rerun myAPICommand.exe. It's not the most egregious problem, but it is annoying. How can I avoid writing a blank file when my base command fails?
You must have "noclobber" set; check the following example:

$ echo 1 > 1   # create file
$ cat 1
1
$ echo 2 > 1   # overwrite file
$ cat 1
2
$ set -o noclobber
$ echo 3 > 1   # file is now protected from accidental overwrite
bash: 1: cannot overwrite existing file
$ cat 1
2
$ echo 3 >| 1  # temporarily allow overwrite
$ cat 1
3
$ echo 4 > 1
bash: 1: cannot overwrite existing file
$ cat 1
3
$ set +o noclobber
$ echo 4 > 1
$ cat 1
4

"noclobber" only prevents overwriting; you can still append:

$ echo 4 > 1
bash: 1: cannot overwrite existing file
$ echo 4 >> 1

To check whether you have that flag set, type echo $- and see if the C flag is present (or run set -o | grep clobber).

Q: How can I avoid writing a blank file when my base command fails? Any requirements? You could simply store the output in a variable and then check whether it is empty. Check the following example (note that the way you check the variable needs adjusting to your needs; in the example I didn't quote it, or use anything like ${cmd_output+x} which checks whether the variable is set, in order to avoid writing a file containing only whitespace):

$ cmd_output=$(echo)
$ test $cmd_output && echo yes || echo no
no
$ cmd_output=$(echo -e '\n\n\n')
$ test $cmd_output && echo yes || echo no
no
$ cmd_output=$(echo -e ' ')
$ test $cmd_output && echo yes || echo no
no
$ cmd_output=$(echo -e 'something')
$ test $cmd_output && echo yes || echo no
yes
$ cmd_output=$(myAPICommand.exe parameters)
$ test $cmd_output && echo "$cmd_output" > myFile.txt

Example without using a single variable holding the whole output:

log() { while read data; do echo "$data" >> myFile.txt; done; }
myAPICommand.exe parameters | log
How can I output a command to a file, without getting a blank file on error?
The echo in coreutils seems to be ubiquitous, but not every system will have it in the same place (usually /bin/echo). What's the safest way to invoke this echo without knowing where it is? I'm comfortable with the command failing if the coreutils echo binary doesn't exist on the system -- that's better than echo'ing something different than I want. Note: The motivation here is to find the echo binary, not to find a set of arguments where every shell's echo builtin is consistent. There doesn't seem to be a way to safely print just a hyphen via the echo builtin, for example, without knowing if you're in zsh or bash.
Note that coreutils is a software bundle developed by the GNU project to provide a set of basic Unix utilities to GNU systems. You'll only find coreutils echo out of the box on GNU systems (Debian, Trisquel, Cygwin, Fedora, CentOS...). On other systems, you'll find a different implementation (generally with different behaviour, as echo is one of the least portable applications). FreeBSD will have FreeBSD echo, most Linux-based systems will have busybox echo, AIX will have AIX echo... Some systems will even have more than one (like /bin/echo and /usr/ucb/echo on Solaris, the latter being part of a package that is now optional in later versions of Solaris, like the GNU utilities package from which you'd get /usr/gnu/bin/echo), all with different CLIs.

GNU coreutils has been ported to most Unix-like (and even non-Unix-like, such as MS Windows) systems, so you would be able to compile coreutils' echo on most systems, but that's probably not what you're looking for. Also note that you'll find incompatibilities between versions of coreutils echo (for instance it used not to recognise \x41 sequences with -e) and that its behaviour can be affected by the environment (the POSIXLY_CORRECT variable).

Now, to run the echo from the file system (found by a look-up of $PATH), like for every other builtin, the typical way is with env:

env echo this is not the builtin echo

In zsh (when not emulating other shells), you can also do:

command echo ...

without having to execute an extra env command. But I hope the text above makes it clear that this is not going to help with regards to portability. For portability and reliability, use printf instead.
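A small sketch of both points: env finds an external echo via a $PATH lookup, and printf prints the lone hyphen that the question says the builtins make awkward:

```shell
# env bypasses the shell builtin by doing a $PATH lookup:
env echo hello      # prints: hello
# printf handles arbitrary strings portably, including a lone hyphen:
printf '%s\n' -     # prints: -
```

Note that env finds whatever echo is first in $PATH, which is only the coreutils one on GNU systems, reinforcing the portability caveat above.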
What's the safest, most portable way to invoke the echo binary?
I always have /proc/sys/kernel/panic set to 0. Looking at the description of this option on kernel.org we can see:

panic: The value in this file represents the number of seconds the kernel waits before rebooting on a panic. When you use the software watchdog, the recommended setting is 60.

From here one can conclude that 0 means waiting 0 seconds before reboot, i.e. an immediate reboot. But the proc man page states the following:

/proc/sys/kernel/panic  This file gives read/write access to the kernel variable panic_timeout. If this is zero, the kernel will loop on a panic; if nonzero, it indicates that the kernel should autoreboot after this number of seconds. When you use the software watchdog device driver, the recommended setting is 60.

Here 0 means the opposite: never reboot. So why does such a trusted source give such misleading info? Or maybe the man page is inaccurate? P.S. Just from a hint in the panic_on_oops section (if you happen to read it) you can guess that the man page is right. Or you can investigate the kernel source code, if you are technically skilled enough.
The authoritative source is the implementation in the kernel, so let's look at that first. The panic entry in sysctl corresponds to a kernel variable called panic_timeout. This is a signed integer, used to control behaviour on panic as follows:

- if panic_timeout is strictly positive, the kernel waits panic_timeout seconds after a panic;
- if panic_timeout is non-zero, the kernel reboots after a panic (after waiting, if appropriate);
- if the kernel hasn't rebooted, it prints a message and loops forever.

So the manpage is correct, and the kernel's own documentation was incomplete; sysctl/kernel.rst now documents panic in more detail. This was fixed in version 5.7-rc1 of the kernel.
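You can inspect the current value on a running system; reading needs no privileges, while the write shown is commented out since it requires root (values here are illustrative):

```shell
cat /proc/sys/kernel/panic        # 0 means: loop forever on panic
# sysctl -w kernel.panic=60       # reboot 60 s after a panic (root only)
```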
Linux Kernel.org misleading about kernel panic /proc/sys/kernel/panic
I downloaded a torrent file: http://cdimage.debian.org/cdimage/stretch_di_rc1/amd64/bt-cd/debian-stretch-DI-rc1-amd64-netinst.iso.torrent Now I want to parse/read it so that I can find out things like: a. Which software was used to create the torrent file? b. The size of the ISO image, and the size and number of pieces. c. The number of trackers for the ISO image. All of which is metadata. I guess I'm looking for what mediainfo is for a media file:

[$] mediainfo Big_Buck_Bunny_small.ogv
General
ID                       : 30719 (0x77FF)
Complete name            : Big_Buck_Bunny_small.ogv
Format                   : Ogg
File size                : 2.65 MiB
Duration                 : 1 min 19 s
Overall bit rate mode    : Variable
Overall bit rate         : 280 kb/s
Writing application      : ffmpeg2theora-0.25
SOURCE_OSHASH            : cc9e38e85baf7573

Video
ID                       : 20319 (0x4F5F)
Format                   : Theora
Duration                 : 1 min 19 s
Bit rate                 : 212 kb/s
Nominal bit rate         : 238 kb/s
Width                    : 240 pixels
Height                   : 134 pixels
Display aspect ratio     : 16:9
Frame rate               : 24.000 FPS
Compression mode         : Lossy
Bits/(Pixel*Frame)       : 0.275
Stream size              : 2.01 MiB (76%)
Writing library          : Xiph.Org libtheora 1.1 20090822 (Thusnelda)

Audio
ID                       : 13221 (0x33A5)
Format                   : Vorbis
Format settings, Floor   : 1
Duration                 : 1 min 19 s
Bit rate mode            : Variable
Bit rate                 : 48.0 kb/s
Channel(s)               : 2 channels
Sampling rate            : 48.0 kHz
Compression mode         : Lossy
Stream size              : 465 KiB (17%)
Writing library          : libVorbis 20090709 (UTC 2009-07-09)

Is there something similar? I am looking for a CLI tool.
transmission has a tool for that:

$ transmission-show debian-stretch-DI-rc1-amd64-netinst.iso.torrent
Name: debian-stretch-DI-rc1-amd64-netinst.iso
File: debian-stretch-DI-rc1-amd64-netinst.iso.torrent

GENERAL

  Name: debian-stretch-DI-rc1-amd64-netinst.iso
  Hash: 13d51b233d37965a7137dd65858d73c5a2e7ded4
  Created by:
  Created on: Fri Jan 13 12:29:09 2017
  Comment: "Debian CD from cdimage.debian.org"
  Piece Count: 1184
  Piece Size: 256.0 KiB
  Total Size: 310.4 MB
  Privacy: Public torrent

TRACKERS

  Tier #1
  http://bttracker.debian.org:6969/announce

FILES

  debian-stretch-DI-rc1-amd64-netinst.iso (310.4 MB)

Another one would be intermodal, which besides showing metadata can also create and verify torrents: https://rodarmor.com/blog/intermodal
Is there a CLI tool to parse/read and show the metadata from a torrent file?
If I install an application on Linux, for example Debian GNU/Linux, the files of the application are copied to many different directories in the file system. Some scripts go into /usr/share or /usr/local, other files into /var, logs into /var/log, configuration into /etc, and so on. For me this is OK, because I learned something about the file system and most of the directories are there to hold files for a specific purpose. This fits very nicely with the Unix philosophy "do one thing and do it well". But my question is: what are the advantages of such a directory structure? Or is it simply the heritage of the old Unix days? (E.g. in comparison with the one Windows uses, where all files for an application are in one specific "folder".)
The easiest advantage to think of, it seems to me, is that similar files live in the same directory tree. Configuration files live in /etc, log files and/or run-time trace files live in /var/log, executables live in /usr/bin, run-time information like PID files lives in /var/run. You want to know what's in the NTP configuration file? Change directory to /etc and do ls ntp*. You want some program to watch executable files so that a traditional file-system virus doesn't infect them? Everything in /usr/bin and /usr/local/bin needs watching. The second advantage I can think of is that the Unix style of organization promotes a separation of data and executables. Executables live in a directory well away from where templates live (/usr/share, probably), and well away from where data lives. That separation might be one reason why Unix/Linux/*BSD have more resistance to file-system viruses than Windows does, or than the old pre-OS X Mac had.
What are the advantages of the Unix file system structure
Ref: the following question: Drive name? What is the correct term for the "sda" part of "/dev/sda"? Given: I have a system (in this case a Raspberry Pi, but this could be relevant to any 'nix system). It is running a version of Linux and it can be assumed that all normal Linux commands work. The boot device can be either an SD card or a USB attached storage device. If booted from an attached storage device, the device "basename" is sd(x). If booted from an SD card, the device "basename" becomes something like "mm(xxxx)". What I want to do: I want to be able to determine programmatically (in a shell script if possible) the kind of device the system was booted from, and change certain characteristics based on what the boot device is. Simple example: Boot device = "mmxxxxx": print "Booted from SD card!" Boot device = "sda": print "Booted from Attached Storage!" What I want is to extract the (for want of a better term) major device type the root partition is derived from (i.e. "sd", "mm", or whatever, depending on what device is mounted as the root partition). I suspect that I could, somehow, list the device that is mounted as root, without listing everything in mount, and then extract the first two letters after the final slash. . .
The mount point is controlled by systemd. You can list the systemd mount unit files with:

systemctl list-units --type=mount --all

sample output:

-.mount           loaded active mounted Root Mount
boot-efi.mount    loaded active mounted /boot/efi
...

The root partition is controlled by -.mount:

systemctl status -- -.mount
● -.mount - Root Mount
   Loaded: loaded (/etc/fstab; generated)
   Active: active (mounted) since Wed 2024-02-07
    Where: /
     What: /dev/sdaX
     Docs: man:fstab(5)
           man:systemd-fstab-generator(8)

To extract the device name:

systemctl status -- -.mount | grep -oP '(?<=What: ).*' | xargs basename

Better, as pointed out by @steeldriver in this comment:

systemctl show --value --property=What -- -.mount | xargs basename

From man systemd.mount: What= takes an absolute path of a device node, file or other resource to mount.
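A systemd-free sketch of the same idea is to read the root device from /proc/mounts and branch on the basename prefix. The prefixes below match the question's examples; exact device names vary per system:

```shell
# Find the device mounted at / and branch on its name prefix.
rootdev=$(awk '$2 == "/" {dev=$1} END {print dev}' /proc/mounts)
case "$(basename "$rootdev")" in
    mmcblk*) echo "Booted from SD card!" ;;
    sd*)     echo "Booted from Attached Storage!" ;;
    *)       echo "Booted from $rootdev" ;;
esac
```

Taking the last matching /proc/mounts entry mirrors what the kernel actually has mounted at / after any overmounts.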
How to programmatically determine the device name/basename of the root partition?
With the below function signature:

ssize_t read(int fd, void *buf, size_t count);

While I do understand, based on the man page, that in the success case the return value can be less than count, can the return value exceed count in any instance?
A call to read() might result in more data being read behind the scenes than was requested (e.g. to read a full block from storage, or read ahead the following blocks), but read() itself never returns more data than was requested (count). If it did, the consequence could well be a buffer overflow since buf is often sized for only count bytes. POSIX (see the link above) specifies this limit explicitly: Upon successful completion, where nbyte is greater than 0, read() shall mark for update the last data access timestamp of the file, and shall return the number of bytes read. This number shall never be greater than nbyte. The Linux man page isn’t quite as explicit, but it does say read() attempts to read up to count bytes from file descriptor fd into the buffer starting at buf. (Emphasis added.)
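A quick way to see the ceiling from the shell (my own sketch, not from the answer): dd issues read() calls of bs bytes per block, and the byte count delivered can be short but never exceeds what was requested:

```shell
# Request exactly one 16-byte read from an endless source;
# the result can be short, but never longer than 16 bytes.
dd if=/dev/zero bs=16 count=1 2>/dev/null | wc -c     # prints: 16
```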
Can read() return value exceed the count value?
I need to create a while loop such that if dmesg returns some/any value, it kills a given process. Here is what I have:

#!/bin/bash
while [ 1 ]; do
    BUG=$(dmesg | grep "BUG: workqueue lockup" &> /dev/null)
    if [ ! -z "$BUG" ]; then
        killall someprocessname
    else
        break
    fi
done

I don't know if instead of ! -z I should do test -n "$BUG" inside the [ ]. I think with -n it says something about expecting a binary. I don't know if the script will even work, because the BUG lockup halts every process, but there are still a few more lines in dmesg before the computer gets completely borked; maybe I can catch up and kill the process.
Some issues:

You are running this in a busy loop, which will consume as much CPU as it can. This is one instance where sleeping could conceivably be justified. However, recent versions of dmesg have a flag to follow the output, so you could rewrite the whole thing as (untested):

while true
do
    dmesg --follow | tail --follow --lines=0 | grep --quiet 'BUG: workqueue lockup'
    killall someprocessname
done

The code should be indented to be readable.

It may seem strange, but [ is the same as test; see help [.
How can I create an infinite loop that kills a process if something is found in dmesg?
I just installed Mint 18 as a virtual machine using VMware 12. My problem is that I can't install VMware Tools. At first I tried to install open-vm-tools, as recommended by Mint, but it didn't work, so I uninstalled it and then tried to install the default vmware-tools, but that can't be installed either.
Forget VMware Tools; use:

sudo apt-get install open-vm-tools open-vm-tools-desktop

Then do a full restart and check that the guest screen resizes when the host window resizes.
Problem with Mint 18 and VMware Tools
How do the md devices get assembled at bootup in Ubuntu? Is /etc/mdadm/mdadm.conf truly the relevant factor here? My mdadm.conf is sound; I checked that while I was in the rescue CD environment. When running mdadm -A --scan it finds and assigns the device names as desired. The mdadm.conf contains AUTO -all to take all automatism out of assembling the arrays. What I need is to be able to auto-assemble the md devices as outlined in mdadm.conf at boot time, or for assembly to honor the super-minor value for the 0.9 array and the name (apparently <hostname>:<super-minor>) for the 1.2 arrays and do the right thing without mdadm.conf. What puzzle piece am I missing?

I have the following problem. There are two md devices with RAID1 (md0 and md1) and one with RAID6 (md2). I am referring to them by the desired device names. md0 has meta-data version 0.9; the other two have version 1.2. md0 maps to / and the other two are not relevant for booting. The boot drive is GPT-partitioned. There is a glue "BIOS Boot Partition" (sda1) on it. grub-install --no-floppy /dev/sda reports success.

md0 == sda3 + sdb3
md1 == sda2 + sdb2
md2 == sdc + sdd + sde + sdf + sdg + sdh
sda1 and sdb1 are "BIOS Boot Partition" each

GRUB2 is happy with the /boot/grub/device.map I gave it, and I added part_gpt, raid, mdraid09 and ext2 to the modules to preload in GRUB2. Since I still had my root volume in the rescue environment, I simply mounted everything and then chrooted into it:

mkdir /target
mount /dev/md0 /target
mount -o bind /dev /target/dev
mount -o bind /dev/pts /target/dev/pts
mount -o bind /sys /target/sys
mount -o bind /proc /target/proc
chroot /target /bin/bash

From there I reset the super-minor on md0 (with meta-data 0.9) and the name on md1 and md2. I also verified that it worked using mdadm --detail .... Other than that, I adjusted /etc/default/grub, ran update-grub, and also ran grub-install --no-floppy /dev/sda and grub-install --no-floppy /dev/sdb.
After that, when booting, I am always dropped into the initramfs rescue shell, however, because the root file system could not be mounted. The reason, after checking /proc/mdstat appears to be that the respective md device doesn't even get assembled and run. Not to mention that the other two (meta-data version 1.2) drives receive a device number somewhere in the 125..127 range. Note: GRUB2 comes up from the boot disk. So at the very least it has been embedded correctly. The issue is the transition from the initial rootfs to the proper root file system.
Basic Boot Process

Grub

Grub reads its disk, md, filesystem, etc. code from the MBR. Grub finds its /boot partition, and reads the rest of itself out of it, including the config and any modules the config specifies need loading. Grub follows the instructions in the config, which typically tell it to load a kernel and initramfs into memory, and execute the kernel.

There is a fallback mode, for when Grub can't actually read the filesystem, either because there wasn't enough space to embed all that code in the boot record, or because it doesn't know the filesystem or layers under it. In this case, Grub embeds a list of sectors, and reads code from them. This is much less robust, and best avoided. It may even be able to load the kernel and initramfs like that (not sure).

Kernel

The kernel then takes control, and does a lot of basic hardware init. This stage is fairly quick. Next, the kernel unpacks the initramfs to a tmpfs, and looks for /init on that tmpfs. It then executes /init (execute in the normal sense; the kernel is fully running at this point). This is, by the way, a plain old shell script.

Initramfs

You can extract the initramfs by hand by doing something like mkdir /tmp/foo; cd /tmp/foo; zcat /boot/initrd.img-3.8-trunk-amd64 | cpio -idmv. The initramfs is responsible for loading all the drivers, starting udev, and finding the root filesystem. This is the step that is failing for you: it can't find the root filesystem, so it bails out. Once the initramfs has finished, it has the root filesystem mounted, and hands over control to /sbin/init.

System boot

At this point, your init takes over. I think Ubuntu is using Upstart currently.

What's Broken

I'm not entirely sure what's broken (partly, I confess, because I'm much more familiar with how it works on Debian than Ubuntu, though it's similar), but I have a couple of suggestions:

The initramfs has its own copy of mdadm.conf. You may just need to run update-initramfs -u to fix it.

Look at the boot messages. There may be an error. Get rid of 'quiet' and 'splash', and maybe add 'verbose' to your kernel line, to actually see them.

Depending on the storage used, you may need to set the rootdelay parameter.

When you're dumped to the shell prompt, you don't have a lot of commands, but you do have mdadm. Try to figure out what went wrong. If you fix the problem, boot can continue.
Ubuntu: How do the md devices get assembled at bootup?
I'm on Raspbian. I've tried to receive data with:

nc -4 -l -v -k -p 5004

which results in:

Listening on [0.0.0.0] (family 2, port 5004)
nc: getnameinfo: Temporary failure in name resolution

The route command returns this:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.11.10.1      0.0.0.0         UG    0      0        0 00c0caaaeb87
10.11.10.0      0.0.0.0         255.255.255.0   U     0      0        0 00c0caaaeb87

The gateway is reachable:

PING 10.11.10.1 (10.11.10.1) 56(84) bytes of data.
64 bytes from 10.11.10.1: icmp_seq=1 ttl=64 time=4.03 ms

These are the IPs:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether b8:27:eb:4b:46:37 brd ff:ff:ff:ff:ff:ff
3: intwifi0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:27:eb:1e:13:62 brd ff:ff:ff:ff:ff:ff
4: 00c0caaaeb87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2304 qdisc mq state UP group default qlen 1000
    link/ether 00:c0:ca:aa:eb:87 brd ff:ff:ff:ff:ff:ff
    inet 10.11.10.10/24 brd 10.11.10.255 scope global 00c0caaaeb87
       valid_lft forever preferred_lft forever

However, if I connect eth0 and run dhclient eth0, nc starts working fine. What am I missing?
This is the content of /etc/resolv.conf for both states is the same: # Generated by resolvconf nameserver 10.11.10.1 This is the strace output of the nc process: strace: Process 1780 attached accept(3, {sa_family=AF_INET, sin_port=htons(41250), sin_addr=inet_addr("10.11.10.11")}, [128->16]) = 4 socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 5 connect(5, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) close(5) = 0 socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 5 connect(5, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) close(5) = 0 open("/etc/nsswitch.conf", O_RDONLY|O_CLOEXEC) = 5 fstat64(5, {st_mode=S_IFREG|0644, st_size=497, ...}) = 0 read(5, "# /etc/nsswitch.conf\n#\n# Example"..., 4096) = 497 read(5, "", 4096) = 0 close(5) = 0 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 5 fstat64(5, {st_mode=S_IFREG|0644, st_size=104908, ...}) = 0 mmap2(NULL, 104908, PROT_READ, MAP_PRIVATE, 5, 0) = 0x76f45000 close(5) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/arm-linux-gnueabihf/libnss_files.so.2", O_RDONLY|O_CLOEXEC) = 5 read(5, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\240\31\0\0004\0\0\0"..., 512) = 512 lseek(5, 37440, SEEK_SET) = 37440 read(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1120) = 1120 lseek(5, 37088, SEEK_SET) = 37088 read(5, "A.\0\0\0aeabi\0\1$\0\0\0\0056\0\6\6\10\1\t\1\n\2\22\4\23\1\24"..., 47) = 47 fstat64(5, {st_mode=S_IFREG|0644, st_size=38560, ...}) = 0 mmap2(NULL, 127744, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 5, 0) = 0x76d98000 mprotect(0x76da1000, 61440, PROT_NONE) = 0 mmap2(0x76db0000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 5, 0x8000) = 0x76db0000 mmap2(0x76db2000, 21248, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x76db2000 close(5) = 0 mprotect(0x76db0000, 4096, PROT_READ) = 0 
munmap(0x76f45000, 104908) = 0 getpid() = 1780 open("/etc/resolv.conf", O_RDONLY|O_CLOEXEC) = 5 fstat64(5, {st_mode=S_IFREG|0644, st_size=48, ...}) = 0 read(5, "# Generated by resolvconf\nnamese"..., 4096) = 48 read(5, "", 4096) = 0 close(5) = 0 uname({sysname="Linux", nodename="rx", ...}) = 0 open("/etc/host.conf", O_RDONLY|O_CLOEXEC) = 5 fstat64(5, {st_mode=S_IFREG|0644, st_size=9, ...}) = 0 read(5, "multi on\n", 4096) = 9 read(5, "", 4096) = 0 close(5) = 0 open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 5 fstat64(5, {st_mode=S_IFREG|0644, st_size=119958, ...}) = 0 read(5, "127.0.0.1\tlocalhost\n::1\t\tlocalho"..., 4096) = 4096 close(5) = 0 open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 5 fstat64(5, {st_mode=S_IFREG|0644, st_size=119958, ...}) = 0 read(5, "127.0.0.1\tlocalhost\n::1\t\tlocalho"..., 4096) = 4096 read(5, "0.1 wbc\n127.0.0.1 wbc\n127.0.0.1 "..., 4096) = 4096 read(5, "1 wbc\n127.0.0.1 wbc\n127.0.0.1 wb"..., 4096) = 4096 read(5, "1 wbc\n127.0.0.1 wbc\n127.0.0.1 wb"..., 4096) = 4096 read(5, "\n127.0.0.1 groundpi\n127.0.0.1 wb"..., 4096) = 4096 read(5, "\n127.0.0.1 wbc\n127.0.0.1 wbc\n127"..., 4096) = 4096 read(5, "127.0.0.1 wbc\n127.0.0.1 wbc\n127."..., 4096) = 4096 read(5, "127.0.0.1 wbc\n127.0.0.1 wbc\n127."..., 4096) = 4096 read(5, "bc\n127.0.0.1 wbc\n127.0.0.1 wbc\n1"..., 4096) = 4096 read(5, "0.1 wbc\n127.0.0.1 wbc\n127.0.0.1 "..., 4096) = 4096 read(5, "bc\n127.0.0.1 wbc\n127.0.0.1 groun"..., 4096) = 4096 read(5, ".1 wbc\n127.0.0.1 wbc\n127.0.0.1 w"..., 4096) = 4096 read(5, ".0.0.1 wbc\n127.0.0.1 wbc\n127.0.0"..., 4096) = 4096 read(5, ".0.0.1 wbc\n127.0.0.1 wbc\n127.0.0"..., 4096) = 4096 read(5, "0.0.1 groundpi\n127.0.0.1 wbc\n127"..., 4096) = 4096 read(5, ".0.0.1 wbc\n127.0.0.1 wbc\n127.0.0"..., 4096) = 4096 read(5, "7.0.0.1 wbc\n127.0.0.1 wbc\n127.0."..., 4096) = 4096 read(5, "127.0.0.1 wbc\n127.0.0.1 wbc\n127."..., 4096) = 4096 read(5, "0.1 wbc\n127.0.0.1 wbc\n127.0.0.1 "..., 4096) = 4096 read(5, "0.1 wbc\n127.0.0.1 wbc\n127.0.0.1 "..., 4096) = 4096 read(5, 
".0.1 wbc\n127.0.0.1 wbc\n127.0.0.1"..., 4096) = 4096 read(5, "1 wbc\n127.0.0.1 groundpi\n127.0.0"..., 4096) = 4096 read(5, "c\n127.0.0.1 wbc\n127.0.0.1 ground"..., 4096) = 4096 read(5, "1 wbc\n127.0.0.1 wbc\n127.0.0.1 wb"..., 4096) = 4096 read(5, "\n127.0.0.1 wbc\n127.0.0.1 wbc\n127"..., 4096) = 4096 read(5, ".0.0.1 wbc\n127.0.0.1 wbc\n127.0.0"..., 4096) = 4096 read(5, "127.0.0.1 wbc\n127.0.0.1 wbc\n127."..., 4096) = 4096 read(5, "\n127.0.0.1 rx\n127.0.0.1 wbc\n127."..., 4096) = 4096 read(5, "0.0.1 wbc\n127.0.0.1 wbc\n127.0.0."..., 4096) = 4096 read(5, "1 rx\n127.0.0.1 wbc\n127.0.0.1 wbc"..., 4096) = 1174 read(5, "", 4096) = 0 close(5) = 0 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 5 fstat64(5, {st_mode=S_IFREG|0644, st_size=104908, ...}) = 0 mmap2(NULL, 104908, PROT_READ, MAP_PRIVATE, 5, 0) = 0x76f45000 close(5) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/arm-linux-gnueabihf/libnss_dns.so.2", O_RDONLY|O_CLOEXEC) = 5 read(5, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\220\n\0\0004\0\0\0"..., 512) = 512 lseek(5, 16892, SEEK_SET) = 16892 read(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1120) = 1120 lseek(5, 16540, SEEK_SET) = 16540 read(5, "A.\0\0\0aeabi\0\1$\0\0\0\0056\0\6\6\10\1\t\1\n\2\22\4\23\1\24"..., 47) = 47 fstat64(5, {st_mode=S_IFREG|0644, st_size=18012, ...}) = 0 mmap2(NULL, 82080, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 5, 0) = 0x76d83000 mprotect(0x76d87000, 61440, PROT_NONE) = 0 mmap2(0x76d96000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 5, 0x3000) = 0x76d96000 close(5) = 0 mprotect(0x76d96000, 4096, PROT_READ) = 0 munmap(0x76f45000, 104908) = 0 stat64("/etc/resolv.conf", {st_mode=S_IFREG|0644, st_size=48, ...}) = 0 open("/etc/resolv.conf", O_RDONLY|O_CLOEXEC) = 5 fstat64(5, {st_mode=S_IFREG|0644, st_size=48, ...}) = 0 read(5, "# Generated by resolvconf\nnamese"..., 4096) = 48 read(5, "", 4096) = 0 close(5) = 0 uname({sysname="Linux", 
nodename="rx", ...}) = 0 socket(AF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 5 connect(5, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.7.208.3")}, 16) = 0 poll([{fd=5, events=POLLOUT}], 1, 0) = 1 ([{fd=5, revents=POLLOUT}]) send(5, ".\242\1\0\0\1\0\0\0\0\0\0\00211\00210\00211\00210\7in-addr"..., 42, MSG_NOSIGNAL) = 42 poll([{fd=5, events=POLLIN}], 1, 5000) = 0 (Timeout) poll([{fd=5, events=POLLOUT}], 1, 0) = 1 ([{fd=5, revents=POLLOUT}]) send(5, ".\242\1\0\0\1\0\0\0\0\0\0\00211\00210\00211\00210\7in-addr"..., 42, MSG_NOSIGNAL) = 42 poll([{fd=5, events=POLLIN}], 1, 5000) = 0 (Timeout) close(5) = 0 write(2, "nc: ", 4) = 4 write(2, "getnameinfo: Temporary failure i"..., 49) = 49 write(2, "\n", 1) = 1 exit_group(1) = ? +++ exited with 1 +++
Use nc with its -n flag:

       -n      numeric-only IP addresses, no DNS

With this I've solved it — nc no longer tries to reverse-resolve the connecting peer's address, so the getnameinfo lookup (the DNS queries that time out in the strace above) never happens.
netcat nc: getnameinfo: Temporary failure in name resolution
1,301,414,593,000
I have a file with the following file mode bits (a+rw): [0] mypc<johndoe>:~>sudo touch /tmp/test [0] mypc<johndoe>:~>sudo chmod a+rw /tmp/test [0] mypc<johndoe>:~>ls -l /tmp/test -rw-rw-rw- 1 root root 0 Mar 13 11:09 /tmp/test Why can't I remove the file? [0] mypc<johndoe>:~>rm /tmp/test rm: cannot remove '/tmp/test': Operation not permitted
The /tmp directory is conventionally marked with the restricted deletion flag, which appears as the permission letter t or T in ls output. Restricted deletion implies several things. In the general case, it implies that only the owner of the file, or the owner of /tmp itself, can delete a file/directory in /tmp. You cannot delete the file, because you are not its owner — root is. Try running rm with sudo, which you probably forgot:

sudo rm /tmp/test

More specifically to Linux alone, the restricted deletion flag (on a world-writable directory such as /tmp) also enables the protected_symlinks, protected_hardlinks, protected_regular, and protected_fifos restrictions, which in such directories respectively prevent users from following symbolic links that they do not own, from making hard links to files that they do not own, from opening FIFOs that they do not own, and from opening existing files that they do not own when they expected to create them. This will surprise you with permission errors when doing various further things as root when you do use sudo. More on these at questions like "Hard link permissions behavior different between CentOS 6 and CentOS 7", "Symbolic link not working as expected when changes user", and "Group permissions for root not working in /tmp".
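You can reproduce the flag itself without touching /tmp — a minimal sketch using a throwaway directory (no root needed):

```shell
# Give a scratch directory the mode /tmp conventionally has: 1777,
# i.e. world-writable plus the restricted deletion ("sticky") bit.
d=$(mktemp -d)
chmod 1777 "$d"

# The trailing 't' in the mode string is the restricted deletion flag.
stat -c '%A' "$d"    # → drwxrwxrwt

rmdir "$d"
```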
Can't remove a file with file mode bits a+rw
1,301,414,593,000
In the mount man page, errors=remount-ro is listed as an option for mounting FAT, but the option doesn't appear in the ext4 options catalog. I know what this option means — in case of an error, remount the partition read-only — but I don't know whether it's a correct option or only a bug.
It is perfectly valid for ext4, and is defined in the ext4 manpage:

errors={continue|remount-ro|panic}
       Define the behavior when an error is encountered. (Either ignore errors and just mark the filesystem erroneous and continue, or remount the filesystem read-only, or panic and halt the system.) The default is set in the filesystem superblock, and can be changed using tune2fs(8).

Some versions of the mount manpage do list this option for ext4; others refer to the manpage linked above:

Mount options for ext2, ext3 and ext4
       See the options section of the ext2(5), ext3(5) or ext4(5) man page (the e2fsprogs package must be installed).
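For example, an /etc/fstab entry requesting this behavior explicitly might look like the following (device and mount point are placeholders). The superblock default mentioned above can be inspected with tune2fs -l /dev/sda2 | grep 'Errors behavior' or changed with tune2fs -e remount-ro /dev/sda2 (both need root):

```
# /etc/fstab — errors=remount-ro is valid alongside other ext4 mount options
/dev/sda2  /  ext4  defaults,errors=remount-ro  0  1
```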
Why do I have "errors=remount-ro" option in my ext4 partition in my Linux?
1,301,414,593,000
Just to understand core IDs: I have 4 CPUs: $ cat /proc/cpu* | grep proc* processor: 0 processor: 1 processor: 2 processor: 3 and the result of nproc is also 4. But if I use cat /proc/cpu* | grep 'core id' I get the same twice core id: 0 core id: 2 core id: 0 core id: 2 Why they are not numbered like the CPUs and how to distinguish the same core IDs? The full contents of /proc/cpuinfo are as follows: processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 37 model name : Intel(R) Core(TM) i3 CPU M 380 @ 2.53GHz stepping : 5 microcode : 0x4 cpu MHz : 1066.000 cache size : 3072 KB physical id : 0 siblings : 4 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fdiv_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 11 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm tpr_shadow vnmi flexpriority ept vpid dtherm arat bugs : bogomips : 5053.71 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 37 model name : Intel(R) Core(TM) i3 CPU M 380 @ 2.53GHz stepping : 5 microcode : 0x4 cpu MHz : 2533.000 cache size : 3072 KB physical id : 0 siblings : 4 core id : 2 cpu cores : 2 apicid : 4 initial apicid : 4 fdiv_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 11 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm tpr_shadow vnmi flexpriority ept vpid dtherm arat bugs : bogomips 
: 5053.71 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 2 vendor_id : GenuineIntel cpu family : 6 model : 37 model name : Intel(R) Core(TM) i3 CPU M 380 @ 2.53GHz stepping : 5 microcode : 0x4 cpu MHz : 1466.000 cache size : 3072 KB physical id : 0 siblings : 4 core id : 0 cpu cores : 2 apicid : 1 initial apicid : 1 fdiv_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 11 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm tpr_shadow vnmi flexpriority ept vpid dtherm arat bugs : bogomips : 5053.71 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 3 vendor_id : GenuineIntel cpu family : 6 model : 37 model name : Intel(R) Core(TM) i3 CPU M 380 @ 2.53GHz stepping : 5 microcode : 0x4 cpu MHz : 1199.000 cache size : 3072 KB physical id : 0 siblings : 4 core id : 2 cpu cores : 2 apicid : 5 initial apicid : 5 fdiv_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 11 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm tpr_shadow vnmi flexpriority ept vpid dtherm arat bugs : bogomips : 5053.71 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management:
Your CPU has two cores and four threads, so seeing duplicated core identifiers is perfectly normal: each “processor” listed in /proc/cpuinfo is a logical processor, on a physical core, so you end up with two physical cores (ids 0 and 2), with four processors (ids 0, 1, 2, and 3). The core numbering seems odd, but that’s up to your firmware. The kernel calculates core identifiers based on the APIC id, which is assigned by the firmware, and in your case the ids are 0, 1, 4, and 5, which results in core ids of 0 and 2.
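You can make the pairing explicit by printing the two fields side by side (a small sketch; on systems with lscpu, lscpu -e shows the same mapping):

```shell
# Pair each "processor" line with its "core id" line; a core id that
# appears twice means two hyperthreads share that physical core.
grep -E '^(processor|core id)' /proc/cpuinfo | paste -d' ' - -
```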
Understanding core IDs
1,301,414,593,000
I can see the difference between /dev/tty and /dev/tty0 by testing the provided method from this question. But I really wonder about the practical usage of those devices (i.e. the situations in which they are used).
/dev/tty is the controlling tty of the current process, for any process that actually opens this special file. It isn't necessarily a virtual console device (/dev/ttyn), and can be a pty, a serial port, etc. If the controlling tty isn't a virtual console, then the process does not have to interact with console devices even if its pseudotty is actually implemented on the system console. E.g. for a shell in a terminal emulator under a locally-running X server, these programs form a chain of interactions like this:

   Unix shell
     ⇕ /dev/pts/2 (≡ /dev/tty for its processes)
 kernel pty driver
     ⇕ /dev/ptmx
 terminal emulator
     ⇕ X Window protocol
   X server
     ⇕ /dev/tty7 (≡ /dev/tty for the server)
 system console
 zxc↿⇂[_̈░░]
     user

Use of /dev/tty by userland programs includes:

- writing something to the controlling terminal, ignoring all redirections and pipes;
- making an ioctl() – see tty_ioctl(4) – for example, to detach from the terminal (TIOCNOTTY).

/dev/tty0 is the currently active (i.e. visible on the monitor) virtual console of the operating system. This special file is unlikely to be used much by system software, but /dev/console is virtually an “alias” for tty0, and /dev/console sees much use by syslog daemons and, sometimes, by the kernel itself.

Experiment to show the difference: run a root shell on tty3 (Ctrl+Alt+F3) or in a terminal emulator. Now

# sleep 2; echo test >/dev/tty

then quickly Ctrl+Alt+F2, wait for two seconds, and Ctrl+Alt+whatever back. Where do you see the output? And now the same test for /dev/tty0.
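A quick way to see /dev/tty in action, shown as a transcript since it needs a real terminal (the pts number will of course differ on your system): writes to /dev/tty land on the terminal even when stdout and stderr are redirected elsewhere.

```
$ tty
/dev/pts/2
$ ( echo "still visible" > /dev/tty ) > /dev/null 2>&1
still visible
```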
How Linux uses /dev/tty and /dev/tty0
1,301,414,593,000
I want to replace the default listening port of httpd to 9090. I can edit the line in httpd.conf file using below sed -i "/^Listen/c\Listen 9090" /etc/httpd/conf/httpd.conf But the line Listen 80 may have white space before it. How do I ignore this white space to match this line?
Change your matching pattern to also catch whitespace before Listen, in the following way:

/^\s*Listen/

That will match Listen with any amount of leading whitespace (spaces or tabs) — so it covers Listen, "  Listen", and so on.
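A quick way to check the pattern without touching the real config — the temp file and port here are just for the demonstration:

```shell
conf=$(mktemp)
printf '    Listen 80\n' > "$conf"

# \s* matches any leading whitespace (spaces or tabs) before Listen
sed -i '/^\s*Listen/c\Listen 9090' "$conf"

cat "$conf"   # → Listen 9090
rm -f "$conf"
```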
sed : Ignore line starting whitespace for match
1,301,414,593,000
I bought a Microsoft 3600 bluetooth mouse and never managed to get it working properly on Linux, but it works fine in other operating systems. If I stop moving the mouse for a few seconds (like 3 or 4 seconds) it "sleeps", and when I move it again the pointer won't move for the next few seconds. This makes the device completely unusable. I have already searched a lot about this and found lots of answers telling me to change the timeout in /etc/bluetooth/input.conf (I didn't have that file by default, though) or to create a udev rule. I tried both and the problem persists. Looking at journalctl, I get these messages when the mouse sleeps and I attempt to move it:

jul 03 19:41:46 nathan kernel: usb 1-6: new high-speed USB device number 24 using xhci_hcd
jul 03 19:41:46 nathan kernel: usb 1-6: Device not responding to setup address.
jul 03 19:41:47 nathan kernel: usb 1-6: Device not responding to setup address.
jul 03 19:41:47 nathan kernel: usb 1-6: device not accepting address 24, error -71
jul 03 19:41:47 nathan kernel: usb usb1-port6: unable to enumerate USB device

I also noticed a weird behavior: if I keep my USB gaming mouse plugged in, the Bluetooth mouse does not sleep and works fine. But if I remove the USB mouse the problem starts occurring again on the Bluetooth mouse. I'm currently running Manjaro with Linux 5.7.0, but the same problem used to occur on openSUSE too, with every single kernel version I tested (5.4.x, 5.5.x and 5.6.x).
It's been about 3 years since I bought this mouse, TODAY I managed to fix it. It was about the USB device auto suspending for some reason Do # lsusb -vt to get your USB device ID: λ ~> sudo lsusb -vt /: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/6p, 5000M ID 1d6b:0003 Linux Foundation 3.0 root hub /: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/12p, 480M ID 1d6b:0002 Linux Foundation 2.0 root hub |__ Port 5: Dev 2, If 0, Class=Wireless, Driver=btusb, 12M ID 0cf3:e500 Qualcomm Atheros Communications |__ Port 5: Dev 2, If 1, Class=Wireless, Driver=btusb, 12M ID 0cf3:e500 Qualcomm Atheros Communications Create a /etc/udev/rules.d/50-usb_power_save.rules file if you don't have it already Append to the end of this file: (replace the idVendor and idProduct with yours, see the example above) ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="0cf3", ATTR{idProduct}=="e500", ATTR{power/autosuspend}="-1" Reboot Today is a great day btw
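If you'd rather not reboot after editing the rules file, udev can usually be told to re-read its rules at runtime instead (root required) — a common alternative to the reboot above:

```
# udevadm control --reload-rules
# udevadm trigger
```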
Bluetooth mouse sleeps after a few seconds idle when there is no other mouse connected
1,301,414,593,000
I am using iptables with ipset on an Ubuntu server firewall. I am wondering if there is a command for importing a file containing a list of IPs to ipset. To populate an ipset, right now, I am adding each IP with this command:

ipset add manual-blacklist x.x.x.x

It would be very helpful if I could add multiple IPs with a single command, like importing a file or so. With the command

for ip in `cat /home/paul/ips.txt`; do ipset add manual-blacklist $ip;done

I get this response:

resolving to IPv4 address failed to parse 46.225.38.155

for each IP in ips.txt, and I do not know how to get it to work.
You can use the ipset save/restore commands.

ipset save manual-blacklist

You can run the above command to see how your save file needs to be formatted. Example output:

create manual-blacklist hash:net family inet hashsize 1024 maxelem 65536
add manual-blacklist 10.0.0.1
add manual-blacklist 10.0.0.2

And restore it with the command below:

ipset restore -! < ips.txt

Here we use -! to ignore errors, mostly caused by duplicate entries.
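If you already have a plain list of addresses (one per line), you can generate the restore file from it with sed — the set name and header here mirror the example above:

```shell
# ips.txt: one address per line
printf '46.225.38.155\n10.0.0.5\n' > ips.txt

{
  echo 'create manual-blacklist hash:net family inet hashsize 1024 maxelem 65536'
  sed 's/^/add manual-blacklist /' ips.txt
} > blacklist.restore

cat blacklist.restore
# Then load it (root required):
#   ipset restore -! < blacklist.restore
rm -f ips.txt blacklist.restore
```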
How to import multiple ip's to Ipset?
1,301,414,593,000
I've added a route in the network configuration file. How do I reload the routing table on CentOS without losing network service?
It's impossible to reload the routing table from the file without interrupting network service (I think you mean you don't want to use the service network restart command to apply the changes). If you make any change to a network configuration file, you need to restart the networking service to apply the new configuration. In your case, though, you can add the config (i.e. the new route, new gateway...) manually, so the new config takes effect immediately without interrupting the network. But a manually added config will be lost if you reboot the server. To make it persistent, you must also add it to the network configuration file.
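As a concrete sketch (the interface name, network, and gateway below are placeholders): apply the route at runtime with ip route, and put the same route in the interface's route file so it survives a reboot.

```
# Runtime — takes effect immediately, lost on reboot (root required):
#   ip route add 10.20.0.0/24 via 192.168.1.1 dev eth0

# Persistent — /etc/sysconfig/network-scripts/route-eth0:
10.20.0.0/24 via 192.168.1.1 dev eth0
```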
How to reload routing table on Centos without lost network service
1,301,414,593,000
I've been doing a bit of reading, and it looks like ZFS doesn't like disks being removed from non-redundant arrays: You can use the zpool detach command to detach a device from a mirrored storage pool. For example: # zpool detach zeepool c2t1d0 However, this operation is refused if there are no other valid replicas of the data. For example: # zpool detach newpool c1t2d0 cannot detach c1t2d0: only applicable to mirror and replacing vdevs The basic problem is understandable: removing the only copy of a piece of data (whether metadata or payload data) from an array would render that data unavailable. The examples for replacing devices in a ZFS storage pool give a basic step-by-step description for how to replace a device in a storage pool: offline the disk, remove the disk, insert the replacement disk, run zpool replace to inform ZFS of the change and online the disk. This obviously requires that the array does not depend on the disk being replaced, hence the array must have redundancy; if it does depend on the drive in question, this approach presents the same problem as above. What is the recommended way of replacing a disk in a non-redundant ZFS array? Assume that the existing disk is working properly, and assume that the replacement disk is at least the same size as the disk being replaced. (If the existing disk has failed, clearly all one could do is add a new disk and restore all files affected by the disk failure from backup.)
Don't know if things were that different in `13 but 'zfs replace' works on non-redundant pools. You just run the 1 command instead of detaching first. d1 is 1G, d2 is 2G, both are empty files in /tmp: /t/test #> zpool create test /tmp/test/d1 /t/test #> zpool set autoexpand=on test /t/test #> zpool status pool: test state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM test ONLINE 0 0 0 /tmp/test/d1 ONLINE 0 0 0 errors: No known data errors /t/test #> zpool list NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT test 1008M 258K 1008M - 0% 0% 1.00x ONLINE - /t/test #> zpool replace test /tmp/test/d1 /tmp/test/d2 /t/test #> zpool status pool: test state: ONLINE scan: resilvered 61K in 0h0m with 0 errors on Sun Sep 18 18:55:32 2016 config: NAME STATE READ WRITE CKSUM test ONLINE 0 0 0 /tmp/test/d2 ONLINE 0 0 0 errors: No known data errors /t/test #> zpool list NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT test 1.98G 408K 1.98G - 0% 0% 1.00x ONLINE -
How to replace a disk in a non-redundant ZFS pool?
1,301,414,593,000
I have seen some tutorials on extending an LVM2 logical volume. None of them instruct you to unmount the filesystem. They claim that you can extend an LVM volume while it is in use. Is this right?
That depends on whether the filesystem can be extended online. Most major Linux filesystems can be extended while they are mounted (btrfs, ext2, ext3, ext4, xfs, zfs) — the main exception is reiserfs. If you want to extend one of these filesystems on an LVM volume, you can extend the volume with lvextend, then extend the filesystem to fill the enlarged volume, all without deactivating or unmounting anything. If you're shrinking the volume, there are fewer filesystems that don't require mounting: only btrfs and zfs can be shrunk online. Shrink the filesystem, then call lvreduce to bring the logical volume size down to the size of the filesystem.
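As a sketch, with a hypothetical volume group vg0 and logical volume data (shown as a root-shell transcript, since the commands need real volumes):

```
# lvextend -L +10G /dev/vg0/data    # grow the logical volume while it is in use
# resize2fs /dev/vg0/data           # grow a mounted ext2/3/4 filesystem to fill it
# xfs_growfs /mnt/data              # xfs variant — takes the mount point instead
```

Recent versions of lvextend also accept -r/--resizefs, which runs the appropriate filesystem resize for you in one step.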
Must the filesystem be unmounted while extending an LVM logical volume?
1,301,414,593,000
What are the main differences between the Windows registry and the approach used in UNIX/Linux, and what are the advantages and disadvantages of each approach?
There is no real cognate in UNIX, but as wollud1969 says, /etc comes close. That, though, is only part of the story. You'd also need to consider things under /var (for information about installed software, running services, etc), /usr/local/etc (at least on FreeBSD and certain Linux distros) for configuration information for installed third party apps, and of course each user's dotfiles, which customise how software works for them (roughly equivalent to the HKEY_CURRENT_USER hive in the registry). Then there's /dev for device interfaces, /proc for running process data, and the kernel itself (either through sysctl, a kernfs virtual file system, etc). Depending on your particular platform, there may be other places to look, too.

The primary advantage in the UNIX approach, from my perspective as a UNIX user these last 12 years, is that application config files, wherever they live, are usually just plain old text files, so can be read and edited by plain old humans. (Except, possibly, the sendmail config file, but that's a completely different religious war...). Many applications (browsers, desktop apps, etc) create config files for you, but they are text files, and the apps usually won't stop working if those files are then edited by hand, provided the edits don't break their syntax. The downside, though, is that there is no universal config language, so you need to learn the syntax for each app you manage. In reality, though, this is only a small annoyance at worst.

The Windows Registry was developed, at least in part, to address a similar state of affairs that was deemed problematic by Microsoft, where application ini files were not centrally managed, with no strict control on what values went in them, and no standard location for software to put them. 
The registry fixes some of those concerns (it is centrally managed, with specific data types that can be stored in it), but its disadvantages are its binary format, so that even experienced Windows admins need to use a GUI tool to look at it, it's prone to getting corrupted if you lose power, and not all software authors are sufficiently conscientious to clean up after themselves when you decide to uninstall their kewl shareware app. And, as with almost any other file in Windows, it's entirely possible for the various components of the registry to become fragmented on disk, resulting in painfully slow read and update operations. There is no requirement for software to make use of the registry, and even Microsoft's own .NET platform uses XML files instead. The Wikipedia page about the registry is quite informative.
Differences between Windows registry and UNIX/Linux approach [closed]
1,300,719,913,000
Sometimes I upload an application to a server that doesn't have external internet access. I would like to create the same environment on my machine for testing some features in the application and avoid bugs (like reading an RSS feed from an external source). I thought about just unplugging my ethernet cable to simulate this, but that seems archaic and I don't know if I'm going to raise the same exceptions (especially in Python) when doing this compared to the limitations at the server. So, how do I simulate "no external access" on my development machine? Will deactivating my ethernet interface and reactivating it later (with a "no hassle" command) have the same behavior as the server with no external access? I'm using Ubuntu 10.04. Thanks!
Deleting the default route should do this. You can show the routing table with /sbin/route, and delete the default with: sudo /sbin/route del default That'll leave your system connected to the local net, but with no idea where to send packets destined for beyond. This probably simulates the "no external access" situation very accurately. You can put it back with route add (remembering what your gateway is supposed to be), or by just restarting networking. I just tried on a system with NetworkManager, and zapping the default worked fine, and I could restore it simply by clicking on the panel icon and re-choosing the local network. It's possible that NM might do this by itself on other events, so beware of that. Another approach would be to use an iptables rule to block outbound traffic. But I think the routing approach is probably better.
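A sketch of making this reversible with the modern iproute2 commands — the gateway is remembered before deleting the default route. The parsing line is shown with a canned route line so it can be tried anywhere; the ip commands themselves need root (normally you'd use route_line=$(ip route show default)):

```shell
# Remember the current gateway before deleting the default route.
route_line='default via 192.168.1.1 dev eth0'
gw=$(printf '%s\n' "$route_line" | awk '/^default/ {print $3}')
echo "$gw"   # → 192.168.1.1

# sudo ip route del default             # go "offline"
# sudo ip route add default via "$gw"   # restore external access
```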
Is it possible to simulate "no external access" from a Linux machine when developing?
1,300,719,913,000
I was able to set up a network namespace and start a server that listens on 127.0.0.1 inside the namespace: # ip netns add vpn # ip netns exec vpn ip link set dev lo up # ip netns exec vpn nc -l -s 127.0.0.1 -p 80 & # ip netns exec vpn netstat -tlpn Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:80 0.0.0.0:* LISTEN 5598/nc After that, I can connect to the server inside the namespace: # ip netns exec vpn nc 127.0.0.1 80 -zv localhost [127.0.0.1] 80 (http) open But I can't connect to the server outside the namespace: # nc 127.0.0.1 80 (UNKNOWN) [127.0.0.1] 80 (http) : Connection refused How to configure iptables or namespace to forward traffic from the global namespace to the vpn namespace?
First: I don't think you can achieve this by using 127.0.0.0/8 and/or a loopback interface (like lo). You have to use some other IPs and interfaces, because there are specific things hardwired for 127.0.0.0/8 and for loopback. Then there is certainly more than one method, but here's an example: # ip netns add vpn # ip link add name vethhost0 type veth peer name vethvpn0 # ip link set vethvpn0 netns vpn # ip addr add 10.0.0.1/24 dev vethhost0 # ip netns exec vpn ip addr add 10.0.0.2/24 dev vethvpn0 # ip link set vethhost0 up # ip netns exec vpn ip link set vethvpn0 up # ping 10.0.0.2 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.100 ms The first command creates out of thin air a pair of virtual ethernet interfaces connected by a virtual ethernet cable. The second command moves one of these interfaces into the netns vpn. Consider it the equivalent of things like socketpair(2) or pipe(2): a process creates a pair, then forks, and each process keeps only one end of the pair and they can communicate. Usually (LXC, virt-manager,...) there's also a bridge involved to put everything in the same LAN when you have many netns. Once this is in place, for the host it's like any router. Enable ip forwarding (be more restrictive if you can: you need it at least for vethhost0 and the main interface): # echo 1 > /proc/sys/net/ipv4/conf/all/forwarding Add some DNAT rule, like: # iptables -t nat -A PREROUTING ! -s 10.0.0.0/24 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.2 Now you can either add a default route inside vpn with: # ip netns exec vpn ip route add default via 10.0.0.1 Or else, instead, add a SNAT rule to have everything be seen as coming from 10.0.0.1 inside vpn. # iptables -t nat -A POSTROUTING -d 10.0.0.2/24 -j SNAT --to-source 10.0.0.1 With this in place you can test from any other host, but not from the host itself. 
To do this, also add a DNAT rule similar to the previous DNAT, but in the OUTPUT chain and restricted to your own IP as destination (else any outgoing http connection would be redirected too). Let's say your IP is 192.168.1.2:

# iptables -t nat -A OUTPUT -d 192.168.1.2 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.2

Now it will even work if you connect from the host to itself, as long as you don't use a loopback IP but any other IP belonging to the host with a nat rule as above. Let's say your IP is 192.168.1.2:

# ip netns exec vpn nc -l -s 10.0.0.2 -p 80 &
[1] 10639
# nc -vz 192.168.1.2 80
nc: myhost (192.168.1.2) 80 [http] open
#
[1]+  Done                    ip netns exec vpn nc -l -s 10.0.0.2 -p 80
How to forward traffic between Linux network namespaces?
1,300,719,913,000
There is a lot of solution here to execute a script at shutdown/reboot, but I want my script to only execute at shutdown. I've tried to put my script in /usr/lib/systemd/systemd-shutdown, and check the $1 parameter, as seen here, but it doesn't work. Any ideas ? system : archlinux with gnome-shell $systemctl --version systemd 229 +PAM -AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN
I've finally found how to do that. It's a bit hackish, but it works. I've used some parts of this thread: https://stackoverflow.com/questions/25166085/how-can-a-systemd-controlled-service-distinguish-between-shutdown-and-reboot and this thread: How to run a script with systemd right before shutdown? I've created this service, /etc/systemd/system/shutdown_screen.service:

[Unit]
Description=runs only upon shutdown
Conflicts=reboot.target
After=network.target

[Service]
Type=oneshot
ExecStart=/bin/true
ExecStop=/bin/bash /usr/local/bin/shutdown_screen
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

which will be executed at shutdown/reboot/halt/whatever (don't forget to enable it). And in my script /usr/local/bin/shutdown_screen I put the following:

#!/bin/bash
# send a shutdown message only at shutdown (not at reboot)
/usr/bin/systemctl list-jobs | egrep -q 'reboot.target.*start' || echo "shutdown" | nc 192.168.0.180 4243 -w 1

which sends a shutdown message to my Arduino, which in turn shuts down my screen.
Systemd : How to execute script at shutdown only (not at reboot)
1,300,719,913,000
As far as I know, no update on a Linux machine requires a restart. Windows, however, needs to restart several times for an update to complete, which is understandable because the hardware might be in use at the moment and a restart ensures that no software uses the driver. But how can an OS (or Linux as an example) handle a situation where you want to update a driver that is currently in use?
Updates on Linux require a restart if they affect the kernel. Drivers are part of the kernel. It's sometimes possible to upgrade a driver on Linux without rebooting, but that doesn't happen often: the peripheral controller by the driver can't be in use during the update, and the new driver version has to be compatible with the running kernel. Upgrading a driver to a running system where the peripheral controlled by the driver is in use requires that the old driver leaves the peripheral in a state that the new driver is able to start with. The old and new driver must manage the handover of connections from clients as well. This is doable but difficult; how difficult depends on what the driver is driving. For example, a filesystem update without unmounting the filesystem requires the handover of some very complex data structures but is easy to cope with on the hardware side (just flush the buffers before the update, and start over with an empty cache). Conversely, an input driver only has to transmit a list of open descriptors or the like on the client side, but the hardware side requires that the new driver know what state the peripheral is in and must be managed carefully not to lose events. Updating a driver on a live system is a common practice during development on operating systems where drivers can be dynamically loaded and unloaded, but usually not while the peripheral is in use. Updating a driver in production is not commonly done on OSes like Linux and Windows; I suppose it does get done on high-availability systems that I'm not familiar with. Some drivers are not in the kernel (for example FUSE filesystems). This makes it easy to update them without updating the rest of the system, but it still requires that the driver not be in use (e.g. instances of the FUSE filesystem have to be unmounted and mounted again to make use of the new driver version). Linux does have mechanisms to upgrade the kernel without restarting: Ksplice, Kpatch, KGraft. 
This is technically difficult as the updated version has to be compatible with the old version to a large extent; in particular, its data structures have to have exactly the same binary layout. A few distributions offer this service for security updates. These features are not (yet?) available in the mainline Linux kernel. On a mainline Linux kernel, a driver can be updated only if it's loaded as a module and if the module can be unloaded and the new module is compatible with the running kernel.
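On Linux, swapping a modular driver without rebooting boils down to unloading and reloading the module, which only works while nothing uses it. A sketch of checking the use count first (the `lsmod` sample data below is made up; the filter reads lsmod-style text on stdin so it is testable offline):

```shell
# A module's third lsmod column is its use count; it can only be
# unloaded (e.g. `modprobe -r r8169 && modprobe r8169` to pick up a
# new version) when that count is 0. Unknown modules are
# conservatively reported as in use here.
module_in_use() {
    # $1: module name; reads lsmod-style output on stdin;
    # exit 0 = in use, exit 1 = idle
    awk -v m="$1" '$1 == m { exit ($3 > 0 ? 0 : 1) }'
}

# Made-up lsmod output for illustration:
lsmod_sample='Module                  Size  Used by
i915                 3272704  5
r8169                 106496  0'

echo "$lsmod_sample" | module_in_use i915  && echo "i915 busy: cannot swap it now"
echo "$lsmod_sample" | module_in_use r8169 || echo "r8169 idle: safe to reload"
```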
How does linux driver update work?
1,300,719,913,000
Environment: CentOS 5.5 and 6.4 I have a request to analyze the hardware before installation to make sure our customers don't install our software on sub-standard server hardware. For example, examining memory, disk space, CPU, network card... So, the %pre section in my ks.cfg file seems like the perfect place to do this? But I can't get a command like free to work. I would like to find out what commands are available in the %pre section, and is this the right place to perform hardware analysis before the installation begins? If the %pre section of ks.cfg is NOT a good place to do this, then where? Here is what I've tried so far, and I get NO output: ks.cfg: %pre (echo "Analyzing Hardware...") >/dev/tty1 free >/dev/tty1 free_txt=`free -o` (echo "$free_txt") >/dev/tty1 %end I see 'Analyzing Hardware...' on the screen during the first part of the install but nothing after that.
After digging a bit more, I found a ton of system info in /proc that is available for viewing when the %pre section in ks.cfg executes. Checkout dmidecode and files in /proc to get all the information you need. Here is what worked for me: %pre --log=/tmp/ks_pre.log #!/bin/sh #---------------------------------------------- # echos message to console screen and a log file #---------------------------------------------- echo_screen_n_log() { msg=$1 # Send to console screen (echo "$msg") >/dev/tty1 # Send to log echo "$msg" } echo_screen_n_log "" echo_screen_n_log "Analyzing Hardware..." echo_screen_n_log "" #---------------------------------------------- # System Memory #---------------------------------------------- IFS=$'\n' mem_info=(`dmidecode --type memory`) unset IFS sys_mem_sizes="" sys_mem_banks="" sys_tot_mem=0 cntr=0 bank_cntr=0 for i in "${mem_info[@]}" do # echo_screen_n_log "i: $i" # Maximum system memory that can be placed on the motherboard REG_EX="Maximum Capacity: (.*)$" if [[ $i =~ $REG_EX ]] then sys_mem_max=${BASH_REMATCH[1]} fi # How many memory slots are on the motherboard REG_EX="Number Of Devices: (.*)$" if [[ $i =~ $REG_EX ]] then sys_mem_slots=${BASH_REMATCH[1]} fi REG_EX="^[[:space:]]+Size: (.*)$" if [[ $i =~ $REG_EX ]] then sys_mem_sizes[cntr]=${BASH_REMATCH[1]} cntr=$(( $cntr + 1 )) fi REG_EX="^[[:space:]]+Bank Locator: (.*)$" if [[ $i =~ $REG_EX ]] then sys_mem_banks[bank_cntr]=${BASH_REMATCH[1]} bank_cntr=$(( $bank_cntr + 1 )) fi done cntr=$(( $cntr - 1 )) echo_screen_n_log "Max system memory: $sys_mem_max" echo_screen_n_log "Total system slots: $sys_mem_slots" i=0 while [ $i -le $cntr ] do echo_screen_n_log "Memory Bank Location ${sys_mem_banks[$i]} : ${sys_mem_sizes[$i]}" REG_EX="No Module Installed$" if [[ ! 
${sys_mem_sizes[$i]} =~ $REG_EX ]] then REG_EX="^([0-9]+) [A-Z][A-Z]$" if [[ ${sys_mem_sizes[$i]} =~ $REG_EX ]] then sys_tot_mem=$(( $sys_tot_mem + ${BASH_REMATCH[1]} )) fi fi i=$(( $i + 1 )) done echo_screen_n_log "System Total Memory: $sys_tot_mem MB" #-------------------------------------------- # Get Disk size information #-------------------------------------------- IFS=$'\n' disk_info=(`cat /proc/partitions`) unset IFS total_disk_space=0 type="" # Grab from minor column starting with 0 ending in 3 letters (drive node) REG_EX="0\s+([0-9]+) [a-z][a-z][a-z]$" for i in "${disk_info[@]}" do # echo_screen_n_log "i: $i" if [[ $i =~ $REG_EX ]] then total_disk_space=${BASH_REMATCH[1]} total_disk_space=$(( $total_disk_space * 1024 )) type="GB" div_num=1000000000 if [ "$total_disk_space" -lt $div_num ] then type="MB" div_num=1000000 fi total_disk_space=$(( $total_disk_space / $div_num )) fi done echo_screen_n_log "Disk Space: $total_disk_space $type" #----------------------------------------------------- # Get CPU model name #----------------------------------------------------- cpu_grep=`grep 'model name' /proc/cpuinfo` cpu_model_nm="Not Found!" REG_EX="^.*: (.*)$" if [[ $cpu_grep =~ $REG_EX ]] then cpu_model_nm=${BASH_REMATCH[1]} fi echo_screen_n_log "CPU Model: $cpu_model_nm" #------------------------------------------------------- # Get number of physical CPUs #------------------------------------------------------- IFS=$'\n' cpu_count=(`grep "physical id" /proc/cpuinfo`) unset IFS last_cpu_id="" total_cpu_cnt=0 # Add up all cores of the CPU to get total MIPS total_cpus=0 REG_EX="^physical id\s+: ([0-9]+)$" for i in "${cpu_count[@]}" do # echo_screen_n_log "i: $i" if [[ $i =~ $REG_EX ]] then cpu_id=${BASH_REMATCH[1]} if [ ! 
"$last_cpu_id" = "$cpu_id" ] then total_cpu_cnt=$(( $total_cpu_cnt + 1 )) last_cpu_id=$cpu_id fi fi done echo_screen_n_log "System physical CPUs: $total_cpu_cnt" #------------------------------------------------------- # Get number of CPU cores #------------------------------------------------------- IFS=$'\n' cpu_cores=(`grep -m 1 "cpu cores" /proc/cpuinfo`) unset IFS total_cpu_cores=0 REG_EX="^cpu cores\s+: ([0-9]+)$" for i in "${cpu_cores[@]}" do # echo_screen_n_log "i: $i" if [[ $i =~ $REG_EX ]] then total_cpu_cores=${BASH_REMATCH[1]} fi done echo_screen_n_log "CPU cores: $total_cpu_cores" #------------------------------------------------------- # CPU MHz #------------------------------------------------------- IFS=$'\n' dmi_cpu_MHz=(`dmidecode --string processor-frequency`) unset IFS cpu_MHz=0 REG_EX="^[0-9]+ " for i in "${dmi_cpu_MHz[@]}" do # echo_screen_n_log "i: $i" if [[ $i =~ $REG_EX ]] then cpu_MHz=${BASH_REMATCH[1]} fi done echo_screen_n_log "CPU MHz: ${dmi_cpu_MHz:0:1}.${dmi_cpu_MHz:1:$(( ${#dmi_cpu_MHz} - 1 ))}" #------------------------------------------------------- # Get CPU bogomips (Millions of instructions per second) #------------------------------------------------------- IFS=$'\n' cpu_mips=(`grep "bogomips" /proc/cpuinfo`) unset IFS # Add up all cores of the CPU to get total MIPS total_mips=0 REG_EX="\s([0-9]+)\..*$" for i in "${cpu_mips[@]}" do # echo_screen_n_log "i: $i" if [[ $i =~ $REG_EX ]] then cpu_bogomips=${BASH_REMATCH[1]} total_mips=$(( $total_mips + $cpu_bogomips )) fi done echo_screen_n_log "Total CPU MIPS (Millions of instructions per second) : $total_mips" echo_screen_n_log "" (echo -n "Press <enter> to continue..") >/dev/tty1 read text %end I just need to add the criteria for determining what a base system for our installations should look like and I'm done..... Updated this with more information... 
You can also do the following for disk info in the %pre section: IFS=$'\n' parted_txt=(`parted -l`) unset IFS for i in "${parted_txt[@]}" do # (echo "i: \"$i\"") >/dev/tty1 REG_EX="^Model: (.*)$" if [[ $i =~ $REG_EX ]] then disk_model=${BASH_REMATCH[1]} # (echo "Disk Model: \"$disk_model\"") >/dev/tty1 fi REG_EX="^Disk (.*): ([0-9]+).[0-9]([A-Z][A-Z])$" if [[ $i =~ $REG_EX ]] then disk_device=${BASH_REMATCH[1]} disk_capacity=${BASH_REMATCH[2]} disk_capacity_type=${BASH_REMATCH[3]} (echo "Device: \"$disk_device\" \"$disk_capacity\" $disk_capacity_type") >/dev/tty1 IFS=$'\n' disk_txt=(`udevadm info --query=all --name=$disk_device`) unset IFS is_USB_drive=0 for j in "${disk_txt[@]}" do #(echo "j: \"$j\"") >/dev/tty1 REG_EX="^ID_BUS=usb$" if [[ $j =~ $REG_EX ]] then # USB keys are not to be included in total disk space # (echo "$disk_device is a USB drive!") >/dev/tty1 is_USB_drive=1 fi done if [ "$is_USB_drive" = "0" ] then total_capacity=$(( $total_capacity + $disk_capacity )) fi fi done (echo "Disk Model: $disk_model") >/dev/tty1 (echo "Disk $disk_device Capacity: $total_capacity $disk_capacity_type") >/dev/tty1
What commands are available in the %pre section of a Kickstart file on CentOS?
1,300,719,913,000
In this page from The Design and Implementation of the 4.4BSD Operating System, it is said that: A major difference between pipes and sockets is that pipes require a common parent process to set up the communications channel However, if I recall correctly, the only way to create a new process is to fork an existing one. So I can’t really see how 2 processes could not have a common ancestor. Am I then right to think that any pair of processes can be piped to each other?
Am I then right to think that any pair of processes can be piped to each other? Not really. The pipes need to be set up by the parent process before the child or children are forked. Once the child process is forked, its file descriptors cannot be manipulated "from the outside" (ignoring things like debuggers), the parent (or any other process) can't do the "set up the comms. channel" part after the fact. So if you take two random processes that are already running, you can't set up a pipe between them directly. You need to use some form of socket (or another IPC mechanism) to get them to communicate. (But note that some operating systems, FreeBSD among them, allow you to send file descriptors on Unix-domain sockets.)
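To make the distinction concrete: in a shell pipeline the shell is the common parent that creates the pipe before forking both children, while for two unrelated processes a named pipe (FIFO) is a common workaround, since any process that can open the path can use it. A minimal sketch:

```shell
# Ordinary pipe: the shell (common parent) calls pipe(2), then forks
# both children with the descriptors already wired up.
printf 'hello\n' | tr a-z A-Z          # prints HELLO

# Named pipe: it lives in the filesystem, so two otherwise unrelated
# processes can rendezvous on it after the fact.
fifo=$(mktemp -u)
mkfifo "$fifo"
( echo 'via fifo' > "$fifo" ) &        # writer, backgrounded
cat "$fifo"                            # reader; prints "via fifo"
rm -f "$fifo"
```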
Can I pipe any two processes to each other?
1,300,719,913,000
I was wondering what commands/utilities can be used in terminal to know the types of windowing system (such as X window system), window manager (such as Metacity, KWin, Window Maker) and desktop environment (such as KDE, Gnome) of a Linux or other Unix-like operating systems? Thanks!
From Ask Ubuntu.SE: If you have wmctrl installed, wmctrl -m will identify the window manager for you. Thomas already mentioned the XDG_CURRENT_DESKTOP environment variable for identifying the desktop environment. And from this thread here in Unix & Linux SE: the XDG_SESSION_TYPE environment variable can be used to identify whether the windowing system is X11 or Wayland.
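A small sketch tying those pieces together (the output depends entirely on the session; on a headless box both variables are typically unset):

```shell
# Report desktop environment and windowing system from the standard
# XDG environment variables; fall back to "unknown" when unset.
session_report() {
    echo "Desktop environment: ${XDG_CURRENT_DESKTOP:-unknown}"
    echo "Windowing system:    ${XDG_SESSION_TYPE:-unknown}"
}
session_report
# wmctrl -m   # additionally names the window manager, if installed
```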
How to know the types of windowing system, window manager and desktop environment of a Unix-like OS
1,300,719,913,000
I'm on a BusyBox 1.27-only Linux system, so there is no status=progress available, no BusyBox implementation of pv (which is pipe_progress), nor pv itself. I have two questions. The first is based on https://www.linux.com/training-tutorials/show-progress-when-using-dd/. It says that by sending the USR1 signal to dd it "pauses" the process, and dd, after printing its current status, will continue with the job it was doing. I'm trying to do some benchmark tests with dd, so I would like to have minimal impact on the dd operation. I want to get an output of the current operation every second because the data that's passing through dd is fluctuating, and it is important to me to recognize when the transfer rate drops. First question: Is it true that dd "pauses" every time it receives a USR1 signal? If dd pauses every second, then I'll be adding hours to the operation when tens of gigabytes are being transferred. Second question: Assuming yes as an answer to the first question, I would like to know if it's possible to get dd to print its current status without sending any signal to the process, maybe some kind of redirection for stdout (like 2>&1)? What I'm referring to is: # bs with 1MiB so I can have more control on the test. dd if=/dev/zero of=/dev/null bs=1048576 count=1024 # Printing current operation status. sudo kill -USR1 $dd_pid
dd if=/dev/zero of=/dev/null bs=1048576 count=1024 Note that dd can mangle data, at least when using the bs parameter. And its performance advantage is small at best if you hand-pick an optimal block size for your particular system configuration: cat or cp can do better, and at worst are only slightly slower. So don't use dd if you don't have to. Note that since version 1.23, BusyBox uses the sendfile system call to copy data, instead of using read and write. Only plain copies such as cat and cp use sendfile, however: dd is forced to use read/write because it wants precise control over the sizes. So with BusyBox ≥1.23, cat and cp are very likely to be faster than dd. Is it true that 'dd' "pauses" every time it receives a USR1 signal? Technically yes, it has to “pause” to handle the signal. However the pause is only a few CPU instructions (the most costly part by far is printing the progress output). So this doesn't invalidate your benchmark in any way. If dd pauses every second then I'll be adding hours to the operation when tens of gigabytes are being transferred. No, you have your orders of magnitude wrong. You'll be adding maybe .1% of the time on a single-CPU thread. The main cost is kernel time for the benchmarking program, not dd, so it's intrinsic in what you want to do, not in the way you implement it. if it's possible to get dd to print its current status without sending any signal to the process Well, no. There's already a simple, historically established, standard, easy way to do it. Why would there be another way which would be harder to implement? On Linux, there is a generic way to know the point that a copy has reached. It doesn't depend on what program is doing the copy, though it doesn't always work with special files. Find out the process ID $pid that's doing the copy, and which file descriptors it's using for input and output. dd reads from fd 0 and writes to fd 1. BusyBox cp typically reads from fd 3 and writes to fd 4. 
You can check which file is open on which file descriptor through the symbolic links in /proc/$pid/fd. $ cp /dev/zero /dev/null & pid=$! $ readlink /proc/$pid/fd/3 /dev/zero $ readlink /proc/$pid/fd/4 /dev/null And you can check the position of file descriptor $n in /proc/$pid/fdinfo/$n. $ cat /proc/$pid/fdinfo/4 pos: 74252288 flags: 0100001 mnt_id: 27 However, note that the file descriptor position may not be updated with special files such as /dev/zero, /dev/null, pipes or sockets. It's always updated with regular files. I don't know if it's updated for block devices. So it likely won't give you any information for copying between /dev/zero and /dev/null, but it might work in your real use case.
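The `pos:` lookup is easy to wrap in a once-per-second polling loop. The sketch below (the function name is made up, and the fd number must be adapted to your copy) keeps the parsing separate so it also works on canned `fdinfo` text:

```shell
# Extract the current offset from /proc/<pid>/fdinfo/<fd>-style text.
# With a file argument it reads that file; with none it reads stdin.
fd_pos() {
    awk '/^pos:/ { print $2 }' "$@"
}

# Polling loop sketch -- $pid is the copying process, fd 1 is where
# dd writes (adapt for BusyBox cp, which typically writes on fd 4):
# while kill -0 "$pid" 2>/dev/null; do
#     fd_pos "/proc/$pid/fdinfo/1"
#     sleep 1
# done
```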
Check dd's progress without USR1?
1,300,719,913,000
I use rsync in order to transfer files from /etc/yum.repos.d/ to a remote server's /etc/yum.repos.d/ sshpass -p $password rsync -av /etc/yum.repos.d/* root@server_one.usaga.com:/etc/yum.repos.d Host key verification failed. rsync error: explained error (code 255) at rsync.c(551) [sender=3.0.9] As we can see above, rsync failed because of the host key fingerprint check, so we did the following (answered yes to ssh): ssh root@server_one.usaga.com The authenticity of host 'server_one.usaga.com (43.3.22.4)' can't be established. ECDSA key fingerprint is 08:b1:c7:fa:c3:a8:8f:ce:85:4f:b9:ac:b1:8a:6a:87. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'server_one.usaga.com,43.3.22.4' (ECDSA) to the list of known hosts. and ran rsync again - it works: sshpass -p $password rsync -av /etc/yum.repos.d/* root@server_one.usaga.com:/etc/yum.repos.d sending incremental file list . . . sent 378 bytes received 112 bytes 326.67 bytes/sec total size is 1937 speedup is 3.95 So, regarding rsync: what are the necessary flags to set in order to ignore the key fingerprint? Or can rsync ignore the question - Are you sure you want to continue connecting (yes/no)? yes
You can specify other remote shell for rsync than ssh using -e and that includes ssh with extra options so adding -e "ssh -o StrictHostKeyChecking=no" will do trick.
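Spelled out against the command from the question (host and paths are the question's own placeholders), this is what the option string could look like. Note that disabling host-key checking trades away man-in-the-middle protection, so reserve it for trusted networks:

```shell
# Remote-shell options are passed through rsync's -e flag.
# StrictHostKeyChecking=no auto-accepts unknown host keys;
# UserKnownHostsFile=/dev/null additionally avoids polluting (and
# consulting) known_hosts. Both weaken MITM protection.
RSH='ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'

# Live usage (commented out here since it needs the remote host):
# sshpass -p "$password" rsync -av -e "$RSH" \
#     /etc/yum.repos.d/ root@server_one.usaga.com:/etc/yum.repos.d/
```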
rsync + rsync failed because Host key verification failed
1,300,719,913,000
I'm trying to write a script to get the version of my distro so that I can pass it to a variable. The following command is what I wrote to achieve the result. lsb_release -ar | grep -i release | cut -s -f2 The unwanted output: No LSB modules are available. 18.04 As you can see, the No LSB modules are available message is the unwanted part. Since I prefer my script to be portable across servers, I don't want to install any extra packages besides utilizing the lsb_release -a command.
That message is sent to standard error, so redirecting that to /dev/null will get rid of it (along with any other error message produced by lsb_release): lsb_release -ar 2>/dev/null | grep -i release | cut -s -f2
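For a script, the usual move is to capture the cleaned-up value into a variable. Splitting the parsing into a function (a hypothetical name) also makes it testable against canned lsb_release output:

```shell
# Pull the tab-separated value out of the "Release:" line,
# discarding lsb_release's stderr chatter.
release_field() {
    grep -i release | cut -s -f2
}

version=$(lsb_release -ar 2>/dev/null | release_field)
echo "Distro release: ${version:-unknown}"
```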
Remove "No LSB modules are available." message from 'lsb_release -a'
1,300,719,913,000
I have a script that will add users from a file. It will add the users just fine as far as I can tell, but when trying to login the password supplied by the script does not work. I'm not sure if this is normal or not, but the /etc/shadow file also shows the correct passwords in plain text. I thought the shadow file is only supposed to show a hash? Here is the code I have for the script: #/bin/bash file="/Scripts/FormattedFile.txt" while IFS=: read -r f1 f2 f3 f4 do validID=$( echo "$f3" | cut -b 4,5,6,7,8,9 ) comment="${f1} ${f2}" useradd -m -p "$f3" -u "$validID" -c "$comment" "$f1" echo "Added user $f1." done < "$file" The result is a user with the correct username, ID, and home directory, but is unable to login. I know that the script is reading the file correctly and the fields are also correct as pulled from the file. Field 3 is a 6 digit number taken from a longer 9 digit string. Each record is of the format: firstname:lastname:H00123456:[email protected] Records are separated by a newline.
According to useradd --help: -p, --password PASSWORD encrypted password of the new account So what you need to do is to pass the encrypted password: -p $(openssl passwd "the_password")
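Applied to the script from the question, the fix is one line: hash the field before handing it to -p. A sketch (the default crypt format of `openssl passwd` varies by OpenSSL version; `-6` requests SHA-512 crypt where supported):

```shell
# useradd -p expects an already-encrypted password, so hash the
# plain-text field first. The hash -- never the plain text -- is
# what should end up in /etc/shadow.
plain='SomePassword'               # stand-in for the field read from the file
hash=$(openssl passwd "$plain")    # add -6 for SHA-512 crypt where available
echo "$hash"

# Then, inside the original loop:
# useradd -m -p "$hash" -u "$validID" -c "$comment" "$f1"
```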
Script to add users will add users but they cannot login
1,300,719,913,000
I have 119766 files in a folder. They are CSV files. I want to find out the total number of lines across all files. I'm trying to run the following command: cat * | wc -l But the following error occurs: -bash: /bin/cat: Argument list too long How can I do that? Is there any way around this? One thing I would like to add is that the total number of lines will be very large.
If you want a line-count for each individual file: find . -type f -exec wc -l {} + | awk '! /^[ 0-9]+[[:space:]]+total$/' I've excluded the total lines because there will be several of them with this many files being processed. The find ... -exec ... + will try to fit as many filenames onto a single command line as possible, but that will be a LOT less than 119766 files....probably only several thousand (at most) per invocation of wc, and each one will result in its own independent 'total' line. If you want the total number of lines in all files combined, here's one way of doing it: find . -type f -exec wc -l {} + | awk '/^[ 0-9]+[[:space:]]+total$/ {print $1}' | xargs | sed -e 's/ /+/g' | bc This prints only the line counts on the total lines, pipes that into xargs to get the counts all on one line, then sed to transform the spaces into + signs, and then pipes the lot into bc to do the calculation. Example output: $ cd /usr/share/doc $ find . -type f -exec wc -l {} + | awk '/^[ 0-9]+[[:space:]]+total$/ {print $1}' | xargs | sed -e 's/ /+/g' | bc 53358931 Update 2022-05-05 It is better to run wc -l via sh. This avoids the risk of problems arising if any of the filenames are called total....aside from the total line being the last line of wc's output, there is no way to distinguish an actual total line from the output for a file called "total", so a simple awk script that matches "total" can't work reliably. To show counts for individual files, excluding totals: find . -type f -exec sh -c 'wc -l "$@" | sed "\$d"' sh {} + This runs wc -l on all filenames and deletes the last line (the "total" line) from each batch run by -exec. The $d in the sed script needs to be escaped because the script is in a double-quoted string instead of the more usual single-quoted string. double-quotes were used because the entire sh -c is a single-quoted string. 
It's easier and more readable to just escape one $ symbol than to use '\'' to fake embedding a single-quote inside a single quote. To show only the totals: find . -type f -exec sh -c 'wc -l "$@" | awk "END {print \$1}"' sh {} + | xargs | sed -e 's/ /+/g' | bc Instead of using sed to delete the last line from each batch of files passed to wc via sh by find ... -exec, this uses awk to print only the last lines (the "total") from each batch. The output of find is then converted to a single line (xargs) with + characters between each number (sed to transform spaces to +), and then piped into bc to perform the calculation. Just like the $d in the sed script, the $1 in the awk script needs to be escaped because of double-quoting.
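If you only need the grand total, a simpler (if less informative) alternative is to stream every file through a single wc, which sidesteps both the ARG_MAX limit and the per-batch total lines. A sketch, wrapped in a function so the directory is a parameter:

```shell
# Total line count of all regular files under a directory.
# -print0/-0 keep filenames with spaces or newlines intact, and cat
# is invoked in batches by xargs, never hitting "Argument list too long".
count_lines() {
    find "$1" -type f -print0 | xargs -0 cat | wc -l
}
```

Usage: `count_lines .` from inside the folder with the CSV files.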
/bin/cat: Argument list too long
1,300,719,913,000
I'm trying to rename a bunch of music tracks in a directory, but I got this error: When moving multiple files, last argument must be a directory This is the script: for file in * ; do mv $file $(echo $file |sed 's/^.\{5\}//g') done This works for a file without whitespace, how would I modify this script?
Use quotes: mv -- "$file" "$(echo "$file" | sed ...)" Otherwise mv sees multiple arguments. A filename called file name with spaces would be 4 arguments for mv. Therefore the error: when moving multiple files, last argument must be a directory. When mv has more than 2 arguments, it assumes you want to move multiple files to a directory (which would then be the last argument). However, it looks like you want to remove the first 5 characters from the filename. That can be done more simply with bash: mv -- "$file" "${file:5}" Edit: I added the -- flag, thanks to the comment of @pabouk. Now files starting with a dash (-) are also processed correctly.
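Putting it together, here is a quoted version of the original loop. This sketch takes the directory as a parameter and uses cut so it also runs in plain POSIX sh (in bash, `${base:5}` does the same job):

```shell
# Strip the first 5 characters from every filename in a directory,
# quoting every expansion so whitespace survives.
rename_strip5() {
    for file in "$1"/*; do
        [ -e "$file" ] || continue                 # skip if the dir is empty
        base=${file##*/}                           # filename without path
        newbase=$(printf '%s\n' "$base" | cut -c6-)
        mv -- "$file" "$1/$newbase"
    done
}
```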
Remove certain characters from multiple files with whitespaces
1,300,719,913,000
Why doesn't this work? cat /dev/video1 | mplayer - If I could get that to work, then I could play & record video at the same time using 'tee' to feed mplayer and mencoder. I want to play live video (from /dev/video1:input=1:norm=NTSC) and record it at the same time without introducing lag. mplayer plays the video fine (no noticeable lag). mencoder records it fine. But I can't figure out how to "tee" the output from /dev/video so that I can feed it to both at the same time. (I know ways to encode it, then immediately play the encoded video, but that introduces too much lag). If mplayer and mencoder would read from stdin, then I could use 'tee' to solve this. How can I do it? [BTW, I'd be happy with ANY solution that plays & records at the same time, as long as it doesn't add lag - I'm not wedded to mplayer. But encoding first and then playing adds lag.]
8+ years later, I ought to post the solution I found. Use Python. AFAICT, this isn't possible with standard Linux tools alone. If you're reading this - best to stop smashing your head against the wall. Very roughly speaking - use pygame (import pygame) to read the camera and display the video, and OpenCV (import cv2) to save the video. This works.
How to get mplayer to play from stdin?
1,300,719,913,000
I'm trying to create a patch file using the diff tool, but I'm facing an issue. The way I am doing it is below. I've created one directory named a and put the original file into it: a/original_file.c Then I created another directory named b and put the same file with modified content into it: b/original_file.c (the content of b/original_file.c I copied from the internet and pasted into a text editor). After giving the command diff -Naur a b > patch_file.patch, the file patch_file.patch is generated and it has some unwanted changes (related to indentation). For example: return mg_nw (MG_READY_NOY, &rmsg, seqnr, - sizeof (struct mg_rdy_notify)); + sizeof (struct mg_rdy_notify)); As you can see, there are changes related to indentation, where sizeof (struct mg_rdy_notify)) is replaced by the same sizeof (struct mg_rdy_notify)) differing only in indentation, which is what we don't want.
diff has more than one option related to whitespace. However, one is less useful for patches. The manual page gives an obscure hint, referring to both GNU: -B, --ignore-blank-lines ignore changes where lines are all blank -b, --ignore-space-change ignore changes in the amount of white space -w, --ignore-all-space ignore all white space and FreeBSD -b Ignore changes in amount of white space. -B Ignore changes that just insert or delete blank lines. -w Ignore white space when comparing lines. Usually one uses -b, because that is less likely to overlook significant changes. If you have changed only indentation, then both -b and -w give the same result. On the other hand, if you inserted spaces where there were none, or deleted existing spaces (leaving none), that could be a change in your program. Here is an example: $ diff foo.c foo2.c 4c4 < setlocale(LC_ALL, ""); --- > setlocale(LC_ALL, " "); 6,7c6,7 < printw("\U0001F0A1"); < getch(); --- > printw ("\U0001F0A1"); > getch(); /* comment */ $ diff -b foo.c foo2.c 4c4 < setlocale(LC_ALL, ""); --- > setlocale(LC_ALL, " "); 6,7c6,7 < printw("\U0001F0A1"); < getch(); --- > printw ("\U0001F0A1"); > getch(); /* comment */ $ diff -w foo.c foo2.c 7c7 < getch(); --- > getch(); /* comment */ In this case, the -w option allows you to ignore the change to the setlocale parameter (perhaps not what was intended). POSIX diff, by the way, has only the -b option. For patch, POSIX documents the -l option: -l (The letter ell.) Cause any sequence of <blank> characters in the difference script to match any sequence of <blank> characters in the input file. Other characters shall be matched exactly.
How to create a patch ignoring indentation differences in the code?
1,300,719,913,000
I was trying to find the intersection of two plain data files, and found from a previous post that it can be done through comm -12 <(sort test1.list) <(sort test2.list) It seems to me that sort test1.list aims to sort test1.list in order. In order to understand how sort works, I tried sort against the following file, test1.list as sort test1.list > test2.list 100 -200 300 2 92 15 340 However, it turns out that test2.list is 100 15 2 -200 300 340 92 This re-ordered list makes me quite confused about how sort works, and how sort and comm work together.
Per the comm manual, "Before `comm' can be used, the input files must be sorted using the collating sequence specified by the `LC_COLLATE' locale." And the sort manual: "Unless otherwise specified, all comparisons use the character collating sequence specified by the `LC_COLLATE' locale. Therefore, and a quick test confirms, the LC_COLLATE order comm expects is provided by the sort's default order, dictionary sort. sort can sort files in a variety of manners: -d: Dictionary order - ignores anything but whitespace and alphanumerics. -g: General numeric - alpha, then negative numbers, then positive. -h: Human-readable - negative, alpha, positive. n < nk = nK < nM < nG -n: Numeric - negative, alpha, positive. k,M,G, etc. are not special. -V: Version - positive, caps, lower, negative. 1 < 1.2 < 1.10 -f: Case-insensitive. -R: Random - shuffle the input. -r: Reverse - usually used with one of dghnV There are other options, of course, but these are the ones you're likely to see or need. Your test shows that the default sort order is probably -d, dictionary order. d | g | h | n | V ------+-------+-------+-------+------- 1 | a | -1G | -10 | 1 -1 | A | -1k | -5 | 1G 10 | z | -10 | -1 | 1g -10 | Z | -5 | -1g | 1k 1.10| -10 | -1 | -1G | 1.2 1.2 | -5 | -1g | -1k | 1.10 1g | -1 | a | a | 5 1G | -1g | A | A | 10 -1g | -1G | z | z | A -1G | -1k | Z | Z | Z 1k | 1 | 1 | 1 | a -1k | 1g | 1g | 1g | z 5 | 1G | 1.10 | 1G | -1 -5 | 1k | 1.2 | 1k | -1G a | 1.10 | 5 | 1.10 | -1g A | 1.2 | 10 | 1.2 | -1k z | 5 | 1k | 5 | -5 Z | 10 | 1G | 10 | -10
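Tying it back to the original comm pipeline: the important invariant is that both inputs are sorted under the same collation. Pinning LC_ALL=C makes that reproducible across machines. A sketch (the function name is made up):

```shell
# Intersection of two unsorted files, sorting both under the same
# fixed collation so comm's expectations are met.
intersect() {
    LC_ALL=C sort "$1" > "$1.sorted"
    LC_ALL=C sort "$2" > "$2.sorted"
    LC_ALL=C comm -12 "$1.sorted" "$2.sorted"
    rm -f "$1.sorted" "$2.sorted"
}
```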
Issues of using sort and comm
1,300,719,913,000
I know that I can do ps aux | grep cgi to get a list of all cgi scripts currently running, and ps -p [pid] -o etime= can get me the run time for each pid; is there a way to combine the two, or better still, only return those that have been running for "too long" (say, 45sec)? Ideally, I'm looking for something that could be put into a perl script that looks out for issues, emails me the details and pro-actively kills the process "for safety". Would it be better to just grab the output from one, and then iterate through the results?
I've done something like this in the past. ps -A -o etime,pid,user,args| grep init returns 180-04:55:20 1 root init [5] Which is easily parse-able in perl. I used split and pop to parse it. The format is [[dd-]hh:]mm:ss
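A variant of the same idea: procps ps also supports etimes, the elapsed time already in seconds (BusyBox ps lacks it), which avoids parsing the [[dd-]hh:]mm:ss format. The filter is split out here so it can be fed canned ps output:

```shell
# Print PIDs of commands matching a pattern that have been running
# longer than a threshold, from `ps -A -o etimes=,pid=,args=`-style
# input: column 1 = elapsed seconds, column 2 = pid.
old_pids() {
    # $1: command pattern, $2: threshold in seconds (stdin: ps output)
    awk -v pat="$1" -v max="$2" '$0 ~ pat && $1 + 0 > max { print $2 }'
}

# Live usage (requires a ps that knows etimes, e.g. procps):
# ps -A -o etimes=,pid=,args= | old_pids cgi 45
```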
How can I get a list of long running processes that match a particular pattern?
1,300,719,913,000
I've heard of lines of code that are distributed with the Linux Kernel that aren't open. Maybe some drivers or something like that. I'd like to know how much of that is true? Are there lines of code that are distributed with the Kernel (as when you download it from kernel.org) that aren't open at all? And how much that is of the total (if there's a way to know it, number of lines or percentage)? And where can I find more information about this? Maybe some articles to read... Thank you very much!
The Linux kernel itself is all free software, distributed under the GNU General Public License. Third parties may distribute closed-source drivers in the form of loadable kernel modules. There's some debate as to whether the GPL allows them; Linus Torvalds has decreed that proprietary modules are allowed. Many devices in today's computers contain a processor and a small amount of volatile memory, and need some code to be loaded into that volatile memory in order to be fully operational. This code is called firmware. Note that the difference between a driver and firmware is that the firmware runs on a different processor. Firmware makers often only release a binary blob with no source code. Many Linux distributions, e.g. Debian, package non-free firmware separately (or in extreme cases not at all).
Proprietary or Closed Parts of the Kernel
1,300,719,913,000
I successfully built a raid5 array on Debian testing (Wheezy). As the man pages and other documentation tell, the array is created as an out-of-sync array with a new spare injected, to be repaired. That worked fine. But after the rebuild process, I get daily messages about missing spares, even though the array should be raid5 over 3 discs without spares. I think I only need to tell mdadm that there is -- and should be -- no spare, but how? mdadm -D gives Active Devices: 3 Working Devices: 3 Failed Devices: 0 Spare Devices: 0 and /proc/mdstat reads md1: active raid5 sda3[0] sdc3[3] sdb3[1] ##### blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] Any ideas?
Open the /etc/mdadm/mdadm.conf file, find the line that begins with ARRAY /dev/md1 and remove the line immediately following it, which states 'spares=1'. Then restart the mdadm service. If you ran mdadm --examine --scan to retrieve the array definitions while the md1 array was still rebuilding, one partition was seen as a spare at that moment.
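A sketch of the edit; demonstrated on a scratch file, since on the real system the file is /etc/mdadm/mdadm.conf, edited as root (keep a backup first). Whether spares=1 sits on the ARRAY line itself or on a continuation line varies between setups, so this removes the token wherever it appears:

```shell
# Scratch demo: strip a stale "spares=1" hint from an array definition.
# The ARRAY line contents below are made up for illustration.
conf=$(mktemp)
cat > "$conf" <<'EOF'
ARRAY /dev/md1 metadata=1.2 spares=1 name=box:1 UUID=aaaaaaaa:bbbbbbbb
EOF
sed -i 's/ *spares=1//' "$conf"
cat "$conf"   # ARRAY /dev/md1 metadata=1.2 name=box:1 UUID=aaaaaaaa:bbbbbbbb
rm -f "$conf"
```

Afterwards, comparing against a fresh `mdadm --examine --scan` (now that the rebuild has finished) is a good sanity check.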
mdadm Raid5 gives spares missing events
1,300,719,913,000
during the upgrade of a debian system i got the following errors: W: Possible missing firmware /lib/firmware/rtl_nic/rtl8125b-2.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8125a-3.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8107e-2.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8107e-1.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168fp-3.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168h-2.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168h-1.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168g-3.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168g-2.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8106e-2.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8106e-1.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8411-2.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8411-1.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8402-1.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168f-2.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168f-1.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8105e-1.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168e-3.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168e-2.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168e-1.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168d-2.fw for module r8169 W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168d-1.fw for module r8169 W: Possible missing firmware /lib/firmware/i915/skl_huc_2.0.0.bin for module i915 W: Possible missing firmware 
/lib/firmware/i915/skl_guc_33.0.0.bin for module i915 Has anyone encountered something similar? I tried a safe update/upgrade.
It's normal. Many devices need firmware to fully work. The thing is, firmware is a binary blob, and not free (as in GNU-free). So Debian does not distribute it by default. In your case you could do this (with the non-free entry enabled in /etc/apt/sources.list): $ sudo apt-get install firmware-realtek firmware-misc-nonfree If that ever happens again, one way to know the package that holds a certain file is to look it up at packages.debian.org, under "Search the contents of packages". Alternatively, you can go to https://packages.debian.org/file:path, where path is the path of the file you are looking for; for example, for /lib/firmware/rtl_nic/rtl8125b-2.fw it would be: https://packages.debian.org/file:/lib/firmware/rtl_nic/rtl8125b-2.fw
Linux Upgrade (Debian) - Possible missing firmware problem [duplicate]
1,300,719,913,000
I'm able to get the signal strength of all Wi-Fi networks with the following command: $ nmcli -t -f SIGNAL device wifi list $ 77 67 60 59 55 45 44 39 39 37 I would like to reduce this list only to the current Wi-Fi on which I'm connected. I've been through the man page but can't find the necessary flag. One solution would be to use sed or awk, but I would like to avoid piping. Should I use nmcli device wifi instead of parsing directly for the SIGNAL column?
With nmcli version 1.6.2 (nmcli --version reports "nmcli tool, version 1.6.2"), to get the SIGNAL of the AP you are connected to, use: nmcli dev wifi list | awk '/\*/{if (NR!=1) {print $7}}' The second * mark in the nmcli dev wifi list output identifies the SSID you are connected to. With nmcli version 1.22.10, use: nmcli dev wifi list | awk '/\*/{if (NR!=1) {print $6}}'
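Column positions shift between nmcli releases, hence the $7 vs $6 difference above. If your nmcli supports selecting the IN-USE field (an assumption; check your version first), you can make the parsing position-independent with terse output. The awk step is demonstrated here on captured sample output, since that is the part that actually varies:

```shell
# Position-independent variant (assumes nmcli knows the IN-USE field):
#   nmcli -t -f IN-USE,SIGNAL dev wifi list | awk -F: '$1 == "*" { print $2 }'
#
# Demo of the parsing step on sample terse output ("*:67" marks the
# connected AP with signal 67):
printf '%s\n' ':77' '*:67' ':60' |
    awk -F: '$1 == "*" { print $2 }'   # -> 67
```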
Get connected Wi-Fi network signal strength with nmcli
1,300,719,913,000
In Linux, I can get last month by using date -d "last month" '+%Y%m' or date -d "1 month ago" '+%Y%m' But say today is the 31st of March: if I run the command above, it shows 201603, but I want to get last month regardless of which day of the month it is now; how can I do so? I can achieve that by using a workaround like getting the first/last day of the previous month, but I wonder whether there is a more elegant way. date -d "-$(date +%d) days" '+%Y%m' #get last day of previous month
The usual wisdom is to use the 15th of this month. Then subtract 1 month: $ nowmonth=$(date +%Y-%m) $ date -d "$nowmonth-15 last month" '+%Y%m' 201602
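Wrapped in a small function for reuse (GNU date assumed; the function name is mine):

```shell
# Print the previous month as YYYYMM, immune to end-of-month overflow,
# by anchoring on the 15th before subtracting a month.
last_month() {
    # optional argument YYYY-MM; defaults to the current month
    month=${1:-$(date +%Y-%m)}
    date -d "${month}-15 last month" '+%Y%m'
}

last_month 2016-03   # -> 201602, even when run on March 31st
last_month 2016-01   # -> 201512, year rollover handled too
```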
Get previous month regardless of days
1,300,719,913,000
I'm trying to remove all files more than 1 day old. Before I execute a script to remove the files, I try to find the files using mtime. However, I face a problem with my command. My current date is Wed Jan 27 11:49:20 BDT 2016 My file lists are- Jan 25 15:11 25-01-2016.txt Jan 26 13:05 26-01-2016.txt Jan 27 02:30 27-01-2016.txt Jan 25 15:11 dfk-25-01-2016.txt Jan 26 13:05 dfk-26-01-2016.txt Jan 27 02:30 dfk-27-01-2016.txt I thought -mtime +1 was supposed to list all files over a day old. find /etc/output/*.txt -mtime +1 find /etc/output/*.txt -mtime +0 /output/25-01-2016.txt /output/dfk-25-01-2016.txt find /etc/output/*.txt -mtime -1 /output/26-01-2016.txt /output/27-01-2016.txt /output/dfk-26-01-2016.txt /output/dfk-27-01-2016.txt My desired output is as follows: find /etc/output/*.txt -mtime +1 /output/25-01-2016.txt /output/dfk-25-01-2016.txt find /etc/output/*.txt -mtime +0 /output/26-01-2016.txt /output/dfk-26-01-2016.txt /output/25-01-2016.txt /output/dfk-25-01-2016.txt
Two points: find "ignores fractional parts". I guess it calculates the age in hours, divides by 24, and integerizes the result (discards the fraction). So -mtime 0 checks a file, compares the mtimes, converts to hours, divides by 24. If the integer part of that result is 0, it's a match. That means a file modified 0.99999 days ago will match. Then -mtime +0 matches any file whose mtime difference is at least 24 hours. Second, if you want mtime to count calendar days, and not n 24-hour periods from now, use -daystart. So -daystart -mtime 0 means today and -daystart -mtime +0 means before today.
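A scratch-directory demonstration of the day counting (GNU touch and find assumed):

```shell
# -mtime counts whole 24-hour periods back from "now";
# -daystart makes it count calendar days instead.
dir=$(mktemp -d)
touch -d '2 days ago'     "$dir/old.txt"
touch -d '30 minutes ago' "$dir/new.txt"

find "$dir" -name '*.txt' -mtime +1   # matches only old.txt (>= 48 h old)
find "$dir" -name '*.txt' -mtime -1   # matches only new.txt (< 24 h old)
# Adding -daystart, e.g. find "$dir" -daystart -mtime +0, would instead
# match everything last modified before today's midnight.
rm -rf "$dir"
```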
How do I find files older than 1 days using mtime? [duplicate]
1,300,719,913,000
I am trying to set up an ip6tables firewall on Linux and I basically want to start by copying my IPv4 configuration. One of my rules simply accepts all traffic to the localhost subnet: iptables -A INPUT -s 127.0.0.0/8 -j ACCEPT I am a little puzzled though about what the equivalent for IPv6 is. Is it as simple as: ip6tables -A INPUT -s ::/128 -m comment --comment localhost -j ACCEPT Many sources on the internet explain 127.0.0.1, but I am specifically looking for the 127.0.0.0/8 equivalent and I haven't been able to find confirmation yet. BTW I would expect the ip6tables -vnL counters to increment when I issue: telnet -6 localhost 22 But that doesn't happen.
From Wikipedia: IPv4 network standards reserve the entire 127.0.0.0/8 address block for loopback purposes. That means any packet sent to one of those 16,777,214 addresses (127.0.0.1 through 127.255.255.254) will be looped back. IPv6 has just a single loopback address, ::1. So for IPv6 it's just ::1 or ::1/128.
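Putting that into the question's rule: note that ::/128 from the question is the unspecified address, not loopback, so the rule would be (a sketch; requires root):

```shell
# IPv4 original:
#   iptables  -A INPUT -s 127.0.0.0/8 -j ACCEPT
# IPv6 equivalent: the whole loopback "range" is the single address ::1.
ip6tables -A INPUT -s ::1/128 -m comment --comment localhost -j ACCEPT
```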
What is the IPv6 equivalent for 127.0.0.0/8
1,300,719,913,000
I have a text file named foo.txt with root permission on one Linux distribution. I copied it to another Linux distribution on another computer. Would file permissions still be maintained for foo.txt? If yes, how does Unix/Linux know, and duplicate, the permissions of the file? Does it add extra bytes (which indicate the permissions) to the file?
To add to Eric's answer (I don't have the rep to comment): permissions are not stored in the file itself but in the file's inode (the filesystem's pointer to the file's physical location on disk) as metadata, along with the owner and timestamps. This means that copying a file to a non-POSIX filesystem like NTFS or FAT will drop the permission and owner data. The file owner and group are just a pair of numbers, user ID (UID) and group ID (GID) respectively. The root UID is 0 by standard, so the file will show up as owned by root on (almost) every Unix-compliant system. On the other hand, a non-root owner will not be preserved in a meaningful way. So in short, root ownership will be preserved if tarballed or copied via an extX USB stick or the like. Non-root ownership is unreliable.
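The inode metadata is easy to inspect from the shell; stat(1) reads it without touching the file's data at all:

```shell
# Mode bits, owner UID, group GID and the inode number itself are all
# inode metadata, not file content:
f=$(mktemp)
chmod 640 "$f"
stat -c 'mode=%a uid=%u gid=%g inode=%i' "$f"   # e.g. mode=640 uid=1000 gid=1000 inode=393220
rm -f "$f"
```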
How does Unix implement file permissions?
1,300,719,913,000
Observation: I have an HP server with an AMD dual core CPU (Turion II Neo N40L) which can scale frequencies from 800 to 1500 MHz. The frequency scaling works under FreeBSD 9 and under Ubuntu 12.04 with the Linux kernel 3.5. However, when I put FreeBSD 9 in a KVM environment on top of Ubuntu, the frequency scaling does not work. The guest (thus FreeBSD) does not detect the minimum and maximum frequencies and thus does not scale anything when CPU occupation gets higher. On the host (thus Ubuntu) the KVM process uses between 80 and 140 % of the CPU resource but no frequency scaling happens, the frequency stays at 800 MHz, although when I run any other process on the same Ubuntu box, the ondemand governor quickly scales the frequency to 1500 MHz! Concern and question: I don't understand how the CPU is virtualised, and whether it is up to the guest to perform the proper scaling. Does it require some CPU features to be exposed to the guest for this to work? Appendix: The following Red Hat release note tends to suggest that frequency scaling ought to work even in a virtualised environment (see chapters 6.2.2 and 6.2.3), though the note fails to address which virtualisation technology this works with (kvm, xen, etc.). For information, the cpufreq-info output on Ubuntu is: $ cpufreq-info cpufrequtils 007: cpufreq-info (C) Dominik Brodowski 2004-2009 Report errors and bugs to [email protected], please. analyzing CPU 0: driver: powernow-k8 CPUs which run at the same hardware frequency: 0 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: 8.0 us. hardware limits: 800 MHz - 1.50 GHz available frequency steps: 1.50 GHz, 1.30 GHz, 1000 MHz, 800 MHz available cpufreq governors: conservative, ondemand, userspace, powersave, performance current policy: frequency should be within 800 MHz and 1.50 GHz. The governor "ondemand" may decide which speed to use within this range. current CPU frequency is 800 MHz. 
cpufreq stats: 1.50 GHz:14.79%, 1.30 GHz:1.07%, 1000 MHz:0.71%, 800 MHz:83.43% (277433) analyzing CPU 1: driver: powernow-k8 CPUs which run at the same hardware frequency: 1 CPUs which need to have their frequency coordinated by software: 1 maximum transition latency: 8.0 us. hardware limits: 800 MHz - 1.50 GHz available frequency steps: 1.50 GHz, 1.30 GHz, 1000 MHz, 800 MHz available cpufreq governors: conservative, ondemand, userspace, powersave, performance current policy: frequency should be within 800 MHz and 1.50 GHz. The governor "ondemand" may decide which speed to use within this range. current CPU frequency is 800 MHz. cpufreq stats: 1.50 GHz:14.56%, 1.30 GHz:1.06%, 1000 MHz:0.79%, 800 MHz:83.59% (384089) The reason I want this feature to work is: save energy, run quieter (less hot) and also simple curiosity to understand better why this is not working and how to make it work.
I have found the solution thanks to the tip given by Nils and a nice article. Tuning the ondemand CPU DVFS governor The ondemand governor has a set of parameters to control when it kicks in the dynamic frequency scaling (or DVFS, for dynamic voltage and frequency scaling). Those parameters are located under the sysfs tree: /sys/devices/system/cpu/cpufreq/ondemand/ One of these parameters is up_threshold which, like the name suggests, is a threshold (unit is % of CPU; I haven't found out though if this is per core or for merged cores) above which the ondemand governor kicks in and starts dynamically changing the frequency. To change it to 50% (for example) using sudo is simple: sudo bash -c "echo 50 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold" If you are root, an even simpler command is possible: echo 50 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold Note: those changes will be lost after the next host reboot. You should add them to a configuration file that is read during boot, like /etc/init.d/rc.local on Ubuntu. I have found that my guest VM, although consuming a lot of CPU (80-140%) on the host, was distributing the load over both cores, so no single core was above 95%, and thus the CPU, to my exasperation, was staying at 800 MHz. Now with the above patch, the CPU dynamically changes its frequency per core much faster, which suits my needs better; 50% seems a better threshold for my guest usage, your mileage may vary. Optionally, verify if you are using HPET It is possible that some applications which incorrectly implement timers might be affected by DVFS. This can be a problem on the host and/or guest environment, though the host can have some convoluted algorithms to try to minimise this. 
However, modern CPUs have newer TSCs (Time Stamp Counters) which are independent of the current CPU/core frequency; these are: constant (constant_tsc), invariant (invariant_tsc) or non-stop (nonstop_tsc), see this Chromium article about TSC resynchronisation for more information on each. So if your CPU is equipped with one of these TSCs, you don't need to force HPET. To verify if your host CPU supports them, use a similar command (change the grep parameter to the corresponding CPU feature; here we test for the constant TSC): $ grep constant_tsc /proc/cpuinfo If you do not have one of these modern TSCs, you should either: Activate HPET, as described hereafter; Not use CPU DVFS if you have any applications in the VM that rely on precise timing; this is the option recommended by Red Hat. A safe solution is to enable HPET timers (see below for more details); they are slower to query than TSC ones (TSCs are in the CPU, whereas HPET is on the motherboard) and perhaps not as precise (HPET >10 MHz; TSC often the max CPU clock), but they are much more reliable, especially in a DVFS configuration where each core could have a different frequency. Linux is clever enough to use the best available timer: it will rely first on the TSC, but if that is found too unreliable, it will use the HPET one. This works well on host (bare metal) systems, but since not all information is properly exported by the hypervisor, it is more of a challenge for the guest VM to detect a badly behaving TSC. The trick is then to force the use of HPET in the guest, although you will need the hypervisor to make this clock source available to the guests! Below you can find how to configure and/or enable HPET on Linux and FreeBSD. Linux HPET configuration HPET, or high-precision event timer, is a hardware timer that you can find in most commodity PCs since 2005. 
This timer can be used efficiently by modern OSes (the Linux kernel has supported it since 2.6; FreeBSD has stable support since 9.x, though it was introduced in 6.3) to provide consistent timing invariant to CPU power management. It also makes easier tickless scheduler implementations possible. Basically HPET is like a safety barrier: even if the host has DVFS active, the host and guest timing events will be less affected. There is a good article from IBM regarding enabling HPET; it explains how to verify which hardware timer your kernel is using, and which are available. I provide here a brief summary: Checking the available hardware timer(s): cat /sys/devices/system/clocksource/clocksource0/available_clocksource Checking the current active timer: cat /sys/devices/system/clocksource/clocksource0/current_clocksource A simpler way to force usage of HPET, if you have it available, is to modify your boot loader configuration to enable it (since kernel 2.6.16). This configuration is distribution dependent, so please refer to your own distribution documentation to set it properly. You should enable hpet=enable or clocksource=hpet on the kernel boot line (this again depends on the kernel version or distribution; I did not find any coherent information). This makes sure that the guest is using the HPET timer. Note: on my kernel 3.5, Linux seems to pick up the hpet timer automatically. FreeBSD guest HPET configuration On FreeBSD one can check which timers are available by running: sysctl kern.timecounter.choice The currently chosen timer can be verified with: sysctl kern.timecounter.hardware FreeBSD 9.1 seems to automatically prefer HPET over other timer providers. Todo: how to force HPET on FreeBSD. Hypervisor HPET export KVM seems to export HPET automatically when the host has support for it. However, Linux guests will prefer another automatically exported clock, kvm-clock (a paravirtualised version of the host TSC). 
Some people report trouble with the preferred clock; your mileage may vary. If you want to force HPET in the guest, refer to the above section. VirtualBox does not export the HPET clock to the guest by default, and there is no option to do so in the GUI. You need to use the command line and make sure the VM is powered off. The command is: ./VBoxManage modifyvm "VM NAME" --hpet on If the guest keeps selecting a source other than HPET after the above change, please refer to the above section on how to force the kernel to use the HPET clock as a source.
Host CPU does not scale frequency when KVM guest needs it
1,300,719,913,000
I would like to know how many primary and extended partitions I can create on an x86_64 PC with Linux running on it. Update: If there is a limit to the number of partitions, why does that limit exist?
The limitation is due to the original BIOS design. At that time, people weren't expecting more than four different OSes to be installed on a single disk. There was also a misunderstanding of the standard by OS implementors, notably Microsoft and Linux, which erroneously map file systems to (primary) partitions instead of subdividing their own partition into slices like BSD and Solaris do, which was the original intended goal. The maximum number of logical partitions is unlimited by the standard, but the number of reachable ones depends on the OS. Windows is limited by the number of letters in the alphabet; Linux used to have 63 slots with the IDE driver (hda1 to hda63), but modern releases standardize on the sd drivers, which by default support 15 slots (sda1 to sda15). With some tuning, this limit can be overcome, but that might confuse tools (see http://www.justlinux.com/forum/showthread.php?t=152404 ). In any case, this is becoming history with EFI/GPT. Recent Linuxes support GPT, with which you can have 128 partitions by default. To fully use large disks (2TB and more) you'll need GPT anyway.
What's the limit on the no. of partitions I can have?
1,300,719,913,000
I am executing this command to test a connection to a remote server: ssh -l user $IP "dd if=/dev/zero count=3500 bs=1M status=progress" > /dev/null This shows progress reports of the form 3145728000 bytes (3,1 GB, 2,9 GiB) copied, 276,047 s, 11,4 MB/s so apparently, dd reads at 11 MB per second. The network bandwidth, however, is known to cap out below ~20 Mbit/s, so this cannot be the amount of data actually received. iftop on the receiving machine shows throughput around ~300 kbit/s, which is much less than is possible, but more realistic. Question: What does dd's progress status actually mean when piped over an ssh connection? Is data dropped when the receiving end cannot keep up? What is happening exactly?
SSH can be operated as a compressed protocol, and judging by your results, it is enabled as such by default in your distribution or configuration (or you are using ssh -C). As such, your stream of zeroes compresses nicely into something much more compact: from your readings, with a compression ratio of about 300:1, the end result being about 0.3% of the original size. For that reason, it's not really a great choice for testing network speed, since very little network bandwidth can produce a huge result on the receiver. You can turn off compression on demand with -o Compression=no on the command line, or permanently for a connection by specifying Compression no in your SSH client config. Another option is to use something more basic, like netcat, which doesn't implement compression, authentication, or similar, although I wouldn't generally recommend using it for real-world file transfers for that reason.
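The effect is easy to reproduce locally; gzip uses the same deflate family of compression as ssh's zlib option, so it gives a rough feel for the ratio (a sketch, not the exact ssh codec):

```shell
# How far does a zero stream shrink? Expect roughly three orders of
# magnitude for 1 MB of zeroes; exact numbers vary with settings.
raw=1000000
compressed=$(head -c "$raw" /dev/zero | gzip -c | wc -c)
echo "$raw bytes of zeroes -> $compressed bytes compressed"
```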
How can dd over ssh report read speeds exceeding the network bandwidth?
1,300,719,913,000
I use xinetd and it works for my purposes. However I recently discovered that systemd has something built in called "socket activation". These two seem very similar, but systemd is "official" and seems like the better choice. However before using it, are they really the same? Are there differences I should be aware of? For example, I want to start some dockerised services only when they are first requested - my first thought would be to use xinetd. But is socket activation better / faster / stabler / whatever?
I don’t think systemd socket activation is significantly better than xinetd activation, when considered in isolation; the latter is stable too and has been around for longer. Socket activation is really interesting for service decoupling: it allows services to be started in parallel, even if they need to communicate, and it allows services to be restarted independently. If you have a service which supports xinetd-style activation, it can be used with socket activation: a .socket description with Accept=true will behave in the same way as xinetd. You’ll also need a .service file to describe the service. The full benefits of systemd socket activation require support in the dæmon providing the service. See the blog post on the topic. My advice tends to be “if it isn’t broken, don’t fix it”, but if you want to convert an xinetd-based service to systemd it’s certainly feasible.
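If the dæmon can read the connection on stdin/stdout (the classic xinetd contract), a conversion could look like this sketch; the unit name, port and binary path are all hypothetical:

```ini
# /etc/systemd/system/myservice.socket  (name, port and path are made up)
[Socket]
ListenStream=7777
# Accept=yes gives xinetd-style behaviour: one service instance per connection
Accept=yes

[Install]
WantedBy=sockets.target

# /etc/systemd/system/[email protected]
# (the "@" marks a template unit, which is required when Accept=yes)
[Service]
ExecStart=/usr/local/bin/myservice
StandardInput=socket
```

After systemctl daemon-reload and systemctl enable --now myservice.socket, systemd listens on the port itself and only spawns the service when a client actually connects.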
systemd "socket activation" vs xinetd
1,300,719,913,000
Applications like the lynx browser, htop, and many others accept position-dependent mouse clicks in a bash session over an ssh shell. I know that ssh is a command-line interface. How then do they accept mouse clicks?
IMHO, the simplest way to write such a TUI application is to use ncurses. "New Curses" is a library that abstracts the design of the TUI from the details of the underlying device. All the software you cited uses ncurses to render its interface. When you click in a terminal emulator (e.g. xterm, gnome-terminal, etc.), the terminal emulator translates the click into a sequence of ANSI escape codes. These sequences are read and translated into events by the ncurses library. You can find an example on Stack Overflow: Mouse movement events in NCurses
How some applications accept mouse click in bash over ssh?
1,300,719,913,000
I have a remote server without GUI support. How can I install CentOS 7 there? CentOS 7 is mandatory and I can't switch to another OS or distribution. I get the following text at the end. I am able to mount the CD but I don't know what to do next. FreeBSD has bsdinstall, which works in text mode. Debian can also be installed in text mode without any problems. (?- //\ Core is distributed with ABSOLUTELY NO WARRANTY. v_/_ www.tinycorelinux.com tc@box:~$ Switched to clocksource tsc
CentOS 7 has an option to install in text mode. When you see the install centos menu option, press the Tab key, add text to the end of any existing installer command line arguments, and then press the Return key. This will tell the installer (Anaconda) to install the OS in text mode.
Install CentOS 7 using text mode
1,300,719,913,000
System: Linux Mint 18.1 64-bit Cinnamon. Objective: To define Bash aliases to launch various CLI and GUI text editors while opening a file in root mode from the gnome-terminal emulator. Progress For example, the following aliases seem to work as expected: For CLI, in this example I used Nano (official website): alias sunano='sudo nano' For GUI, in this example I used Xed (Wikipedia article): alias suxed='sudo xed' They both open a file as root. Problem I have an issue with gksudo in conjunction with sublime-text: alias susubl='gksudo /opt/sublime_text/sublime_text' Sometimes it works, but most of the time it just does not do anything. How do I debug such a thing with inconsistent behavior? It does not output anything: no error message or similar. Question gksudo has been deprecated in Debian and is also no longer included in Ubuntu 18.04 Bionic, so let me re-formulate this question into a still-valid one: How do I properly edit system files (as root) in a GUI (and CLI) in Linux? Properly in this context I define as safely: in case, for instance, a power loss occurs during the file edit, or an SSH connection is lost, etc.
You shouldn’t run an editor as root unless absolutely necessary; you should use sudoedit or sudo -e or sudo --edit, or your desktop environment’s administrative features. sudoedit Once sudoedit is set up appropriately, you can do SUDO_EDITOR="/opt/sublime_text/sublime_text -w" sudoedit yourfile sudoedit will check that you’re allowed to do this, make a copy of the file that you can edit without changing ownership manually, start your editor, and then, when the editor exits, copy the file back if it has been changed. I’d suggest a function rather than an alias: function susubl { export SUDO_EDITOR="/opt/sublime_text/sublime_text -w" sudoedit "$@" } although as Jeff Schaller pointed out, you can use env to put this in an alias and avoid changing your shell’s environment: alias susubl='env SUDO_EDITOR="/opt/sublime_text/sublime_text -w" sudoedit' Take note that you don't need to use the $SUDO_EDITOR environment variable if $VISUAL or $EDITOR are already good enough. The -w option ensures that the Sublime Text invocation waits until the files are closed before returning and letting sudoedit copy the files back. Desktop environments (GNOME) In GNOME (and perhaps other desktop environments), you can use any GIO/GVFS-capable editor, with the admin:// scheme; for example gedit admin:///etc/shells This will prompt for the appropriate authentication using PolKit, and then open the file for editing if the authentication was successful.
How to properly edit system files (as root) in GUI (and CLI) in Gnu/Linux?
1,300,719,913,000
The Linux Programming Interface states: Each device driver registers its association with a specific major device ID, and this association provides the connection between the device special file and the device driver. Is it possible to obtain the list of those associations?
Documentation/admin-guide/devices.txt in the kernel source code documents the allocation process and lists all the allocated device numbers. sd gets a whole bunch of major device numbers because of the large number of devices it can handle: major 8 covers /dev/sda to /dev/sdp, major 65 covers /dev/sdq to /dev/sdaf, 66 /dev/sdag to /dev/sdav and so on all the way to 135 for /dev/sdig to /dev/sdiv (for a total of 256 disk devices).
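You can cross-check the table from a running system; /dev/null, for instance, is listed in devices.txt as major 1 ("memory devices"), minor 3:

```shell
# Major/minor numbers live in the device node and are visible with stat
# (%t/%T print them in hex); /dev/null is major 1, minor 3.
stat -c 'major=%t minor=%T' /dev/null   # -> major=1 minor=3

# Majors currently registered with the running kernel; sd shows up
# (8, 65, 66, ...) only if the driver is loaded.
grep -w sd /proc/devices || true
```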
How to get a list of major number -> driver associations
1,300,719,913,000
I have Linux Mint 14 installed as my only OS. I have one extended partition containing /swap, / and /home, and I have some unallocated space on my drive. I'm guessing that Mint decided to put this all on an extended partition instead of three primary partitions. So I want to build Linux From Scratch using some of my unallocated space. My first question is, do I need to have a swap partition for each distro or can LFS use the swap partition I already have? If so, would the swap partition have to be a primary partition, or does it not matter? Is there any practical difference between a primary and a logical partition? A question about definition: is an extended partition just a primary partition that contains logical partitions? Finally, since deleting Windows 7 (sda 1-3), my Linux partitions are still numbered 5-7. If I create a new partition, will it be called sda1?
do I need to have a swap partition for each distro or can LFS use the swap partition I already have? As goldilock says, unless you are hibernating (suspend to disk), yes. Otherwise no, because you could overwrite the swap of a hibernated system - either its saved state or the part that was used as regular swap at suspend time. If so, would the swap partition have to be a primary partition, or does it not matter? No, it doesn't matter at all. You can use swap in a file on a regular filesystem, if need be (there is a small overhead, but it's also more flexible). You can even swap to NFS, if you're bold enough. On the other hand, if you ran Windows 7 on the machine, chances are you have enough memory not to need swap at all under normal circumstances - even with "just" 2GB RAM you can do a whole lot of stuff without swap (a basic desktop environment will use ~200MB). Not that swap would be unnecessary, but the need for it these days is much smaller than 10 years ago. since deleting Windows 7 (sda 1-3), my Linux partitions are still numbered 5-7. If I create a new partition, will it be called sda1? Since the disk is using the MBR partitioning scheme, all logical partitions will have numbers 5 and higher. Unless you expand the extended partition that contains the logical ones, the remaining space is likely available only for primary partitions, which will be numbered 1-3, provided the extended partition has number 4. See the wiki on MBR for more details. Is there any practical difference between a primary and a logical partition? Not these days. BIOSes usually weren't able to boot from logical partitions (because they only read the MBR). Today the bootloaders usually know how to do this, so the only thing the BIOS does in the system loading process is to read the bootloader trampoline from the MBR (or the boot sector of a primary partition), which takes care of everything else by first loading the rest of the bootloader, which in turn loads the kernel. 
is an extended partition just a primary partition that contains logical partitions? Yes, you can view it as such with a tiny bit of abstraction - it behaves as such, but the partition metadata is stored differently (as a linked list instead of an array of 4 elements, which is what the MBR is). As for the question in the comment - yes, you can only have one extended partition. But once you finalize your setup a bit (or even earlier), you might want to switch to GPT. It might even be possible to do it non-destructively (this depends on the exact partition layout).
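A hedged sketch of the swap-in-a-file option mentioned above (every command here needs root, and the size and path are examples only):

```shell
# Create a 1 GiB swap file and activate it (run as root).
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile        # swap files must not be world-readable
mkswap /swapfile           # write the swap signature
swapon /swapfile           # start using it immediately

# Make it permanent across reboots:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```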
Do I only need one swap partition for multiple Linux distros? (and other questions)
1,300,719,913,000
I am performing some experiments on a network of about 10 remote Linux computers which are geographically scattered. I suspect some of them have clock skews, but these are seen only transiently (e.g. once a week or twice a month). I was wondering if there exist tools which could detect and quantify such clock skews. I am also wondering if clock skew is the right term for what I am witnessing, or whether it should be called a clock synchronization problem.
Are you trying to keep the clocks synchronized to the right time, or are you trying to determine how accurate the real time clock actually is, without being synchronized? If you simply want the times to be correct, there's a whole hierarchy of time servers that systems can sync to, and it's often built in to the OS, although you can usually specify time servers. Also, one system can sync to a time server, and then run its own time server for the other systems to sync to. I have seen, and even used in the past, programs that not only sync to a standard time, but keep up with the local computer's drift from the last synchronization. I used this in the past when we had intermittent internet connections, so the program would correct the time by using the history of expected drift. Usually, with either of these methods, if the correction is too much, it may assume an error and not do any correction at all. Hopefully, I'm not totally missing your point. :)
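If the goal is simply keeping all 10 machines synchronized, pointing them at the same NTP servers is usually enough. An illustrative /etc/ntp.conf fragment (the pool hostnames and driftfile path are just common defaults - substitute your own):

```
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
driftfile /var/lib/ntp/ntp.drift
```

The driftfile line relates to the drift tracking described above: ntpd records the clock's measured frequency error there between runs, so it can correct for expected drift even when the servers are unreachable.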
Clock skews on remote machines
1,300,719,913,000
How can I check if /proc/ is mounted? Using /etc/mtab is discouraged as it might be inconsistent. Using /proc/mounts is also not an option as it might not exist if /proc/ is not mounted (although checking for its existence may be a way to do this check). What is the best way to do this check?
You can run mount without any arguments to get a list of current mounts. The /etc/mtab file should have similar data, but like you said it is possible for this to be inconsistent with what is actually mounted, in the event that the file is stale, not writable, or another program has messed with it. You can get specific information about the proc mounts by asking mount to list all mounts of type proc like this: mount -l -t proc Edit: It looks like you can use stat to compare the device of the /proc folder to the device of / to tell at least if SOMETHING is mounted there other than the root file system: [[ $(stat -c %d%D /proc) != $(stat -c %d%D /) ]] && echo "Something is mounted at /proc"
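If util-linux's mountpoint(1) is available (an assumption - it isn't part of POSIX), it performs essentially the same device-number comparison for you:

```shell
# mountpoint -q exits 0 if the path is a mountpoint, 1 otherwise
if mountpoint -q /proc; then
    echo "/proc is mounted"
fi
```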
How to check if /proc/ is mounted
1,300,719,913,000
I have a process that has been running a very long time. I accidentally deleted the binary executable file of the process. Since the process is still running and doesn't get affected, the original binary file must still exist somewhere.... How can I recover it? (I use CentOS 7; the running process is written in C++)
It could only be in memory and not recoverable, in which case you'd have to try to recover it from the filesystem using one of those filesystem recovery tools (or from memory, maybe). However! $ cat hamlet.c #include <unistd.h> int main(void) { while (1) { sleep(9999); } } $ gcc -o hamlet hamlet.c $ md5sum hamlet 30558ea86c0eb864e25f5411f2480129 hamlet $ ./hamlet & [1] 2137 $ rm hamlet $ cat /proc/2137/exe > newhamlet $ md5sum newhamlet 30558ea86c0eb864e25f5411f2480129 newhamlet $ With interpreted programs, obtaining the script file may be somewhere between tricky and impossible, as /proc/$$/exe will point to perl or whatever, and the input file may already have been closed: $ echo sleep 9999 > x $ perl x & [1] 16439 $ rm x $ readlink /proc/16439/exe /usr/bin/perl $ ls /proc/16439/fd 0 1 2 Only the standard file descriptors are open, so x is already gone (though may for some time still exist on the filesystem, and who knows what the interpreter has in memory).
How to recover the deleted binary executable file of a running process
1,300,719,913,000
Recently I did the usual update + upgrade... however after doing so, my network interface refused to work (no connection). What happened? How can I bring my network interface up? ... I am running Debian stretch. (The same issue might occur on Debian derivatives, like e.g. Ubuntu.)
After some searching on the web (thank god I also have a laptop) I figured out that some renaming of the network interfaces had occurred ... so first thing to do: See which network interfaces are currently up ( for me only the loopback was started ) sudo ifconfig Now let's check the naming of all available network interfaces: networkctl For me the output looked like this: WARNING: systemd-networkd is not running, output will be incomplete. IDX LINK TYPE OPERATIONAL SETUP 1 lo loopback n/a unmanaged 2 enp3s0 ether n/a unmanaged 3 enp4s0 ether n/a unmanaged After that I took a look into /etc/network/interfaces ... which for me looks like this: source /etc/network/interfaces.d/* # The loopback network interface auto lo iface lo inet loopback # Comment in the right one (the one plugged in) otherwise system.d will run a startjob #auto net0 #allow-hotplug net0 #iface net0 inet dhcp auto net1 allow-hotplug net1 iface net1 inet dhcp ... you can probably guess what comes next ... replace net0 / net1 (or whatever you have there) with the LINKs listed by networkctl. Start the new interface (or reboot): sudo ifup enp3s0 And check if it is listed now: sudo ifconfig
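With the names from the networkctl output above, the relevant stanza in /etc/network/interfaces ends up looking like this (enp3s0 is the link on my machine - yours will differ):

```
auto enp3s0
allow-hotplug enp3s0
iface enp3s0 inet dhcp
```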
Debian - network interface does not work any more after update / upgrade
1,300,719,913,000
I got a copy of The Unix Programming Environment by Kernighan and Pike from a garage sale. I'm very interested in the chapter about the UNIX filesystem. Naturally, I also found this passage very interesting: The time has come to look at the bytes in a directory: $ od -cb . 0000000 4 ; . \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 064 073 056 000 000 000 000 000 000 000 000 000 000 000 000 000 .... It was really long so I won't type the whole thing out. The gist of it was that it displayed the directory in the way it was stored on the system. I quickly rushed to my laptop (Debian) to try this out. I typed out the command as it was in the book. $ od -cb . od: .: read error: Is a directory 0000000 Obviously it won't let me view the raw contents of the directory. So here's my question. Does the Linux kernel store directories in a different way that the original UNIX kernel did? If not, why is there the need to conceal the actual bytes of the directory from the user?
Each filesystem type stores directories in a different way. There are many different filesystem types with different characteristics — good for high throughput, good for high concurrency, good for limited-memory environments, different compromises between read and write performance, between complexity and stability, etc. Your book describes a filesystem used in early Unix systems. Modern systems support many different filesystems. The very early versions of Unix had a lot of filesystem manipulation outside the kernel. For example, mkdir and rmdir worked by editing some filesystem data structures directly. This was quickly replaced by a uniform directory access interface, the opendir/readdir/closedir family, which allowed applications to manipulate directories without having to know how they were implemented under the hood. The reason you can't read directory contents under Linux isn't because they have to be concealed, but because features exist only if they are implemented, and this feature is pointless and has a cost. Given that the format depends on the filesystem, it's a rather pointless feature: a program can't know the format of what it's reading. It isn't a completely trivial feature to support either: some filesystems organize directories in ways that aren't just a stream of bytes, for example it may be organized as a B-tree. Some Unix variants still allow applications to read directory contents directly, for backward compatibility, but Linux doesn't have this feature (and never had as far as I can recall — it was already an obsolete feature in the early 1990s).
octal dump of directory
1,414,755,363,000
stdin, stdout, stderr are some integers that index into a data structure which 'knows' which I/O channels are to be used for the process. I understand this data structure is unique to every process. Are I/O channels nothing but some data array structures with dynamic memory allocation ?
In Unix-like operating systems, the standard input, output and error streams are identified by the file descriptors 0, 1, 2. On Linux, these are visible under the proc filesystem in /proc/[pid]/fd/{0,1,2}. These files are actually symbolic links to a pseudoterminal device under the /dev/pts directory. A pseudoterminal (PTY) is a pair of virtual devices, a pseudoterminal master (PTM) and a pseudoterminal slave (PTS) (collectively referred to as a pseudoterminal pair), that provide an IPC channel, somewhat like a bidirectional pipe, between a program which expects to be connected to a terminal device, and a driver program that uses the pseudoterminal to send input to, and receive output from, the former program. A key point is that the pseudoterminal slave appears just like a regular terminal, e.g. it can be toggled between noncanonical and canonical mode (the default), in which it interprets certain input characters, such as generating a SIGINT signal when an interrupt character (normally generated by pressing Ctrl+C on the keyboard) is written to the pseudoterminal master, or causing the next read() to return 0 when an end-of-file character (normally generated by Ctrl+D) is encountered. Other operations supported by terminals include turning echoing on or off, setting the foreground process group, etc. Pseudoterminals have a number of uses: They allow programs like ssh to operate terminal-oriented programs on another host connected via a network. A terminal-oriented program may be any program which would normally be run in an interactive terminal session. The standard input, output and error of such a program cannot be connected directly to a socket, as sockets do not support the aforementioned terminal-related functionality. They allow programs like expect to drive an interactive terminal-oriented program from a script. They are used by terminal emulators such as xterm to provide terminal-related functionality. 
They are used by programs such as screen to multiplex a single physical terminal between multiple processes. They are used by programs like script to record all input and output occurring during a shell session. Unix98-style PTYs, used in Linux, are set up as follows: The driver program opens the pseudoterminal master multiplexer at /dev/ptmx, upon which it receives a file descriptor for a PTM, and a PTS device is created in the /dev/pts directory. Each file descriptor obtained by opening /dev/ptmx is an independent PTM with its own associated PTS. The driver program calls fork() to create a child process, which in turn performs the following steps: The child calls setsid() to start a new session, of which the child is session leader. This also causes the child to lose its controlling terminal. The child proceeds to open the PTS device that corresponds to the PTM created by the driver program. Since the child is a session leader, but has no controlling terminal, the PTS becomes the child's controlling terminal. The child uses dup() to duplicate the file descriptor for the slave device on its standard input, output, and error. Lastly, the child calls exec() to start the terminal-oriented program that is to be connected to the pseudoterminal device. At this point, anything the driver program writes to the PTM appears as input to the terminal-oriented program on the PTS, and vice versa. When operating in canonical mode, the input to the PTS is buffered line by line. In other words, just as with regular terminals, the program reading from a PTS receives a line of input only when a newline character is written to the PTM. When the buffering capacity is exhausted, further write() calls block until some of the input has been consumed. In the Linux kernel, the file-related system calls open(), read(), write(), stat(), etc. are implemented in the Virtual Filesystem (VFS) layer, which provides a uniform file system interface for userspace programs. 
The VFS allows different file system implementations to coexist within the kernel. When userspace programs call the aforementioned system calls, the VFS redirects the call to the appropriate filesystem implementation. The PTS devices under /dev/pts are managed by the devpts file system implementation defined in /fs/devpts/inode.c, while the TTY driver providing the Unix98-style ptmx device is defined in drivers/tty/pty.c. Buffering between TTY devices and TTY line disciplines, such as pseudoterminals, is provided by a buffer structure maintained for each tty device, defined in include/linux/tty.h. Prior to kernel version 3.7, the buffer was a flip buffer:

#define TTY_FLIPBUF_SIZE 512

struct tty_flip_buffer {
        struct tq_struct tqueue;
        struct semaphore pty_sem;
        char *char_buf_ptr;
        unsigned char *flag_buf_ptr;
        int count;
        int buf_num;
        unsigned char char_buf[2*TTY_FLIPBUF_SIZE];
        char flag_buf[2*TTY_FLIPBUF_SIZE];
        unsigned char slop[4];
};

The structure contained storage divided into two equal size buffers. The buffers were numbered 0 (first half of char_buf/flag_buf) and 1 (second half). The driver stored data in the buffer identified by buf_num. The other buffer could be flushed to the line discipline. The buffer was 'flipped' by toggling buf_num between 0 and 1. When buf_num changed, char_buf_ptr and flag_buf_ptr were set to the beginning of the buffer identified by buf_num, and count was set to 0. Since kernel version 3.7, the TTY flip buffers have been replaced with objects allocated via kmalloc() organized in rings. In a normal situation for an IRQ-driven serial port at typical speeds, their behaviour is pretty much the same as with the old flip buffer; two buffers end up allocated and the kernel cycles between them as before. However, when there are delays or the speed increases, the new buffer implementation performs better as the buffer pool can grow a bit.
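You can inspect the descriptor symlinks from the start of this answer yourself. In an interactive terminal session all three point at the same /dev/pts device; under redirection or in a pipe you'll see pipe:[...] or a file name instead:

```shell
# list where the current shell's standard descriptors point
ls -l /proc/$$/fd/0 /proc/$$/fd/1 /proc/$$/fd/2
readlink /proc/$$/fd/0
```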
How I/O channels are implemented in Linux kernel?
1,414,755,363,000
I need to sort this list by name, high temp and low temp: Kuala Lumpur 78 56 Seoul 85 66 Karachi 95 75 Tokyo 85 60 Lahore 85 75 Manila 90 85 I figured since whitespace is the delimiter for a column I could just sort -k 1 which gives me this: Karachi 95 75 Kuala Lumpur 78 56 Lahore 85 75 Manila 90 85 Seoul 85 66 Tokyo 85 60 But the "Kuala Lumpur" is causing problems because of the space. So I tried to treat "Lumpur" as a column and to sort the first set of nums I did sort -k 3n but I get this: Tokyo 85 60 Seoul 85 66 Karachi 95 75 Lahore 85 75 Kuala Lumpur 78 56 <---Why is this out of order? Manila 90 85 How do I deal with this one space?
As others have commented, it will make it easier to work with the data if it is comma separated values (CSV). Here is my solution for converting the data to CSV: $ cat file | sed 's/ \([0-9]\)/,\1/g' Kuala Lumpur,78,56 Seoul,85,66 Karachi,95,75 Tokyo,85,60 Lahore,85,75 Manila,90,85 It replaces any space preceding a digit with a comma. \1 references the group ([0-9]), the digit after the space. From there you can use sort with the -t argument to specify a field separator. $ cat file | sed 's/ \([0-9]\)/,\1/g' | sort -t, -k2 Kuala Lumpur,78,56 Tokyo,85,60 Seoul,85,66 Lahore,85,75 Manila,90,85 Karachi,95,75 If you'd like to convert back to spaces or make a table, here are two examples: $ cat test | sed 's/ \([0-9]\)/,\1/g' | sort -t, -k2 | tr , ' ' Kuala Lumpur 78 56 Tokyo 85 60 Seoul 85 66 Lahore 85 75 Manila 90 85 Karachi 95 75 $ cat test | sed 's/ \([0-9]\)/,\1/g' | sort -t, -k2 | column -s, -t Kuala Lumpur 78 56 Tokyo 85 60 Seoul 85 66 Lahore 85 75 Manila 90 85 Karachi 95 75
Sort with unequal whitespace in first column
1,414,755,363,000
I have an hourly hour-long crontab job running with some mtr (traceroute) output every 10 minutes (that is going to go on for over an hour prior to it being emailed back to me), and I want to see the current progress thus far. On Linux, I have used lsof -n | fgrep cron (lsof is similar to BSD's fstat), and it seems like I might have found the file, but it is annotated as having been deleted (a standard practice for temporary files is to delete them right after opening): COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME ... cron 21742 root 5u REG 202,0 7255 66310 /tmp/tmpfSuELzy (deleted) And it cannot be accessed by its prior name anymore: # stat /tmp/tmpfSuELzy stat: cannot stat `/tmp/tmpfSuELzy': No such file or directory How do I access such a deleted file that is still open?
The file can be accessed through the /proc filesystem: you already know the PID and the FD from the lsof output. cat /proc/21742/fd/5
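A self-contained demonstration of the same trick, using the current shell in place of cron (paths are throwaway):

```shell
echo "still here" > /tmp/demo-deleted.$$
exec 3< /tmp/demo-deleted.$$   # keep the file open on fd 3
rm /tmp/demo-deleted.$$        # "delete" it, like the tmpfile code does
cat /proc/$$/fd/3              # prints: still here
exec 3<&-                      # close the fd; now the data really is gone
```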
How can I access a deleted open file on Linux (output of a running crontab task)?
1,414,755,363,000
I am doing some research to figure out which distros of Linux contain kernel packet filtering and are compatible with BPF. http://kernelnewbies.org/Linux_3.0 http://lwn.net/Articles/437981/ These two articles lead me to believe there is a package somewhere that includes the libraries and binaries? I am specifically looking for the "pfctl" command like I have in FreeBSD Thanks
I think you have mixed two different things: The OpenBSD packet filter facilities (sometimes called pf, and mostly controlled by pfctl). These are the basis of OpenBSD firewalling, the Linux equivalent is netfilter, mostly controlled by the iptables command. Comparable, but not compatible (and most say that OpenBSD is superior to Linux in this aspect). The (Berkeley) packet filter (mostly controlled by the libpcap library). This is a feature of the kernel that allows an application to be notified of packets flowing through a network interface. Since usually any client is only interested in a subset of all packets, most of the library is about filtering which packets should be forwarded to the application and which shouldn't. It's used for network analyzers like tcpdump and Wireshark. The articles you link are not about a port of the OpenBSD pf, instead they describe a new JIT that optimizes the kernel-resident filters used by libpcap.
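To give a flavor of how the two relate, here is a purely illustrative rule pair - blocking a subnet on an inbound interface - in pf syntax and its rough netfilter equivalent (the interface names and subnet are made up):

```
# OpenBSD, in pf.conf:
block in on em0 from 192.0.2.0/24 to any

# Linux, via the iptables command (needs root):
iptables -A INPUT -i eth0 -s 192.0.2.0/24 -j DROP
```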
Is berkeley packet filter ported to linux?
1,414,755,363,000
I started downloading a big file and accidently deleted it a while ago. I know how to get its current contents by cping /proc/<pid>/fd/<fd> but since the download is still in progress it'll be incomplete at the time I copy it someplace else. Can I somehow salvage the file right at the moment the download finishes but before the downloader closes the file and I lose it for good?
Using tail in follow mode should allow you to do what you want. tail -n +0 -f /proc/<pid>/fd/<fd> > abc.deleted I just did a quick test and it seems to work here. You did not mention whether your file was a binary file or not. My main concern is that it may not copy from the start of the file, but the -n +0 argument should do that even for binary files. The tail command may not terminate at the end of the download so you will need to terminate it yourself.
Recover deleted file that is currently being written to
1,414,755,363,000
Starting an Interactive shell over SSH is slow to one of my servers. Everything leading up to it including negotiating encryption is fast, but then it hangs for 45 seconds. After that, it finishes and I have a shell. How do I identify what it's hanging on? I tried clearing the environment and disabling all forwarding in case that was slowing it down but it didn't help. Here's my test command: env -i ssh -x -a -vvv server and here's the output from SSH: debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. *(hangs for 45 seconds here)* debug3: Wrote 128 bytes for a total of 3191 debug2: callback start debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment.
pam_krb5.so was configured to acquire AFS tokens for a non-existent shell which had a 30 second timeout halting any authentication using that module, not just SSH. Removed that and authentication happens much quicker.
SSH slow at starting session
1,414,755,363,000
This question answers why Linux can't run OSX apps, but is there some application similar to Wine that allows one to do so?
Since wine is a re-implementation of the Windows API, you're looking for a re-implementation of the Macintosh API or the various "kits" that Apple provides to let OSX apps link to the system frameworks. I don't know of any that fit the bill. The only thing even close is the Chameleon Project which brings the UIKit from iOS to Mac OS X. Since I don't have a real library for you: Lion is allowed to be virtualized on Mac hardware. Perhaps that would work for your needs while you wait for a lighter implementation like wine? There are about a hundred hits on Google about "how to run lion in vmware" and all basically point to a check for the presence of a server plist file that the installer wants to see before it will proceed. Here is one that's fairly clear on the steps.
Is there something like wine to run OSX apps on linux?
1,414,755,363,000
I have sudo rights on a redhat box; once I've sudo su - to become root in a shell, are there any commands I can run to see what username I su'd FROM?
The shell's parent process is su -, and the parent of that is the sudo su -. So you need to find out the user running sudo su -'s parent process by searching back in two steps with ps: ps -o user= $(ps -o ppid= $(ps -o ppid= $PPID)) But you shouldn't be doing sudo su - if your version of sudo is not too old to have sudo -i. Sudo sets the environment variable SUDO_USER to the name of the user who ran sudo. You won't see it with sudo su - because su - scrubs the environment. $ sudo -i # echo $SUDO_USER gilles
Once sudo su'd to root, is there a command to see my username?
1,414,755,363,000
I'm looking for a programmable Linux controller for home automation and general fun projects. Requirements: Controlling electric appliances - On/Off switches and dimmers (perhaps using relays) Receive analogue and digital data from sensors (switches, temperatures, etc.) USB connection Running Linux Advantages: Network connection / Web interface Python support Small display screen Keyboard and VGA support I used to have a lot of fun with a Handy Board, but it broke down a few months ago, and it lacks many vital features.
It's not as powerful as a normal PC, but you should try the Arduino platform. You can buy a great and cheap unit here: http://www.libelium.com/ Google a little bit about Arduino and you will find a lot of references and a big community
Linux Programmable Controller
1,414,755,363,000
Here is my bash case: First case, this is what I want to do, "aliasing" a var to myvarA: myvarA="variableA" varname="A" eval varAlias=\$"myvar"$varname echo $varAlias Second case, an array variable, looping over its members, which is trivial: myvarA=( "variableA1" "variableA2" ) for varItem in ${myvarA[@]} do echo $varItem done Now somehow I need to use the "aliasing" technique like in example 1, but this time for the array variable: eval varAlias=\$"myvar"$varname for varItem in ${varAlias[@]} do echo $varItem done But in the last case, only the first member of myvarA is printed, because eval evaluates to the value of the variable. How should I write the eval so that it expands to all members of the array variable, not just its first value?
I solved it; last example should be like this: eval varAlias=\${"myvar"$varname[@]} for varItem in ${varAlias[@]} do echo $varItem done
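On bash 4.3 or newer you can skip eval entirely and use a nameref (declare -n), which makes the alias a real second name for the array - an alternative approach, not what the asker used:

```shell
myvarA=( "variableA1" "variableA2" )
varname="A"
declare -n varAlias="myvar$varname"   # varAlias now refers to myvarA

for varItem in "${varAlias[@]}"
do
    echo "$varItem"
done
```

This also sidesteps the quoting pitfalls of eval, since no string is re-parsed as code.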
Bash eval array variable name
1,414,755,363,000
How can I add line breaks to a command at the prompt? I know that when I copy a multiline command from the internet that has newlines it appears on the command line $ something > like > this Also I know that you can use \ to insert a newline at the very end of your command $ like \ > this But how can I add newlines in the middle of a command that I've already typed out? For example, given $ this long command that I want to split over multiple lines How can I turn it into $ this long command > that I want to split > over multiple lines So far I've tried: Using ctrl + v to insert a return character - just results in ^M being inserted Typing \ return in the middle of the input (as you would do at the end of a line) - just results in \ being typed and then the command being executed.
Use Ctrl+V followed by Ctrl+J. This inserts a linefeed character rather than a carriage return (which Ctrl+M or Enter would result in after Ctrl+V).
Inserting Newlines at the Bash Command Prompt
1,414,755,363,000
I've installed Ubuntu alongside Windows 7. When I try to mount the Windows partition at /mnt/sda1, I get an error like this: "The device '/dev/sda1' doesn't seem to have a valid NTFS." NTFS signature is missing. Failed to mount '/dev/sda1': Invalid argument The device '/dev/sda1' doesn't seem to have a valid NTFS. Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around? This is the result when I run fdisk -l: Disk /dev/sda: 298,1 GiB, 320072933376 bytes, 625142448 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x29af3b15 Device Boot Start End Sectors Size Id Type /dev/sda1 2048 546911727 546909680 260,8G 7 HPFS/NTFS/exFAT /dev/sda2 546912254 625141759 78229506 37,3G 5 Extended /dev/sda5 * 546912256 625141759 78229504 37,3G 83 Linux
To get the exact information about the bootable windows partition before executing ntfsfix: sudo file -s /dev/sda1 Then use ntfsfix to fix this problem: sudo ntfsfix /dev/sda1 Finally mount your partition.
Cannot mount sda1: "The device '/dev/sda1' doesn't seem to have a valid NTFS."
1,414,755,363,000
I have a list.txt file including a list of log file. For example server_log1 server_log2 ........ server_log50 I have another shell script used to download these logs. It worked like this ./script.sh serverlog1 I want to make it automatically that means it can automatically pass in each log file's name in list.txt to be executed. Is that possible? I tried #!/bin/bash for i in `cat /home/ec2-user/list.txt` ; do sh ./workaround.sh $i done But it didn't work
The easiest method for reading arguments can be described as follows; Each argument is referenced and parsed by the $IFS or currently defined internal file separator. The default character is a space. For example, take the following; # ./script.sh arg1 arg2 The argument list in that example is arg1 = $1 and arg2 = $2 which can be rewritten as arg1 arg2 = $@. Another note is the use of a list of logs, how often does that change? My assumption is daily. Why not use the directory output as the array of your iterative loop? For example; for i in $(ls /path/to/logs); do ./workaround.sh $i; done Or better yet, move on to use of functions in bash to eliminate clutter. function process_file() { # transfer file code/command } function iterate_dir() { local -a dir=($(ls $1)) for file in ${dir[@]}; do process_file $file done } iterate_dir /path/to/log/for While these are merely suggestions to improve your shell scripting knowledge I must know if there is an error you are getting and would also need to know the details of each scripts code and or functionality. Making the use of the -x argument helps debug scripting as well. If you are simply transferring logs you may wish to do away with the scripts all together and make use of rsync, rsyslog or syslog as they all are much more suited for the task in question.
How to pass in multiple arguments to execute with .sh script
1,414,755,363,000
The major disadvantage of using zram is LRU inversion: older pages get into the higher-priority zram and quickly fill it, while newer pages are swapped in and out of the slower [...] swap The zswap documentation says that zswap does not suffer from this: Zswap receives pages for compression through the Frontswap API and is able to evict pages from its own compressed pool on an LRU basis and write them back to the backing swap device in the case that the compressed pool is full. Could I have all the benefits of zram and a completely compressed RAM by setting max_pool_percent to 100? Zswap seeks to be simple in its policies. Sysfs attributes allow for one user controlled policy: * max_pool_percent - The maximum percentage of memory that the compressed pool can occupy. No default max_pool_percent is specified here, but the Arch Wiki page says that it is 20. Apart from the performance implications of decompressing, is there any danger / downside in setting max_pool_percent to 100? Would it operate like using an improved swap-backed zram?
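For reference, the parameters in question live under sysfs and can be changed at runtime as root (paths per the zswap documentation):

```
echo 1   > /sys/module/zswap/parameters/enabled
echo 100 > /sys/module/zswap/parameters/max_pool_percent
```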
To answer your question, I first ran a series of experiments. The final answers are in bold at the end. Experiments performed: 1) swap file, zswap disabled 2) swap file, zswap enabled, max_pool_percent = 20 3) swap file, zswap enabled, max_pool_percent = 70 4) swap file, zswap enabled, max_pool_percent = 100 5) zram swap, zswap disabled 6) zram swap, zswap enabled, max_pool_percent = 20 7) no swap 8) swap file, zswap enabled, max_pool_percent = 1 9) swap file (300 M), zswap enabled, max_pool_percent = 100 Setup before the experiment: VirtualBox 5.1.30 Fedora 27, xfce spin 512 MB RAM, 16 MB video RAM, 2 CPUs linux kernel 4.13.13-300.fc27.x86_64 default swappiness value (60) created an empty 512 MB swap file (300 MB in experiment 9) for possible use during some of the experiments (using dd) but didn't swapon yet disabled all dnf* systemd services, ran watch "killall -9 dnf" to be more sure that dnf won't try to auto-update during the experiment or something and throw the results off too far State before the experiment: [root@user-vm user]# free -m ; vmstat ; vmstat -d total used free shared buff/cache available Mem: 485 280 72 8 132 153 Swap: 511 0 511 procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 0 74624 8648 127180 0 0 1377 526 275 428 3 2 94 1 0 disk- ------------reads------------ ------------writes----------- -----IO------ total merged sectors ms total merged sectors ms cur sec sda 102430 688 3593850 67603 3351 8000 1373336 17275 0 26 sr0 0 0 0 0 0 0 0 0 0 0 The subsequent swapon operations, etc., leading to the different settings during the experiments, resulted in variances of within about 2% of these values. 
Experiment operation consisted of:

Run Firefox for the first time
Wait about 40 seconds or until network and disk activity ceases (whichever is longer)
Record the following state after the experiment (firefox left running, except for experiments 7 and 9 where firefox crashed)

State after the experiment:

1) swap file, zswap disabled

[root@user-vm user]# free -m ; vmstat ; vmstat -d
              total        used        free      shared  buff/cache   available
Mem:            485         287           5          63         192          97
Swap:           511         249         262
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0 255488   5904   1892 195428   63  237  1729   743  335  492  3  2 93  2  0
disk- ------------reads------------ ------------writes----------- -----IO------
       total merged sectors      ms  total merged sectors      ms    cur    sec
sda   134680  10706 4848594   95687   5127  91447 2084176   26205      0     38
sr0        0      0       0       0      0      0       0       0      0      0

2) swap file, zswap enabled, max_pool_percent = 20

[root@user-vm user]# free -m ; vmstat ; vmstat -d
              total        used        free      shared  buff/cache   available
Mem:            485         330           6          33         148          73
Swap:           511         317         194
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0 325376   7436    756 151144    3  110  1793   609  344  477  3  2 93  2  0
disk- ------------reads------------ ------------writes----------- -----IO------
       total merged sectors      ms  total merged sectors      ms    cur    sec
sda   136046   1320 5150874  117469  10024  41988 1749440   53395      0     40
sr0        0      0       0       0      0      0       0       0      0      0

3) swap file, zswap enabled, max_pool_percent = 70

[root@user-vm user]# free -m ; vmstat ; vmstat -d
              total        used        free      shared  buff/cache   available
Mem:            485         342           8          32         134          58
Swap:           511         393         118
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0 403208   8116   1088 137180    4    8  3538   474  467  538  3  3 91  3  0
disk- ------------reads------------ ------------writes----------- -----IO------
       total merged sectors      ms  total merged sectors      ms    cur    sec
sda   224321   1414 10910442  220138   7535   9571 1461088   42931      0     60
sr0        0      0       0       0      0      0       0       0      0      0

4) swap file, zswap enabled, max_pool_percent = 100

[root@user-vm user]# free -m ; vmstat ; vmstat -d
              total        used        free      shared  buff/cache   available
Mem:            485         345          10          32         129          56
Swap:           511         410         101
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0 420712  10916   2316 130520    1   11  3660   492  478  549  3  4 91  2  0
disk- ------------reads------------ ------------writes----------- -----IO------
       total merged sectors      ms  total merged sectors      ms    cur    sec
sda   221920   1214 10922082  169369   8445   9570 1468552   28488      0     56
sr0        0      0       0       0      0      0       0       0      0      0

5) zram swap, zswap disabled

[root@user-vm user]# free -m ; vmstat ; vmstat -d
              total        used        free      shared  buff/cache   available
Mem:            485         333           4          34         147          72
Swap:           499         314         185
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 5  0 324128   7256   1192 149444  153  365  1658   471  326  457  3  2 93  2  0
disk- ------------reads------------ ------------writes----------- -----IO------
       total merged sectors      ms  total merged sectors      ms    cur    sec
sda   130703    884 5047298  112889   4197   9517 1433832   21037      0     37
sr0        0      0       0       0      0      0       0       0      0      0
zram0  58673      0  469384     271 138745      0 1109960     927      0      1

6) zram swap, zswap enabled, max_pool_percent = 20

[root@user-vm user]# free -m ; vmstat ; vmstat -d
              total        used        free      shared  buff/cache   available
Mem:            485         338           5          32         141          65
Swap:           499         355         144
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0 364984   7584    904 143572   33  166  2052   437  354  457  3  3 93  2  0
disk- ------------reads------------ ------------writes----------- -----IO------
       total merged sectors      ms  total merged sectors      ms    cur    sec
sda   166168    998 6751610  120911   4383   9543 1436080   18916      0     42
sr0        0      0       0       0      0      0       0       0      0      0
zram0  13819      0  110552      78  68164      0  545312     398      0      0

7) no swap

Note that firefox is not running in this experiment at the time of recording these stats.

[root@user-vm user]# free -m ; vmstat ; vmstat -d
              total        used        free      shared  buff/cache   available
Mem:            485         289          68           8         127         143
Swap:             0           0           0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0  70108  10660 119976    0    0 13503   286  607  618  2  5 88  5  0
disk- ------------reads------------ ------------writes----------- -----IO------
       total merged sectors      ms  total merged sectors      ms    cur    sec
sda   748978   3511 66775042  595064   4263   9334 1413728   23421      0    164
sr0        0      0       0       0      0      0       0       0      0      0

8) swap file, zswap enabled, max_pool_percent = 1

[root@user-vm user]# free -m ; vmstat ; vmstat -d
              total        used        free      shared  buff/cache   available
Mem:            485         292           7          63         186          90
Swap:           511         249         262
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0 255488   7088   2156 188688   43  182  1417   606  298  432  3  2 94  2  0
disk- ------------reads------------ ------------writes----------- -----IO------
       total merged sectors      ms  total merged sectors      ms    cur    sec
sda   132222   9573 4796802  114450  10171  77607 2050032  137961      0     41
sr0        0      0       0       0      0      0       0       0      0      0

9) swap file (300 M), zswap enabled, max_pool_percent = 100

Firefox was stuck and the system still read from disk furiously.
The baseline for this experiment is different since a new swap file has been written:

              total        used        free      shared  buff/cache   available
Mem:            485         280           8           8         196         153
Swap:           299           0         299
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0   8948   3400 198064    0    0  1186   653  249  388  2  2 95  1  0
disk- ------------reads------------ ------------writes----------- -----IO------
       total merged sectors      ms  total merged sectors      ms    cur    sec
sda   103099    688 3610794   68253   3837   8084 1988936   20306      0     27
sr0        0      0       0       0      0      0       0       0      0      0

Specifically, an extra 649384 sectors have been written as a result of this change.

State after the experiment:

[root@user-vm user]# free -m ; vmstat ; vmstat -d
              total        used        free      shared  buff/cache   available
Mem:            485         335          32          47         118          53
Swap:           299         277          22
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 7  1 283540  22912   2712 129132    0    0 83166   414 2387 1951  2 23 62 13  0
disk- ------------reads------------ ------------writes----------- -----IO------
       total merged sectors      ms  total merged sectors      ms    cur    sec
sda  3416602  26605 406297938 4710584   4670   9025 2022272   33805      0    521
sr0        0      0       0       0      0      0       0       0      0      0

Subtracting the extra 649384 written sectors from 2022272 results in 1372888. This is less than 1433000 (see later), which is probably because of firefox not loading fully.

I also ran a few experiments with low swappiness values (10 and 1) and they all got stuck in a frozen state with excessive disk reads, preventing me from recording the final memory stats.

Observations:

Subjectively, high max_pool_percent values resulted in sluggishness.
Subjectively, the system in experiment 9 was so slow as to be unusable.
High max_pool_percent values result in the least amount of writes, whereas very low values of max_pool_percent result in the most writes.
Experiments 5 and 6 (zram swap) suggest that firefox wrote data that resulted in about 62000 sectors written to disk. Anything above about 1433000 are sectors written due to swapping. See the following table. If we assume the lowest number of read sectors among the experiments to be the baseline, we can compare the experiments based on how many extra read sectors due to swapping they caused.

Written sectors as a direct consequence of swapping (approx.):

650000   1) swap file, zswap disabled
320000   2) swap file, zswap enabled, max_pool_percent = 20
 30000   3) swap file, zswap enabled, max_pool_percent = 70
 40000   4) swap file, zswap enabled, max_pool_percent = 100
     0   5) zram swap, zswap disabled
     0   6) zram swap, zswap enabled, max_pool_percent = 20
-20000   7) no swap (firefox crashed)
620000   8) swap file, zswap enabled, max_pool_percent = 1
-60000   9) swap file (300 M), zswap enabled, max_pool_percent = 100 (firefox crashed)

Extra read sectors as a direct consequence of swapping (approx.):

    51792   1) swap file, zswap disabled
   354072   2) swap file, zswap enabled, max_pool_percent = 20
  6113640   3) swap file, zswap enabled, max_pool_percent = 70
  6125280   4) swap file, zswap enabled, max_pool_percent = 100
   250496   5) zram swap, zswap disabled
  1954808   6) zram swap, zswap enabled, max_pool_percent = 20
 61978240   7) no swap
        0   8) swap file, zswap enabled, max_pool_percent = 1 (baseline)
401501136   9) swap file (300 M), zswap enabled, max_pool_percent = 100

Interpretation of results:

This is subjective and also specific to the usecase at hand; behavior will vary in other usecases.

Zswap's page pool takes away space in RAM that can otherwise be used by the system's page cache (for file-backed pages), which means that the system repeatedly throws away file-backed pages and reads them again when needed, resulting in excessive reads.
The high number of reads in experiment 7 is caused by the same problem - the system's anonymous pages took most of the RAM and file-backed pages had to be repeatedly read from disk.

It might be possible under certain circumstances to minimize the amount of data written to the swap disk to near zero using zswap, but it is evidently not suited for this task.

It is not possible to have "completely compressed RAM" as the system needs a certain amount of non-swap pages to reside in RAM for operation.

Personal opinions and anecdotes:

The main improvement of zswap in terms of disk writes is not the fact that it compresses the pages but the fact that it has its own buffering & caching system that reduces the page cache and effectively keeps more anonymous pages (in compressed form) in RAM. (However, based on my subjective experience as I use Linux daily, a system with swap and zswap with the default values of swappiness and max_pool_percent always behaves better than any swappiness value with no zswap, or than zswap with high values of max_pool_percent.)

Low swappiness values seem to make the system behave better until the amount of page cache left is so small as to render the system unusable due to excessive disk reads. Similar with too high max_pool_percent.

Either use solely zram swap and limit the amount of anonymous pages you need to hold in memory, or use disk-backed swap with zswap with approximately default values for swappiness and max_pool_percent.

EDIT: Possible future work to answer the finer points of your question would be to find out for your particular usecase how the zsmalloc allocator used in zram compares compression-wise with the zbud allocator used in zswap. I'm not going to do that, though, just pointing out things to search for in docs/on the internet.

EDIT 2: echo "zsmalloc" > /sys/module/zswap/parameters/zpool switches zswap's allocator from zbud to zsmalloc.
Continuing with my test fixture for the above experiments and comparing zram with zswap+zsmalloc, it seems that as long as the swap memory needed is the same as either a zram swap or as zswap's max_pool_percent, the amount of reads and writes to disk is very similar between the two.

In my personal opinion based on the facts, as long as the amount of zram swap I need is smaller than the amount of zram swap I can afford to actually keep in RAM, then it is best to use solely zram; and once I need more swap than I can actually keep in memory, it is best to either change my workload to avoid it, or to disable zram swap and use zswap with zsmalloc and set max_pool_percent to the equivalent of what zram previously took in memory (size of zram * compression ratio).

I currently don't have the time to do a proper writeup of these additional tests, though.
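For reference, the bookkeeping behind the tables above - subtracting a baseline from the raw vmstat -d sector counts - is plain integer arithmetic. A minimal sketch with the numbers from experiment 9:

```shell
# Sector counts copied from experiment 9's `vmstat -d` output above.
total_written=2022272    # sda total sectors written after the experiment
swapfile_setup=649384    # extra sectors written while creating the new 300 M swap file
swap_written=$((total_written - swapfile_setup))
echo "$swap_written"     # sectors attributable to the experiment itself -> 1372888
```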
Prevent zram LRU inversion with zswap and max_pool_percent = 100
1,414,755,363,000
I am running Ubuntu on a local PC with the following linux distro/kernel:

$ lsb_release -a
>> ubuntu 16.04.3 LTS
$ uname -r
>> 4.10.0-33-generic

I have a python (3.5) script which calls environment variables via the os package. For the sake of simplicity, let's use the following script, test_script.py:

import os
MY_VAR = os.environ['MY_VAR']
print(MY_VAR)

When I run this script from terminal:

$ python test_script.py
>> File "test-script.py", line 3, in <module>
>>   MY_VAR = os.environ['MY_VAR']
>> File "/home/USER/anaconda3/lib/python3.6/os.py", line 669, in __getitem__
>>   raise KeyError(key) from None
>> KeyError: 'MY_VAR'

ATTEMPT 1
Reference: [1][4]

$ MY_VAR=123
$ export MY_VAR
$ echo $MY_VAR
>> 123
$ python test_script.py
>> 123

Success! ... until I close terminal and reopen terminal. When I do that:

$ python test_script.py
>> File "test-script.py", line 3, in <module>
>>   MY_VAR = os.environ['MY_VAR']
>> File "/home/USER/anaconda3/lib/python3.6/os.py", line 669, in __getitem__
>>   raise KeyError(key) from None
>> KeyError: 'MY_VAR'

ATTEMPT 2
Reference: [2]

To the end of /home/USER/.profile, I add the following lines:

# my variable
MYVAR=123

Save. Confirm saved.

$ python test_script.py
>> File "test-script.py", line 3, in <module>
>>   MY_VAR = os.environ['MY_VAR']
>> File "/home/USER/anaconda3/lib/python3.6/os.py", line 669, in __getitem__
>>   raise KeyError(key) from None
>> KeyError: 'MY_VAR'

ATTEMPT 3
Reference: [2]

To the end of /etc/profile, I add the following lines:

# my variable
MYVAR=123

Save. Confirm saved.

$ python test_script.py
>> File "test-script.py", line 3, in <module>
>>   MY_VAR = os.environ['MY_VAR']
>> File "/home/USER/anaconda3/lib/python3.6/os.py", line 669, in __getitem__
>>   raise KeyError(key) from None
>> KeyError: 'MY_VAR'

ATTEMPT 4
Reference: [2]

Create myvar.sh in /etc/profile.d/
Add the following line:

MYVAR=123

Save. Confirm saved.
$ python test_script.py
>> File "test-script.py", line 3, in <module>
>>   MY_VAR = os.environ['MY_VAR']
>> File "/home/USER/anaconda3/lib/python3.6/os.py", line 669, in __getitem__
>>   raise KeyError(key) from None
>> KeyError: 'MY_VAR'

ATTEMPT 5
Reference: [2][3]

To the end of /etc/environment, I add the following line:

MYVAR=123

Save. Confirm saved.

$ python test_script.py
>> File "test-script.py", line 3, in <module>
>>   MY_VAR = os.environ['MY_VAR']
>> File "/home/USER/anaconda3/lib/python3.6/os.py", line 669, in __getitem__
>>   raise KeyError(key) from None
>> KeyError: 'MY_VAR'

Please help! I don't understand what I'm doing wrong here.

How to set environment variables permanently for one user
Permanent Environment Variable for all users
How to permanently set environmental variables
How do I set a user environment variable? (permanently, not session)
You should use the approaches in attempt 3 or 4, but you need to export the variable; change

MYVAR=123

to

export MYVAR=123
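The difference export makes is easy to demonstrate in any shell: a plain assignment stays local to the current shell, while an exported variable is copied into the environment of child processes - which is exactly where Python's os.environ looks. A minimal sketch:

```shell
MYVAR=123                                  # plain assignment: shell-local only
child1=$(sh -c 'echo "${MYVAR:-unset}"')   # a child process cannot see it
export MYVAR                               # now copied into the environment
child2=$(sh -c 'echo "${MYVAR:-unset}"')   # the child sees the value
echo "$child1 $child2"                     # -> unset 123
```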
Set persistent environment variable for all users
1,414,755,363,000
Performance Best Practices for MongoDB implies that: Most file systems will maintain metadata for the last time a file was accessed. While this may be useful for some applications, in a database it means that the file system will issue a write every time the database accesses a page, which will negatively impact the performance and throughput of the system. For mongoDB installation I need to disable access time on my Debian, how to do that?
To disable the writing of access times, you need to mount the filesystem(s) in question with the noatime option. To mount an already mounted filesystem with the noatime option, do the following:

mount /home -o remount,noatime

To make the change permanent, update your /etc/fstab and add noatime to the options field. For example:

Before:
/dev/mapper/sys-home /home xfs nodev,nosuid 0 2

After:
/dev/mapper/sys-home /home xfs nodev,nosuid,noatime 0 2
how to disable access time settings in Debian linux
1,414,755,363,000
All of the tools I've tried until now were only capable of creating a dual (GPT & MBR) partition table, where the first 4 of the GPT partitions were mirrored to a compatible MBR partition. This is not what I want. I want a pure GPT partition table, i.e. one where there is no MBR table on the disk, and thus also no synchronizing between them. Is it somehow possible?
TO ADDRESS YOUR EDIT:

I didn't notice the edit to your question until just now. As written now, the question is altogether different than when I first answered it. The mirror you describe is not in the spec, actually, as it is instead a rather dangerous and ugly hack known as a hybrid-MBR partition format. This question makes a lot more sense now - it's not silly at all, in fact.

The primary difference between a GPT disk and a hybrid MBR disk is that a GPT's MBR will describe the entire disk as a single MBR partition, while a hybrid MBR will attempt to hedge for (extremely ugly) compatibility's sake and describe only the area covered by the first four partitions. The problem with that situation is that the hybrid-MBR's attempts at compatibility completely defeat the purpose of GPT's Protective MBR in the first place. As noted below, the Protective MBR is supposed to protect a GPT disk from stupid applications, but if some of the disk appears to be unallocated to those, all bets are off.

Don't use a hybrid-MBR if it can be at all helped - which, if on a Mac, means don't use the default Bootcamp configuration. In general, if looking for advice on EFI/GPT-related matters, go nowhere else (excepting maybe a slight detour here first) but to rodsbooks.com.

ahem...

This (used to be) kind of a silly question - I think you're asking how to partition a GPT disk without a Protective MBR. The answer to that question is you cannot - because GPT is a disk partition table format standard, and that standard specifies a protective MBR positioned at the head of the disk. See?

What you can do is erase the MBR or overwrite it - it won't prevent most GPT-aware applications from accessing the partition data anyway, but the reason it is included in the specification is to prevent non-GPT-aware applications from screwing with the partition table.
It prevents this by just reporting that the entire disk is a single MBR-type partition already, and nobody should try writing a filesystem to it because it is already allocated space. Removing the MBR removes that protection.

In any case, here's how:

This creates a 4G ./img file full of NULs...

</dev/zero >./img \
dd ibs=4k obs=4kx1k count=1kx1k

1048576+0 records in
1024+0 records out
4294967296 bytes (4.3 GB) copied, 3.38218 s, 1.3 GB/s

This writes a partition table to it - to include the leading Protective MBR. Each of printf's arguments is followed by a \newline and written to gdisk's stdin. gdisk interprets the commands as though they were typed at it interactively and acts accordingly, to create two GPT partition entries in the GUID Partition Table it writes to the head of our ./img file. All terminal output is dumped to >/dev/null (because it's a lot and we'll be having a look at the results presently anyway).

printf %s\\n o y n 1 '' +750M ef00 \
             n 2 '' '' '' ''       \
             w y | >/dev/null      \
gdisk ./img

This gets pr's four-columned formatted representation of the offset-accompanied strings in the first 2K of ./img.

<./img dd count=4 | strings -1 -td | pr -w100 -t4

4+0 records in
4+0 records out
2048 bytes (2.0 kB) copied, 7.1933e-05 s, 28.5 MB/s

 451 *            1033 K         1094 t        1212 n
 510 U            1037 >         1096 e        1214 u
 512 EFI PART     1039 ;@fY      1098 m        1216 x
 524 \            1044 30        1153 =        1218
 529 P            1047 L         1158 rG       1220 f
 531 (            1050 E         1161 y=i      1222 i
 552 "            1065 w         1165 G}       1224 l
 568 V            1080 E         1170 $U.b     1226 e
 573 G            1082 F         1175 N        1228 s
 575 G            1084 I         1178 C        1230 y
 577 y            1086           1180 b        1232 s
 583 G            1088 S         1185 x        1234 t
 602 Ml           1090 y         1208 L        1236 e
1024 (s*          1092 s         1210 i        1238 m

You can see where the MBR ends there, yeah? Byte 512.

This writes 512 spaces over the first 512 bytes in ./img.

<>./img >&0 printf %0512s

And now for the fruits of our labor. This is an interactive run of gdisk on ./img.
gdisk ./img
GPT fdisk (gdisk) version 1.0.0

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with corrupt MBR; using GPT and will write new protective MBR on save.

Command (? for help): p
Disk ./img: 8388608 sectors, 4.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 0528394A-9A2C-423B-9FDE-592CB74B17B3
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 8388574
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1538047   750.0 MiB   EF00  EFI System
   2         1538048         8388574   3.3 GiB     8300  Linux filesystem
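If you want to play with the 512-byte overwrite without the <>./img >&0 redirection trick, the same wipe can be sketched with dd conv=notrunc on a throwaway scratch file (the mktemp file here is just a stand-in for ./img):

```shell
img=$(mktemp)
printf 'X%.0s' $(seq 1 2048) > "$img"        # a 2 KiB scratch image full of 'X'
# Overwrite only the first 512 bytes (where the protective MBR lives) with spaces;
# conv=notrunc keeps the rest of the file intact.
printf '%512s' '' | dd of="$img" bs=512 count=1 conv=notrunc 2>/dev/null
leftover=$(head -c 512 "$img" | tr -d ' ' | wc -c)   # non-space bytes left in sector 0
byte513=$(tail -c +513 "$img" | head -c 1)           # first byte after the wiped region
echo "leftover=$leftover byte513=$byte513"           # -> leftover=0 byte513=X
rm -f "$img"
```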
How to construct a GPT-only partition table on Linux?
1,414,755,363,000
After making a change to my php.ini file I got the error messages shown below.

vim /etc/php.ini

; Maximum amount of memory a script may consume (128MB)
; http://www.php.net/manual/en/ini.core.php#ini.memory-limit
memory_limit = 1536

Apache starts, but it won't serve any of my virtual hosts, which it was doing previously. I am not seeing any php error listed anywhere. I am not sure what I need to do to fix this.

[Thu Apr 30 08:29:06 2015] [notice] caught SIGTERM, shutting down
[Thu Apr 30 08:29:07 2015] [warn] Init: Name-based SSL virtual hosts only work for clients with TLS server name indication support (RFC 4366)
[Thu Apr 30 08:29:07 2015] [notice] Digest: generating secret for digest authentication ...
[Thu Apr 30 08:29:07 2015] [notice] Digest: done
[Thu Apr 30 08:29:07 2015] [warn] Init: Name-based SSL virtual hosts only work for clients with TLS server name indication support (RFC 4366)
[Thu Apr 30 08:29:07 2015] [notice] Apache/2.2.15 (Unix) PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations
[Thu Apr 30 08:29:12 2015] [notice] child pid 35160 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:12 2015] [notice] child pid 35161 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:12 2015] [notice] child pid 35163 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:13 2015] [notice] child pid 35164 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:14 2015] [notice] child pid 35162 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:17 2015] [notice] child pid 35167 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:20 2015] [notice] child pid 35166 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:20 2015] [notice] child pid 35205 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:22 2015] [notice] child pid 35206 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:24 2015] [notice] child pid 35207 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:24 2015] [notice] child pid 35208 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:27 2015] [notice] child pid 35165 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:29 2015] [notice] child pid 35214 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:39 2015] [notice] child pid 35229 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:44 2015] [notice] child pid 35230 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:44 2015] [notice] child pid 35231 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:49 2015] [notice] child pid 35242 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:50 2015] [notice] child pid 35241 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:52 2015] [notice] child pid 35213 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:52 2015] [notice] child pid 35215 exit signal Segmentation fault (11)
[Thu Apr 30 08:29:52 2015] [notice] child pid 35262 exit signal Segmentation fault (11)
It was a simple syntax issue.

vim /etc/php.ini

; Maximum amount of memory a script may consume (128MB)
; http://www.php.net/manual/en/ini.core.php#ini.memory-limit
memory_limit = 1536

The problem was with the line I had changed; it should have been:

memory_limit = 1536M

If you don't specify the suffix indicating the memory allocation, it does memory allocation by default in bytes. So each process that Apache attempts to start ends up running out of memory before it can load properly, hence the Seg Fault.

"This sets the maximum amount of memory in bytes that a script is allowed to allocate."
http://php.net/manual/en/ini.core.php#ini.memory-limit

I am posting this answer because, after googling for 20 minutes in panic trying to find out what was happening, I did not find a single clearly explained solution to this problem.
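To see just how big the gap is, PHP's shorthand-byte parsing (no suffix means plain bytes; K/M/G multiply by powers of 1024, per the PHP docs) can be approximated in shell - to_bytes here is a hypothetical helper, not anything PHP ships:

```shell
# Rough approximation of how PHP interprets shorthand byte values in ini settings.
to_bytes() {
    case $1 in
        *K) echo $(( ${1%K} * 1024 )) ;;
        *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
        *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
        *)  echo "$1" ;;                 # no suffix: plain bytes
    esac
}
to_bytes 1536     # what was configured: 1536 bytes
to_bytes 1536M    # what was meant: 1610612736 bytes (1.5 GiB)
```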
Apache and php not working child pid xxx exit signal Segmentation fault (11)
1,414,755,363,000
I have a large folder with 30M small files. I hope to back up the folder into 30 archives; each tar.gz file will have 1M files. The reason to split into multiple archives is that untarring one single large archive would take a month. Piping tar to split also won't work, because when untarring the file I would have to cat all the archives together. Also, I hope not to mv each file to a new dir, because even ls is very painful for this huge folder.
I wrote this bash script to do it. It basically forms an array containing the names of the files to go into each tar, then starts tar in parallel on all of them. It might not be the most efficient way, but it will get the job done as you want. I can expect it to consume large amounts of memory though.

You will need to adjust the options in the start of the script. You might also want to change the tar options cvjf in the last line (like removing the verbose output v for performance or changing compression j to z, etc ...).

Script

#!/bin/bash

# User configuration
#===================
files=(*.log)               # Set the file pattern to be used, e.g. (*.txt) or (*)
num_files_per_tar=5         # Number of files per tar
num_procs=4                 # Number of tar processes to start
tar_file_dir='/tmp'         # Tar files dir
tar_file_name_prefix='tar'  # Prefix for tar file names
tar_file_name="$tar_file_dir/$tar_file_name_prefix"

# Main algorithm
#===============
# Number of tar files to create; rounded up so a short final group of leftover
# files still gets archived.
num_tars=$(( (${#files[@]} + num_files_per_tar - 1) / num_files_per_tar ))
tar_files=()   # will hold the names of files for each tar
tar_start=0    # gets updated to where each tar starts

# Loop over the files adding their names to be tared
for i in `seq 0 $((num_tars-1))`
do
    tar_files[$i]="$tar_file_name$i.tar.bz2 ${files[@]:tar_start:num_files_per_tar}"
    tar_start=$((tar_start+num_files_per_tar))
done

# Start tar in parallel for each of the strings we just constructed.
# -L1 feeds xargs one line (= one tarball name plus its file list) per tar
# invocation, which also copes with a shorter final group.
printf '%s\n' "${tar_files[@]}" | xargs -L1 -P"$num_procs" tar cjvf

Explanation

First, all the file names that match the selected pattern are stored in the array files. Next, the for loop slices this array and forms strings from the slices. The number of the slices is equal to the number of the desired tarballs. The resulting strings are stored in the array tar_files. The for loop also adds the name of the resulting tarball to the beginning of each string.
The elements of tar_files take the following form (assuming 5 files/tarball):

tar_files[0]="tar0.tar.bz2 file1 file2 file3 file4 file5"
tar_files[1]="tar1.tar.bz2 file6 file7 file8 file9 file10"
...

In the last line of the script, xargs is used to start multiple tar processes (up to the maximum specified number) where each one will process one element of the tar_files array in parallel.

Test

List of files:

$ ls
a c e g i k m n p r t
b d f h j l o q s

Generated Tarballs:

$ ls /tmp/tar*
tar0.tar.bz2 tar1.tar.bz2 tar2.tar.bz2 tar3.tar.bz2
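The core idiom the script relies on - slicing a bash array with ${array[@]:offset:length} - can be sketched in isolation to show how the file list gets chunked (including a short final group):

```shell
# Chunk a bash array into groups of $per elements, bracketing each group.
files=(f1 f2 f3 f4 f5 f6 f7)   # stand-in for the real file list
per=3
out=""
start=0
while [ "$start" -lt "${#files[@]}" ]; do
    out="$out[${files[*]:start:per}]"   # slice: $per elements starting at $start
    start=$((start + per))
done
echo "$out"    # -> [f1 f2 f3][f4 f5 f6][f7]
```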
how to create multi tar archives for a huge folder
1,414,755,363,000
Is it possible to have bluetooth turned on and use a bluetooth keyboard at the login screen? So far I have only managed to start the bluetooth daemon when logged in. I added it to systemd with systemctl enable bluetooth, so it starts when I am in my user session, although it is turned off by default, which I'd like to fix as well. I installed the bluez and bluez-utils packages, which provide the bluetoothctl utility. Also I am using blueman as a front-end, if that is important.
Like most of the time, I didn't read the Arch Wiki carefully enough. There is a section on how to have the device active after booting. You need to set a udev rule; to do so, create /etc/udev/rules.d/10-local.rules with the following code:

# Set bluetooth power up
ACTION=="add", KERNEL=="hci0", RUN+="/usr/bin/hciconfig hci0 up"

That's it... it should now work, even without X running.
Turn on bluetooth on login screen
1,414,755,363,000
I have a set of linux folders and I need to get the permissions of the folders numerically. For example, for the below directory I need to know the permission value of the folder - whether it is 755, 644, 622, etc.

drwxrwsr-x 2 dev puser 4096 Jul 7 2014 fonts
To get the octal permission notation:

stat -c "%a" file
644

See the manpage of stat; -c specifies the format and %a prints the permissions in octal.

Or for multiple files and folders:

stat -c "%a %n" *
755 dir
644 file1
600 file2
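A quick self-contained check of the %a format on a scratch file (the mode 640 is just an example):

```shell
tmp=$(mktemp)            # throwaway file to inspect
chmod 640 "$tmp"
stat -c "%a" "$tmp"      # prints: 640
rm -f "$tmp"
```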
How to get file permission in octal [duplicate]
1,414,755,363,000
I am having a problem on the server (CentOS 6, Plesk 11.5) where a particular user is using a mass mailer and is blacklisting our IP address. I have tried to delete this user using:

/usr/sbin/userdel test

but it returns a message saying that the user is currently logged in. I thought ok, kill the process. So I tried:

pkill -u test

and also locked the account using:

passwd -l test

which will hopefully stop him logging into the system in future. Still saying user is logged in. How can I log this user out to enable me to delete him off the system?
First, find all of the 'test' user's processes and kill them, then delete the user:

pgrep -u test
ps -fp $(pgrep -u test)
killall -KILL -u test
userdel -r test
Log out a user and delete the account
1,414,755,363,000
I am writing code that relies on the output of /proc/meminfo, /proc/cpuinfo etc. Are the file contents always in English? For example, will MemTotal in /proc/meminfo always be MemTotal in all locales?
Yes, usually that is the case, as those messages are provided by the kernel itself, and including a hundred translations into the kernel image itself would serve no purpose other than increasing the kernel size dramatically. For many things there are front-ends, user space programs which read the kernel info and present it in a translated fashion.
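Because the key names come straight from the kernel, you can parse them without worrying about the locale. A minimal sketch pulling MemTotal out of /proc/meminfo:

```shell
# MemTotal is always spelled this way, regardless of the system locale.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "MemTotal: $mem_kb kB"
```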
Are the outputs of /proc/meminfo, /proc/cpuinfo etc always in English?
1,414,755,363,000
I'm wondering if there is some way to prevent certain devices from becoming the output file of the dd command and the target of the fdisk command. I'm currently using the two operations to write a bootloader, kernel, and root filesystem to an SD card, which appears as /dev/sdd. I'm always a little anxious that I'll mix up sdd with sdb or sda, since the letters A and D are close on the keyboard, and I would like to find a way to prevent commands with this format:

dd if=/dev/sd[a-zA-Z0-9]* of=/dev/sd[ab]

or

fdisk /dev/sd[ab]
You might try writing a udev rule to give the supplemental HDD(s) sufficiently unique names.

Another idea: whenever you can phrase a security requirement as "It's not who's doing it, it's how they're doing it", you're talking about type enforcement, and in most Linux distros TE is done at the MAC level. Most of my MAC experience is with SELinux.

You can't lock it down at the DAC level, otherwise you wouldn't be able to perform I/O on the device (not necessarily a failing of DAC as a security model; it's just that current DAC policy is solely identity based, so all programs running under a particular identity get identical rights with no additional administrative expression possible). Locking it down at the MAC level can be done so that regular user space components can't do anything with the block file but your root utilities and certain parts of the platform can.

On Fedora this is already kind of the case, with block devices showing up with the SELinux type fixed_disk_device_t and grub having bootloader_exec_t; see the following example:

[root@localhost ~]# ls -lhZ $(which grub2-install)
-rwxr-xr-x. root root system_u:object_r:bootloader_exec_t:s0 /sbin/grub2-install
[root@localhost ~]# ls -lhZ /dev/sda
brw-rw----+ root disk system_u:object_r:fixed_disk_device_t:s0 /dev/sda
[root@localhost ~]# sesearch --allow | egrep bootloader | grep fixed
allow bootloader_t fixed_disk_device_t : lnk_file { read getattr } ;
allow bootloader_t fixed_disk_device_t : chr_file { ioctl read write getattr lock append open } ;
allow bootloader_t fixed_disk_device_t : blk_file { ioctl read write getattr lock append open } ;
[root@localhost ~]#

Whereas dd has a regular bin_t label:

[root@localhost ~]# ls -lhZ $(which dd)
-rwxr-xr-x. root root system_u:object_r:bin_t:s0 /bin/dd

bin_t (apparently) can still write to block devices, but creating a new file context type for fdisk and dd and writing an SELinux rule to disallow the new type from accessing fixed_disk_device_t shouldn't be too difficult. You would just need to make it so regular user roles can't do it but users with sysadm_t can, then remember to just do a newrole -r root:sysadm_r before you try to re-partition the disk or do a dd over the block device (which shouldn't be a huge deal since it's not like you run fdisk every day all day long).

Probably more work than you were looking for, but TE is the mechanism that solves the general problem you're running into. Personally, the udev rule is probably your safest bet. I only mention the TE stuff in case you're interested in solving a larger set of problems similar to this one.
Protecting Devices from dd and fdisk Commands
1,414,755,363,000
I have an ASUS RT-N16 router that I've flashed with the open-source DD-WRT firmware. According to my ssh login, I'm running:

DD-WRT v24-sp2 mega (c) 2010 NewMedia-NET GmbH
Release: 08/07/10 (SVN revision: 14896)

I'd like to be able to customize the iptables rules, but before I do that, I'd like to see the output of the built-in rules that get configured when manipulating the browser/GUI interface settings. I am aware of the firewall script tab in the browser interface for entering custom firewall rules, but I can't find someplace to see the output.

On a full-blown Linux system, the iptables rules would be stored somewhere like /etc/sysconfig/iptables. Where would I find these on a DD-WRT filesystem? I can do

iptables -L -vn --line-numbers

and see them output, but what I'm looking for is more of what the iptables-save command might output... so that I can incorporate the appropriate rules into my custom script. I understand that this build does not have an iptables-save command. I don't necessarily want the command itself, just output that it generates. If there was something like /etc/sysconfig/iptables, I wouldn't care about having iptables-save. I've seen that there may be different builds of DD-WRT that give something like iptables-save, but I'm not at the point where I'm ready or willing to flash the router again. Maybe as a last resort.

EDIT: The usual Linux locations for startup scripts and the like (e.g., /etc/init.d, /etc/rc, ...) do not seem to have anything useful (at least in the build of DD-WRT that I have installed). For example, taking a look in /etc/init.d:

[/etc/init.d]# ll
-rwxr-xr-x 1 root root 84 Aug 7 2010 rcS
-rwxr-xr-x 1 root root 10 Aug 7 2010 S01dummy
[/etc/init.d]# cat rcS
#!/bin/sh
for i in /etc/init.d/S*; do
        $i start 2>&1
done | logger -s -p 6 -t '' &
[/etc/init.d]# cat S01dummy
#!/bin/sh
Looking in /tmp/.ipt and /tmp/.rc_firewall gives exactly what I was looking for: the iptables rules as they would normally be in a file like /etc/sysconfig/iptables. I had earlier found this:

dd if=/dev/mem | strings | grep -i iptables

...and fortunately, it works on the pared-down DD-WRT filesystem. It didn't give precisely what I was looking for, but it output quite a bit of info I hadn't been able to pinpoint any other way (or at least not with a single command). I still have to determine which things are actually in effect by comparing with the output of:

iptables -L -vn --line-numbers
iptables -L -vn -t nat --line-numbers
iptables -L -vn -t mangle --line-numbers

I also discovered that the grep command actually does work [my apologies for initially stating that it didn't -- I would've sworn it didn't work the last times I had tried. Mea maxima culpa.] Using grep, I found that /lib/services.so also has a wealth of iptables configuration in it.
Where is iptables script stored on DD-WRT filesystem?
1,414,755,363,000
In my dmesg this appeared when my window manager (xfwm4, part of XFCE) crashed: xfwm4[3936]: segfault at 7f3c7c523770 ip 00007f3c7c523770 sp 00007ffffea1ee28 error 15 in SYSV00000000 (deleted)[7f3c7c4e8000+60000] The same SYSV00000000 also appears in other places (like lsof). So, what is this SYSV00000000? I Googled around and found that it's related to virtual memory, but not much else.
The kernel is telling you that when the segfault occurred, the instruction pointer 0x7f3c7c523770 was in a SysV IPC shm segment. The shared memory segment started at 0x7f3c7c4e8000 and was 0x60000 bytes long. SysV shm segments are not backed by a file, so the string SYSV00000000 appears where normally you'd get the filename of the executable or library where the segfault occurred. As a result this log line gives us no real useful information. If you want any hope of tracing the cause of the crash, you need the core dump. I suspect that the instruction pointer wasn't supposed to be in there at all. It's pretty weird to load executable code into a SysV shm segment. But I haven't seen any XFCE code, so what looks weird to me might be normal there. You can learn the basics about SysV shm, assuming you have a decent grasp of the basics of memory management, by reading these man pages:

man svipc
man shmget
man shmat

Run the ipcs command to see what SysV IPC resources are currently allocated. ipcs -m limits the list to just the shared memory segments.
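To see where the SYSV00000000 string comes from, here is a minimal sketch (Linux-only, calling glibc through ctypes; the IPC constant values are hard-coded to their Linux values and are an assumption of this example) that creates and attaches a private SysV shm segment, then scans its own /proc/self/maps -- the attached segment shows up with exactly that pseudo-filename:

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

# Linux values for the SysV IPC constants (see <sys/ipc.h>, <sys/shm.h>)
IPC_PRIVATE = 0
IPC_CREAT = 0o1000
IPC_RMID = 0

# Create a private 0x60000-byte segment, the same size as the one in the log
shmid = libc.shmget(IPC_PRIVATE, 0x60000, IPC_CREAT | 0o600)

libc.shmat.restype = ctypes.c_void_p
addr = libc.shmat(shmid, None, 0)  # let the kernel pick the attach address

# The attachment appears in our own memory map as "/SYSV..." -- no real file
with open("/proc/self/maps") as maps:
    sysv_lines = [line.rstrip() for line in maps if "SYSV" in line]
print("\n".join(sysv_lines))

# Detach and mark the segment for removal so it doesn't outlive the process
libc.shmdt(ctypes.c_void_p(addr))
libc.shmctl(shmid, IPC_RMID, None)
```

While the segment is attached, ipcs -m lists it too; the key of an IPC_PRIVATE segment is 0, which is why the name renders as SYSV00000000.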
What is "SYSV00000000"?
1,414,755,363,000
Is there a way to attach to a process and find out, not only which files it is reading from, but what the read locations (byte offsets) of those reads are? I tried using strace for this, but it doesn't seem to show that information. Edit: There's a nice utility for this: apt install progress.
You should be able to tap into /proc/$PID/fdinfo for this purpose. Check out the "phantom progress bar" section in Solving problems with proc. fdinfo tracks a process's current position (the pos field) within each open file.
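As a self-contained illustration (Linux-only; the file is a throwaway temp file), the pos field in /proc/&lt;pid&gt;/fdinfo/&lt;fd&gt; reflects exactly how far a process has read. The file is opened unbuffered here on purpose: a buffered reader would pull in the whole file at once, so the kernel's offset would not match what the program has logically consumed:

```python
import os
import tempfile

# Create a small file with known contents
fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")
os.close(fd)

# Open unbuffered so read(5) maps to a single 5-byte read(2) syscall
f = open(path, "rb", buffering=0)
f.read(5)

# The kernel's idea of the current offset for this descriptor
pos = None
with open("/proc/self/fdinfo/%d" % f.fileno()) as fdinfo:
    for line in fdinfo:
        if line.startswith("pos:"):
            pos = int(line.split()[1])
print(pos)  # 5

f.close()
os.unlink(path)
```

This is the same mechanism the progress utility mentioned in the question uses: it compares pos against the file's size to estimate completion.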
Finding where a process reads in a file
1,414,755,363,000
A friend, using a remote machine, SSHed to my machine and ran the following Python script:

while (1):
....print "hello world"

(this script simply prints 'hello world' continuously). I am now logged in to my machine. How can I see the output of the script my friend was running? If it helps, I can 'spot' the script my friend is using:

me@home:~$ ps aux | grep justprint.py
friend 7494 12.8 0.3 7260 3300 ? Ss 17:24 0:06 python TEST_AREA/justprint.py
friend 7640 0.0 0.0 3320 800 pts/3 S+ 17:25 0:00 grep --color=auto just

What steps should I take in order to view the "hello world" messages on my screen?
You generally can't see the output of another person's program. See over in the TTY column: your grep command is running on tty pts/3, and your friend's is ?, which means it's detached from the terminal. You could see where the output is going with ls -l /proc/7494/fd/ (where 7494 is the process ID of your friend's process) — although if you're not running as root, you probably can't even look, for security reasons. (So try sudo ls -l /proc/7494/fd/.) There are horrible, horrible, kludgy things you might be able to do to change where the output of the program goes. But in general, you can't and shouldn't. If your friend wants to share the output with you, an approach would be to redirect the output of the program to a file, and then make that file readable by you:

$ python -u TEST_AREA/justprint.py > /tmp/justprint.out &
$ chmod a+r /tmp/justprint.out

(Where in this case "readable by you" is "readable by everyone"; with a little more work you can set up a shared group so just the two of you can exchange output.) (And be aware that python buffers output by default — turning that off is what the -u is for.)
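As a quick illustration of the /proc inspection trick (Linux-only, and it only works this smoothly for processes you own), you can launch a child with redirected stdout and read its fd 1 symlink to see where the output goes -- sleep stands in for the long-running script here:

```python
import os
import subprocess
import tempfile

# A throwaway file standing in for /tmp/justprint.out
out = tempfile.NamedTemporaryFile(delete=False)

# Start a child whose stdout is redirected into that file
child = subprocess.Popen(["sleep", "30"], stdout=out)

# /proc/<pid>/fd/1 is a symlink to wherever the child's stdout points
target = os.readlink("/proc/%d/fd/1" % child.pid)
print(target)

child.kill()
child.wait()
out.close()
os.unlink(out.name)
```

For another user's process the readlink fails with a permission error unless you are root, which is exactly why the sudo variant is suggested above.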
View Script Over SSH?
1,414,755,363,000
tldr: Does mtrace still work or am I just doing it wrong? I was attempting to use mtrace and have been unable to get it to write data to a file. I followed the instructions in man 3 mtrace:

t_mtrace.c:

#include <mcheck.h>
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    mtrace();

    for (int j = 0; j < 2; j++)
        malloc(100);    /* Never freed--a memory leak */

    calloc(16, 16);     /* Never freed--a memory leak */
    exit(EXIT_SUCCESS);
}

Then running this in bash:

gcc -g t_mtrace.c -O0 -o t_mtrace
export MALLOC_TRACE=/tmp/t
./t_mtrace
mtrace ./t_mtrace $MALLOC_TRACE

but the file /tmp/t (or any other file I attempt to use) is not created. When I create an empty file with that name it remains zero length. I've tried using relative paths in MALLOC_TRACE. I tried adding setenv("MALLOC_TRACE", "/tmp/t", 1); inside the program before the mtrace() call. I've tried adding muntrace() before the program terminates. I've tried these tactics on Ubuntu 22.04 and Fedora 39, and I get the same result: the trace file is empty. The ctime and mtime on the file (if I create it in advance) are unchanged when I run the program. I've verified the permissions of the file and its parent directory are read/writable. strace isn't showing that the file in question is stat'ed, much less opened. This occurs using glibc 2.35 on Ubuntu and 2.38 on Fedora. This isn't a question on how to profile or check for memory leaks. I realize I can do this with valgrind or half a dozen other programs; this is mostly a curiosity and me wanting to know if this is a bug that might need to be patched or whether the man page needs updating (or whether I'm just misapprehending something and the only problem is sitting in my chair).
mtrace still works, but the man page is outdated. The reference documentation explains how to use it:

LD_PRELOAD=/usr/lib64/libc_malloc_debug.so.0 MALLOC_TRACE=/tmp/t ./t_mtrace

(replace with the appropriate path to libc_malloc_debug.so on your system — the above is appropriate for Fedora and derivatives on 64-bit x86; on Debian derivatives, use LD_PRELOAD=/lib/$(gcc -print-multiarch)/libc_malloc_debug.so.0). In version 2.34 of the GNU C library, memory allocation debugging was moved to a separate library, which must be pre-loaded when debugging is desired.
Does mtrace() still work in modern distros?
1,414,755,363,000
On Linux, Firefox is listening on UDP ports, usually 30000 and higher. What is the reason for this, and why is it bound not just to localhost but to 0.0.0.0, i.e. the interface exposed to the network as well?
UDP is not connection-based, so both ends have to be listening for two-way communication. Thus if Firefox wants to receive any responses from UDP services it is talking to, it needs to have open ports bound to a routable interface. Since Firefox 88, HTTP/3 has been enabled by default, using UDP for web browsing with servers that support it. DNS lookups may involve remote UDP requests from Firefox (but not usually on Linux systems). Many P2P systems like WebRTC also use UDP.
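You can reproduce this behavior with a plain UDP socket in a few lines of Python (a sketch; port 9 is the discard port, and nothing needs to be listening there). The kernel binds the socket to 0.0.0.0 and an ephemeral port the moment the first datagram is sent, precisely so that replies have somewhere to come back to:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Before any traffic, the socket has no local address at all
before = s.getsockname()
print(before)  # ('0.0.0.0', 0)

# Sending one datagram forces an implicit bind to 0.0.0.0:<ephemeral>
s.sendto(b"x", ("127.0.0.1", 9))
after = s.getsockname()
print(after)

s.close()
```

This is the same reason tools like ss -uanp show browser processes holding high UDP ports: without that bound port, no HTTP/3 or WebRTC response could ever reach the process.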
On Linux Firefox is listening on several UDP ports on 0.0.0.0
1,414,755,363,000
On AlmaLinux, during setup there is an option to choose a Security Profile. I run live, public websites on this server, so security matters to me, but I don't know what these profiles are or how they could benefit me. Should I choose one of these, and if so, which one? Or should I ignore this? Is it only for special use cases?
These are OpenSCAP profiles to ensure compliance with various government security standards. These are mostly used in situations where you are required to adhere to some specific security policy, so you'd usually choose a security profile if you are working for a governmental organization, your company is a government contractor, or something similar. The installer basically checks the policy rules and makes changes (or asks you to make changes) to follow the policy. The rules can define the partition layout (for example, force encryption), specify what packages should (or should not) be installed, and state which services need to be enabled and how they should be configured (for example, SSH with root login disabled), etc. The rules are public; if you are interested, you can read for example the first one from your screenshot, the French ANSSI-BP-028. You can read more about this in the RHEL installer guide. The rules generally contain some useful security "tips & tricks", but I wouldn't bother using them on a private machine; following a general guide to server hardening is probably better than picking a specific government policy.
What is the purpose and benefit of a Security Profile in Almalinux setup screen?
1,414,755,363,000
I get this message every time I install a new package in KDE neon via the terminal. Is this normal and should I ignore it, or should I fix it?

Reading package lists... Done
Building dependency tree
Reading state information... Done
Starting pkgProblemResolver with broken count: 0
Starting 2 pkgProblemResolver with broken count: 0
Done
The following NEW packages will be installed:
  tree
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/43.0 kB of archives.
After this operation, 115 kB of additional disk space will be used.
Selecting previously unselected package tree.
(Reading database ... 280095 files and directories currently installed.)
Preparing to unpack .../tree_1.8.0-1_amd64.deb ...
Unpacking tree (1.8.0-1) ...
Setting up tree (1.8.0-1) ...
Processing triggers for man-db (2.9.1-1) ...
Not building database; man-db/auto-update is not 'true'.
The warning is just that, a warning; it means that mandb isn't run when relevant packages are installed, and the result of that is that the manual page index caches aren't updated. The technical reason for the warning is the absence of /var/lib/man-db/auto-update. I'm not sure what would cause that. To restore the man-db trigger, restore that file:

sudo touch /var/lib/man-db/auto-update

You will no longer see the warning, and the caches will be updated. You can also update the caches yourself:

sudo mandb -pq
Processing triggers for man-db. Not building database; man-db/auto-update is not 'true , error