1,339,116,601,000
Sometimes, when I have numerous tabs open in Firefox, one of those tabs will start consuming a lot of CPU, and I want to know which tab is the culprit. Tracking it down is a very manual process that I'd like to automate. I wish I had an application that monitors Firefox exclusively and produces concise output of only the Firefox facts I want to know. I'm looking for a command/application that lists the process of each tab running in Firefox, filtered to include only the following info per tab process:
Process ID
Webpage address of the tab
CPU % usage
Memory used
Additionally, I'd like the info sorted by CPU % descending. Basically, I'm hoping there exists a program like htop, but dedicated exclusively to the pertinent parts of Firefox I want to monitor (while leaving out all the details I don't care about).
You can type about:performance in the address bar of Firefox. You will get a table listing the PID of each Firefox tab along with its Resident Set Size and Unique Set Size. Below that are lines describing the performance of each tab (e.g. "performing well"); if a tab is performing poorly it will show up there, and you can close it from that page using the Close Tab option.
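For a rough command-line approximation, ps can at least list Firefox's processes sorted by CPU. This is only a sketch: it assumes a multiprocess Firefox whose processes are named "firefox", and mapping a PID back to a specific tab's URL still requires about:performance.

```shell
# List Firefox processes, highest CPU first:
# PID, CPU %, resident memory (KB), and the command line.
ps -o pid,pcpu,rss,args -C firefox --sort=-pcpu
```

The same flags with -e instead of -C firefox show all processes, which is handy for checking the sort order.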
Monitoring CPU% of Tabs in Firefox
The study guide LPIC-1 Training and Preparation Guide (Ghori Asghar, ISBN 978-1-7750621-0-3) contains the following question ... Which of the following commands can be used to determine the file type? (A) file (B) type (C) filetype (D) what ... and claims that the answer is: "(B) type". But isn't "(A) file" the correct answer? I'm beginning to doubt the entire book.
Yes, it seems your book is wrong. The file command tells you what kind of file something is. From man file: "file -- determine file type". A few examples:
$ file /usr/bin/file
/usr/bin/file: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=ecc4d67cf433d0682a5b7f3a08befc45e7d18057, stripped
$ file activemq-all-5.15.0.jar
activemq-all-5.15.0.jar: Java archive data (JAR)
The type command tells you whether a command is a shell builtin or external:
$ type file
file is /usr/bin/file
$ type type
type is a shell builtin
Is this study guide wrong about commands for determining file types?
I have disabled system beeps locally by adding set bell-style none to my local .inputrc file; however, when I ssh to remote machines I still get system beeps when using TAB for autocomplete, which I do a lot. I know I can modify the remote machine's .inputrc file or the remote account's .bashrc file, but that is intrusive on machines I do not own. Is there a way to fix this locally? I am using Windows 10 Ubuntu Bash. The beeps are slowly driving me insane.
Here's a workaround. First, create a file .inputrc.mine in the home directory of your remote user, containing the line:
set bell-style none
Then log in to the server using:
ssh -t user@server 'export INPUTRC=~/.inputrc.mine; /bin/bash'
Without -t, bash would not work properly (its input/output would be redirected as for a non-interactive command).
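If you do this often, the login command can be wrapped in a small local shell function. This is just a convenience sketch: the function name is made up, and it assumes the remote ~/.inputrc.mine file from the step above already exists.

```shell
# Connect with the alternate inputrc active for this session only.
# Usage: sshnb user@server
sshnb() {
  ssh -t "$1" 'export INPUTRC=~/.inputrc.mine; exec /bin/bash'
}
```

Nothing on the remote machine is changed permanently; the setting lives only in that session's environment.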
Disable System Beeps over SSH
Looking to learn about game development? Are you a Linux enthusiast looking to test the claim that "Linux runs on everything"? Perhaps you are a software developer who wants to release for multiple architectures and doesn't have another MIPS little-endian machine on hand for testing your programme. Whatever your situation, there are a surprising number of reasons to install Linux on a PlayStation 2, even sixteen years after its release (boy, do I feel old all of a sudden), yet an equally surprising lack of documentation about it or how to install it. Now don't get me wrong: if you want to use the original Sony Linux Kit, or one of its updated open-source releases, on a fat PS2 with a network adapter and an IDE hard disk, you can find plenty of info. However, that requires acquiring several things and can be quite expensive, especially the Sony Linux Kit itself. This guide will cover some basics about the PS2's native hardware and its hardware compatibility, and then move on to how to install Linux on a wider variety of PS2s.
What CPU does the PS2 use? A single-core "Emotion Engine" CPU, clocked at 295MHz in the original fat models and 299MHz in newer slim models. The Emotion Engine is a 64-bit little-endian MIPS CPU with 128-bit-wide SIMD registers.
How much RAM does the PS2 have? 32MB of RDRAM and 4MB of eDRAM.
What kind of GPU does it have? The "Graphics Synthesizer", clocked at 147.4MHz and capable of outputting up to 1920x1080 at 60Hz in 32-bit color.
What external storage does it support? An unmodified PS2 supports audio CDs, video DVDs, and up to two memory cards of up to 128MB in size for game saves.
A PS2 modified with a software exploit also supports two memory cards of up to 128MB for general file management and storage, browsing files on data CDs/DVDs via a third-party file manager such as uLaunchELF, and some USB 1.1 disks, with USB 2.0 disk support available on newer slim models via software. A PS2 modified with a modchip additionally supports data CDs and DVDs without a third-party programme. Now on to the Linux installation. (A quick side note: if you just want to test or play around with Linux on your PS2, you can simply burn the image found at https://sourceforge.net/projects/kernelloader/files/BlackRhino%20Linux%20Distribution/Live%20Linux%20DVD/PS2%20Live%20Linux%20DVD%20v3/ps2linux_live_v3_ntsc_large_no_modchip.7z/download to a DVD and run https://sourceforge.net/projects/kernelloader/files/Kernelloader/Kernelloader%203.0/kloader3.0.elf/download via uLaunchELF from a flash drive or memory card, with no setup required. Now on with the Linux!)
First of all, there are several prerequisites for installing Linux on your PS2. Please note that this guide is aimed at installation on a slim PS2; if you have a fat PS2 you should download and install the copy of Sony's PS2 Linux from https://sourceforge.net/projects/kernelloader/files/Sony%20Linux%20Toolkit/ Also note that the machine used to test this guide was a PS2 model SCPH-79001 (silver special edition), so it is safe to assume this should work on any model of PS2 below SCPH-90000 (the SCPH-90000 and later cannot be softmodded, so you will not be able to launch a Linux bootloader). Now, to install Linux on your PS2 you will require:
1.) A software mod for your PS2, such as FreeMCBoot, or a modchip, as you will need a way to launch your bootloader.
2.) A memory card of at least 8MB, but preferably 16MB, 32MB or 128MB, to ensure you have ample space. This MC will permanently hold your bootloader configuration, Linux kernel, and RAM disk. Your FreeMCBoot installation takes up approx. 4.5MB on its respective MC, and the kernel, RAM disk, and config file together take up at least 7MB (up to 9.5MB if you choose to include the generic RAM disk as well). Since you only have two MC slots, unless you are willing to use an MC port expansion, you will likely need the extra space of an above-average-size MC to store your saved games.
3.) A USB disk of at least 8GB (either a USB flash drive or an external IDE/SATA HDD/SSD will work).
4.) Access to an existing install of a Debian-based system (while making this guide I used Debian 8). If you are on a macOS or Windows system I recommend VirtualBox; make sure you install the guest additions to more easily transfer the required files.
5.) A USB 1.1 or 2.0 keyboard. While Sony's PS2 Linux and the BlackRhino Linux live DVD come with an on-screen keyboard, this installation will use Debian 5, which requires a proper physical keyboard.
Once you've met these prerequisites, proceed with the installation steps as follows:
1.) Download these files:
vmlinux_v11.gz and the modules package: https://sourceforge.net/projects/kernelloader/files/Linux%202.6/Linux%202.6%20Test%20Files%20Version%2011/
initrd.usb2.gz: https://sourceforge.net/projects/kernelloader/files/Initial%20RAM%20Disc/Initrd%20for%20booting%20from%20USB%20memory%20stick/initrd.usb2.gz/download
kloader3.0.elf: https://sourceforge.net/projects/kernelloader/files/Kernelloader/Kernelloader%203.0/kloader3.0.elf/download
The Debian 5 installation files: https://sourceforge.net/projects/kernelloader/files/Debian%205.0/debian-lenny-mipsel-v1.tgz/download
2.) Copy the files vmlinux_v11.gz, initrd.usb2.gz, and kloader3.0.elf to a flash drive formatted as FAT32, plug it into your PS2, and copy them to a folder named kloader on your MC of choice (it must have at least 7MB free). If there isn't enough space you can copy kloader3.0.elf to a second MC, but I recommend keeping the files together if possible.
3.) Connect the USB disk you have selected for the Linux installation to your existing Debian machine. Create an MS-DOS partition table on the disk.
4.) Open a terminal and start a root shell (sudo -i, or su). Run fdisk /dev/sdX, where X is your USB disk's identifier. Delete all existing partitions on the USB disk, then create one new primary partition that leaves 1GB of free space at the end of the disk (i.e. if you have an 8GB disk, use +7168M as the end option). Create a second partition of 1GB (+1024M as the end option) and change its type to swap (t, followed by 2, and finally 82). Then use w to write the changes to disk.
5.) Run mkswap /dev/sdX2, where X is your USB disk's identifier. Then run mkfs.ext2 -I 128 /dev/sdX1. Be sure to include the -I 128 option; it is required.
6.) Once the filesystems have been created, mount your USB disk's first partition under /media/usb/.
Create a directory called install in the directory you just mounted the disk on.
7.) Create a folder named debian on your Debian machine, and place all of the files you downloaded in step 1 into it. The next several steps will be given as commands for ease of writing.
8.) cp -R /path/to/folder/debian/* /media/usb/install/
9.) cd /media/usb/
10.) tar -xzf install/debian-lenny-mipsel-v1.tgz
11.) cp install/vmlinux_v11.gz boot/; cp install/initrd.usb2.gz boot/
12.) bunzip2 install/linux-2.6.35.4-mipsel-ps2-modules-v11.tar.bz2
13.) tar -xf install/linux-2.6.35.4-mipsel-ps2-modules-v11.tar
14.) nano etc/fstab and change ext3 to ext2. Save the file and exit (Ctrl-X, y, Enter).
15.) Unmount your USB disk with umount /dev/sdX1
16.) Unplug your USB disk. Turn on your PS2, plug your USB keyboard into USB port 2, and start uLaunchELF. NOTE: do not plug in your USB disk yet; the PS2 cannot natively read ext2 disks, and it will cause the PS2 to hang on boot.
17.) In uLaunchELF, navigate to mc0:/kloader/ (or mc1:/kloader/ if you placed the bootloader on your second MC in step 2). Run kloader3.0.elf, watch the bottom of the screen, and when Autobooting in 3... appears, press a button on your controller or a key on the USB keyboard. A boot configuration menu should appear.
18.) Go to the bottom of the menu using the arrow keys on your USB keyboard and select Advanced Menu. Go to Select Kernel > Memory Card X > kloader > vmlinux_v11.gz, then Select Init RAM disk > Memory Card X > kloader > initrd.usb2.gz. Turn Autoboot off.
19.) Go to Configuration Menu at the bottom of the current page and make sure Use SBIOS from TGE, TGE for SBIOS New Modules, Enable hard disk and network, and Patch libsd (enable USB) are all enabled, and that Enable IOP debug output is disabled.
20.) Go to Module List and make sure that rom0:LIBSD is enabled.
21.)
Go back to the Configuration Menu, select Edit Kernel Parameter, and add newroot=/dev/sda1. (NOTE: pressing Enter saves the change and returns to the Configuration Menu, so append it after a space following the existing line, and press Enter once you have added it.)
22.) Go back to Advanced Menu, then Boot Menu, and select Save Config on MC0.
23.) Insert your USB disk into your PS2's USB port 1, go to Advanced Menu, and select Boot Current Config.
24.) Debian should boot now. When you reach the login screen, use root as the login. The root user has no password by default, and there are no other users, so now you need to fix both. Run adduser yourusername and enter the needed info (omit personal details if you want), and a user will automatically be created.
25.) Type exit, and log in as your new user with the login info you set. Run su to enter a root shell, then run passwd root and set a password for the root account. Make sure it is something you can remember! This version of Debian doesn't come with sudo preinstalled, so you will need access to the root account until you change that.
26.) Finally, while you are still in a root shell, run nano /etc/apt/sources.list and change the existing source to deb http://archive.debian.org/debian lenny main so that you can install packages over the network if needed.
27.) Networking will not function by default. To enable it, add the following to the file /etc/network/interfaces, then plug in an ethernet cable and reboot the PS2 into Linux:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
28.) Now that networking is up and running, you should install sudo for improved security when performing administrative tasks. This is Debian, so log in as your user, drop to a root shell, and run apt-get update && apt-get upgrade && apt-get install sudo (there will be several packages needing updates, so be sure not to omit those commands).
29.)
You have sudo installed now, but you aren't in the sudoers file, so while in the root shell run visudo, and under
# User privilege specification
root    ALL=(ALL) ALL
add the line
yourusername    ALL=(ALL) ALL
Save your changes to the sudoers file, log out, and log back in. The base installation is now complete. Any other customization you want to make can be done as you would with any other Linux distro. If you want to install the PS2SDK for developing PS2-specific software, you can find the source at https://github.com/ps2dev/ps2sdk If you try to compile it on the PS2 it will run out of memory and hang, so set up the build environment on your main machine and copy the files to the Debian USB manually or over the network to get them onto the PS2 for testing. The PS2 controller will not work as a mouse, so I recommend a USB hub for both the mouse and keyboard (if that is not an option, mouse keys can be activated as usual with Alt+Shift+Num Lock). Thanks for reading, and I hope this helped someone looking to install Linux on their PS2. I tried for months to get this working and only very recently succeeded, so I decided to try to make it easier for others wanting to do the same.
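As a side note, the partitioning in steps 3-5 can also be scripted non-interactively. Here is a hedged sketch using sfdisk instead of interactive fdisk; it assumes /dev/sdX is the 8GB USB disk, so double-check the device name before running, as this destroys its contents.

```shell
# Recreate the layout from steps 3-5: a DOS label, a ~7GB Linux
# partition, and a 1GB swap partition filling the rest of the disk.
sfdisk /dev/sdX <<'EOF'
label: dos
,7168MiB,83
,,82
EOF
mkswap /dev/sdX2
mkfs.ext2 -I 128 /dev/sdX1   # -I 128 is required, as noted in step 5
```

The blank first field lets sfdisk pick the start sector; type 83 is Linux and 82 is Linux swap.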
How to install Linux on the Playstation 2
A trivially conflicting package foo can be made to work with bar by running dpkg --force-conflicts -i foo. But eventually it's time to upgrade, and apt-get objects:
% apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt-get -f install' to correct these.
The following packages have unmet dependencies:
 foo : Conflicts: bar but 0.2-1 is installed
E: Unmet dependencies. Try using -f.
Can apt-get be tweaked/forced to tolerate the (pretty much fixed) conflict, then upgrade? (Quickie existence proof: uninstall foo, then upgrade, then reinstall foo as before. Therefore it is possible; the question is finding the least cumbersome mechanism.)
An example, though this question is not about any two particular packages: for several years GNU parallel has had a trivial conflict with moreutils; each provides /usr/bin/parallel. dpkg can force co-existence:
# assume 'moreutils' is already installed, and 'parallel' is in
# apt's cache directory.
dpkg --force-conflicts -i /var/cache/apt/archives/parallel_20141022+ds1-1_all.deb
This creates a diversion, renaming the moreutils version to /usr/bin/parallel.moreutils. Both programs work, until the user upgrades. I tried an -o option, but that didn't bring on peace:
apt-get -o Dpkg::Options::="--force-conflicts" install parallel moreutils
Possible -o options number in the hundreds, however...
Since the OP asked in the comments to Gilles' answer for a list of commands with which to change the relevant metadata of the package, here it is:
# download .deb
apt download parallel
# alternatively: aptitude download parallel
# unpack
dpkg-deb -R parallel_*.deb tmp/
# make changes to the package metadata
sed -i \
  -e '/^Version:/s/$/~nomoreutconfl/' \
  -e '/^Conflicts: moreutils/d' \
  tmp/DEBIAN/control
# pack anew
dpkg-deb -b tmp parallel_custom.deb
# install
dpkg -i parallel_custom.deb
This assumes the Conflicts line has only moreutils as an entry (and without version restrictions), as was the case in my installation. Otherwise, use '/^Conflicts:/s/\(, \)\?moreutils\( [^,]\+\)\?//' as the second sed script to remove only the relevant part of the line and support version restrictions. Your installed package won't be overwritten by newer versions from the repository, so you will have to repeat this procedure manually for every update to the GNU parallel package if you want to keep it up to date.
Set apt-get options to tolerate harmless 'dpkg --force-conflicts' kludge?
How can I get hard disk capacity, usage, etc. using the /proc or /sys filesystems? If it is possible, please tell me which file(s) I need to process to get that information.
You can read /sys/block/sda/size:
cat /sys/block/sda/size
This returns a number such as 312581808, which is the size of the disk in 512-byte sectors (the unit is always 512 bytes here, regardless of the device's actual block size). Multiply it by 512 to get the capacity in bytes, which you can then convert to GB.
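As a sketch, the conversion can be done with shell arithmetic (this assumes the disk is sda; substitute your device name):

```shell
# capacity = sectors * 512; /sys reports size in 512-byte units
sectors=$(cat /sys/block/sda/size)
bytes=$((sectors * 512))
echo "$bytes bytes ($((bytes / 1024 / 1024 / 1024)) GiB)"
```

For usage, df or /proc/mounts is needed as well, since /sys only describes the device, not the filesystems on it.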
How to get hard disk information from /proc and/or /sys
I was playing a game on Steam and all of a sudden I got a kernel panic. I manually shut down the computer, booted back into Linux Mint 17.1 (Cinnamon) 64-bit, and went to check through my log files in /var/log/, but I couldn't find any references or messages relating to the kernel panic that happened. It's strange that it never dumped the core or even made any note of it in the log files. How can I make sure that a core is always dumped in case a kernel panic happens again? It doesn't make any sense that nothing was logged when a kernel panic happened. Looking around on Google, people suggest reading through /var/log/dmesg, /var/log/syslog, /var/log/kern.log, /var/log/Xorg.log etc., but nothing. Not even in the .Xsession-errors file. Here are some photographs of the screen: I could always take a photo of the screen when and if it happens again, but I just want to make sure that I can get it to dump the core and create a log file on a kernel panic.
To be sure that your machine generates a core file when a kernel failure occurs, you should check your machine's sysctl settings. IMO, the following should be the (minimal) settings in /etc/sysctl.conf:
kernel.core_pattern = /var/crash/core.%t.%p
kernel.panic = 10
kernel.unknown_nmi_panic = 1
Execute sysctl -p after making changes to /etc/sysctl.conf. You should probably also mkdir /var/crash if it doesn't already exist. You can test the above by generating a manual dump using the SysRq key (the key combination to dump core is Alt+SysRq+C).
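You can confirm that the settings are active by reading the live values back from procfs; each file under /proc/sys mirrors one sysctl key (a sketch; these are the standard locations for the keys above):

```shell
# the live sysctl values, one file per key
cat /proc/sys/kernel/core_pattern
cat /proc/sys/kernel/panic
cat /proc/sys/kernel/unknown_nmi_panic
```

If the values shown don't match /etc/sysctl.conf, sysctl -p wasn't run or another file overrode them.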
Kernel Panic dumps no log files
This is something that has confused me for a long time. I use Linux, but I have gaps in my understanding of certain aspects, and one of them is the terminal. As I understand it, what we call the console is an emulator of old terminals. But I cannot understand what the deal is with this. As I read it, the best old terminals were the VT series and the vector-graphics type (Tektronix), so I assume the current emulators emulate these. Now, my needs in Linux concerning display presentation have reached only as far as using a colorscheme for vim and appreciating colors when diffing files. But when I have issues, I usually find advice to set xterm-256 or screen-256, etc. Additionally, when I do:
ls /lib/terminfo/
I see about a dozen directories with configuration for (I assume) different kinds of emulator types. So my question is: how do xterm-256, screen-256, and the rest fit the idea of emulating just the top terminals of past decades? What is the need for so many terminal types? Is it something I should look into more? And why, today, with modern technology, do we need to emulate old terminals like the VT rather than have something new? Perhaps my needs are too simple to appreciate the subtleties, but it has puzzled me a lot, since, for instance, if I have an issue with a colorscheme I just copy-paste what I find on Google about TERM etc. without really understanding what I am doing or what the problem is. If someone could help me understand this, it would be really appreciated.
Weird aspects of Unix usually exist for good reason, so you're right to look for one. In this case, though, the good reason has long since become obsolete, and you're looking at an antique artifact of a bygone era. Just about the only "terminal" in existence today is xterm and its variants. Their capabilities vary very slightly, in ways that matter to only a few programs. If you just use xterm, and never touch the TERM variable or peek at the terminfo database, your life will generally be better. The TERM variable communicates information about the terminal to the application through the environment, cf. man xterm. Changing it doesn't change the terminal; it just represents different terminal functionality to the application. In the days of hardwired terminals, it was necessary to set TERM to match the attached terminal. In the case of xterm, the software can set the variable itself. A quick tour of the vim docs shows (as you mention in your comment) that you have to change it to support color. That's progress for you.
why today ... emulate these old terminals like VT and not have something new?
The answer is as much anthropological as technical. Before the GUI, access to Unix machines was via dumb terminals, e.g. the VT-100. Shells and utilities like top already existed. When the GUI became technologically practical (in which X played a role) in the 1980s, Unix users still needed to use those programs, so xterm was invented to emulate ye olde VT-100. It was meant as a stopgap. "Everyone knew" that terminals were the past and GUIs were the future, and everyone expected "everything" to be accessed via the GUI. The original Macintosh, for example, had no arrow keys, because why would you need them? Surely the cryptic Unix command line, with its missing vowels and helpless help
$ help
help: not found
would soon go the way of drum memory and punch cards.
And that did come to pass, in a way: 9 users in 10 running Windows or OS X never see the command line except when tech support drops by to fix something. Then two things happened to the Unix GUI, such as it was. Windows in particular drained the money out of the market. There was a big move to standardize it (cf. Sun NeWS and OSF Motif), and then it ground to a halt around 1990. Just about that time the Internet took off, and things graphical in Unix moved into the web browser. The motivation and the money (pretty much the same thing) to engineer a complete GUI for Unix and render everything in section 8 of the manual obsolete disappeared. There is another reason, too, that very few foresaw: the command line has certain advantages over the GUI. Pipelines and regular expressions are remarkably powerful, not to mention repeatable with shell history and scripts. Even in the context of a GUI, the command line remained useful. So much so that it continues to be enhanced even today. As your question suggests, what's needed is a re-examination of the assumption that the GUI would triumph, and a re-invention of the terminal as an integral part of it. We need a new terminal, with proportional fonts and bit-addressable graphics. Unfortunately, no one seems ready to do that. No corporate entity will undertake it; the market is huge, but still only a tiny proportion of computer users. The logical funder would be a government agency like DARPA, but human-interface research is considered "done" these days (didn't we already invent the GUI?). Until more people -- lots more people -- recognize the need, xterm is your friend, and likely to be your grandson's friend, too.
Why do we need so many terminal emulator packages and what is their use?
When I installed my SSD I just mounted with discard and didn't sweat it. However today I was reading about the pros and cons of using fstrim instead and decided to run the program to get an idea of how long it would actually take (still with my partitions mounted with discard). The command took several minutes on both my root and home partitions. For my home partition I used -v and got this: $ sudo fstrim -v /home /home: 137494052864 bytes were trimmed This is more than the amount of free space on the partition! $ df -h /home Filesystem Size Used Avail Use% Mounted on /dev/sda2 206G 78G 118G 40% /home Subsequent runs finish in less than a second, eg: $ sudo fstrim -v /home /home: 0 bytes were trimmed Surely if I have always had the partition mounted with discard, fstrim should not trim a large amount of data like that? The discard option is definitely enabled, here are the relevant fstab lines: UUID=xxxxxxxx... / ext4 noatime,discard,errors=remount-ro 0 1 UUID=xxxxxxxx... /home ext4 noatime,discard,errors=remount-ro 0 2 And mount output lines: /dev/disk/by-uuid/xxxxxxxx... on / type ext4 (rw,noatime,discard,errors=remount-ro,stripe=128,data=ordered) /dev/sda2 on /home type ext4 (rw,noatime,discard,errors=remount-ro,stripe=128,data=ordered) The SSD is a TOSHIBA THNSNS256GMCP. Why does this happen?
Two things here:
1. fstrim trims all the data that is unallocated in the filesystem (well, not really all the data: only the data blocks that are not allocated; I don't think the unused parts of the inode table or the unused tails of partially used blocks are trimmed), regardless of whether discard is in use or not. fstrim cannot know which of those unallocated blocks have already been trimmed in the past, but it (actually the kernel; all the fstrim work is done in the FITRIM ioctl) does keep track of which block groups have been trimmed, and will not trim them again if nothing has been unallocated in that block group since then, unless you request a FITRIM with a smaller minimum extent length (from checking the ext4 code; it may be different for other filesystems). That explains why you get 0 on the next run. Note that it doesn't harm to trim a block that has already been trimmed; that just tells the SSD again that it can do whatever it wants with it (like erase it, so it's ready to use for something else).
2. In df output, the "available" value doesn't take into account the space that is reserved for root: you'll notice that 206 − 78 is 128G, not 118G. About 10G (roughly 5%) are reserved. See tune2fs -m to change how much is reserved.
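The reserved-blocks point can be demonstrated on a scratch ext4 image, without touching a real disk (a hedged sketch; it requires e2fsprogs, and the image path is a temporary file):

```shell
# make a small ext4 image and show the ~5% default root reservation
img=$(mktemp)
truncate -s 64M "$img"
mkfs.ext4 -q -F "$img"
tune2fs -l "$img" | grep -E '^(Block count|Reserved block count)'
tune2fs -m 1 "$img"          # lower the reservation to 1%
rm -f "$img"
```

The same tune2fs -m invocation works on a real device node, which is what the answer refers to.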
fstrim trims more than half of partition size even though partition mounted with discard
mkdir ~/mnt/2letter
echo PASSWORD | sshfs -o password_stdin www-data@localhost:/var/www/sites/2letter ~/mnt/2letter -o sshfs_sync,cache=no,password_stdin
After this:
$ ls -ld ~/mnt/2letter/
drwxr-xr-x 1 www-data www-data 4096 Jan 28 21:29 /home/porton/mnt/2letter/
I need to access /home/porton/mnt/2letter/ under my UID (porton), not as www-data, because filesystem permissions don't allow me to modify files owned by www-data, but I need to edit them. Moreover, it seems to have worked with the correct UID under older versions of Linux. Why doesn't it work now?
Try chucking in the two following options:
-o idmap=user,uid=<YOUR UID>
idmap=user translates the UID of the connecting remote user to your local one, and uid= sets the owner that the mounted files are presented as. For example (assuming your local UID is 1000):
sshfs -o idmap=user,uid=1000 www-data@localhost:/var/www/sites/2letter ~/mnt/2letter
UID/GID with sshfs of Linux FUSE
Given a device file, say /dev/sdb, is it possible to determine what driver is behind it? Specifically, I want to determine what driver my storage devices are using. fdisk -l lists 2 devices: /dev/sda and /dev/sdb. One is a SATA hard drive and the other is a USB Mass Storage device - actually an SD card. How do I determine, programmatically, which is which? I am writing a piece of software, and I want to protect the beginner from obliterating their hard drives, whilst allowing them to obliterate their SD cards.
Run udevadm info -a -n /dev/sda and parse the output. You'll see lines like DRIVERS=="ahci" for a SATA disk using the ahci driver, or DRIVERS=="usb-storage" for a USB-connected device. You'll also be able to display vendor and model names for confirmation. Also, ATTR{removable}=="1" is present for removable devices. All of this information can also be obtained through /sys (in fact, that's where udevadm looks), but the /sys interface changes from time to time, so parsing udevadm output is more robust in the long term.
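A sketch of how a program might use this to guard against writing to the wrong disk (the device path and the refusal policy here are illustrative, not part of the answer above):

```shell
# refuse to touch a device unless udev reports it behind usb-storage
dev=/dev/sdb
if udevadm info -a -n "$dev" | grep -q 'DRIVERS=="usb-storage"'; then
  echo "$dev looks like a USB mass-storage device: OK to proceed"
else
  echo "$dev is not on usb-storage: refusing to touch it" >&2
fi
```

Checking ATTR{removable}=="1" as well would make the guard stricter.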
What driver is behind a certain device file?
I have an external Debian server. The problem is that my university campus doesn't allow outbound connections unless the port is TCP 22, 80, or 443, or UDP 123 (I tested these manually). On my Debian server I would like to listen on all UDP and TCP ports so I can figure out exactly which TCP and UDP ports my university lets through its firewall. Nmap is wonderful on the client side for testing this, but what should I do on the server side?
tcpdump usually comes as standard on Linux distros. It will log all packets visible at the server. Note that you probably want to run it with a filter for your client IP to cut down on the noise. I think this includes packets not accepted by iptables on the local machine, but you might want to test this. E.g.:
/usr/sbin/tcpdump -i eth0 -c 3000000 -np host client.example.com >tcp.log
Then just run nmap from your client.
How to listen to all ports (UDP and TCP) or make them all appear open in Debian
Currently, I have something like: iptables -A INPUT -p ICMP --icmp-type 8 -j DROP iptables -A INPUT -s x.x.x.x -p ICMP --icmp-type 8 -j ACCEPT However, when I run the second command, it looks as if iptables just stops. I have to break out of it to get back to terminal. Perhaps I am doing it all wrong, but some insight would be helpful.
You need to run your rules in the opposite order. iptables is sensitive to the order in which rules were added: if a rule matches, it doesn't go on to check further rules; it just obeys that one. If you set the DROP first, the ACCEPT rule will never be tested. By adding the specific ACCEPT with the source IP first, and then the more general DROP, you will get the expected behavior:
iptables -A INPUT -s x.x.x.x -p ICMP --icmp-type 8 -j ACCEPT
iptables -A INPUT -p ICMP --icmp-type 8 -j DROP
As for the hang you seem to be experiencing: are you sure you entered a valid IP address? Perhaps you can prefix the command with strace iptables … to see what it's doing while it appears to hang.
iptables drop all incoming ICMP requests except from one IP
I have a statically linked binary for a tool that I'm trying to run on RHEL4. The tool complains about glibc: it needs 2.4 and the one on the system is 2.3. Here is the message it spits out:
./wkhtmltoimage-i386: /lib/tls/libc.so.6: version `GLIBC_2.4' not found (required by ./wkhtmltoimage-i386)
Is there a way to build glibc 2.4 and use it just for this tool, without replacing the glibc 2.3 on the system? While building glibc 2.4, what prefix should I use for configure?
Since the source code for this wkhtmltoimage tool is available, I'd suggest you recompile it from source with your system's native glibc. It will likely be even quicker than recompiling glibc, which is no easy task. A statically linked executable already includes code for all the C library calls it needs to make, so you cannot separately compile a new glibc and link the executable to it. However, programs using glibc are never completely static: some library calls (all those connected with the "Name Service", i.e., getuid() and similar) still make use of dynamically-loaded modules (the libnss*.so files usually found under /lib). This is likely why the program is failing: it is looking for some NSS module but can only find the glibc2.3 ones. If you absolutely want to go down the road of recompiling glibc, the following could possibly work (Warning: untested!): configure glibc2.4 to install in a non-system directory, e.g. /usr/local/glibc2.4, then compile and install; run wkhtmlto* by specifying it as the first component in the dynamic linker search path (LD_LIBRARY_PATH): env LD_LIBRARY_PATH=/usr/local/glibc2.4/lib wkhtmltoimage ... Update: This turned out not be so easy: having two distinct libc's on the system requires more than just recompile/install, because the libc will look for the runtime linker and dynamic-load NSS modules in fixed locations. The rtldi program allows installing different versions of the GNU libc on a single Linux system; its web page has instructions (but this is an expert-level task, so they are definitely not a step-by-step walkthrough). Let me stress again that it will be far less work to recompile wkhtmltoimage...
Running a statically linked binary with a different glibc
1,339,116,601,000
ps -o pid,ppid,stat,exe -e | grep deleted generates output like this: 1777 1346 Sl /usr/bin/python3.10 (deleted) 1778 1346 Sl /usr/bin/python3.10 (deleted) 1825 1327 Ss /usr/lib/bluetooth/obexd (deleted) 2007 1 Sl /usr/bin/python3.10 (deleted) 2101 1346 S /usr/bin/python3.10 (deleted) 2199 1 Sl /usr/bin/python3.10 (deleted) 371565 371305 SLl /usr/lib/x86_64-linux-gnu/webkit2gtk-4.0/WebKitNetworkProcess (deleted) 371566 371305 SLl /usr/lib/x86_64-linux-gnu/webkit2gtk-4.0/WebKitWebProcess (deleted) 376426 371305 SLl /usr/lib/x86_64-linux-gnu/webkit2gtk-4.0/WebKitWebProcess (deleted) 380141 371305 SLl /usr/lib/x86_64-linux-gnu/webkit2gtk-4.0/WebKitWebProcess (deleted) What does the (deleted) mean & how can I get ps to list the path without appending it?
On Linux, ps gets that information by doing a readlink("/proc/<pid>/exe"). $ ls -l /proc/self/exe lrwxrwxrwx 1 chazelas chazelas 0 Dec 17 10:35 /proc/self/exe -> /usr/bin/ls That file is a magic symlink to the file that the given process (or one of its ancestors, as not all processes execute files) last executed. If the file has been deleted (and potentially replaced by a new version, like after a package update), a (deleted) is appended to the string returned by readlink(). That symlink can still be followed to the actual deleted file (and in that way it is magic). $ cp /bin/sleep . $ ./sleep 1h & [1] 17417 $ ls -ld "/proc/$!/exe" lrwxrwxrwx 1 chazelas chazelas 0 Dec 17 10:38 /proc/17417/exe -> /home/chazelas/sleep* $ rm sleep $ ls -ld "/proc/$!/exe" lrwxrwxrwx 1 chazelas chazelas 0 Dec 17 10:38 /proc/17417/exe -> '/home/chazelas/sleep (deleted)' New version of sleep: $ cp /bin/sleep . $ ps -o exe -p "$!" EXE /home/chazelas/sleep (deleted) $ ls -ld "/proc/$!/exe" lrwxrwxrwx 1 chazelas chazelas 0 Dec 17 10:38 /proc/17417/exe -> '/home/chazelas/sleep (deleted)' $ ls -iLd "/proc/$!/exe" sleep 3114951 /proc/17417/exe* 3114969 sleep* With -L, ls follows the link (by using stat() instead of lstat()) and is able to get the corresponding inode number. We see that these are two different files (different inode numbers). $! is still running the old, deleted version of sleep; that file has no path on the filesystem other than that /proc/$!/exe magic symlink¹. /home/chazelas/sleep is now a different executable, so removing that (deleted) would be wrong as it would refer to the wrong file. 
Here, since exe is the last field, you can remove it by piping the output to: sed 's/ (deleted)$//' $ ps -o pid,ppid,stat,exe PID PPID STAT EXE 18928 11196 Ss /usr/bin/zsh 18943 18928 SN /home/chazelas/sleep (deleted) 18967 18928 R+ /usr/bin/ps $ ps -o pid,ppid,stat,exe | sed 's/ (deleted)$//' PID PPID STAT EXE 18928 11196 Ss /usr/bin/zsh 18943 18928 SN /home/chazelas/sleep 18968 18928 R+ /usr/bin/ps 18969 18928 S+ /usr/bin/sed But again, that would be a lie as /home/chazelas/sleep is not the executable that process 18943 is running, it's another sleep command that is now nowhere to be found as it has been deleted since that process executed it. ¹ and corresponding /proc/<pid>/exe for other processes potentially executing it or /proc/<pid>/fd/<fd> for processes having that file opened on some fd, or potentially some hard links to it.
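As a quick way to spot every such process at once, the readlink() trick above can be wrapped in a small shell loop. This is a sketch: list_deleted is a made-up helper name, it assumes a Linux /proc, and it only sees processes whose /proc/<pid>/exe you are permitted to read.

```shell
# list_deleted: print "<pid> <path>" for each process whose executable
# file has since been deleted (the link target ends in " (deleted)")
list_deleted() {
  for exe in /proc/[0-9]*/exe; do
    target=$(readlink "$exe" 2>/dev/null) || continue
    case $target in
      *" (deleted)")
        pid=${exe#/proc/}; pid=${pid%/exe}
        printf '%s %s\n' "$pid" "${target% (deleted)}" ;;
    esac
  done
}
```

Unlike post-processing the ps output with sed, this keeps the caveat visible: the printed path is where the executable used to live, not necessarily what is at that path now.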
ps -o pid,ppid,stat,exe -e | grep deleted has "(deleted)" appended to the executable path
1,339,116,601,000
Environment: I am using a CentOS-7 as a hypervisor for running several LXCs under libvirt. Each container runs a minimal installation of CentOS-7 with cut down FreePBX (Asterisk, Apache, MySQL + bits). Symptoms: There are 16 containers running without any problems. When I start one more it does start, but after the 17th container starts I can not do systemctl start/restart/stop <anything> in ANY of the containers: [root@test-lxc ~]# systemctl restart dnsmasq Error: Too many open files Diagnostics: The following diagnostics and counts are done while the 17th LXC is running and systemctl restart blabla is failing: I can ssh into any LXC and run most basic commands, e.g. ls, etc. I suspect the limit somehow affects only the systemd. I'm trying to understand where/why I hit the limit. [root@lxc-hypervisor]# sysctl fs.file-nr fs.file-nr = 29616 0 12988463 That was not tweaked, this is just what happened to be from the default install. Same as above maximum (last) value = 12988463 is reported by the hypervisor and also inside each LXC. Very similar 1st value just under 30000 is also reported in each LXC. When I try to count file descriptors across all process inside each LXC I get in the order 400 ~ 500 in each LXC. for pid in $( ls /proc/ | grep -E -e "^[0-9][0-9]*\$" ); do ls -l /proc/${pid}/fd/ 2> /dev/null | wc -l done The sum total around 9000 (9k) without the hypervisor itself. When I run that on the hypervisor I usually get suspiciously close values just over 10000 e.g. 10005. Questions: Q1. Where is the limit set or inherited from? Q2. Why does the limit affect systemctl start/stop/restart blah commands, but I can still ssh into LXCs, run commands such as bash scripts with loops that fork a lot, albeit as root. Q3. How to tweak limits to allow running more LXCs. To the best of my understanding RAM and other resources are not the limit. I did read many articles and answers on the subject of file descriptor limits, but I fail to see where my system hits the limits. 
Any other relevant information is also welcome.
I believe you are not hitting a global limit, but an inotify limit. This would be seen on containers running systemd because systemd uses the inotify facility for its bookkeeping, but the host would also be affected. Containers not using systemd (nor inotify) would probably be unaffected. /proc/sys/fs/inotify/max_user_instances: This specifies an upper limit on the number of inotify instances that can be created per real user ID. If only non-rootless (ie: root in the container is the real root) containers are in use, then root user becomes the bottleneck. Having multiple containers using the same rootless user mapping would also create such bottleneck for this container's root user (but not affect the host). The default is 128, far too little for containers use. CentOS7 (or Rocky9) doesn't include any default setup for this with LXC. Debian-based distributions include this file on the host: /etc/sysctl.d/30-lxc-inotify.conf: # Defines the maximum number of inotify listeners. # By default, this value is 128, which is quickly exhausted when using # systemd-based LXC containers (15 containers are enough). # When the limit is reached, systemd becomes mostly unusable, throwing # "Too many open files" all around (both on the host and in containers). # See https://kdecherf.com/blog/2015/09/12/systemd-and-the-fd-exhaustion/ # Increase the user inotify instance limit to allow for about # 100 containers to run before the limit is hit again fs.inotify.max_user_instances = 1024 So you should do the same by creating this file on the host. For immediate effect (on the host): sysctl -w fs.inotify.max_user_instances=1024
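To see which processes are actually holding inotify instances before (or instead of) raising the limit, the open file descriptors can be inspected under /proc. A sketch, assuming Linux with GNU find; count_inotify is a made-up name, and descriptors of other users are only visible to root:

```shell
# count inotify instances per PID, busiest first: each inotify fd shows
# up in /proc/<pid>/fd as a symlink to "anon_inode:inotify"
count_inotify() {
  find /proc/[0-9]*/fd -lname 'anon_inode:inotify' 2>/dev/null |
    cut -d/ -f3 | sort | uniq -c | sort -rn
}
```

Summing the first column for all processes of one user gives that user's total against fs.inotify.max_user_instances.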
"Error: Too many open files" while starting service in environment with several LXCs
1,339,116,601,000
lseek man page: When users complained about data loss caused by a miscompilation of e2fsck(8), glibc 2.1.3 added the link-time warning "the llseek function may be dangerous; use `lseek64 instead." This makes this function unusable if one desires a warning-free compilation. Since glibc 2.28, this function symbol is no longer available to newly linked applications. What's the story behind this?
The problem was that glibc included a llseek symbol, with no corresponding declaration in its header files. e2fsck’s configuration script detected the symbol, and assumed that meant the function was usable. However, the implicit function declaration didn’t match what the function expected, and the function call ended up being miscompiled as a result. In particular, llseek expects a 64-bit offset, but the implicit declaration results in int arguments — this is what caused data loss, since e2fsck made changes at different offsets than what it expected. The reason e2fsck used llseek is that libc5, glibc’s predecessor on Linux, declared it and made it usable (it was in unistd.h). So e2fsck, when built against libc5, correctly used llseek; but when built against glibc, built successfully but failed to work correctly. This was fixed in e2fsprogs 1.12, with the following changelog entry: E2fsprogs now works with glibc (at least with the version shipped with RedHat 5.0). The ext2fs_llseek() function should now work even with i386 ELF shared libraries and if llseek() is not present. We also explicitly do a configure test to see if (a) llseek is in libc, and (b) if llseek is declared in the system header files. (See standard complaints about libc developers don't understand the concept of compatibility with previous versions of libc.) The C library was also changed to issue a warning if code tried to use llseek; the discussion can be found in the mailing list archives.
What happened to llseek and e2fsck?
1,564,750,780,000
On Ubuntu 18.04 I create a RAID 1 array like this: mdadm --create /dev/md/myarray --level=1 --run --raid-devices=2 /dev/sdc /dev/sdd I then add the output of mdadm --detail --scan /dev/md/myarray to /etc/mdadm/mdadm.conf. It looks like this: ARRAY /dev/md/myarray metadata=1.2 name=MYHOSTNAME:myarray UUID=... The device name has been prefix with "MYHOSTNAME:". At this point the symlink /dev/md/myarray still exists, but after the first time I reboot it becomes /dev/md/MYHOSTNAME:myarray, breaking things. To make it worse, this happens only on some machines - on others the symlink remains /dev/md/myarray. All are running Ubuntu 18.04, so I have no idea why. How do I get a consistent device path for my MD device, ideally the exact one I specified ("/dev/md/myarray")? I tried editing mdadm.conf to remove the hostname, but even if the line says ARRAY /dev/md/myarray metadata=1.2 name=myarray UUID=... the symlink still changes on reboot - on machines that "want" the hostname. I also tried going the other way and adding the hostname in both place: ARRAY /dev/md/HOSTNAME:myarray metadata=1.2 name=HOSTNAME:myarray UUID=... but again on machines that "don't want" the hostname the symlink becomes /dev/md/myarray after a reboot! I can't use the numeric device (/dev/md127) either because when there are multiple MD devices created like this they tend to alternate between md126 and md127 as well! This is crazy!
How do I get a consistent device path for my MD device, ideally the exact one I specified ("/dev/md/myarray")? After mdadm --create /dev/md/foobar ..., both hostname and name are stored in the mdadm metadata, as you should verify with mdadm --examine or mdadm --detail: # mdadm --detail /dev/md/foobar Name : ALU:foobar (local to host ALU) ALU happens to be the hostname of my ArchLinux machine: # hostname ALU You can specify the host that should be stored at create time: # mdadm --create /dev/md/foobar --homehost=barfoo # mdadm --detail /dev/md/foobar Name : barfoo:foobar ...but usually nobody remembers to do that. And that's already where the problems start... you might have created your RAID array from some LiveCD or other, and the hostname in that environment didn't match your main install at all. And then the metadata stores some completely unrelated hostname. Similarly if you set everything up correctly, but then encounter problems with your RAID and boot a rescue system to check things out, yet again there's a mismatch with the hostnames. Or the other way around, the hostname may match even if it's the wrong machine - if you used the same hostname for two independent systems and then migrate drives. Then the alien arrays take over the names of the original ones... Now, the metadata can also be changed later using mdadm --assemble --update=homehost or --update=name, that is one way to deal with problem. It should be set correctly but it's difficult to change as (for some reason) short of hexediting metadata directly, it can only be done at assembly time. Another way is to ignore the systems hostname and instead specify --homehost on assembly or set HOMEHOST in mdadm.conf. This is described in some detail in the mdadm.conf manpage. HOMEHOST The homehost line gives a default value for the --homehost= option to mdadm. There should normally be only one other word on the line. It should either be a host name, or one of the special words <system>, <none> and <ignore>. 
If <system> is given, then the gethostname(2) system call is used to get the host name. This is the default. [...] When arrays are created, this host name will be stored in the metadata. When arrays are assembled using auto-assembly, arrays which do not record the correct homehost name in their metadata will be assembled using a "foreign" name. A "foreign" name always ends with a digit string preceded by an underscore to differentiate it from any possible local name. e.g. /dev/md/1_1 or /dev/md/home_0. So you can try to set HOMEHOST ALU (in my case), or the more generic HOMEHOST <ignore> (or HOMEHOST <none>) in the mdadm.conf. But it will only work when that mdadm.conf is present. And again, if you set ignore and then hook up an array from another machine, you might run into name conflicts. So it'd be best to set the hostname correctly in metadata and mdadm.conf and not ignore it, and better yet set the actual hostname in initramfs before assembly, but that can be hard to put into practice. My personal preference is to just stick to the classic numeric style. Identify by UUID and nothing else: ARRAY /dev/md1 UUID=8fe790ca:f3fa3388:4ae125b6:2c3a5d44 ARRAY /dev/md2 UUID=f14bef5b:a5356e51:25fde128:09983091 ARRAY /dev/md3 UUID=0639c68d:4c844bb1:5c02b33e:00ab4a93 This is also consistent (but it likewise depends on the array having been created this way and/or set accordingly in the metadata, otherwise you might also have to --update it). And alien arrays that don't match the given UUIDs should end up as /dev/md127+. At the end of the day, no matter what you do, you should not blindly rely on /dev/mdX or /dev/md/names, the same way you don't blindly rely on /dev/sdX letters. Always use filesystem UUIDs to identify whatever is on those arrays. There are too many corner cases where names might unexpectedly change, so at best this can be an orientation help or hint to the sysadmin; it's not the answer to all problems.
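Converting existing mdadm --detail --scan output to that UUID-only style is just a matter of stripping the metadata= and name= fields. A sketch; uuid_only is a made-up filter name:

```shell
# drop the " metadata=..." and " name=..." fields from ARRAY lines,
# keeping only the device path and the UUID
uuid_only() {
  sed -e 's/ metadata=[^ ]*//' -e 's/ name=[^ ]*//'
}
# e.g.: mdadm --detail --scan | uuid_only >> /etc/mdadm/mdadm.conf
```

You may still want to edit the device names (e.g. /dev/md/myarray to /dev/md1) by hand afterwards.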
MD device name changing to include "HOSTNAME:" after first reboot. How do I get a consistent name?
1,564,750,780,000
I have a laptop running linux with nvidia optimus/intel hybrid graphics where all outputs are connected to the intel card. It is driven by the i915 driver. An external monitor or beamer is discovered only one time a boot cycle: If I disable or unplug it (and then plug it again), it cannot be enabled again, because the linux kernel does not detect it anymore: There are no udev or acpi events on plug/unplug and the sysfs, in my case /sys/class/drm/card0-DP-1/status, indicates that the output is disconnected. After a reboot the display is detected again, and again exactly one time. Suspending/hibernating and resuming suffice as well, but only if the output is uplugged while rebooting. I think this is somehow related to the kernel probing/reprobing for output devices on boot. Can the kernel be somehow induced to re-probe for monitors, and thus to hopefully detect them again?
This isn't the xrandr approach that I know works in X, but for console you can try this: you can write to that /sys/class/drm/card0-DP-1/status file as well. I couldn't find proper documentation, but thankfully Linux is open source. Reviewing the source code, it looks like it takes a few values: detect, on, on-digital, and off. So echo detect > /sys/class/drm/card0-DP-1/status should force a re-check for a monitor. Or echo on-digital > /sys/class/drm/card0-DP-1/status might manage to turn it on, regardless of what the detection thinks. edit: Under X, I've used this to deal with HDMI that did not detect being plugged in; it'll force-enable the output. But unfortunately video only, HDMI audio won't work (and apparently isn't possible without a kernel patch): xrandr --newmode "Mode 2" 148.500 1920 2008 2052 2200 1080 1084 1089 1125 +hsync +vsync xrandr --addmode HDMI-1 "Mode 2" xrandr --output HDMI-1 --mode "Mode 2" --right-of LVDS-1 All those numbers specify the video timings; normally they're auto-detected. The easiest way to get them is to grab the mode it's using when you've booted with it so it's working (xrandr --verbose will show them).
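For convenience, the detect write can be applied to every connector on every card at once. A sketch only: reprobe_connectors is a made-up name, writing to sysfs needs root, and write errors are deliberately swallowed so one read-only connector doesn't abort the loop.

```shell
# ask the kernel to re-probe every DRM connector for an attached monitor
reprobe_connectors() {
  for status in /sys/class/drm/card*-*/status; do
    [ -e "$status" ] || continue
    echo detect 2>/dev/null > "$status" || :
  done
}
```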
How to make linux detect/re-probe monitors with intel i915 driver?
1,564,750,780,000
How could I check the file system format of a disk image? I know that I can check with the file command but I would like to automate the behaviour. $ file img.raw img.raw: data $ file img.ext4 img.raw: Linux rev 1.0 ext4 filesystem data, UUID=346712e7-1a56-442b-a5bb-90ba4c6cc663 (extents) (64bit) (large files) (huge files) $ file img.vfat img.vfat: DOS/MBR boot sector, code offset 0x3c+2, OEM-ID "mkfs.fat", sectors/cluster 16, reserved sectors 16, root entries 512, Media descriptor 0xf8, sectors/FAT 256, sectors/track 32, heads 64, sectors 1024000 (volumes > 32 MB) , serial number 0x4b5e9a12, unlabeled, FAT (16 bit) I would like to check if the given image disk is formatted with the given format. For example checkfs <image> <format> returns 0 if the image contains a 'format' file system, another value otherwise. I thought about doing something like file <image> | grep <format> and check the return code, however for vfat filesystems, 'vfat' is not appearing on file's output. I could also write a script to do it but I can't find tools which allow me to get the file system format of a disk image. I've also tried with the following tools: fdisk, parted, and df. Is there a tool which would allow me to check a disk image file system format and that works with most used file system formats?
Finally found what I needed: blkid -o value -s TYPE <image> will return the fs type, or nothing if it's raw data. EDIT: As mentioned by @psusi, parted has machine-parsable output. I find it less convenient than using blkid, but it could also be useful. parted -m <image> print | tail -n +3 | awk -F ":" '{print $(NF-2)}' will print the fs type of each partition. tail -n +3 is used to skip the first two lines; awk -F ":" '{print $(NF-2)}' is used to get the fs type, which is the third field from the end.
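The checkfs <image> <format> wrapper asked for in the question then reduces to a one-line comparison. A sketch; the checkfs name comes from the question, and it assumes blkid (util-linux) is installed:

```shell
# checkfs IMAGE FORMAT: exit 0 if blkid detects FORMAT on IMAGE,
# non-zero otherwise (blkid prints nothing for unrecognized/raw data)
checkfs() {
  [ "$(blkid -o value -s TYPE "$1" 2>/dev/null)" = "$2" ]
}
```

Usage: checkfs img.ext4 ext4 && echo ok. Note that when run as a non-root user, the image file has to be readable by the caller.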
Check disk image file system type
1,564,750,780,000
I have a script running in a server and it will create many sub-processes( around 800 ). I want to kill them all in one stretch. Below is the ps information. root 26363 0.0 0.0 119216 1464 ? Ss Mar02 0:00 SCREEN -S website_status root 26365 0.0 0.0 108472 1844 pts/12 Ss Mar02 0:00 \_ /bin/bash root 4910 0.0 0.0 161684 1956 pts/12 S Mar02 0:00 \_ su webmon webmon 4939 0.0 0.0 108472 1924 pts/12 S+ Mar02 0:00 \_ bash webmon 1094 3.4 0.0 107256 2432 pts/12 S 05:37 2:26 \_ sh /home/webmon/scripts/for_html/website/website_status.sh webmon 5159 0.0 0.0 100956 1288 pts/12 S 05:37 0:00 \_ mysql -vvv -h 192.168.12.38 -uwebmon -px xxxxxxxxxxxxx -e show processlist; webmon 5160 0.0 0.0 103252 816 pts/12 S 05:37 0:00 \_ grep in set webmon 5161 0.0 0.0 105952 900 pts/12 S 05:37 0:00 \_ awk {print $1} webmon 12094 0.0 0.0 100956 1288 pts/12 S 05:37 0:00 \_ mysql -vvv -h 192.168.12.38 -uwebmon -px xxxxxxxxxxxxx -e show processlist; webmon 12095 0.0 0.0 103252 820 pts/12 S 05:37 0:00 \_ grep Sleep -c webmon 15044 0.0 0.0 60240 3004 pts/12 S 05:37 0:00 \_ ssh -q 192.168.12.38 uptime | grep -o load.* | cut -f2 -d: webmon 15166 0.0 0.0 100956 1292 pts/12 S 05:37 0:00 \_ mysql -vvv -h 192.168.12.38 -uwebmon -px xxxxxxxxxxxxx -e show processlist; webmon 15167 0.0 0.0 103252 816 pts/12 S 05:37 0:00 \_ grep in set webmon 15168 0.0 0.0 105952 900 pts/12 S 05:37 0:00 \_ awk {print $1} webmon 18484 0.0 0.0 100956 1288 pts/12 S 05:38 0:00 \_ mysql -vvv -h 192.168.12.38 -uwebmon -px xxxxxxxxxxxxx -e show processlist; webmon 18485 0.0 0.0 103252 816 pts/12 S 05:38 0:00 \_ grep in set webmon 18486 0.0 0.0 105952 900 pts/12 S 05:38 0:00 \_ awk {print $1} webmon 25110 0.0 0.0 60240 3008 pts/12 S 05:38 0:00 \_ ssh -q 192.168.12.38 uptime | grep -o load.* | cut -f2 -d: webmon 2598 0.0 0.0 100956 1292 pts/12 S 05:38 0:00 \_ mysql -vvv -h 192.168.12.38 -uwebmon -px xxxxxxxxxxxxx -e show processlist; webmon 2599 0.0 0.0 103252 816 pts/12 S 05:38 0:00 \_ grep in set webmon 2600 0.0 0.0 105952 900 
pts/12 S 05:38 0:00 \_ awk {print $1} Killing just the script didn't work; what is the best and fastest way to kill it along with all of these sub-processes?
Have you tried pkill -signal -P ppid? From the pkill manual: pkill - look up or signal processes based on name and other attributes -signal Defines the signal to send to each matched process -P ppid Only match processes whose parent process ID is listed If you wanted to kill 2432, and all its children, you should first try pkill -15 -P 2432, and if that doesn't work and you're willing to use the nuclear option: pkill -9 -P 2432.
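Note that pkill -P only matches direct children. If you want to take down the script's whole process tree (children of children included), the same idea can be applied recursively. A sketch; killtree is a made-up helper name, and it assumes pgrep from procps:

```shell
# killtree PID: TERMinate PID and all of its descendants, deepest first
killtree() {
  for child in $(pgrep -P "$1"); do
    killtree "$child"
  done
  kill -TERM "$1" 2>/dev/null
}
```

In the listing above, killtree 1094 would take down website_status.sh together with every mysql/grep/awk/ssh child it has spawned. Swap -TERM for -9 only as a last resort.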
Best way to kill processes created by bash script?
1,564,750,780,000
I will be using Ubuntu Linux for this project. For training of a particular application at a conference I need: To have each student be able to ssh into the same user account on a server Upon each login automatically put the user in separate isolated environments Each isolated environment includes the application, example config files, and the standard unix toolset (e.g. grep, awk, sort, uniq, etc.) However, access to an entire linux filesystem is fine too as long as the user can only damage his own isolated environment and not those of others. The virtual environments should be destroyed when the users SSH session ends For #1 we would like to do the single user account so we don't have to deal with creating an account for each student and handing out the user names and passwords. Does anyone know how I can meet these goals? Which technology e.g. LXC, Chroot, etc. is best for this? I've been toying with the idea of using .bash_profile and .bash_logout to handle the creation and destruction of these environments but not sure which technology is capable of creating the environments I need.
With Docker you can do this very easily. docker pull ubuntu docker run -t -i ubuntu /bin/bash # make your changes and then log out docker commit $(docker ps -a -q | head -n 1) sandbox cat > /usr/local/bin/sandbox <<EOF #!/bin/sh exec docker run -t -i --rm=true sandbox /bin/bash EOF chmod a+x /usr/local/bin/sandbox echo /usr/local/bin/sandbox >> /etc/shells useradd testuser -g docker -s /usr/local/bin/sandbox passwd testuser Whenever testuser logs in, they will be placed into an isolated container where they can't see anything outside it, not even the containers of other users. The container will then be automatically removed when they log out. Note: This can be circumvented by the user specifying a command. For example: ssh foo.example.com /bin/bash. If security is a concern, you can use the ForceCommand option in /etc/sshd_config. Explanation: docker pull ubuntu Here we fetch the base image that we're going to work with. Docker provides standard images, and ubuntu is one of them.   docker run -t -i ubuntu /bin/bash # make your changes and then log out Here we launch a shell from the ubuntu image. Any changes you make will be preserved for your users. You could also use a Dockerfile to build the image, but for a one time thing, I think this is simpler.   docker commit $(docker ps -a -q | head -n 1) sandbox Here we convert the last container that was run into a new image called sandbox.   cat > /usr/local/bin/sandbox <<EOF #!/bin/sh exec docker run -t -i --rm=true sandbox /bin/bash EOF This will be a fake shell that the user is forced to run on login. The script will launch them into a docker container which will automatically be cleaned up as soon as they log out.   chmod a+x /usr/local/bin/sandbox I hope this is obvious :-)   echo /usr/local/bin/sandbox >> /etc/shells This may not be required on your system, but on mine a shell cannot be a login shell unless it exists in /etc/shells.   
useradd testuser -g docker -s /usr/local/bin/sandbox We create a new user that with their shell set to a script we will create. The script will force them to launch into the sandbox container. They are a member of the docker group so that the user can launch a new container. An alternative to putting the user in the docker group would be to grant them sudo permissions to a single command.   passwd testuser I hope this is also obvious.  
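For the ForceCommand option mentioned in the note above, the relevant fragment of /etc/ssh/sshd_config might look like this (a sketch, assuming OpenSSH; reload sshd after editing):

```
# Ignore any command the client requests and always run the sandbox shell
Match User testuser
    ForceCommand /usr/local/bin/sandbox
```

With this in place, ssh foo.example.com /bin/bash still lands the user inside the throwaway container.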
replicate and isolating user environments on the fly
1,564,750,780,000
I have a daemon that uses syslog(3) to log to a file that is not a descendant of /var/log. Currently, this requires that SELINUX be disabled. How can I configure an enabled SELINUX to allow this logging? I am an SELINUX novice. Any guidance or advice would be appreciated.
If you look at the context set for the directory /var/log you'll noticed the following things. First, the directory /var/log has the following selinux context set: $ ls -Z /var | grep "log$" drwxr-xr-x. root root system_u:object_r:var_log_t:s0 log Second, the log file, /var/log/messages, has no additional context: $ ls -Z /var/log/messages -rw------- root root ? /var/log/messages So it would seem that you only need to set a context similar to the one on /var/log on whatever directory you're planning on writing this additional log file to. Something like this should do it. Method #1: replicating selinux label Below will copy the context that's associated with /var/log and apply it to /opt/blah as well. $ mkdir /opt/blah $ ls -Z /opt | grep blah drwxr-xr-x root root ? blah # label directory with context $ chcon --reference /var/log /opt/blah # see the newly added context $ ls -Z /opt/ | grep blah drwxr-xr-x. root root system_u:object_r:var_log_t:s0 blah Method #2: applying context directly You can also apply them directly like so: $ chcon system_u:object_r:var_log_t:s0 /opt/blah I'm away from a system where I can confirm the need to run these commands but I believe you need to tell SELinux to pick up these newly applied contexts to the filesystem as well. $ semanage fcontext -a -t var_log_t "/opt(/.*)?" $ restorecon -R -v /opt confirm changes # confirm identical to /var/log context $ ls -Z /var/ | grep "log$" drwxr-xr-x. root root system_u:object_r:var_log_t:s0 log References RHEL Deployment Guide - Chapter 44. Working With SELinux CentOS SELinux Howto ⁠5.6.2. Persistent Changes: semanage fcontext
Configuring SELINUX to allow logging to a file that's outside /var/log
1,564,750,780,000
I am on Linux 2.6.32-26-generic. When I look into the Linux source code for the "ioctl.h" header file, I can see many variants (for different platforms, I guess), i.e. ./fs/ocfs2/ioctl.h ./fs/btrfs/ioctl.h ./fs/ceph/ioctl.h ./include/config/i2o/config/old/ioctl.h ./include/asm-generic/ioctl.h ./include/linux/hdlc/ioctl.h ./include/linux/ioctl.h ./drivers/video/via/ioctl.h ./drivers/staging/vt6655/ioctl.h ./drivers/staging/vt6656/ioctl.h ./arch/ia64/include/asm/ioctl.h ./arch/h8300/include/asm/ioctl.h ./arch/microblaze/include/asm/ioctl.h ./arch/score/include/asm/ioctl.h ./arch/avr32/include/asm/ioctl.h ./arch/alpha/include/asm/ioctl.h ./arch/x86/include/asm/ioctl.h ./arch/m32r/include/asm/ioctl.h ./arch/mn10300/include/asm/ioctl.h ./arch/sparc/include/asm/ioctl.h ./arch/powerpc/include/asm/ioctl.h ./arch/m68k/include/asm/ioctl.h ./arch/sh/include/asm/ioctl.h ./arch/xtensa/include/asm/ioctl.h ./arch/mips/include/asm/ioctl.h ./arch/s390/include/asm/ioctl.h ./arch/arm/include/asm/ioctl.h ./arch/blackfin/include/asm/ioctl.h ./arch/frv/include/asm/ioctl.h ./arch/parisc/include/asm/ioctl.h ./arch/cris/include/asm/ioctl.h But I see the file being included as #include <sys/ioctl.h>. How does this mapping work?
I believe the file being included is /usr/include/sys/ioctl.h (not one from /usr/src/linux or similar). On my system it belongs to glibc, not to the kernel or kernel-headers. Actually, nothing gets included from the kernel source: headers inside /usr/src/linux (or wherever the source lives) are used only for kernel compilation. If some software needs kernel headers to compile, it uses the ones in /usr/include/linux (and a few others), which are usually part of a package like kernel-headers or linux-headers.
"sys/ioctl.h" header in linux
1,564,750,780,000
I want to migrate the configuration of an Ubuntu desktop to a new box with different hardware. What is the easiest way to do this? /etc/ contains machine and hardware specific settings so I can't just copy it blindly. A similar problem exists for installed packages. edit: This is a move from x86 to x86-64.
First, if you're going to keep running 32-bit binaries, you're not actually changing the processor architecture: you'll still be running an x86 processor, even if it's also capable of doing other things. In that case, I recommend cloning your installation or simply moving the hard disk, as described in Moving linux install to a new computer. On the other hand, if you want to have a 64-bit system (in Ubuntu terms: an amd64 architecture), you need to reinstall, because you can't install amd64 packages on an i386 system or vice versa. (This will change when Multiarch comes along). Many customizations live in your home directory, and you can copy that to the new machine. The system settings can't be copied so easily because of the change in processor architecture. On Ubuntu 10.10 and up, try OneConf. OneConf is a mechanism for recording software information in Ubuntu One, and synchronizing with other computers as needed. In Maverick, the list of installed software is stored. This may eventually expand to include some application settings and application state. Other tools like Stipple can provide more advanced settings/control. One of the main things you'll want to reproduce on the new installation is the set of installed packages. On APT-based distributions, you can use the aptitude-create-state-bundle command (part of the aptitude package) to create an archive containing the list of installed packages and their debconf configuration, and aptitude-run-state-bundle on the new machine. (Thanks to intuited for telling me about aptitude-create-state-bundle.) See also Ubuntu list explicitly installed packages and the Super User and Ask Ubuntu questions cited there, especially Telemachus's answer, on how to do this part manually. For things you've changed in /etc, you'll need to review them. Many have to do with the specific hardware or network settings and should not be copied. 
Others have to do with personal preferences — but you should set personal preferences on a per-user basis whenever possible, so that the settings are saved in your home directory. If you plan in advance, you can use etckeeper to put /etc under version control (etckeeper quickstart). You don't need to know anything about version control to use etckeeper, you only need to start learning if you want to take advantage of it to do fancy things.
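The package-list step can be sketched with plain dpkg if you'd rather do it by hand (the file path /tmp/pkglist is just an example):

```shell
# On the old machine: dump the list of selected packages (Debian/Ubuntu).
dpkg --get-selections > /tmp/pkglist
wc -l /tmp/pkglist    # sanity check: one line per package

# On the new machine, after copying /tmp/pkglist across:
#   sudo dpkg --set-selections < /tmp/pkglist
#   sudo apt-get dselect-upgrade
```

Note this replays the *selection* of packages, not their debconf answers; the aptitude-create-state-bundle route mentioned above also carries the debconf configuration.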
How do I migrate configuration between computers with different hardware?
1,564,750,780,000
I created a group called music using the command groupadd -g 500 music. Then I added alice to the music group and made alice the admin of the music group, using these two commands: gpasswd -a alice music and gpasswd -A alice music. Now alice can add people to the music group without being denied permission, but how do I cancel this permission? I checked the man page of gpasswd but found nothing about an option for cancelling the admin permission. I tried to reach the objective by removing alice from the music group, using the command gpasswd -d alice music, but alice still has the permission to manipulate group members. How can I cancel the admin permission of a group?
You can set the administrators list to the empty list: gpasswd -A '' music which will revoke such privilege. The actual storage of the administrators information is in the 3rd field of the entry in /etc/gshadow, the shadow side of /etc/group, as described in gshadow(5): Each line of this file contains the following colon-separated fields: group name [...] encrypted password [...] administrators It must be a comma-separated list of user names. Administrators can change the password or the members of the group. Administrators also have the same permissions as the members (see below). members [...] So the same could have been achieved by running: vigr -s (with -s selecting /etc/gshadow instead of /etc/group) and manually removing out alice in the 3rd field of the music entry. So from something like: music:!:alice: to: music:!::
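To illustrate the field layout, here is the administrators field being pulled out of a sample gshadow-style line (the entry itself is made up):

```shell
# gshadow format: name:password:administrators:members
entry='music:!:alice:bob,carol'
admins=$(printf '%s' "$entry" | cut -d: -f3)
echo "$admins"    # the 3rd field is the admin list that gpasswd -A manages
```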
How to cancel the admin permission of a group?
1,564,750,780,000
A simple example. I'm running a process that serves HTTP requests using TCP sockets. It might A) calculate something, which means CPU will be the bottleneck, B) send a large file, which may cause the network to be the bottleneck, or C) run a complex database query with semi-random access, causing a disk bottleneck. Should I try to categorize each page/API call as one or more of the above types and try to balance how much of each I should have? Or will the OS do that for me? How do I decide how many threads I want? I'll use two numbers for hardware threads: 12 and 48 (Intel Xeons come with that many). I was thinking of having 2/3rds of the threads be for heavy CPU (8/32), 1 thread for heavy disk (or 1 heavy thread per disk) and the remaining 3/15 be for anything else, which means not trying to balance the network. Should I have more than 12/48 threads on hardware that only supports 12/48 threads? Do I want fewer so I don't cause the CPU to go into a slower throttling mode (I forget what it's called but I heard it happens if too much of the chip is active at once)? If I have to load and resource balance my threads, how would I do it?
Linux: The Linux kernel has a great implementation for the matter and many features/settings intended to manage the resources for running processes (via CPU governors, sysctl or cgroups). In such a situation, tuning those settings along with swap adjustment (if required) is recommended; basically you will be adapting the default functioning mode to your appliance. Benchmarks, stress tests and situation analysis after applying the changes are a must, especially on production servers. The performance gain can be very significant when the kernel settings are adjusted to the needed usage; on the other hand, this requires testing and a good understanding of the different settings, which is time consuming for an admin. Linux uses governors to load-balance CPU resources between the running applications. Many governors are available; depending on your distro's kernel, some governors may not be available (the kernel can be rebuilt to add missing or non-upstream governors). You can check which governor is currently in use, change it and, more importantly in this case, tune its settings. Additional documentation: reading, guide, similar question, frequency scaling, choice of governor, the performance governor and cpufreq. SysCtl: Sysctl is a tool for examining and changing kernel parameters at runtime; adjustments can be made permanent with the config file /etc/sysctl.conf. This is an important part of this answer, as many kernel settings can be changed with sysctl. A full list of available settings can be displayed with the command sysctl -a; details are available in this and this article. Cgroup: The kernel provides the control groups feature, called by the shorter name cgroups in this guide. Cgroups allow you to allocate resources such as CPU time, system memory, network bandwidth, or combinations of these resources among user-defined groups of tasks (processes) running on a system.
You can monitor the cgroups you configure, deny cgroups access to certain resources, and even reconfigure your cgroups dynamically on a running system. The cgconfig (control group config) service can be configured to start at boot time and re-establish your predefined cgroups, thus making them persistent across reboots. Source, further reading and question on the matter. Ram: This can be useful if the system has a limited amount of RAM; otherwise you can disable the swap to mainly use the RAM. The swap system can be adjusted per process or with the swappiness setting. If needed, the resources (RAM) can be limited per process with ulimit (also used to limit other resources). Disk: Disk I/O settings (the I/O scheduler) may be changed, as well as the cluster size. Alternatives: Other tools like nice, cpulimit, cpuset, taskset or ulimit can be used as alternatives for the matter.
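As a concrete illustration of the sysctl part: every sysctl key maps to a file under /proc/sys, so kernel.ostype is /proc/sys/kernel/ostype and the swappiness tunable is /proc/sys/vm/swappiness (a sketch, assuming a Linux system):

```shell
# sysctl -n kernel.ostype is equivalent to reading this file:
ostype=$(cat /proc/sys/kernel/ostype)
echo "$ostype"
# The swappiness tunable mentioned above lives here:
swappiness=$(cat /proc/sys/vm/swappiness)
echo "$swappiness"
# Making a change permanent means putting e.g. "vm.swappiness = 10" into
# /etc/sysctl.conf (writing these files requires root, so it is not done here).
```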
Should I attempt to 'balance' my threads or does linux do this?
1,564,750,780,000
I've downloaded my contact list in a .vcf format to my linux machine. I would like to be able to consult it without having to connect to the internet. The search feature is most important. I've got a script with grep and so on but I was hoping someone had already done the work to make things beautiful and readable.
There are a number of console-based tools designed to process vCard files; I know of the following: Rolo, a full-screen address-book manager; Khard, a console-based CardDAV client (which works fine with locally-stored vCard files); mutt_vc_query, a simple querying tool for vCard files (designed for Mutt, but usable standalone).
Is there a linux local tool to view, search, add contacts to/from a .vcf file?
1,564,750,780,000
Suppose I don't know where my Django source is stored, but I know that it contains these directories in this way: django/contrib/admin. How can I use the find command or any more appropriate coreutils command to find where this partial directory path (structure) is available? Example output: /home/me/python/extracted/django/contrib/admin/ /home/me/env/django/contrib/admin/
You should use the -path flag for this purpose: find /home/me -path "*django/contrib/admin*"
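A quick way to convince yourself this works, in a scratch directory (all paths below are made up for the demo):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/python/extracted/django/contrib/admin" "$tmp/env/django/contrib/admin"
# -path matches against the whole path, so the pattern can span directory components
matches=$(find "$tmp" -path '*django/contrib/admin' | sort)
printf '%s\n' "$matches"
rm -rf "$tmp"
```

Both created directories are printed, because the pattern is tested against the full path rather than just the file name (as -name would do).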
How to find a partial directory path?
1,564,750,780,000
Based on what I have read about pseudoterminals in Linux, there are two types of pseudoterminals: BSD-style pseudoterminals (which are deprecated) and UNIX 98 pseudoterminals. I have created two images that show my understanding of these two types of pseudoterminals. The following image shows how the BSD-style pseudoterminals work (please correct me if the image is wrong): This type of pseudoterminal is not hard to understand: each terminal is connected to a unique master driver. But with UNIX 98 pseudoterminals, things are a little more confusing. The following image shows how I think this type of pseudoterminal works: So basically all terminals use the same master driver (/dev/ptmx), but I am not sure how the master driver knows how to do the following: If data is being sent from one of the terminal processes, how does the master driver know to which TTY slave driver the data should be passed? If data is being sent from one of the TTY slave drivers, how does the master driver know to which terminal process the data should be passed? Does the master driver know how to do this in the way that I have shown in the image (i.e. the master driver has a mapping table that maps each terminal PID to its corresponding TTY slave driver)?
You are curiously fascinated by names. /dev/ptmx is not a "driver", it's just a name in the filesystem, which has a special meaning. A process opens a new master pty by calling posix_openpt(), which returns a file descriptor; the same effect can be achieved by calling open() on /dev/ptmx. Each time a process calls open() of /dev/ptmx a new pseudoterminal is created; the pseudoterminal is destroyed when there are no more processes having this file descriptor open. This file descriptor refers to the master side of the pseudoterminal, and can be passed to descendant processes like any other file descriptor. For more detailed information see unix.stackexchange.com/questions/117981. (Hat tip to @JdeBP for the suggestion.) Once a process has a file descriptor referring to a master side of the pseudoterminal, it can find out the name of the slave side of the pseudoterminal by calling ptsname(), and can pass this name to any process it wants to control through the pseudoterminal.
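You can watch this happen from the shell: util-linux's script opens /dev/ptmx to allocate a fresh pseudoterminal, and the slave name it hands to the child shows up under /dev/pts (a sketch; assumes Linux with script installed):

```shell
# /dev/ptmx is the single cloning device every Unix 98 master open goes through
ls -l /dev/ptmx
# Allocate a pty and ask the child which slave it was given.
# (ptys terminate lines with CR+LF, hence the tr to strip the CR)
slave=$(script -qc 'tty' /dev/null | tr -d '\r')
echo "$slave"    # something like /dev/pts/3
```

Each run can report a different /dev/pts/N, which is exactly the per-open pairing described above.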
BSD-style pseudoterminals vs. UNIX 98 pseudoterminals
1,564,750,780,000
In FreeBSD there is a utility to support circular log files called clog. This is very interesting for avoiding the maintenance of the logs of some services (outside systemd and its journald). Is there any alternative to do the same in Linux and/or with rsyslog?
There are tools to do much the same in both FreeBSD and Linux, and other operating systems besides. Making automatically-rotated strictly size-capped logs The following tools maintain strictly size-capped, automatically rotated, rotateable-on-demand, log file sets in a directory that one specifies. Dan Bernstein's multilog from daemontools, or Bruce Guenter's multilog from daemontools-encore, or Adam Sampson's multilog from freedt Laurent Bercot's s6-log from s6 Gerrit Pape's svlogd from runit Wayne Marshall's tinylog from perp my cyclog from nosh Usage is very simple: Send a to-be-logged process's standard output and standard error through a pipe to their standard input, in the normal way: ./thing-to-be-logged 2>&1 | cyclog logs/ cyclog adds TAI64N timestamps to lines as standard. For timestamp-free processing when something is already timestamped, use one of the multilogs, s6-log, or svlogd, in each of which timestamp addition is a non-default option. Substituting for syslog What you point to modifies FreeBSD syslog itself, with a patch dating from 2001 that may not apply cleanly nowadays, to have another output file mechanism. An alternative approach is to simply replace the syslog dæmon completely, alongside configuring more services to simply log to standard error (under service management that pipes standard error to logging services) instead of using syslog in the first place. For example: The nosh toolset provides several such substitutes, that split up the job of syslog and generate output suitable for feeding through the standard input of one of the aforementioned logging tools: a klogd service that runs a simple program named klog-read to read from /proc/kmsg and simply write that log stream to its standard error. a local-syslog-read service that runs a program named syslog-read to read datagrams from /dev/log (/run/log on the BSDs) and simply write that log stream to its standard error.
a udp-syslog-read service that runs the aforementioned syslog-read program to listen on the UDP syslog port and simply write that log stream to its standard error. a local-priv-syslog-read service that runs the aforementioned syslog-read program to read datagrams from /run/logpriv and simply write that log stream to its standard error. Further reading Jonathan de Boyne Pollard (2015). "Logging". The daemontools family. Frequently Given Answers. Jonathan de Boyne Pollard (2016). Don't use logrotate or newsyslog in this century.. Frequently Given Answers. Jonathan de Boyne Pollard (2016). "Logging". nosh Guide. Softwares. https://unix.stackexchange.com/a/294206/5132 https://unix.stackexchange.com/a/326166/5132 https://unix.stackexchange.com/a/340631/5132
Circular log in linux [duplicate]
1,564,750,780,000
I'm using CentOS 6.8. I'd like to know if I can find all files with the .log extension, order them by file size, and display the file size next to the filename. I'm currently using this command to find all files with the .log extension: find . -name \*.log
This seems to work for me: find . -name \*.log -ls | sort -r -n -k7 where... find = https://man7.org/linux/man-pages/man1/find.1.html . = current folder -name = allows you to search for a file name pattern; in this case the asterisk is escaped with a backslash so the shell doesn't expand it and find receives the wildcard. -ls = lists output in ls -dils format. sort = https://man7.org/linux/man-pages/man1/sort.1.html -r = reverses the results, going biggest to smallest -n = compares them as a numerical value -k = sorts on a field, in this case the 7th field has the size variable in the output You could also add | head -n 20 to the end of this to get the top/largest 20 files.
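With GNU find you can also print the size yourself, which avoids relying on the column positions of -ls output. A small self-contained demonstration (file names are made up):

```shell
tmp=$(mktemp -d)
printf '0123456789' > "$tmp/big.log"    # 10 bytes
printf '01' > "$tmp/small.log"          # 2 bytes
# %s = size in bytes, %p = path; sort numerically, largest first
sorted=$(find "$tmp" -name '*.log' -printf '%s %p\n' | sort -rn)
printf '%s\n' "$sorted"
rm -rf "$tmp"
```

The first line of the output is the largest file, so piping through head -n 20 gives the top 20 as before.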
Can I find all files with the .log extension and order by file size?
1,564,750,780,000
Situation: increase swap size (/dev/sda3) beyond RAM (8 GB) on a 128 GB HD Motivation: 8 GB RAM is too little; 30 GB free space on my SSD; I want to turn 20 GB of it into SSD swap Characteristics of the system: swap is non-immutable/changeable. I cannot find any evidence why /mnt/.swapfile should be immutable, so you do not need to change the file attributes of the swapfile sudo lsattr /mnt/.swapfile -------------e-- /mnt/.swapfile Command sudo fdisk -lu /dev/sda gives Disk /dev/sda: 113 GiB, 121332826112 bytes, 236978176 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 082F85CA-EE3E-479C-8244-858B196FA5BA Device Start End Sectors Size Type /dev/sda1 2048 4095 2048 1M BIOS boot /dev/sda2 4096 220323839 220319744 105.1G Linux filesystem /dev/sda3 220323840 236976127 16652288 8G Linux swap Command df -h gives Filesystem Size Used Avail Use% Mounted on udev 3.9G 0 3.9G 0% /dev tmpfs 793M 9.4M 784M 2% /run /dev/sda2 104G 74G 25G 75% / tmpfs 3.9G 54M 3.9G 2% /dev/shm tmpfs 5.0M 4.0K 5.0M 1% /run/lock tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup tmpfs 793M 64K 793M 1% /run/user/1000 Allocate more disk space for Swap in /dev/sda3. My unsuccessful workflow for the task, with HD and swap on the same partition, /dev/sda3 masi@masi:~$ sudo -i root@masi:~# swapoff /dev/sda3 root@masi:~# swapon [blank] root@masi:~# dd if=/dev/zero of=/dev/sda3 bs=20480 count=1M dd: error writing '/dev/sda3': No space left on device 416308+0 records in 416307+0 records out 8525971456 bytes (8.5 GB, 7.9 GiB) copied, 18.7633 s, 454 MB/s root@masi:~# mkswap /dev/sda3 Setting up swapspace version 1, size = 8 GiB (8525967360 bytes) no label, UUID=245cb42c-1d4e-4e21-b544-16b64af962d6 root@masi:~# swapon -p 99 /dev/sda3 root@masi:~# swapon NAME TYPE SIZE USED PRIO /dev/sda3 partition 8G 0B 99 root@masi:~# vi /etc/fstab ...
HD and Swap on same Partition - Current Workflow [Ijaz, cas, FarazX] Merging. Use fallocate at the beginning instead of dd because there is no need to write zeros masi@masi:~$ sudo fallocate -l 20G /mnt/.swapfile masi@masi:~$ sudo mkswap /mnt/.swapfile Setting up swapspace version 1, size = 20 GiB (21474832384 bytes) no label, UUID=45df9e48-1760-47e8-84d7-7a14f56bbd72 masi@masi:~$ sudo swapon /mnt/.swapfile swapon: /mnt/.swapfile: insecure permissions 0644, 0600 suggested. masi@masi:~$ sudo chmod 600 /mnt/.swapfile masi@masi:~$ free -m total used free shared buff/cache available Mem: 7925 1494 175 196 6255 5892 Swap: 28610 0 28610 Add the following line to your /etc/fstab, which is better than adding the command to your runlevels (/etc/rc.local). I put the swapfile at /mnt/.swapfile to maintain the Linux/Unix philosophy and the integrity of my system backup scripts. If swapping to an SSD, use the discard option rather than just sw, so that the blocks are trimmed on every boot # http://unix.stackexchange.com/a/298212/16920 # http://unix.stackexchange.com/a/298543/16920 # If swap is on SSD, trim blocks each time at startup. /mnt/.swapfile none swap defaults,discard 0 0 # If swap on External HDD, just use sw. #/media/masi/SamiWeek/.swapfile none swap sw 0 0 Sources How to increase swap space? https://askubuntu.com/a/178726/25388 General discussion about increasing swap space for beginners. Linux Partition HOWTO for HDDs, not SSDs: 4. Partitioning requirements. http://www.tldp.org/HOWTO/Partition/requirements.html So do not put your swap on the outer tracks on SSDs, but use the defaults,discard options to trim your blocks as proposed by @cas. System: Linux Ubuntu 16.04 64 bit Linux kernel: 4.6 Linux modules: wl Hardware: Macbook Air 2013-mid Ram: 8 GB SSD: 128 GB
You just want to increase the swap size on your system using the space from sda2. Your sda2: /dev/sda2 104G 74G 25G 75% / You can add additional swap space to your system by using a swap file created on /, which will utilize your sda2. Just do: dd if=/dev/zero of=/swapfile bs=20480 count=1M and then do: sudo mkswap /swapfile sudo swapon /swapfile and check; your swap space will increase by that amount (use free -m). And yes, to enable it at boot time add this entry in /etc/fstab: /swapfile none swap sw 0 0
How to Allocate More Space to Swap and Increase its Size Greater than Ram?
1,564,750,780,000
I'm trying to understand how Linux capabilities are passed to a process that has been exec()'d by another one. From what I've read, in order for a capability to be kept after exec, it must be in the inheritable set. What I am not sure of, though, is how that set gets populated. My goal is to be able to run a program as a regular user that would normally require root. The capability it needs is cap_dac_override so it can read a private file. I do not want to give it any other capabilities. Here's my wrapper: #include <unistd.h> int main(int argc, char *argv[]) { return execl("/usr/bin/net", "net", "ads", "dns", "register", "-P", NULL); } This works when I set the setuid permission on the resulting executable: ~ $ sudo chown root: ./registerdns ~ $ sudo chmod u+s ./registerdns ~ $ ./registerdns Successfully registered hostname with DNS I would like to use capabilities instead of setuid, though. I've tried setting the cap_dac_override capability on the wrapper: ~ $ sudo setcap cap_dac_override=eip ./registerdns ~ $ ./registerdns Failed to open /var/lib/samba/private/secrets.tdb ERROR: Unable to open secrets database I've also tried setting the inheritable flag on the cap_dac_override capability for the net executable itself: ~ $ sudo setcap cap_dac_override=eip ./registerdns ~ $ sudo setcap cap_dac_override=i /usr/bin/net ~ $ ./registerdns Failed to open /var/lib/samba/private/secrets.tdb ERROR: Unable to open secrets database I need to use the wrapper to ensure that the capability is only available when using that exact set of arguments; the net program does several other things that could be dangerous to give users too broad of permissions on it. I'm obviously misunderstanding how the inheritance works. I can't seem to figure out how to set up the wrapper to pass its capabilities along to the replacement process so it can use them. I've read the man page, and countless other documents on how it should work, and I thought I was doing what it describes.
It turns out that setting +i on the wrapper does not add the capability to the CAP_INHERITABLE set for the wrapper process, thus it is not passed through exec. I therefore had to manually add CAP_DAC_OVERRIDE to CAP_INHERITABLE before calling execl: #include <sys/capability.h> #include <stdio.h> #include <unistd.h> int main(int argc, char *argv[]) { cap_t caps = cap_get_proc(); printf("Capabilities: %s\n", cap_to_text(caps, NULL)); cap_value_t newcaps[1] = { CAP_DAC_OVERRIDE, }; cap_set_flag(caps, CAP_INHERITABLE, 1, newcaps, CAP_SET); cap_set_proc(caps); printf("Capabilities: %s\n", cap_to_text(caps, NULL)); cap_free(caps); return execl("/usr/bin/net", "net", "ads", "dns", "register", "-P", NULL); } In addition, I had to add cap_dac_override to the permitted file capabilities set on /usr/bin/net and set the effective bit: ~ $ sudo setcap cap_dac_override=p ./registerdns ~ $ sudo setcap cap_dac_override=ei /usr/bin/net ~ $ ./registerdns Capabilities = cap_dac_override+p Capabilities = cap_dac_override+ip Successfully registered hostname with DNS I think I now fully understand what's happening: The wrapper needs CAP_DAC_OVERRIDE in its permitted set so it can add it to its inheritable set. The wrapper's process inheritable set is different than its file inheritable set, so setting +i on the file is useless; the wrapper must explicitly add CAP_DAC_OVERRIDE to CAP_INHERITABLE using cap_set_flag/cap_set_proc. The net file needs to have CAP_DAC_OVERRIDE in its inheritable set so that it can in fact inherit the capability from the wrapper into its CAP_PERMITTED set. It also needs the effective bit to be set so that it will be automatically promoted to CAP_EFFECTIVE.
Passing capabilities through exec
1,564,750,780,000
Is there a way to generate cartesian product of arrays without using loops in bash? One can use curly brackets to do a similar thing: echo {a,b,c}+{1,2,3} a+1 a+2 a+3 b+1 b+2 b+3 c+1 c+2 c+3 but I need to use arrays as inputs, and most obvious tricks fail me.
You could use brace expansion. But it's ugly. You need to use eval, since brace expansion happens before (array) variable expansion. And "${var[*]}" with IFS=, to create the commas. Consider a command to generate the string echo {a,b,c}+{1,2,3} Assuming the arrays are called letters and numbers, you could do that using the "${var[*]}" notation, with IFS=, to insert commas between the elements instead of spaces. letters=(a b c) numbers=(1 2 3) IFS=, echo {"${letters[*]}"}+{"${numbers[*]}"} Which prints {a,b,c}+{1,2,3} Now add eval, so it runs that string as a command eval echo {"${letters[*]}"}+{"${numbers[*]}"} And you get a+1 a+2 a+3 b+1 b+2 b+3 c+1 c+2 c+3
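The same eval trick can also capture the product into a variable instead of just printing it (wrapped in an explicit bash -c here, since arrays and brace expansion are bash features):

```shell
# Cartesian product of two arrays via IFS-joined brace expansion plus eval
result=$(bash -c '
  letters=(a b c)
  numbers=(1 2 3)
  IFS=,
  eval echo {"${letters[*]}"}+{"${numbers[*]}"}
')
echo "$result"    # a+1 a+2 a+3 b+1 b+2 b+3 c+1 c+2 c+3
```

From there, result=( $result ) (with default IFS) turns the product back into an array if you need one element per entry.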
Array Cartesian product in bash
1,564,750,780,000
Yesterday I had to install Windows, which overwrote Grub. Well, it's not the first time I've had to fix Grub, so I used a LiveCD, mounted the root partition (I don't have a separate boot partition, just / and home) and ran grub-install --root-directory=/mnt/ /dev/sda. However, it didn't work. After Googling a while I found a tutorial in which, instead of just mounting the Linux partition, the author also did mount --bind /dev /mnt/dev and mount --bind /proc /mnt/proc. After that he chrooted to /mnt and then installed Grub, and using this method, it worked. What are the mount --bind commands for? I'm familiar with the usage of --bind (man page) but I do not know why it was used in this example.
proc and sys filesystems are provided by the running kernel -- when the kernel is not running, they cease to exist. This means that when you chroot into another operating system, these filesystems are not present. Many programs expect them to exist so that they can function, for example, they may require information about the running system, or want to modify the way the kernel handles something. It is often enough simply to provide /proc and /sys from the current kernel for these programs to work as expected. A symlink would not suffice, as the act of chrooting would invalidate the file paths used. In Linux, you also cannot hard link directories (other than . and .., as provided by mkdir). This means that a third option has to be used to mirror these filesystems to the chrooted environment -- bind mounting. A bind mount is provided by the kernel directly, and works as expected within a chroot.
What are these commands for?
1,564,750,780,000
Environment My LAN setup is quite basic: A router connected to the ISP's modem and the internet My development PC directly connected to the router The router provides DHCP but does not run its own DNS server. In fact, there is no DNS server hosted anywhere on my LAN (typical home network setup). The router is configured to send the ISP's DNS servers as part of the DHCP lease information. I set up a VirtualBox machine on my development PC and installed Debian Squeeze (6.0.4) on it. The VirtualBox network mode is Bridged Adapter to simulate a standalone server on my LAN. Being a VirtualBox server instead of a physical server is not really important, but I mention it for completeness. The Problem Every time a network operation executes a DNS reverse lookup of a LAN IP prior to executing, the server has long delays. Some examples of slow network operations: SSH connection to the server from my dev PC Connection to admin port of Glassfish server netstat -l (netstat -nl is very fast) Starting MTA: exim4 on boot takes a long time to complete Some of these have workarounds like adding my dev PC's IP to /etc/hosts or adding a command-specific option to avoid doing DNS reverse lookups. Obviously, using /etc/hosts only goes so far because it is at odds with DHCP. However, I can't help but think that I'm missing something. Do I really need to set up a DNS server somewhere on my LAN? That seems like a huge and useless effort for my needs and I can't believe there isn't another option in a DHCP environment like mine. I searched the net a lot for this and maybe I don't have the right search terms, but I can't find the solution... update 1 following BillThor's answer Using host (dig gives the same results): # ip of stackoverflow.com $ time host -v 64.34.119.12 Trying "12.119.34.64.in-addr.arpa" ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15537 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;12.119.34.64.in-addr.arpa.
IN PTR ;; ANSWER SECTION: 12.119.34.64.in-addr.arpa. 143 IN PTR stackoverflow.com. Received 74 bytes from 192.168.1.1#53 in 15 ms real 0m0.020s user 0m0.008s sys 0m0.000s # ip of dev pc $ time host -v 192.168.1.50 Trying "50.1.168.192.in-addr.arpa" ;; connection timed out; no servers could be reached real 0m10.004s user 0m0.004s sys 0m0.000s My /etc/resolv.conf (was automatically created during installation) nameserver 192.168.1.1 Both host and dig return very fast for a public ip but take 10s to timeout for a LAN ip. I guess 10s is my current timeout value. update 2 With dev-pc in /etc/hosts file: $ time getent hosts 192.168.1.50 192.168.1.50 dev-pc real 0m0.001s user 0m0.000s sys 0m0.000s Without dev-pc in /etc/hosts file: $ time getent hosts 192.168.1.50 real 0m10.012s user 0m0.004s sys 0m0.000s It looks more and more like I'll have to find piecewise program options or parameters for each one trying to do reverse DNS lookups! None of the machines (virtual or not) can act as a DNS server on my LAN since they are not always up. Unfortunately, the router's firmware doesn't include a DNS server.
Is 192.168.1.1 your router's IP address? nameserver 192.168.1.1 suggests your router is advertising itself as a DNS server, rather than "sending the ISP's DNS servers". What brand and model of router do you have? Does the web interface show log messages? I'm wondering if your router is forwarding the request to your ISP's nameservers, but your ISP's nameservers are dropping the request, because they don't want you to know what their machine with IP 192.168.1.50 is called. Suggestions: Double check your router's settings. It should answer requests for your own private network. Maybe you can add a static host entry in your router's web interface? Try installing Avahi on all the systems on your network. Tell your router to use Google Public DNS (8.8.8.8 and 8.8.4.4) or OpenDNS
Reverse DNS lookups slowing down network operations on LAN
1,564,750,780,000
Is there any way to change the brightness and color using any command line tools? I am trying in Fedora and Ubuntu but no luck so far. Follow up: so the invocation format is [command] [connected output] [gamma R:G:B, value 0 to 255], as in: xrandr --output VGA1 --gamma 0:0:0
You can modify gamma settings (colors and effectively contrast too) using the xrandr tool. First determine the output name of your monitor: $ xrandr -q | grep connected DFP1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 477mm x 268mm CRT1 disconnected (normal left inverted right x axis y axis) In the above example I have a monitor connected and seen as output DFP1. So now for the gamma modification example: $ xrandr --output DFP1 --gamma 0.8:0.8:1.1 Where gamma values are in the format Red:Green:Blue. Edit: Another option is xcalib (you may need to install it first). It can be used with the -a parameter to have effect directly on the connected monitor. See the output of xcalib for more details. Unfortunately, the color/brightness settings seem to work additively, so you might need to do xrandr --output ... --gamma 1:1:1 to restore the default state.
How to use command line to change brightness and color?
1,564,750,780,000
I'm experimenting with generating some custom kernels using genkernel. However, each iteration leaves a file in /boot called System.map-genkernel-<arch>-<version>. Is it safe to rename and/or delete the System.map-* files?
The System.map file is mainly used to debug kernel crashes. It's not actually necessary, but it's best to keep it around if you're going to use that kernel. If you've decided you don't need that kernel, then it's safe to delete the corresponding map file. If you're really low on disk space, you could compress the map files. They aren't that big, so this won't save much space, but bzip2 will squeeze them down to about 25% of the original size. Then you can uncompress one if you discover that you need it.
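A rough sketch of the space saving (using gzip here since it is ubiquitous; bzip2 works the same way and squeezes tighter). The fake map file below is generated just for the demo:

```shell
tmp=$(mktemp -d)
# Fake System.map-style content: repetitive text like this compresses very well
seq -f 'ffffffff81%06g T fake_symbol' 1 5000 > "$tmp/System.map-demo"
before=$(wc -c < "$tmp/System.map-demo")
gzip "$tmp/System.map-demo"           # replaces the file with System.map-demo.gz
after=$(wc -c < "$tmp/System.map-demo.gz")
echo "before: $before bytes, after: $after bytes"
rm -rf "$tmp"
```

A real System.map is plain text with a similarly repetitive structure, which is why the compression ratio is so favorable.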
Safe to delete System.map-* files in /boot?
1,564,750,780,000
I'm trying to delete a bunch of old ZFS snapshots but I get errors saying that the datasets are busy: [root@pool-01 ~]# zfs list -t snapshot -o name -S creation | grep ^pool/nfs/public/mydir | xargs -n 1 zfs destroy -vr will destroy pool/nfs/public/mydir@autosnap_2019-02-24_03:13:17_hourly will reclaim 408M cannot destroy snapshot pool/nfs/public/mydir@autosnap_2019-02-24_03:13:17_hourly: dataset is busy will destroy pool/nfs/public/mydir@autosnap_2019-02-24_02:13:17_hourly will reclaim 409M cannot destroy snapshot pool/nfs/public/mydir@autosnap_2019-02-24_02:13:17_hourly: dataset is busy will destroy pool/nfs/public/mydir@autosnap_2019-02-24_01:13:18_hourly will reclaim 394M Running lsof shows no processes accessing these snapshots: [root@pool-01 ~]# lsof | grep pool/nfs/public/mydir There also appear to be no holds on any of the snapshots: [root@pool-01 ~]# zfs holds pool/nfs/public/mydir@autosnap_2019-02-24_03:13:17_hourly NAME TAG TIMESTAMP Is there anything else I should look out for? Anything else I can do besides a reboot?
This appears to be unintended behavior on ZoL. I left the ZFS box alone for a few days, finally gave up and rebooted the box, and I was able to destroy those snapshots after the reboot.
ZFS on Linux: cannot destroy snapshot, dataset is busy
1,564,750,780,000
After getting a new VPS with Debian 9, I created a new user using root. I created a new user called joe with this command: adduser joe. Then, I used usermod -aG sudo joe to grant administrative privileges. After that, I logged out and used Putty to log in as joe. I entered the password for joe. After entering the password, it displayed this message: Could not chdir to home directory /home/joe: Permission denied -bash: /home/joe/.bash_profile: Permission denied I checked the /home/joe directory using this command: sudo ls -al /home/joe total 20 drw-r--r-- 2 joe joe 4096 Feb 7 16:32 . drwxr-xr-x 4 root root 4096 Feb 7 16:32 .. -rw-r--r-- 1 joe joe 220 Feb 7 16:32 .bash_logout -rw-r--r-- 1 joe joe 3526 Feb 7 16:32 .bashrc -rw-r--r-- 1 joe joe 675 Feb 7 16:32 .profile How can I enter the /home/joe directory after logging in as joe?
Apparently /home/joe doesn't have execute permission for the user. Execute permission on a directory allows traversing it. Try sudo chmod 755 /home/joe and then log in again.
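A quick way to see the difference is to compare the mode strings before and after the fix on a scratch directory (GNU stat syntax; the missing `x` in the first string matches the `drw-r--r--` listing from the question):

```shell
# A directory without the execute (x) bit cannot be traversed even when it
# is readable; compare the mode strings before and after chmod 755
d=$(mktemp -d)
chmod 644 "$d"
stat -c %A "$d"    # drw-r--r-- : readable, but not traversable
chmod 755 "$d"
stat -c %A "$d"    # drwxr-xr-x : the user can now chdir into it
rmdir "$d"
```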
Could not chdir to home directory /home/user: Permission denied
1,564,750,780,000
I use CentOS 7 with kernel 3.1.0. I know Linux has a hitman called the OOM killer, which kills a process that uses too much of the available memory. I want to configure it to log its activity so that I can check whether this happens. How can I set that up? Thanks,
OOMkiller's activities are guaranteed to be in /var/log/dmesg (at least for a time). Usually the system logger daemon will also put it in /var/log/messages by default on most distributions with which I've worked. These commands might be of help in tracking the logs down: grep oom /var/log/* grep total_vm /var/log/* This answer has more details about parsing those log entries to see exactly what is going on.
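Once you have found the entries, the PID and process name can be pulled out of a log line with sed. The sample line below is illustrative; the exact wording varies between kernel versions:

```shell
# Parse the PID and process name out of a sample OOM-killer log line
line='Out of memory: Kill process 2592 (mysqld) score 907 or sacrifice child'
pid=$(echo "$line" | sed -n 's/.*Kill process \([0-9]*\) (\([^)]*\)).*/\1/p')
name=$(echo "$line" | sed -n 's/.*Kill process [0-9]* (\([^)]*\)).*/\1/p')
echo "$pid $name"    # 2592 mysqld
```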
How to make OOM killer log into /var/log/messages when it kills any process?
1,564,750,780,000
Today I removed my GPU out of an Linux (Ubuntu) machine, and after that the ethernet stopped working. Running 'service networking restart' threw an error message, and when I ran 'ifconfig' only the local loopback was visible. After this I re-installed my GPU and out of nowhere the internet started working again?? I would really like to have my machine be able to access the internet without having to install a GPU in it.. The GPU installed is an NVIDIA GeForce GTX 750 Ti, and I'm using the onboard ethernet connector. If you need any more specifications, please tell me and I'll dig a bit further. The output of ip link WITH a GPU: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000 link/ether d0:50:99:2f:ad:4d brd ff:ff:ff:ff:ff:ff And without a GPU: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: enp1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether d0:50:99:2f:ad:4d brd ff:ff:ff:ff:ff:ff
Your networking devices are renamed to correspond with their location on the PCI bus. When you removed your GPU, your ethernet device changed from enp2s0 to enp1s0. In order to reconnect, you have a few options: Create profiles for enp1s0 that match those of enp2s0 Change the rules for naming devices to give this card a unique name, and edit your profiles accordingly See if swapping the positions of the network card and GPU make the network card always appear as enp1s0, and if so, edit the profiles to use that name
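For the second option, one way on systemd-based systems is a .link file that matches the card's MAC address and pins a fixed name, so the device keeps that name whether or not the GPU is installed. The path and the name lan0 below are illustrative; your profiles would then reference lan0:

```ini
# /etc/systemd/network/10-lan.link  (example path; the name "lan0" is arbitrary)
[Match]
MACAddress=d0:50:99:2f:ad:4d

[Link]
Name=lan0
```

The new name takes effect on the next boot (or after re-triggering udev for the device).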
Internet not working without gpu installed?
1,564,750,780,000
In bash, the following works for setting the date from a UNIX timestamp ( seconds from the epoch ): date +%s -s @`date +%s` In Busybox, this does not work. How can I do the same for the date command with Busybox? Thanks.
Try date @`date +%s` instead. I don't think this has got anything to do with bash; Busybox's date command is a lightweight version of the classic GNU/FSF date.
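For comparison, this is the GNU date round-trip syntax; BusyBox's date is a reduced implementation, so the flags it accepts may differ:

```shell
# Render a UNIX timestamp back as a human-readable date (GNU date syntax)
ts=0
date -u -d "@$ts" '+%Y-%m-%d %H:%M:%S'    # 1970-01-01 00:00:00
```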
BusyBox Date Command Set Time with UNIX Timestamp
1,564,750,780,000
On my CentOS 7, at one point, sudo ss -plt listed a port marked as LISTENING on *:30565, but there was no information whatsoever in the process column of its row. The other listening ports were showing their owning process as usual, like users:(("sshd",pid=1381,fd=3)), but that one row did not have any process information. lsof -i :30565 or netstat -p did not yield any information either. I haven't been able to reproduce this, and I struggle to think of a situation a "non-process" might be listening on a port (as I'm quite sure Linux does the intended cleanup work when a tcp-listening process dies). As it happens with multiple programs too, the only explanation I can think of is that this is an "intended but very rootkit-y" behaviour of CentOS, but I'm most surely missing something. What might possibly have caused this?
The reason netstat doesn't show process information in some situations, for instance with NFS, is that NFS is a kernel module; as such, it does not run as a normal process and does not have a PID. You can regularly find threads about this situation if you include NFS in your searches: netstat doesn't report PID/Program name for some ports Note: for other users not using sudo, the -p option needs root privileges to be able to show the related process of a port.
"netstat -p"/"ss -p" not showing the process of a listening port
1,564,750,780,000
Without initramfs/initrd support, the following kernel command line won't work: linux /bzImage root=UUID=666c2eee-193d-42db-a490-4c444342bd4e ro How can I identify my root partition via UUID without the need for an initramfs/initrd? I can't use a device name like /dev/sda1 either, because the partition resides on a USB-Stick and needs to work on different machines.
I found the answer buried in another thread: A UUID identifies a filesystem, whereas a PARTUUID identifies a partition (i.e. remains intact after reformatting). Without initramfs/initrd the kernel only supports PARTUUID. To find the PARTUUID of the block devices in your machine use sudo blkid This will print, for example /dev/sda1: UUID="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" TYPE="ext2" PARTUUID="f3f4g3f4-02" You can now modify your linux command line as follows: linux /bzImage root=PARTUUID=f3f4g3f4-02 ro This will boot from the partition with PARTUUID f3f4g3f4-02, which in this case is /dev/sda1.
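If you want to script this, the PARTUUID can be extracted from a blkid-style line with sed. The sample line (and its PARTUUID value) are taken from the example output above:

```shell
# Pull the PARTUUID out of a blkid-style output line
line='/dev/sda1: UUID="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" TYPE="ext2" PARTUUID="f3f4g3f4-02"'
echo "$line" | sed -n 's/.*PARTUUID="\([^"]*\)".*/\1/p'    # f3f4g3f4-02
```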
How to identify root partition via UUID without initramfs/initrd
1,564,750,780,000
I have a 24/7 always-on Debian Jessie based headless home server that has a large 1TB SSD for the OS and all of my frequently accessed files. This same system has 4 larger hard disk drives in a SnapRAID array. These are mainly for archiving infrequently accessed Blu-rays, and I want those drives to remain spun down in standby unless I actually read or write to them. They are all formatted as ext4 and mounted with noatime and nodiratime enabled. So even though no process or program should be regularly accessing those drives in any direct way, the hard drives constantly get spun up from standby. It seems to be related to graphical programs that provide a gui file browser, even something like Chromium. Even if I don't browse into those drives, I'm thinking that these processes spin up the hard disks simply by getting a list of available drives, much like blkid does. The problem is, it's hard to determine the root cause of this since none of these processes are actually reading or writing the filesystem on those drives, so no files are actually changing or being touched. Is there some sort of cache that I can populate or a buffer to prevent these programs from spinning up the hard drive simply by getting a list of available disks? This is honestly driving me insane, since I can't find a reliable way to keep these disks spun-down even though there is no direct access of the filesystem. UPDATE: Thanks to Stephen's answer, I was able to trace the disk activity to gvfs and udisks. It's a real shame that these processes insist on waking up disks in standby when they aren't actually being accessed to do any real I/O with the filesystem. So far I just uninstalled them, knowing that it will remove some functionality from PCManFM and the like.
You can use blktrace (available in Debian) to trace all the activity to a given device; for example sudo blktrace -d /dev/sda -o - | blkparse -i - or just sudo btrace /dev/sda will show all the activity on /dev/sda. The output looks like 8,0 3 51 135.424002054 16857 D WM 167775248 + 8 [kworker/u16:0] 8,0 3 52 135.424011323 16857 I WM 209718336 + 8 [kworker/u16:0] 8,0 3 0 135.424011659 0 m N cfq496A / insert_request The fifth column is the process identifier, and the last one gives the process name when there is one. You can also store traces for later analysis; blktrace includes a number of analysis tools such as the afore-mentioned blkparse and btt. blktrace is a very low-level tool so it may not be all that easy to figure out what caused activity in the first place, but with the help of the included documentation (see /usr/share/doc/blktrace if you installed the Debian package) and the blktrace paper it should be possible to figure out what's causing the spin-ups.
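To narrow down the culprits, the distinct process names can be pulled from blkparse output (the trailing bracketed field, on lines that have one). The sample lines here are taken from the output shown above:

```shell
# List the unique process names seen in blkparse-style trace lines
printf '%s\n' \
  '8,0 3 51 135.424002054 16857 D WM 167775248 + 8 [kworker/u16:0]' \
  '8,0 3 0 135.424011659 0 m N cfq496A / insert_request' |
  sed -n 's/.*\[\(.*\)\]$/\1/p' | sort -u    # kworker/u16:0
```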
hard disks spin up by processes / applications that simply get a list of disks? How to prevent?
1,564,750,780,000
What I'm trying to do: I'm trying to scan my File-Server for malware, and I'm using clamav/clamscan, where the man page say's it can scan files up to 4GB. This man page states: --max-filesize=#n Extract and scan at most #n kilobytes from each archive. You may pass the value in megabytes in format xM or xm, where x is a number. This option protects your system against DoS attacks (default: 25 MB, max: <4 GB) --max-scansize=#n Extract and scan at most #n kilobytes from each scanned file. You may pass the value in megabytes in format xM or xm, where x is a number. This option protects your system against DoS attacks (default: 100 MB, max: <4 GB) My system is: Newish hardware ASRock motherboard, CPU: AMD Athlon(tm) II X2 270 Processor(3400MHz) Memory: 4GB OS: Debian Wheezy all updates. Questions: What am I doing wrong here? What do those errors and warnings below mean? Is there a fix for this behavior? My case: I've been trying to scan two 3TB hard-drives with clamscan for over a week now but it always gives the same errors(except Bytecode number varies): LibClamAV Warning: [Bytecode JIT]: recovered from error LibClamAV Warning: [Bytecode JIT]: JITed code intercepted runtime error! LibClamAV Warning: Bytcode 38 failed to run: Time limit reached LibClamAV Warning: [Bytecode JIT]: Bytecode run timed out, timeout flag set LibClamAV Warning: [Bytecode JIT]: recovered from error LibClamAV Warning: [Bytecode JIT]: JITed code intercepted runtime error! LibClamAV Warning: Bytcode 38 failed to run: Time limit reached LibClamAV Warning: [Bytecode JIT]: recovered from error LibClamAV Warning: [Bytecode JIT]: Bytecode run timed out, timeout flag set LibClamAV Warning: [Bytecode JIT]: JITed code intercepted runtime error! LibClamAV Warning: Bytcode 38 failed to run: Time limit reached after approx. 
40-50 hours of scanning: (Note that in the next snippet is the actual clamscan command I'm trying to run) PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 2012 root 20 0 1903M 246M 1244 R 101. 6.6 47h27:45 clamscan -r -i --remove --max-filesize=4000M --max-scansize=4000M /DATA1/ I've tried to delete the files suggested in one forum where they suspected corruption in some of those files that is bytecode.cvd, main.cvd, daily.cld and re-download them(with the update tool): root ~ # ls -ahl /usr/local/share/clamav/ total 145M drwxr-sr-x 2 clamav clamav 4.0K Mar 26 04:29 . drwxrwsr-x 10 root staff 4.0K Mar 20 01:59 .. -rw-r--r-- 1 clamav clamav 65K Mar 26 04:29 bytecode.cvd -rw-r--r-- 1 clamav clamav 83M Mar 26 04:29 daily.cld -rw-r--r-- 1 clamav clamav 62M Mar 18 01:17 main.cvd -rw------- 1 clamav clamav 156 Mar 26 04:29 mirrors.dat root ~ # rm -f /usr/local/share/clamav/bytecode.cvd /usr/local/share/clamav/daily.cld /usr/local/share/clamav/main.cvd root ~ # freshclam ClamAV update process started at Thu Mar 26 04:42:21 2015 Downloading main.cvd [100%] main.cvd updated (version: 55, sigs: 2424225, f-level: 60, builder: neo) Downloading daily.cvd [100%] daily.cvd updated (version: 20242, sigs: 1358870, f-level: 63, builder: neo) Downloading bytecode.cvd [100%] bytecode.cvd updated (version: 247, sigs: 41, f-level: 63, builder: dgoddard) Database updated (3783136 signatures) from db.UK.clamav.net (IP: 129.67.1.218) I've also tried to set --max-filesize and --max-scansize lower per the forum post I found here where it states that there is a limit to files/scans size at 2.17GB: clamscan -r -i --remove --max-filesize=2100M --max-scansize=2100M /DATA1/ but it gave the same errors. 
The program is the latest from the official site: clamav-0.98.6 configured and compiled from source with these options: ./configure --enable-bzip2 I've tried to re-install the program and also at first I had more options set in the compilation(--enable-experimental, --with-dbdir=/usr/local/share/clamav) The last option I know of is to uninstall this version and try the packages from my distributions repositories. But I would like to get this one working if at all possible. UPDATE: I've also tried to install clamav from the repositories but it gives the same problems/errors. I've found this, but it's old and doesn't seem to know what the problem is. And here but still not a definite answer or fix. The drives I've been trying to scan are these: # df -h /dev/sdb1 2.7T 2.6T 115G 96% /DATA1 /dev/sdc1 2.7T 2.6T 165G 95% /DATA2 Here is fdisk: # fdisk -l WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes 255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdb1 1 4294967295 2147483647+ ee GPT Partition 1 does not start on physical sector boundary. WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes 255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdc1 1 4294967295 2147483647+ ee GPT Partition 1 does not start on physical sector boundary. 
Possible cause: It could be something related to memory/CPU that the system has but I don't have that information I found this which states that clamscan loads the file to scan into memory and if there isn't enough memory it will fail. This is likely what is happening as I'm setting the scanner to scan files up-to 4Gigs and that's how much memory the system has. Excerpt: How big is that file? How much RAM (physical and swap separate, please) is installed on the scanning machine? Currently, ClamAV has a hard file limit of around 2.17GB. Because we're mapping the file into memory, if you don't have enough memory available to map the whole file, the memory mapping code (as currently implemented) will fail and the file won't be scanned. One of our long-term goals is to investigate being able to properly support large files. Possible solution: Hope the above is the problem(not enough memory), then I can simply extend the systems memory to 8GB, but it's unlikely it is so simple because I tried to run those scans on a system with 12GB ram. EDIT #1 Here is a run on another system with Fedora 21 + 12 GB RAM: clamscan -r -i --remove --max-filesize=1700M --max-scansize=1700M --exclude=/proc --exclude=/sys --exclude=/dev / LibClamAV Warning: [Bytecode JIT]: recovered from error LibClamAV Warning: [Bytecode JIT]: JITed code intercepted runtime error! 
LibClamAV Warning: [Bytecode JIT]: Bytecode run timed out, timeout flag set LibClamAV Warning: Bytcode 27 failed to run: Time limit reached LibClamAV Error: cli_scanxz: premature end of compressed stream LibClamAV Error: cli_scanxz: premature end of compressed stream ----------- SCAN SUMMARY ----------- Known viruses: 3779101 Engine version: 0.98.6 Scanned directories: 101382 Scanned files: 744103 Infected files: 0 Total errors: 18419 Data scanned: 285743.78 MB Data read: 394739.73 MB (ratio 0.72:1) Time: 32171.073 sec (536 m 11 s) when I ran those same scans on it with sizes set to 2100M-4000M it gave the same errors as mentioned in my original question.
I've found this (thanks to @FloHimself): Brief Re-introduction to ClamAV Bytecode Signatures; it's a good overview/supplement of some of the usages of the program and some useful options: Excerpt: Bytecode signatures are a specialized type of ClamAV signature which is able to perform additional processing of the scanned file and allow for more robust detection. Unlike the standard ClamAV signature types, bytecode signatures have a number of unique distinctions which need to be respected for their effective usage. Trust Bytecode signatures, by default, are considered untrusted. In fact, only bytecode signatures published by Cisco, in the bytecode.cvd are considered “trusted”. This means that the ClamAV engine will, by default, never load, trigger or execute untrusted bytecodes. One can bypass this safety mechanism by specifying the bytecode unsigned option to the engine but it should be noted that it is up to the user’s discretion on using untrusted bytecode signatures. For clamscan, the command line option is --bytecode-unsigned. For clamd, one would need to specify BytecodeUnsigned yes to clamd.conf. Timeout Bytecode signatures are designed to only run for a limited amount of time designated by an internal timeout value. If execution time exceeds the value, the bytecode signature’s execution is terminated and the user is notified. The bytecode signature timeout value can be set by the user. For clamscan, the command line is --bytecode-timeout=[time in ms]. For clamd, one would specify BytecodeTimeout [time in ms] to clamd.conf. And this is useful: Issue Reporting If anyone encounters issue with bytecode signatures, whether within the clambc-compiler or within ClamAV, they can report them to https://bugzilla.clamav.net/. Be sure to include the bytecode signature, bytecode source(if possible), and any other pieces of useful information. Answer The key seems to be to set the --bytecode-timeout= high so the scanner has time to scan the whole file. 
The default value is 60000 milliseconds/60 seconds, and I have set it to 190000 which works and doesn't give the timeout errors. This value could probably be set lower but it works for me. Tested on two systems that had the errors before the setting. UPDATE: Tested on three systems and many scans, the errors are gone with this setting for --bytecode-timeout. Here is the new command: clamscan -r -i --remove --max-filesize=4000M --max-scansize=4000M --bytecode-timeout=190000 /DATA1 Note: I also upgraded the servers memory to 8GB, I'm not sure if clamscan loads the file to memory when it's being scanned but one post said that much and if so that is another consideration.
Warnings/Errors when running clamav/clamscan, scanning 3TB hard-drive
1,564,750,780,000
I have win 7 and linux mint 14 installed. Is it possible to modify the GRUB Menu to show Windows as the first option instead of Linux, which it currently does. Mainly so that during boot it starts Windows by default.
If the order of your boot menu is important (and not just that Windows boots by default), and you don't have anything bootable besides Linux Mint and Windows (like OSX, BSD) you can do: cd /etc/grub.d mv 30_os-prober 09_os-prober as the alphabetical order of the files in /etc/grub.d, determines in what order they are processed. Then you run sudo update-grub¹ to generate the /boot/grub/grub.cfg file, which determines the menu ordering. If you just want to have Windows boot you can also change /etc/default/grub and change the entry GRUB_DEFAULT=0 to GRUB_DEFAULT=4 and run sudo update-grub. 4 is the normal entry for Windows after 0 for Mint, 1 for the submenu with older versions of Mint, 2 for memcheck and 3 for memcheck via a serial interface. Your setup might be slightly different, but you can count (starting from 0) while in the grub menu, or just try and change if your guestimate is off.² There is third alternative you might want to consider, and which I myself prefer. This is to to change your /etc/default/grub so that it will automatically boot the system you last selected, if you don't select a different menu entry by hand. For that you change the line GRUB_DEFAULT=0 into GRUB_DEFAULT=saved GRUB_SAVEDEFAULT=true and run sudo update-grub ¹ I tended to forget the name of the update-grub command often trying grub-TAB and hope the resulting expansions showed me the grub-something command I needed to run. That was until I realised that it says what to do at the top of /etc/default/grub file I was editing anyway. Of course once I found out how to look the command name up, I never forgot.... ² As @derobert indicated, you can also use a string that matches the menu entry you want to select. This is the only documentation I have found about that feature.
reorder GRUB to list Windows on top
1,564,750,780,000
I have a VPS I'm planning to delete. This particular cloud provider makes no guarantee that the data on the drive will be wiped before giving the disk to the next person. What's a best effort attempt I can make to secure-wipe sensitive data (whether existing as files or as deleted data) on the drive? Assume the provider does not offer a separate, bootable OS to perform maintenance from If not every last bit of sensitive data can be guaranteed to be wiped, that's ok (I would have encrypted the data, if it were that critically sensitive!)
Use the scrub command¹ on the user data portions² of the VPS filesystem. BEWARE: The following commands purposely destroy data. Here is a list of ideas for scrubbing targets, in a sensible order, but you may need to vary it for your particular VPS configuration: Databases, typically stored under /var. For instance, if you're using MySQL, you'd want to say something like this: # service mysql stop # command varies by OS, substitute as necessary # find /var/lib/mysql -type f -exec scrub {} \; /usr/local should only contain software you added to the system outside the normal OS package system. Nuke it all: # find /usr/local -type f -exec scrub {} \; The web root. For most Linux web servers on bare VPSes running Apache, this is a pretty good guess: # service apache stop # ditto caveat above # find /var/www -type f -exec scrub {} \; If you're on a managed VPS with a nice control panel front end which lets you set up virtual hosting, or you're on shared hosting, chances are that your web root lives somewhere else. You'll need to find it and use that instead of /var/www. Email. Be sure to catch both the MTA's spooling directories as well as the individual users' mailbox files and directories. Any configuration files with potentially sensitive data in them. I can't rightly think of anything in this category, since configuration data is generally fairly boring. One way to attack it would be to say # ls -ltr /etc | tail -30 That will give you the 30 files you most recently touched in /etc, which will give you a list of files most likely touched by you, rather than containing stock configuration information. Be careful! There are files you can scrub in /etc that will prevent you from being able to log back in. You might want to put off scrubbing those until later in the process. Password files, keys, etc. 
This list varies considerably between systems, but here are some places to start looking: /etc/shadow /etc/pki/* /etc/ssh/*key* /etc/ssl/{certs,private}/* ~/.ssh # for each user At this point, you probably cannot log back in again, so be sure not to drop your SSH connection to the VPS. Erase the free space on every mounted filesystem that may contain user data: For each user data filesystem² mount point MOUNTPT: # mkdir MOUNTPT/scrub # scrub -X MOUNTPT/scrub For instance, if /home is on its own filesystem, you'd create a /home/scrub directory and scrub -X that. You have to do this for each filesystem separately. This fills that filesystem with pseudorandom noise. If there is user data on the root filesystem, don't do that one yet, since filling the root filesystem may crash the system. Burn the world. If the OS hasn't crashed by this point, your shell hasn't dropped your session, etc., you can do a best-effort attempt to burn the world: # find /var /home /etc -type f -exec scrub {} \; Unix being the way it is about file locking, you still might not lose your connection to the VPS while this command executes, even though it is overwriting files you need to log in. You may nevertheless be unable to execute any more commands once it does finish. This is definitely a "saw off the tree limb you are sitting on" kind of command. If by some thin chance you are still logged in after this completes, you can now erase the free space on the root filesystem: # mkdir /scrub # scrub -X /scrub Nuke the VPS. Finally, log into your VPS control panel and tell it to reinstall your VPS with a different OS. Pick the biggest and most featureful one your VPS provider offers. This will overwrite part of your VPS's disk with fresh, uninteresting data. There's a chance it will overwrite something sensitive that your prior steps missed. In all the scrub(1) commands above, I haven't given any special options, since the defaults are reasonable. 
If you are feeling especially paranoid, there are methods in scrub to use more passes, different data overwriting patterns, etc. Scrub uses data overwriting techniques that require truly heroic measures to overcome. It's a question of incentives: how much work is someone willing to put in to recover your data? That tells you how paranoid you should be about following the steps above, and adding additional steps. Due to the nature of virtual machines, there may be "echoes" of your user data in the host system due to VPS migrations and such, but those echoes are inaccessible to outsiders. If you cared about such things, you shouldn't have chosen to use a VPS provider in the first place. If you added other directories to the standard list² of user data trees, you should probably scrub those early on, since the order of scrubbing is from most-user-centric to least. You do the least user-centric parts last, since they tend to be parts of the filesystem that affect the system's own functioning. You don't want to lock yourself out of the VPS before you're done scrubbing. Scrub is highly portable, and is probably in your OS's package repo already, but if you have to build it from source it's not hard. Typically, the trees containing user data are /home, /usr/local, /var, and /etc, in decreasing "density" of user data vs system default data. You may need to add other directories to this list due to your system administration style or VPS management software preferences. We aren't going to bother scrubbing places like /usr/bin and /lib, as these should only contain copies of files that are widely available, and thus boring. (The OS, software you've installed from public sources, etc.)
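The find/-exec overwrite pattern used throughout the steps above can be tried safely on a scratch directory first. Here coreutils shred stands in for scrub(1), which may not be installed everywhere:

```shell
# Overwrite-then-unlink every regular file under a scratch directory,
# the same pattern as the scrub commands above
dir=$(mktemp -d)
echo "secret" > "$dir/key.txt"
find "$dir" -type f -exec shred -u {} \;   # overwrite, then unlink
ls -A "$dir"                               # nothing left to list
rm -rf "$dir"
```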
Secure wipe (scrub) filesystem of VPS from VPS itself
1,564,750,780,000
I have an embedded system using jffs2 and want to pass rootflags=noatime in the kernel bootargs parameter. This results in a kernel panic: jffs2: Error: unrecognized mount option 'noatime' or missing value [...] No filesystem could mount root, tried: jffs2 Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(31,3) However, if I boot normally and then remount the jffs2 filesystem with noatime, it works fine: $ mount -o remount,noatime / I am puzzled by this as according to the documentation, the rootflags argument "allows you to give options pertaining to the mounting of the root filesystem just as you would to the mount program". Looks like a kernel bug to me, but on the other hand it seems so obvious that perhaps I am overlooking something. I have tested this with kernel versions 3.7 and 3.14. Can someone shed some light?
Googling rootflags noatime brings up this post from 2003 by Andrew Morton, perhaps it still applies. http://lkml.org/lkml/2003/8/12/236 While testing something, I tried to boot with 'rootflags=noatime', and found the system wouldn't boot, as ext3, ext2, and reiserfs all failed to recognize the option. Looking at the code in fs/ext3/super.c:parse_options() and init/do_mounts.c:root_data_setup(), it appears to be impossible to set any of the filesystem-independent flags via rootflags, which explains the special-case code for the 'ro' and 'rw' flags. However, there doesn't seem to be any way to pass nodev, noatime, nodiratime, or any of the other flags. (And yes, all 3 of those make sense in my environment - it's a laptop and I don't need atime, and I use devfs so nodev on the root makes sense too). The fs-independent options are parsed in user space by mount(8), and are passed into the kernel as individual bits in a `flags' argument.
Kernel panic when passing noatime in bootargs
1,564,750,780,000
Normally when system starts you have all output printed on the TTY1, and that's ok, but I start X-server via startx and achieve this by the following lines in the ~/.profile file : if [[ $(tty) = /dev/tty4 ]]; then exec startx &> ~/.xsession-errors fi So, as you can see I use TTY4 to start X-server, and I want to switch to that console automatically after the boot is done. Is there a way to do this?
I've found the answer. It's simple: you just have to add chvt 4 to the /etc/rc.local file, and that's it.
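A sketch of the resulting file, assuming a system (like Jessie-era Debian) that still runs an executable /etc/rc.local at the end of multi-user boot:

```shell
#!/bin/sh -e
# /etc/rc.local (sketch) -- runs at the end of boot on systems that honor it;
# switch the active console to the virtual terminal that startx runs on
chvt 4
exit 0
```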
How to change the default TTY after boot?
1,564,750,780,000
SERVER:/etc # ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimited pending signals (-i) 96069 max locked memory (kbytes, -l) 32 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 96069 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited SERVER:/etc # How can I set the limit of the root user from 1024 to something else, PERMANENTLY? How can I set the ulimit globally? Will the changes take effect in the moment? p.s.: I already googled for it but can't find the file where I can set it permanently: SERVER:/etc # grep -RiI ulimit * 2>/dev/null | egrep -v ":#|#ulimit" init.d/boot.multipath: ulimit -n $MAX_OPEN_FDS init.d/multipathd: ulimit -n $MAX_OPEN_FDS rc.d/boot.multipath: ulimit -n $MAX_OPEN_FDS rc.d/multipathd: ulimit -n $MAX_OPEN_FDS and..: SERVER:/etc # grep -RiI 'MAX_OPEN_FDS' * 2>/dev/null init.d/boot.multipath:MAX_OPEN_FDS=4096 init.d/boot.multipath: if [ -n "$MAX_OPEN_FDS" ] ; then init.d/boot.multipath: ulimit -n $MAX_OPEN_FDS init.d/multipathd:MAX_OPEN_FDS=4096 init.d/multipathd: if [ -n "$MAX_OPEN_FDS" ] ; then init.d/multipathd: ulimit -n $MAX_OPEN_FDS rc.d/boot.multipath:MAX_OPEN_FDS=4096 rc.d/boot.multipath: if [ -n "$MAX_OPEN_FDS" ] ; then rc.d/boot.multipath: ulimit -n $MAX_OPEN_FDS rc.d/multipathd:MAX_OPEN_FDS=4096 rc.d/multipathd: if [ -n "$MAX_OPEN_FDS" ] ; then rc.d/multipathd: ulimit -n $MAX_OPEN_FDS SERVER:/etc #
Use the pam_limits(8) module and add the following two lines to /etc/security/limits.conf: root hard nofile 8192 root soft nofile 8192 This will increase the RLIMIT_NOFILE resource limit (both soft and hard) for root to 8192 upon the next login.
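After pam_limits has applied the new hard limit at login, the soft limit can be adjusted within a session (lowered freely, or raised up to the hard limit), which is easy to check in a subshell:

```shell
# Lower the soft nofile limit in a subshell and read it back
bash -c 'ulimit -S -n 512; ulimit -S -n'    # 512
```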
How to modify ulimit for open files on SUSE Linux Enterprise Server 10.4 permanently?
1,314,161,608,000
If I'm using Ubuntu 11.04, how can I configure it such that that only two users can shut down/suspend/hibernate my PC: the root user and one regular user?
The shutdown binary will only work for the root user. The typical approach to this is to set up sudo rules to allow the user to execute shutdown as root. Assuming the user doesn't already have full sudo permissions (the first user on an Ubuntu desktop system does, for example) you might add the following line to /etc/sudoers (using the visudo utility, for safety): joe hostname=(root) /sbin/shutdown -h now If you want them to be able to shut down without being prompted for their password, then add the NOPASSWD option, like this: joe hostname=(root) NOPASSWD: /sbin/shutdown -h now You can modify the way they can run shutdown by using wildcards or explicit declarations. For example shutdown -h now allows an immediate halt of the system, it will not reboot. You could allow -r instead to reboot the system. After you configure sudoers, joe can run the following command to reboot the system: sudo /sbin/shutdown -h now As joe, you can run the following command to see what commands you have access to run using sudo: sudo -l
How can I set that only root + a given user can shut down my pc?
1,314,161,608,000
Given a host that is in an unknown state of configuration, I would like to know if there is an effective way of non-interactively determining if the firewall rule set in place is managed by iptables or nftables. Sounds pretty simple and I've given this quite a bit of thought, but haven't come back with a meaningful answer to put on a script...
A variant of this problem was addressed recently in Kubernetes, so it’s worth looking at what was done there. (The variant is whether to use iptables-legacy or iptables-nft and their IPv6 variants to drive the host’s rules.) The approach taken in Kubernetes is to look at the number of lines output by the respective “save” commands, iptables-legacy-save and iptables-nft-save (and their IPv6 variants). If the former produces ten lines or more of output, or produces more output than the latter, then it’s assumed that iptables-legacy should be used; otherwise, that iptables-nft should be used. In your case, the decision tree could be as follows: if iptables isn’t installed, use nft; if nft isn’t installed, use iptables; if iptables-save doesn’t produce any rule-defining output, use nft; if nft list tables and nft list ruleset don’t produce any output, use iptables. If iptables-save and nft list ... both produce output, and iptables isn’t iptables-nft, I’m not sure an automated process can decide.
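The decision tree above can be sketched as a shell function. This is a rough sketch, not the Kubernetes implementation: the 10-line heuristic is simplified to "any rules at all", and the `detect_backend` name is mine. Run it as root, since the save/list commands need privileges to see the rule set:

```shell
#!/bin/sh
# Prints "iptables", "nft", or "unknown" based on which backend
# appears to hold the active rule set.
detect_backend() {
    command -v iptables >/dev/null 2>&1 || { echo nft; return; }
    command -v nft      >/dev/null 2>&1 || { echo iptables; return; }

    # Real rule lines (not table/chain headers) in the iptables view:
    ipt_rules=$(iptables-save 2>/dev/null | grep -c '^-A') || ipt_rules=0
    # Any output at all from the nftables view:
    nft_rules=$(nft list ruleset 2>/dev/null | grep -c .) || nft_rules=0

    if [ "$ipt_rules" -eq 0 ] && [ "$nft_rules" -gt 0 ]; then
        echo nft
    elif [ "$nft_rules" -eq 0 ] && [ "$ipt_rules" -gt 0 ]; then
        echo iptables
    elif [ "$ipt_rules" -eq 0 ] && [ "$nft_rules" -eq 0 ]; then
        echo nft        # nothing loaded anywhere; default to the newer tool
    else
        echo unknown    # both have rules; cannot decide automatically
    fi
}
detect_backend
```

As the answer notes, when both tools report rules (and iptables is not the nft-backed variant), no automated heuristic can decide reliably.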
Check whether iptables or nftables are in use
1,314,161,608,000
Let's say I have the following trivial script, tmp.sh: echo "testing" stat . echo "testing again" Trivial as it is, it has \r\n (that is, CRLF, that is carriage return+line feed) as line endings. Since the webpage will not preserve the line endings, here is a hexdump: $ hexdump -C tmp.sh 00000000 65 63 68 6f 20 22 74 65 73 74 69 6e 67 22 0d 0a |echo "testing"..| 00000010 73 74 61 74 20 2e 0d 0a 65 63 68 6f 20 22 74 65 |stat ...echo "te| 00000020 73 74 69 6e 67 20 61 67 61 69 6e 22 0d 0a |sting again"..| 0000002e Now, it has CRLF line endings, because the script was started and developed on Windows, under MSYS2. So, when I run it on Windows 10 in MSYS2, I get the expected: $ bash tmp.sh testing File: . Size: 0 Blocks: 40 IO Block: 65536 directory Device: 8e8b98b6h/2391513270d Inode: 281474976761067 Links: 1 Access: (0755/drwxr-xr-x) Uid: (197609/ USER) Gid: (197121/ None) Access: 2020-04-03 10:42:53.210292000 +0200 Modify: 2020-04-03 10:42:53.210292000 +0200 Change: 2020-04-03 10:42:53.210292000 +0200 Birth: 2019-02-07 13:22:11.496069300 +0100 testing again However, if I copy this script to an Ubuntu 18.04 machine, and run it there, I get something else: $ bash tmp.sh testing stat: cannot stat '.'$'\r': No such file or directory testing again In other scripts with the same line endings, I have also gotten this error in Ubuntu bash: line 6: $'\r': command not found ... likely from an empty line. So, clearly, something in Ubuntu chokes on the carriage returns. I have seen BASH and Carriage Return Behavior : it doesn’t have anything to do with Bash: \r and \n are interpreted by the terminal, not by Bash ... however, I guess that is only for stuff typed verbatim on the command line; here the \r and \n are already typed in the script itself, so it must be that Bash interprets the \r here. Here is the version of Bash in Ubuntu: $ bash --version GNU bash, version 4.4.20(1)-release (x86_64-pc-linux-gnu) ... 
and here the version of Bash in MSYS2: $ bash --version GNU bash, version 4.4.23(2)-release (x86_64-pc-msys) (they don't seem all that much apart ...) Anyways, my question is - is there a way to persuade Bash on Ubuntu/Linux to ignore the \r, rather than trying to interpret it as a (so to speak) "printable character" (in this case, meaning a character that could be a part of a valid command, which bash interprets as such)? EDIT: without having to convert the script itself (so it remains the same, with CRLF line endings, if it is checked in that way, say, in git) EDIT2: I would prefer it this way, because other people I work with might reopen the script in Windows text editor, potentially reintroduce \r\n again into the script and commit it; and then we might end up with an endless stream of commits which might be nothing else than conversions of \r\n to \n polluting the repository. EDIT2: @Kusalananda in comments mentioned dos2unix (sudo apt install dos2unix); note that just writing this: $ dos2unix tmp.sh dos2unix: converting file tmp.sh to Unix format... ... will convert the file in-place; to have it output to stdout, one must setup stdin redirection: $ dos2unix <tmp.sh | hexdump -C 00000000 65 63 68 6f 20 22 74 65 73 74 69 6e 67 22 0a 73 |echo "testing".s| 00000010 74 61 74 20 2e 0a 65 63 68 6f 20 22 74 65 73 74 |tat ..echo "test| 00000020 69 6e 67 20 61 67 61 69 6e 22 0a |ing again".| 0000002b ... and then, in principle, one could run this on Ubuntu, which seems to work in this case: $ dos2unix <tmp.sh | bash testing File: . 
Size: 20480 Blocks: 40 IO Block: 4096 directory Device: 816h/2070d Inode: 1572865 Links: 27 Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2020-04-03 11:11:00.309160050 +0200 Modify: 2020-04-03 11:10:58.349139481 +0200 Change: 2020-04-03 11:10:58.349139481 +0200 Birth: - testing again However, - aside from the slightly messy command to remember - this also changes bash semantics, as stdin is no longer a terminal; this may have worked with this trivial example, but see e.g. https://stackoverflow.com/questions/23257247/pipe-a-script-into-bash for example of bigger problems.
As far as I’m aware, there’s no way to tell Bash to accept Windows-style line endings. In situations involving Windows, common practice is to rely on Git’s ability to automatically convert line-endings when committing, using the autocrlf configuration flag. See for example GitHub’s documentation on line endings, which isn’t specific to GitHub. That way files are committed with Unix-style line endings in the repository, and converted as appropriate for each client platform. (The opposite problem isn’t an issue: MSYS2 works fine with Unix-style line endings, on Windows.)
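If the file really must stay CRLF in the working tree, one runtime workaround that avoids the stdin problem described in the question is bash process substitution: the carriage returns are stripped at invocation time while stdin stays attached to the terminal. A small demonstration (GNU sed's `\r` escape is assumed; process substitution is bash-specific, hence the explicit `bash -c`):

```shell
# Create a demo script with CRLF line endings, as a Windows editor would:
printf 'echo "testing"\r\necho "testing again"\r\n' > /tmp/crlf_demo.sh

# Unlike `dos2unix <script | bash`, process substitution leaves stdin
# attached to the terminal:
bash -c 'bash <(sed -e "s/\r$//" /tmp/crlf_demo.sh)'
```

For the repository-pollution concern in EDIT2, a `.gitattributes` line such as `*.sh text eol=lf` forces LF in every checkout regardless of each contributor's autocrlf setting, which prevents the back-and-forth conversion commits.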
Handling Bash script with CRLF (carriage return) in Linux as in MSYS2?
1,314,161,608,000
sudo dd if=/dev/sda of=/dev/null bs=1M iflag=direct atopsar -d 5 # in a second terminal top # in a third terminal Results from atopsar : 19:18:32 disk busy read/s KB/read writ/s KB/writ avque avserv _dsk_ ... 19:16:50 sda 18% 156.5 1024.0 0.0 0.0 5.0 1.15 ms 19:16:55 sda 18% 156.3 1024.0 0.0 0.0 4.9 1.15 ms ... Why is disk utilization ("busy") reported as much less than 100% ? According to top, the dd process only uses 3% of a CPU or less. top also provides an overall report of hardware and software interrupt (hi and si) usage of the system's CPU's, which shows as less than 1%. I have four CPUs (2 cores with 2 threads each). /dev/sda is a SATA HDD. It is not an SSD, it is not even a hybrid SSHD drive. It cannot read faster than about 150 megabytes per second :-). So that part of the results makes sense: 156 read/s * 1024 KB/read = 156 MB/s The kernel version is 5.0.9-200.fc29.x86_64 (Fedora Workstation 29). The IO scheduler is mq-deadline. Since kernel version 5.0, Fedora uses the multi-queue block layer. Because the single queue block layer has been removed :-). I believe the disk utilization figure in atopsar -d and atop is calculated from one of the kernel iostat fields. The linked doc mentions "field 10 -- # of milliseconds spent doing I/Os". There is a more detailed definition as well, although I am not sure that the functions it mentions still exist in the multi-queue block layer. As far as I can tell, both atopsar -d and atop use common code to read this field 10. (I believe this field is also used by sar -d / iostat -x / mxiostat.py) Additional tests Variant 2: Changing to bs=512k, but keeping iflag=direct. dd if=/dev/sda of=/dev/null bs=512k iflag=direct 19:18:32 disk busy read/s KB/read writ/s KB/writ avque avserv _dsk_ ... 19:18:00 sda 35% 314.0 512.0 0.0 0.0 2.1 1.12 ms 19:18:05 sda 35% 313.6 512.0 0.2 4.0 2.1 1.11 ms Variant 3: Using bs=1M, but removing iflag=direct . dd uses about 10% CPU, and 35% disk. 
dd if=/dev/sda of=/dev/null bs=1M 19:18:32 disk busy read/s KB/read writ/s KB/writ avque avserv _dsk_ ... 19:21:47 sda 35% 242.3 660.2 0.0 0.0 5.4 1.44 ms 19:21:52 sda 31% 232.3 667.8 0.0 0.0 9.5 1.33 ms How to reproduce these results - essential details Beware of the last test, i.e. running dd without iflag=direct It is a bit of a hog. I saw it freeze the system (mouse cursor) for ten seconds or longer. Even when I had swap disabled. (The test fills your RAM with buff/cache. It is filling the inactive LRU list. I think the turnover evicts inactive cache pages relatively quickly. At the same time, the disk is busy with sequential reads, so it takes longer when you need to page something in. How bad this gets probably depends on whether the kernel ends up also turning over the active LRU list, or shrinking it too much. I.e. how well the current "mash of a number of different algorithms with a number of modifications for catching corner cases and various optimisations" is working in your case). The exact results of the first test are difficult to reproduce. Sometimes, KB/read shows as 512 instead of 1024. In this case, the other results look more like the results from bs=512k. Including that it shows a disk utilization around 35%, instead of around 20%. My question stands in either case. If you would like to understand this behaviour, it is described here: Why is the size of my IO requests being limited, to about 512K?
This was the result of a change in kernel version 5.0: block: delete part_round_stats and switch to less precise counting We want to convert to per-cpu in_flight counters. The function part_round_stats needs the in_flight counter every jiffy, it would be too costly to sum all the percpu variables every jiffy, so it must be deleted. part_round_stats is used to calculate two counters - time_in_queue and io_ticks. time_in_queue can be calculated without part_round_stats, by adding the duration of the I/O when the I/O ends (the value is almost as exact as the previously calculated value, except that time for in-progress I/Os is not counted). io_ticks can be approximated by increasing the value when I/O is started or ended and the jiffies value has changed. If the I/Os take less than a jiffy, the value is as exact as the previously calculated value. If the I/Os take more than a jiffy, io_ticks can drift behind the previously calculated value. (io_ticks is used in part_stat_show(), to provide the kernel IO stat for "field 10 -- # of milliseconds spent doing I/Os".) This explains my results very nicely. In the Fedora kernel configuration, a "jiffy" is 1 millisecond. I expect a large read IO submitted by dd can be pending for more than one or two jiffies. Particularly on my system, which uses an old-fashioned mechanical HDD. When I go back to the previous kernel series 4.20.x, it shows the correct disk utilization: $ uname -r 4.20.15-200.fc29.x86_64 $ atopsar -d 5 ... 13:27:19 disk busy read/s KB/read writ/s KB/writ avque avserv _dsk_ 13:28:49 sda 98% 149.4 1024.0 13.0 5.3 2.2 6.04 ms 13:28:54 sda 98% 146.0 1024.0 7.2 5.7 1.5 6.38 ms This old kernel used the legacy single-queue block layer, and the cfq IO scheduler by default. The result is also the same when using the deadline IO scheduler. Update: since kernel 5.7, this approximation is adjusted. The command in the question shows 100% disk utilization again. 
The new approximation is expected to break down for some more complex workloads (though I haven't noticed one yet). block/diskstats: more accurate approximation of io_ticks for slow disks Currently io_ticks is approximated by adding one at each start and end of requests if jiffies counter has changed. This works perfectly for requests shorter than a jiffy or if one of requests starts/ends at each jiffy. If disk executes just one request at a time and they are longer than two jiffies then only first and last jiffies will be accounted. Fix is simple: at the end of request add up into io_ticks jiffies passed since last update rather than just one jiffy. Example: common HDD executes random read 4k requests around 12ms. fio --name=test --filename=/dev/sdb --rw=randread --direct=1 --runtime=30 & iostat -x 10 sdb Note changes of iostat's "%util" 8,43% -> 99,99% before/after patch: Before: Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sdb 0,00 0,00 82,60 0,00 330,40 0,00 8,00 0,96 12,09 12,09 0,00 1,02 8,43 After: Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sdb 0,00 0,00 82,50 0,00 330,00 0,00 8,00 1,00 12,10 12,10 0,00 12,12 99,99 Now io_ticks does not loose time between start and end of requests, but for queue-depth > 1 some I/O time between adjacent starts might be lost. For load estimation "%util" is not as useful as average queue length, but it clearly shows how often disk queue is completely empty. Fixes: 5b18b5a ("block: delete part_round_stats and switch to less precise counting") Signed-off-by: Konstantin Khlebnikov <[email protected]> Reviewed-by: Ming Lei <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
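The counter being discussed can be sampled directly. The sketch below reads "milliseconds spent doing I/Os" (stat field 10 of /proc/diskstats, i.e. whitespace column 13) twice and prints an approximate %util per device, which is essentially what atopsar/iostat do. On kernels 5.0 to 5.6 with a slow disk, expect the same artificially low numbers described above:

```shell
#!/bin/sh
# Approximate per-device disk utilization over a 1-second interval,
# computed from /proc/diskstats field 10 (ms spent doing I/O).
tmp=$(mktemp)
awk '{ print $3, $13 }' /proc/diskstats > "$tmp"
sleep 1
awk -v dt=1 '
    NR == FNR { prev[$1] = $2; next }        # first pass: remember sample 1
    { printf "%-10s %5.1f%%\n", $3, ($13 - prev[$3]) / (dt * 10) }
' "$tmp" /proc/diskstats
rm -f "$tmp"
```

Devices that appear between the two samples will show inflated values; this is a measurement sketch, not a monitoring tool.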
`dd` is running at full speed, but I only see 20% disk utilization. Why?
1,314,161,608,000
Using the command line tools available in a common GNU/Linux distro (e.g. Fedora/Debian/Ubuntu/etc), is there a general way to get the value of some specific WHOIS field (e.g. the registrant's organisation name), ideally without having to build a custom WHOIS parser that is hard-coded to handle the differences between each registry's output? This seems worth asking, because the output from the whois command does not appear to be very consistent. For example, compare: $ whois trigger.io [...] Owner OrgName : Amir Nathoo [...] with: $ whois facebook.com [...] Registrant Organization: Facebook, Inc. [...] I would like, instead, to be able to pass, as arguments to some command: the domain name the desired field and have the output simply be the value of the desired field. For instance, based on the examples above, something like: $ some_whois_command -field organization_name trigger.io Amir Nathoo $ some_whois_command -field organization_name facebook.com Facebook, Inc. Is this possible? Ideally, I would like the solution to centre on the whois command, e.g. with some suitable usage of -i, -q, -t, and/or -v, as I want to learn how to make effective use of these options. I will accept another solution as correct if necessary, however.
The problem appears to be at least two-fold: WHOIS responses do not share a common schema, and there is a dearth of WHOIS clients able to parse WHOIS responses and to map their fields (e.g. using a suitable ontology) onto a single schema. The Ruby Whois project is the most extensive effort I have found. It aims to provide a parser for each of the 500+ different WHOIS servers, and its developers deserve immense credit, but it remains a work in progress. This is a sorry state of affairs. The IETF's proposed solution for this and other WHOIS woes is called the Registration Data Access Protocol (RDAP). Quoting RFC 7485, which explains the rationale for RDAP: In the domain name space, there were over 200 country code Top-Level Domains (ccTLDs) and over 400 generic Top-Level Domains (gTLDs) when this document was published. Different Domain Name Registries may have different WHOIS response objects and formats. A common understanding of all these data formats was critical to construct a single data model for each object. (Emphasis mine.) Unfortunately, whereas most (all?) TLD registries provide WHOIS servers for their subdomains, only two TLD registries have so far formally fielded RDAP servers for their subdomains: CZNIC for .cz domains, and NIC Argentina for .ar domains. So, this is not (yet) a generally applicable solution across a wide range of TLDs. We can only hope that all the other registries will hurry up and field RDAP servers. As for software, the only RDAP command line client for POSIX systems that I have found so far is nicinfo.
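Short of a proper parser, a hedged shell sketch can at least centralise the per-registry label guessing. The `whois_field` name is mine; the regex of acceptable labels is the fragile, per-registry part, which is exactly the schema problem described above:

```shell
# Extract the first matching "Label: value" line from whois output.
#   $1: extended regex of acceptable labels; whois text on stdin.
whois_field() {
    grep -iE "^[[:space:]]*($1)[[:space:]]*:" \
        | head -n 1 \
        | sed -e 's/\r$//' -e 's/^[^:]*:[[:space:]]*//'
}

# Offline demo; with network access you would instead run e.g.:
#   whois facebook.com | whois_field 'Registrant Organization|Owner OrgName'
printf 'Owner OrgName : Amir Nathoo\n' \
    | whois_field 'Registrant Organization|Owner OrgName'
```

The `sed 's/\r$//'` guards against registries that send CRLF line endings.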
Obtain WHOIS data field(s) without parsing?
1,314,161,608,000
There's an issue with Ubuntu that hasn't been fixed yet, where the PC freezes or gets really slow whenever it is copying to an USB stick (see Why is my PC freezing while I'm copying a file to a pendrive?, http://lwn.net/Articles/572911/ and https://askubuntu.com/q/508108/234374). A workaround is to execute the following commands as root (see here for an explanation) as root: echo $((16*1024*1024)) > /proc/sys/vm/dirty_background_bytes echo $((48*1024*1024)) > /proc/sys/vm/dirty_bytes How do I revert these changes? When I restart my PC, will it get rolled back to default values?
These are sysctl parameters. You can set them either by writing to /proc/sys/CATEGORY/ENTRY or by calling the sysctl command with the argument CATEGORY.ENTRY=VALUE. These settings affect the running kernel, they are not persistent. If you want to make these settings persistent, you need to set them at boot time. On Ubuntu, create a file in the directory /etc/sysctl.d called becko-vm-dirty.conf containing # Shrink the disk buffers to a more reasonable size. See http://lwn.net/Articles/572911/ vm.dirty_background_bytes = 16777216 vm.dirty_bytes = 50331648 To revert the changes, write the old value back. There is no “restore defaults” command. Note that these parameters are a bit peculiar: there are also parameters called vm.dirty_ratio and vm.dirty_background_ratio, which control the same setting but express the size as a percentage of total memory instead of a number of bytes. For each of the two settings, whichever of ratio or bytes was set last takes precedence.
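A quick way to see the ratio/bytes interplay described in the last paragraph. Whichever variant was written last wins, and the other then reads back as 0, so "reverting" usually means writing the ratio knobs again (the 10/20 values below are common defaults, but check your distribution):

```shell
# Inspect the current writeback settings (readable without root):
grep . /proc/sys/vm/dirty_bytes /proc/sys/vm/dirty_background_bytes \
       /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio

# To return to ratio-based behaviour, as root:
#   sysctl vm.dirty_background_ratio=10 vm.dirty_ratio=20
# After editing a file in /etc/sysctl.d/, apply it with:
#   sysctl --system
```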
Pernicious USB-stick stall problem. Reverting workaround fix?
1,314,161,608,000
I use Cinnamon on ArchLinux, and Nemo is its default file manager. I've tried 3 GUI file archivers (p7zip with WxGTK, peazip & file-roller), but none of them adds a "compress" option to the context menu. How can I add a "compress" option to Nemo's right-click context menu?
Custom Nemo action
The ArchLinux wiki article titled Nemo describes the steps required to create your own context menu item.
General steps:
Create a .nemo_action file. The file has to have this extension! Here's an example virus scanner .nemo_action file, clamscan.nemo_action:
[Nemo Action]
Name=Clam Scan
Comment=Clam Scan
Exec=gnome-terminal -x sh -c "clamscan -r %F | less"
Icon-Name=bug-buddy
Selection=Any
Extensions=dir;exe;dll;zip;gz;7z;rar;
Place the .nemo_action file in one of the following locations:
$HOME/.local/share/nemo/actions/
/usr/share/nemo/actions/
Nemo Fileroller
On that same wiki page there is also mention of an extension to Nemo called Nemo Fileroller. You might be able to install this extension instead of creating your own action.
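Applying the same recipe to the original question, a compress action might look like the sketch below. The Exec line assumes file-roller is installed and that its `-d`/`--add` option (add the selected files, prompting for an archive name) behaves as documented on your version; check `file-roller --help`, or substitute any archiver that accepts file arguments:

```
# ~/.local/share/nemo/actions/compress.nemo_action
[Nemo Action]
Name=Compress...
Comment=Create an archive from the selected files
Exec=file-roller -d %F
Icon-Name=package-x-generic
Selection=Any
Extensions=any;
```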
How to make nemo support compressing files by context menu?
1,314,161,608,000
I expected that timedatectl would update /etc/timezone when changing timezones, but no: % sudo timedatectl set-timezone 'Asia/Kuala_Lumpur' % cat /etc/timezone Asia/Bangkok Is there a reason that it doesn't? (Bug?) If I manually update /etc/timezone to match timedatectl set-timezone, are there any side-effects I should be aware of? Is there anywhere else I should consider changing timezone, eg xfce4 panel clock?
timedatectl updates /etc/localtime, which is the documented way of setting the default timezone in most Linux-based environments (along with its override, the TZ environment variable, which is the only POSIX-defined way of specifying the timezone). /etc/timezone appears to be mostly Debian-specific (including derivatives). On Debian systems, timedatectl set-timezone also updates /etc/timezone, in versions of systemd older than 252.6-1 (i.e. up to and including Debian 11). If you manually update /etc/timezone, you should also update the /etc/localtime symlink (and make sure you keep the latter a symlink). Updates to /etc/localtime appear to be taken into account by (most?) desktop environments, so there’s no need to use environment-specific tools to update the timezone. If you’re running Debian, you should use dpkg-reconfigure tzdata to configure the default timezone; that updates /etc/localtime and /etc/timezone as above, and it also updates the selected timezone in the debconf database (which serves as the default when configuring tzdata). If you don’t do this, the next time tzdata is updated, the timezone will be restored to the value in the debconf database. dpkg-reconfigure tzdata also takes care of updating the SE Linux context, if you’re using SE Linux.
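To check which timezone is actually in effect, inspect the symlink rather than /etc/timezone:

```shell
# /etc/localtime should be a symlink into the zoneinfo database; this
# is what libc (and timedatectl) actually consult:
readlink -f /etc/localtime

# On Debian/Ubuntu, prefer reconfiguring over hand-editing so that
# /etc/timezone and the debconf database stay in sync (as root):
#   dpkg-reconfigure tzdata
```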
`timedatectl set-timezone` doesn't update `/etc/timezone`
1,314,161,608,000
I cannot SSH into my server from one of my Ubuntu installations, but connecting from another Ubuntu installation or from a Windows operating system works smoothly. So something is broken in that one Ubuntu installation, and I'm struggling to find the exact problem. I've tried reinstalling ssh/openssh-client/openssh-/ssh. Here are a few lines from the verbose output:
ssh username@MYSERVERADDRESS -v
debug1: Offering RSA public key: /home/user/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 279
debug1: Authentication succeeded (publickey).
Authenticated to MYSERVER ([MYSERVERADDRESS]:22).
debug1: channel 0: new [client-session]
debug1: Requesting [email protected]
debug1: Entering interactive session.
debug1: pledge: network
packet_write_wait: Connection to MYSERVERADDRESS port 22: Broken pipe
I've tried many different solutions found by googling, but none worked: I deleted the .ssh directory, and deleted /etc/ssh/ssh_config (it was automatically recreated with default values).
One more piece of information: the problem isn't on the server side, as I can SSH into the server using another OS on the same network.
Update:
Firewall disabled.
The server is hosted in the cloud.
I have 3 different machines, each dual-booting Windows and Linux.
SSH works perfectly on all machines except one, where only Linux has trouble connecting; on the same machine under Windows everything works fine.
To state point 4 more clearly: of the 3 dual-boot machines (each loaded with Linux and Windows), only one has the SSH problem, and only while running Linux.
Let me know if you need more data from me (except the SERVER ADDRESS and USERNAME).
I found the solution to this problem, so I'm answering my own question in case it helps someone else. The problem actually involved both the server and the client side: the /home/<user>/.ssh/known_hosts file contained a stale entry for the Ubuntu installation, because both operating systems share the same hardware and the same static IP but present different host keys. So what I did was:
ssh-keygen -f /home/<user>/.ssh/known_hosts -R ip.ip.ip.ip
In my case ip.ip.ip.ip is the static public IP of my network. Execute this command on both the server and the client machine, with the IP changed accordingly. What the command does: ssh-keygen -R <host> removes all keys belonging to the given hostname or IP from the specified known_hosts file (saving a backup as known_hosts.old), so the next connection stores a fresh host key instead of tripping over the stale entry. You can also copy a working known_hosts file from another client machine or operating system. Bingo, solved!
Can't ssh into my server from home linux but can ssh into same server from windows
1,314,161,608,000
I know one situation in which this error occurs: when you have already installed the latest version of a package. Are there any other situations in which this error occurs?
Yum shows this error when it is unable to proceed with the command. There can be many reasons why this message could appear: The package is already installed and up-to-date The package does not exist on the configured repository No repository is correctly configured There was a problem fetching the package from the remote URL (unable to connect, cannot find the package, etc.) The package requires dependencies that aren't available The package conflicts with another installed package To troubleshoot the issue, you should focus on the message which appears before "Nothing to do", and not on the message "Nothing to do" which is purely the result of the error.
when 'Error: Nothing to do' error occur in install through yum?
1,314,161,608,000
I have a serial port /dev/ttyS2 that is connected to a supervisor. Normally, I use this line to send commands back and forth between the CPU and the supervisor. However, under some settings, I want to redirect the entire console to this port. I can achieve this via a reboot, updating the U-Boot kernel command-line variable to console=ttyS2,115200. But is there a way to achieve this without a reboot?
You could launch getty once you've booted to get a serial connection to your system. Note that this will not give you the default outputs typically seen with your console (Kernel Panics and other verbosities typically seen in console but not in normal terminals). But if you are just looking to get a login via serial after boot this should work. /sbin/agetty -L 115200 ttyS2 vt100 That should connect to /dev/ttyS2 at 115200 baud and emulate a vt100 terminal.
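On systemd-based systems, the same agetty invocation is packaged as the serial-getty@.service template unit, which also supervises the process and respawns it when the session ends. A sketch of the equivalent setup, using a drop-in to override the baud rate and terminal type (the override path and values are illustrative):

```
# /etc/systemd/system/[email protected]/override.conf
# Apply with:  systemctl daemon-reload && systemctl enable --now serial-getty@ttyS2
[Service]
ExecStart=
ExecStart=-/sbin/agetty -L 115200 %I vt100
```

The empty ExecStart= line clears the template's default command before the replacement is set; %I expands to the instance name (ttyS2).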
Redirect console to a serial port
1,314,161,608,000
I'm interested in per-process network I/O counters, like those in /proc/net/dev and found what I thought was it under /proc/<pid>, i.e. /proc/<pid>/net/dev. But it seems that was too easy because they contain the same counters as the system. If I diff between system and <pid> I get the same counters*. So that makes me wonder what is it supposed to represent? Or is it just a way to allow a specific process to read /proc/net/dev by setting permissions to /proc/net/<pid>/dev and not globally? man proc does not document this and neither does http://man7.org/linux/man-pages/man5/proc.5.html Distro: CentOS 7.1 w/ kernel 3.10.0-229.el7.x86_64 *diff <(cat /proc/<pid>/net/dev) <(cat /proc/net/dev)
/proc/net/dev contains statistics about network interfaces, while /proc/<pid>/net/dev contains statistics about network interfaces from the process' point of view. I suppose that if a process runs on a network namespace (see man ip-netns) where it has access only to a limited set of interfaces, only these will show up in /proc/<pid>/net/dev.
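A small sketch of the namespace view. For any two processes in the same network namespace, the interface lists are identical, which is exactly why diffing /proc/<pid>/net/dev against /proc/net/dev (a view of your own namespace) shows nothing; the lists only diverge for a process placed in another namespace (e.g. via `ip netns exec`):

```shell
# List the interface names visible to a given pid (default: ourselves).
interfaces() {
    awk -F: '/:/ { gsub(/ /, "", $1); print $1 }' "/proc/${1:-self}/net/dev" 2>/dev/null | sort
}

interfaces self
if [ "$(interfaces self)" = "$(interfaces 1)" ]; then
    echo "same network namespace as pid 1"
else
    echo "different network namespace from pid 1 (or unreadable)"
fi
```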
What is /proc/<pid>/net/dev?
1,314,161,608,000
What I would like to achieve is an interactive program that runs either before or after asking the user for the password, but that won't hand over access to the computer unless it exits successfully. To make it more concrete, here's an example: I would like to gain access to my computer by first entering my username, then my password, and after that answering a simple, randomly generated mathematical question correctly. For this to work, I use the following system-auth file:
auth required pam_unix.so try_first_pass nullok nodelay
auth optional pam_faildelay.so delay=600000
auth optional pam_exec.so stdout /home/math
auth optional pam_permit.so
auth required pam_env.so
The problem is that the program named math can't read input from the user, as it immediately reads an EOF from PAM, which essentially renders it useless. I have also tried the following variant of the line in question, in which case it reads the password instead, which is also not what I want:
auth optional pam_exec.so stdout expose_authtok /home/math
There is no stdout/stdin at the PAM stage. To perform I/O you need to obtain the application's conversation function (see pam_conv(3)) via pam_get_item(3). A good example, including the relevant C source, is at ben.akrin.com.
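The conversation mechanism can be sketched as a module-side helper. This is illustrative only (the `converse` name and error handling are mine, not taken from the linked example); it requires the libpam development headers and is compiled as part of a PAM module, not as a standalone program:

```c
/* Prompt the user from inside a PAM module via the application's
 * conversation function -- no stdio involved. */
#include <stdlib.h>
#include <security/pam_modules.h>
#include <security/pam_appl.h>

static int converse(pam_handle_t *pamh, const char *prompt, char **answer)
{
    const struct pam_conv *conv;
    struct pam_message msg = { PAM_PROMPT_ECHO_ON, prompt };
    const struct pam_message *msgp = &msg;
    struct pam_response *resp = NULL;

    /* Fetch the conversation callback the application registered. */
    if (pam_get_item(pamh, PAM_CONV, (const void **)&conv) != PAM_SUCCESS
        || conv == NULL)
        return PAM_CONV_ERR;

    /* Ask it to display the prompt and collect one answer. */
    if (conv->conv(1, &msgp, &resp, conv->appdata_ptr) != PAM_SUCCESS
        || resp == NULL || resp->resp == NULL)
        return PAM_CONV_ERR;

    *answer = resp->resp;   /* caller frees the answer string */
    free(resp);
    return PAM_SUCCESS;
}
```

The module's pam_sm_authenticate() would call such a helper with the math question as the prompt, then compare the returned answer.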
How to add additional steps to login?
1,314,161,608,000
I want systemd on mount /mnt/test to automatically call a program (in real life cryptsetup to unlock the underlying device, for testing here echo) before the file system is mounted and after it is unmounted. With /etc/systemd/system/stickbak-encryption.service: [Unit] Description=stickbak encryption Before=mnt-test.mount StopWhenUnneeded=true [Service] Type=oneshot ExecStart=/bin/echo Unlock device. RemainAfterExit=true ExecStop=/bin/echo Lock device. [Install] RequiredBy=mnt-test.mount and /etc/fstab (partly): /dev/$DEVICE /mnt/test auto noauto 0 0 this works (after daemon reload and enabling the service) for systemctl start mnt-test.mount and respectively systemctl stop mnt-test.mount (as root). On mount /mnt/test, however, systemctl status mnt-test.mount stickbak-encryption shows the latter service being inactive (dead), while the former is active (mounted). How can I (or can I not?) set up a dependency that is honoured when /bin/mount is called as well? The status of the mount unit shows that mount /mnt/test seems to be translated to ExecMount=/bin/mount /dev/$DEVICE /mnt/test -t auto -o noauto, so apparently systemd gets notified.
I very recently asked myself the same question, but I quickly came to the realisation that it doesn't work that way. When you use the mount command-line program, systemd is not involved: mount reads /etc/fstab (or takes options from the command line) and mounts the device. When you start a systemd mount unit, it's parsed by systemd, which internally uses mount system calls to perform the mount. So there is no way for systemd to get involved when you use mount. As an aside, there is an interesting difference between what mount and systemd accept as valid in /etc/fstab. Systemd parses the file and creates mount units that it then uses. When it does so, it requires fewer parameters than mount does: if you use systemd, you only have to supply the device and mount point, whereas mount requires further fields such as the file system type and options.
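This doesn't change the behaviour of plain /bin/mount, but for mounts that go through systemd (systemctl start, boot-time mounting, automounting), the dependency on the unlock service can be declared directly in /etc/fstab via systemd's extended mount options, instead of an [Install] section in the service. A sketch, reusing the names from the question:

```
# /etc/fstab -- systemd translates x-systemd.* options into unit
# dependencies when generating mnt-test.mount:
/dev/$DEVICE  /mnt/test  auto  noauto,x-systemd.requires=stickbak-encryption.service  0  0
```

See systemd.mount(5) for the full list of x-systemd.* options (x-systemd.automount is another useful one, since the automount path does go through systemd even when triggered by ordinary file access).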
How to add a dependency to a systemd.mount that is activated by /bin/mount?
1,314,161,608,000
How does Linux determine which network interface to use, when both are connected to the same network? Note that this isn't a question on routing. I'm familiar with how that works. This is if, say, I have my laptop connected to my wireless router through both my ethernet card and my wireless card, or if I have two ethernet cards both connected to the same router. I can say from experience that in my case, my laptop seems to favor the ethernet card (eth0) over the wireless (eth1--I know that's not a typical name for a wireless interface, but that's what I have), but I was wondering, how does it decide that? If it just picks from the lowest numbered interface, what if the two choices are, say, eth0 and wlan0? Edit: @Nils has pointed out that this is still a matter of routing, and the routing table provides the answer (see his answer). This still leaves my original question, but in a different form. What determines the order of entries in the routing table in Linux? For example, here is my routing table while connected to both interfaces: Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 0.0.0.0 192.168.4.1 0.0.0.0 UG 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 192.168.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 It's clear that eth0 is higher priority in the table than eth1 for destinations in the local network, but is that decided in Linux from link latency, link throughput, even the interface name, or what? (The same question could go for why eth0 is the interface for the default route.)
Well, this IS a routing question. The answer is simple: the first entry that yields the best matching route is the winner. So look at netstat -rn to see which interface comes first.
Update: The per-interface routing entries are normally set up during system startup, so the order in which network devices are brought up determines their order in the table. PCI devices are normally processed lowest slot number first; external devices (e.g. USB) normally come later. The exact order depends on your Linux flavour's network startup scripts (this is an area where distributions differ considerably).
BTW: If you want to make use of both of your links, you should look into bonding. There you can set up the order in which your links are used.
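Rather than relying on device start-up order, the preference can be made explicit with route metrics: among routes for the same prefix, the kernel picks the one with the lowest metric. A sketch using iproute2 (interface names and addresses are the examples from the question):

```shell
# Show kernel routes with their metrics; netstat -rn hides this detail:
ip route show 2>/dev/null || true

# To prefer eth0 deterministically, demote eth1 (as root):
#   ip route replace 192.168.4.0/24 dev eth1 metric 100
#   ip route replace default via 192.168.4.1 dev eth0
```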
Network interface preference
1,314,161,608,000
I am using prlimit in Ubuntu to do some resource restrictions in my sandbox which has been very helpful. However, I am not quite sure what to do with RLIMIT_NICE. The docs say: RLIMIT_NICE (since Linux 2.6.12, but see BUGS below) Specifies a ceiling to which the process's nice value can be raised using setpriority(2) or nice(2). However, according to getpriority(2), a process can raise it's nice value only if owned by a superuser in the first place. But if this is the case, the RLIMIT_NICE value is not going to add too any functionality because a privileged user can arbitrarily lower or higher RLIMIT values anyway. So I don't understand how to use or interpret RLIMIT_NICE. For non-privileged users the entire thing seems useless because they cannot raise priority in the first place, and it makes no sense to set it below the current priority. However for superusers it doesn't really add anything either because the nice, and RLIMIT_NICE soft- and hard limits can arbitrarily be raised. So what is the idea behind RLIMIT_NICE ?
In fact, RLIMIT_NICE allows you to bypass the basic rule that says that "a process can raise its nice value only if owned by root". Demonstration: # ulimit -e 30 # su nobody $ nice -n -10 top You will see that your top process runs with niceness -10. Now if you try nice -n -11 top, it will run with niceness 0, because -11 is not allowed by RLIMIT_NICE=30. The formula is given in the manpage: the maximal niceness allowed is 20-rlimit. So: 0 means "you can raise niceness to 20", a.k.a. useless; 20 means "you can raise niceness to 0", which lets you go back to 0 if you lowered your priority; 40 means "you can start processes up to nice -n -20.
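A quick sanity check of the manpage formula quoted above (nice_floor is just an illustrative one-liner): with a soft limit of R on RLIMIT_NICE, the lowest niceness a process may request is 20 - R.

```shell
# Lowest reachable niceness for a given RLIMIT_NICE value
nice_floor() { echo $(( 20 - $1 )); }

for rlimit in 0 20 30 40; do
  echo "RLIMIT_NICE=$rlimit -> niceness can go down to $(nice_floor "$rlimit")"
done
```

So the ulimit -e 30 in the demonstration above is what allows the unprivileged nice -n -10, while nice -n -11 falls outside the limit.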
Is there any use for RLIMIT_NICE?
1,314,161,608,000
I need to monitor the I/O statistics of a process that writes to disk. The purpose is to avoid write rates too high for long periods. I know there's iostat tool to accomplish this task on a system-wide perspective. Is there something similar to monitor single process disk usage?
What you want is iotop. Most distributions have a package for it, usually called (logically enough) iotop. One very cool command (at least, on a system that isn't very busy) is iotop -bo. This will show I/O as it occurs. It also has options to only monitor specific processes or processes owned by specified users.
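Under the hood, iotop reads the kernel's per-task counters in /proc/&lt;pid&gt;/io (see proc(5)). A minimal extractor for the bytes a process has caused to be written to the block layer, run here against a captured snippet (io_write_bytes is just a throwaway helper):

```shell
# Pull the write_bytes counter out of /proc/<pid>/io contents on stdin
io_write_bytes() { awk '/^write_bytes:/ { print $2 }'; }

# A captured /proc/<pid>/io snippet:
sample='rchar: 3002
wchar: 12288
read_bytes: 4096
write_bytes: 12288
cancelled_write_bytes: 0'

printf '%s\n' "$sample" | io_write_bytes
```

On a live system you would run io_write_bytes &lt; /proc/1234/io instead; this needs a kernel built with task I/O accounting (which is why iotop needs it too), and usually root to read other users' processes.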
Getting disk i/o statistics for single process in Linux
1,314,161,608,000
Are there any linux/unix console applications similar to Yadis that would allow me to: be set up from the console backup multiple directories backup / sync in real time after the files (text files) are changed Update 1: I write shell scripts, ruby scripts, aliases etc etc to make my work easier. I want to have backup of these files. The solution I am looking for will copy these files after any change was made to them to a subdirectory of my dropbox directory and that's it. Backup is done and available from anywhere. Always fresh and ready and I don't have to think about it. I know I can run cron few times a day but I thought there must be a solution for what I am looking for available on linux. I am not so linux experienced so I asked here.
You could probably hack this together using inotify and more specifically incron to get notifications of file system events and trigger a backup. Meanwhile, in order to find a more specific solution you might try to better define your problem. If your problem is backup, it might be good to use a tool that is made to create snapshots of file systems, either through rsnapshot or a snapshotting file system like xfs, or any file system on top of lvm. If your problem is synchronizing, perhaps you should look into distributed and/or network file systems. Edit: In light of your update, I think you are making this way too complicated. Just make a folder in your dropbox for scripts. Then in your bashrc files do something like this:
export PATH=$PATH:~/Dropbox/bin
source ~/Dropbox/bashrc
Whatever scripts you have can be run right from the dropbox folder in your home directory, and any aliases and such you want synced can go in a file inside Dropbox that gets sourced by your shell. If other people besides you need access to the scripts, you could symlink them from your Dropbox to somewhere like /usr/local/bin.
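If you do want the copy-on-change behaviour rather than the symlink approach, a minimal sketch looks like this. backup_file is a hypothetical helper and ~/Dropbox/backup a made-up target; the real-time trigger would come from inotifywait (inotify-tools), shown only as a comment since it runs forever:

```shell
# Copy one changed file into a backup/Dropbox directory, creating it if needed
backup_file() {
  src=$1 dest_dir=$2
  mkdir -p "$dest_dir"
  cp -- "$src" "$dest_dir/"
}

# Real-time trigger (not run here; needs inotify-tools installed):
#   inotifywait -m -e close_write --format '%w%f' ~/bin |
#     while read -r f; do backup_file "$f" ~/Dropbox/backup; done
```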
real time backup if file changed?
1,314,161,608,000
Ok, Skype yet again has issues with sound. This time, it is unable to record audio. The system is using PulseAudio and I am using a web cam as a microphone. Actually, I tried another web cam and had both plugged in at one point. lsusb shows the devices plugged in. Skype only allows selecting pulseaudio as the input device, with no other choices. Skype only seems to see the mic input on the analog sound card (which has no mic attached). I have tried using KMix, the KDE phonon dialog (hidden in KMix's menus), alsamixer and even happened across pavucontrol. None seemed to be able to reorder the preference for device to use as the mic or at least select the desired mic. It could be a flaw in OpenSUSE 11.4, or I could just be missing something obvious. All would show that the webcam mic was available and allow me to set the input levels, for either webcam. How do I select the webcam mic?
Try pavucontrol. More information here - https://help.ubuntu.com/community/SkypeTroubleshooting#Selecting%20Microphone%20%28input%20device%29
zypper install pavucontrol
(that's for openSUSE; on Fedora/RHEL it would be yum install pavucontrol) You will have to select the right mic in the Recording tab of pavucontrol
Sound input issue with skype, selecting a microphone?
1,314,161,608,000
After analyzing the postfix logs on my server I noticed the following messages:
Sep 4 15:12:50 vps66698 postfix/smtpd[25401]: connect from unknown[195.22.126.189]
Sep 4 15:12:50 vps66698 postfix/smtpd[25401]: disconnect from unknown[195.22.126.189]
What do these messages mean, and how can I improve the security of my server? Thank you for your answers Mickael
Normally when a remote machine tries to connect to your postfix server, postfix will attempt to do a DNS lookup of the address and report that in the log file, e.g.
connect from 66-220-155-155.outmail.facebook.com[66.220.155.155]
or
connect from mail-it0-x249.google.com[2607:f8b0:4001:c0b::249]
Now if the IP address can not be properly resolved to a name then it reports unknown instead, e.g.
connect from unknown[42.119.145.220]
connect from unknown[192.3.220.210]
connect from unknown[39.52.115.55]
Now in your case the connection is from 195.22.126.189. If we attempt to look that up we get DNS errors, and so postfix just reports unknown. Seeing "connect/disconnect" sequences for servers is normal on the internet; it could be spammers, botnets, misconfigured servers, scanning tools... My personal server that just handles mail for me saw 10 of these in the past 4 hours.
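If you want to see which unresolvable clients hit you most often, you can tally the "unknown" connects straight from the log. top_unknown is just a throwaway pipeline, demonstrated here on sample log lines:

```shell
# Count connects from unresolvable hosts, most frequent first
top_unknown() {
  grep -o 'connect from unknown\[[0-9.]*\]' |
    sed 's/.*\[\(.*\)\]/\1/' |
    sort | uniq -c | sort -rn
}

printf '%s\n' \
  'Sep 4 15:12:50 vps postfix/smtpd[25401]: connect from unknown[195.22.126.189]' \
  'Sep 4 15:13:02 vps postfix/smtpd[25408]: connect from unknown[195.22.126.189]' \
  'Sep 4 15:14:11 vps postfix/smtpd[25413]: connect from unknown[42.119.145.220]' |
  top_unknown
```

On a real server you would feed it the mail log, e.g. top_unknown &lt; /var/log/mail.log, and the top offenders are candidates for rate limiting or a firewall rule.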
Connect / disconnect unknown postfix log
1,314,161,608,000
If you send a SIGSTOP to a web server, does the kernel just tell the network stack to block/sleep all connections to that server's socket(s) until it is continued? It seems that the server timeout values don't matter; it will wait indefinitely, but how? Will requests sit in the socket buffer indefinitely? What if the web server gets tons of requests? What happens when the socket buffer fills up?
From the point of view of any part of the system that isn't concerned with the process's state and stats, a process that is stopped (i.e. won't be scheduled until it receives a SIGCONT) is indistinguishable from a process that's running, but isn't responding to a particular query. For example, the network stack operates in the same way whether the process is stopped, or is doing work (using CPU time) but not making any system call, or is blocked in some system call that isn't unblocked by the network-originating event (e.g. waiting to read from a pipe), etc. While the process is stopped, there's no such thing as a timeout in the process. There are usually no timeouts in the network stack either: the packet has been received by the machine, even if it hasn't gotten to the application. As far as e.g. TCP transmission is concerned, the packet has been received and it's up to the application to respond. If the socket's buffer fills up, the network stack will start dropping packets. There's no reason why the behavior of the network stack would depend on the process's state. The process could come out of the stopped state at any time. There could be multiple processes listening on a socket, so any decision based on the process state would have to take all of them into account.
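A quick demonstration that a stopped process is simply parked, assuming a Linux /proc: the third field of /proc/&lt;pid&gt;/stat is the state letter, and T means stopped by a signal. The process resumes exactly where it left off on SIGCONT:

```shell
# Start a harmless background process, stop it, and inspect its state
sleep 30 &
pid=$!
kill -STOP "$pid"
sleep 1                                  # give the signal time to take effect
state=$(awk '{print $3}' "/proc/$pid/stat")
echo "state while stopped: $state"       # T = stopped
kill -CONT "$pid"                        # resumes as if nothing happened
kill "$pid" 2>/dev/null                  # clean up
```

While the process sits in state T, the kernel keeps completing TCP handshakes into the listen backlog and buffering data into the socket buffer on its behalf, as described above.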
What happens to requests to a service that is stopped with SIGSTOP
1,314,161,608,000
Platform: RHEL 5.10 netcat Version: 1.84-10.fc6 I was trying to figure out if my inability to ssh was TCP-level and usually I use nc for this. This time, however, I got something unexpected. [bratchley@ditirlns01 ~]$ nc -vz dixxxldv02.xxx.xxx 22 -w 15 nc: connect to dixxxldv02.xxx.xxx port 22 (tcp) timed out: Operation now in progress Connection to dixxxldv02.ncat.edu 22 port [tcp/ssh] succeeded! Normally if it can't connect within the specified timeout it just prints the first line. Thinking it was just some weird race condition (like the TCP connection kept completing just as I was approaching timeout) I lengthened the timeout period to 30 seconds but got the same exact results. Telnet also fails so I think there is an IDS/Network Firewall blocking the traffic. I was just curious if anyone has seen this before or what it mean.
Shortly after posting, I found the problem: [bratchley@ditirlns01 ~]$ host ditirldv02.ncat.edu ditirldv02.ncat.edu has address 152.8.143.20 ditirldv02.ncat.edu has address 152.8.143.5 [bratchley@ditirlns01 ~]$ So it appears that nc will cycle through all A records for a given host and test each one individually. The first failure was for the incorrect IP address, the success was for the correct one.
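To avoid nc silently iterating over several A records again, one option is to resolve the name first and probe each address on its own. unique_addrs just dedupes the first column of getent ahostsv4 output, shown here on a canned sample:

```shell
# Extract the unique IPv4 addresses from `getent ahostsv4 <host>` output
unique_addrs() { awk '{print $1}' | sort -u; }

sample='152.8.143.20    STREAM ditirldv02.ncat.edu
152.8.143.20    DGRAM
152.8.143.5     STREAM
152.8.143.5     DGRAM'

printf '%s\n' "$sample" | unique_addrs

# Live version:
#   for a in $(getent ahostsv4 ditirldv02.ncat.edu | unique_addrs); do
#     nc -vz -w 5 "$a" 22
#   done
```

That way each address gets its own success/timeout line instead of the confusing combined output above.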
nc both fails and succeeds
1,314,161,608,000
I have a windows PC that is on network A and a windows PC on network B. Also in network B I have several Linux servers. From the PC on network A, I can ssh to any server on network B. I cannot, however, connect to the PC on network A from a computer on network B.
(An ASCII diagram followed here, damaged beyond repair in extraction: it showed the local Windows PC on network A, the remote Windows PC plus Linux servers #1 and #2 on network B, an arrow labelled "can ssh" from A to B, and an arrow labelled "can't ssh" from B back to A.)
I would like to be able to RDP to my network A PC from my network B PC. Is this possible using some sort of reverse tunneling through one of the linux boxes?
Solution 1: From your PC on network A, create a reverse ssh tunnel with something like Putty by connecting to a Linux host on Network B. The local port should be 3389, the remote host 127.0.0.1, and the remote port is arbitrary (let's use 6000 as an example). Then from your PC on network B, use putty to connect to the same Linux host, and do a forward tunnel. The local port should be set to something OTHER than 3389 (the Microsoft RDP client will not allow connections to localhost on the default port, but it will allow connections to localhost on an arbitrary port). So let's reuse the same port number of 6000; the remote IP should be 127.0.0.1 and the remote port 6000. You then point the RDP client at 127.0.0.1:6000. In effect you connect to port 6000 on the PC in network B. Putty forwards that to the Linux host, which has been set to forward it to 127.0.0.1 on port 6000. The putty connection from the PC on network A listens on 6000 and forwards it to 127.0.0.1 on PC A, port 3389, where RDP accepts the connection. Solution 2: Set up an SSHD on the PC on network B, and then you only have to do a single reverse port forward. There is Bitvise SSHD which runs on Windows and is free for non-business use. Bitvise also do a separate client that handles RDP tunneling in conjunction with a WinSSHD. The nice thing about this solution is that it saves usernames and settings (like full screen and so forth), can be launched from a saved file, and will stop you from having to set up/remember to connect the port forwards before using RDP.
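For reference, here are the same two tunnels from Solution 1 expressed as plain OpenSSH commands instead of PuTTY dialogs. user@linuxhost is a placeholder; nothing is connected in this snippet, the commands are only assembled and printed:

```shell
LINUX_HOST='user@linuxhost'
PORT=6000

# Run on PC A (network A): reverse-forward its own RDP port (3389)
# to port 6000 on the Linux host's loopback
CMD_A="ssh -N -R $PORT:127.0.0.1:3389 $LINUX_HOST"

# Run on PC B (network B): forward a spare local port onto that reverse tunnel
CMD_B="ssh -N -L $PORT:127.0.0.1:$PORT $LINUX_HOST"

printf '%s\n' "$CMD_A" "$CMD_B"
# The RDP client on PC B then connects to 127.0.0.1:6000
```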
I need to RDP to a server through a reverse tunnel
1,314,161,608,000
I tried to slow down my CPU with powernowd and cpufreq-selector. I hoped that with lower frequency it will be less hot and I won't hear my fan every time I will run internet radio. I used the following commands sudo cpufreq-selector -f 800000 and sudo powernowd -m 0 -l 40 -u 60 but this wasn't sufficient for more quiet laptop. My CPU is Intel(R) Pentium(R) M processor 1.60GHz Could you advise me on how to make it run cooler / quieter? I solved it yesterday with sudo cpufreq-selector -g powersave
The very important thing is to lower the cpu clock first. The second important part is to verify there is no physical cooling problem (like dust on the fans, cat or dog hairs in the heatsink, etc.). On most computers, the fan speed is directly operated by the bios or the os automatically. The cleaning/clock-lowering should let the cooling perform better, and if the T° goes down, the speed of the fans will too. If it doesn't, I advise you to look into fancontrol. It's a little application that uses lm-sensors and pwmconfig (pwm is the mechanism commonly used to drive the fan rotation speed from the computer, see: https://secure.wikimedia.org/wikipedia/en/wiki/Fan_control#Pulse-width_modulation) to regulate the fan speed automatically according to the CPU temperature. Here is a little documentation about it: http://www.lm-sensors.org/wiki/man/fancontrol I assure you it is easier than it looks, but first you need to install and configure sensors on your computer. It has a little easy-config script to help you detect and configure your sensors and stuff, called sensors-detect in most distros. I advise you to google about it, including your distro name and your laptop brand/model, to find the best settings. After that, just run pwmconfig as root. It detects driveable fans and temperature sensors, and asks you Min/Max temps for the cpu and the Min/Max fan speed you want.
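The core idea behind fancontrol's MINTEMP/MAXTEMP settings is a linear ramp of the pwm value (0-255) between the two temperatures. pwm_for_temp below is only an illustration of that ramp, not fancontrol's actual code (the real daemon has more knobs, such as MINSTART/MINSTOP):

```shell
# Linear pwm ramp: minpwm below mintemp, maxpwm above maxtemp,
# interpolated in between (integer arithmetic, so it rounds down)
pwm_for_temp() {
  t=$1 mint=$2 maxt=$3 minpwm=$4 maxpwm=$5
  if [ "$t" -le "$mint" ]; then echo "$minpwm"; return; fi
  if [ "$t" -ge "$maxt" ]; then echo "$maxpwm"; return; fi
  echo $(( minpwm + (t - mint) * (maxpwm - minpwm) / (maxt - mint) ))
}

for t in 30 50 70; do
  echo "${t}C -> pwm $(pwm_for_temp "$t" 40 60 80 255)"
done
```

So with MINTEMP=40 and MAXTEMP=60, the fan idles quietly at 80/255 until the CPU warms up, which is exactly the "quiet laptop" behaviour the asker is after.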
Slow down the CPU
1,314,161,608,000
Does OpenBSD use bcrypt by default? And why doesn't every modern Linux distribution use bcrypt?
http://codahale.com/how-to-safely-store-a-password/
https://secure.wikimedia.org/wikipedia/en/wiki/Bcrypt
A couple of reasons:
The BCrypt-based scheme isn't NIST approved.
Hash functions are designed for this kind of usage, whereas Blowfish wasn't. The added security in BCrypt is based on it being computationally expensive, rather than on the type of algorithm. Relying on computationally expensive operations isn't good for long-term security.
See http://en.wikipedia.org/wiki/Crypt_%28Unix%29 for some discussion on this.
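You can see which scheme a system actually uses from the prefix of the hashes in /etc/shadow: $1$ is md5-crypt, $2a$/$2y$ are bcrypt, $5$ is sha256-crypt, and $6$ is sha512-crypt (the usual default on modern Linux). hash_scheme below is just a demo classifier for such strings:

```shell
# Classify a crypt(3)-style hash by its prefix
hash_scheme() {
  case $1 in
    '$2'*) echo bcrypt ;;
    '$6$'*) echo sha512-crypt ;;
    '$5$'*) echo sha256-crypt ;;
    '$1$'*) echo md5-crypt ;;
    *)     echo "traditional DES (or unknown)" ;;
  esac
}

hash_scheme '$6$salt$somehash'
hash_scheme '$2a$10$saltsaltsaltsaltsalthash'
```

On a real box: sudo awk -F: '$2 ~ /^\$/ {print $1}' /etc/shadow would list accounts with modular-crypt hashes you could feed to it.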
BCRYPT - Why doesn't the Linux Distributions use it by default?
1,314,161,608,000
The memory resource controller for cgroups v1 allows for setting limits on memory usage on a particular cgroup using the memory.limit_in_bytes file. What is the Linux kernel's behavior when this limit is reached? In particular: Does the kernel OOM kill the process and if so is the oom_score of the process taken into account, or is it the process that asked for the memory that caused the limit to be hit that gets killed? Or would the request for memory just be rejected in which case the process would only die if it didn't deal with such an event?
By default, the OOM killer oversees cgroups as well. memory.oom_control contains a flag (0 or 1) that enables or disables the Out of Memory killer for a cgroup. If enabled (0), tasks that attempt to consume more memory than they are allowed are immediately killed by the OOM killer. Note that when the cgroup OOM killer fires, it picks its victim among the tasks in that cgroup using the usual badness heuristic (so oom_score_adj is taken into account); it is not necessarily the task whose allocation hit the limit. The OOM killer is enabled by default in every cgroup using the memory subsystem; to disable it, write 1 to the memory.oom_control file: ~]# echo 1 > /cgroup/memory/lab1/memory.oom_control When the OOM killer is disabled, tasks that attempt to use more memory than they are allowed are paused until additional memory is freed. References Redhat docs - 3.7. MEMORY
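Putting it together, here is a sketch of capping a group at 64 MiB and switching the OOM killer off so over-limit tasks pause instead of dying. The paths follow the Red Hat docs quoted above; the privileged commands are shown as comments only, since they need root and a mounted v1 memory controller:

```shell
# 64 MiB expressed in bytes, as memory.limit_in_bytes expects
limit=$((64 * 1024 * 1024))
echo "limit_in_bytes = $limit"

# As root, with the v1 memory controller mounted at /cgroup/memory:
#   mkdir /cgroup/memory/lab1
#   echo "$limit" > /cgroup/memory/lab1/memory.limit_in_bytes
#   echo 1        > /cgroup/memory/lab1/memory.oom_control    # pause, don't kill
#   echo $$       > /cgroup/memory/lab1/tasks                 # move this shell in
```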
What's the Linux kernel's behaviour when processes in a cgroup hit their memory limit?
1,404,399,176,000
I have a high-availability cluster (Heartbeat) connected via serial line and two ethernet NICs. I'd like to set up a monitoring script capable of recognizing disconnected serial line (basically the same question was answered at SO, however I am not satisfied with such a general answer). I cannot simply open the serial device and read the data myself, since the serial line is opened by Heartbeat. So I started to look for some indirect clues. The only difference I have found so far is in the contents of /proc/tty/driver/serial. This is how it looks like when it's connected: # cat /proc/tty/driver/serial serinfo:1.0 driver revision: 0: uart:16550A port:000003F8 irq:4 tx:2722759 rx:2718165 brk:1 RTS|CTS|DTR|DSR|CD And when disconnected: # cat /proc/tty/driver/serial serinfo:1.0 driver revision: 0: uart:16550A port:000003F8 irq:4 tx:2725233 rx:2720703 brk:1 RTS|DTR I'm not confident enough to decide that the signals listed at the end of the line have the very meaning of connected/disconnected cable as I have not found any documentation on the contents of the /proc/tty/driver/serial. I can only assume that the presence of the signal means the given signal is on "right now" (or was in recent past? or?). The Serial HOWTO says that additional signals present when the cable is connected (CTS flow control signal, DSR "I'm ready to communicate", CD "Modem connected to another") are all in the "input" direction. So there has to be somebody alive at the other end. Assuming that meaning of signals is as described in the Serial HOWTO, I can base my decision on the presence of, say CD signal. However I am not really sure. So the question is: is my method "right", or do I have any better options I am not aware of? EDIT: I did some additional observations and had a talk with my colleague. Turns out the presence or absence of signals at the end of the line is quite good indicator of the serial port activity, on both ends. However, it's not an indicator of physical presence of a cable. 
Whenever there was a program writing to serial port outgoing signals were present (RTS|DTR). When the other side was writing incoming signals were present (CTS|DSR|CD). When none of the sides communicates there are no signals at all (that does not necessarily mean there is no cable present). Don't forget that the exact signals depend on the wiring of the cable (I have "null modem with partial handshaking").
RS232 has no "cable presence" indicator of any kind. You're just getting transmission or metadata (control) signals through, or you don't - that's all you know. If you receive an incoming signal (CTS|DSR|CD) you know the cable is connected. If you don't receive any incoming signal, the state of the cable is indeterminate and there is no way to determine whether it's plugged in without additional hardware - or without performing some kind of exchange with the remote device. The usual approach is performing some kind of "keep-alive" transmissions (even just metadata - e.g. momentarily set DTR and expect CTS), but if the protocol discipline used by the software at the two ends of the cable forbids such idle exchange, you're pretty much stuck with using a soldering iron to proceed. What you might try is some kind of additional daemon that sets up a pipe, forwarding data between your software and the physical device (on both ends), encapsulating it - and performing "connection checks" if the pipe is idle. Let me add one rather common solution: if your endpoint device doesn't use hardware flow control, you can short DTR with CTS inside the plug on the host side and use 'hardware control' on the host side. Generating DTR automatically drives CTS, enabling the transmission, if the cable is present, so transmission is unaffected. Meanwhile, with the cable absent, the system will react to the lack of CTS in a manner appropriate to this event, e.g. generating a timeout or suspending transmission until the cable is plugged in.
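A monitoring check can be built on exactly the observation from the question: any of the incoming signals CTS, DSR or CD in /proc/tty/driver/serial means the far end is (or was recently) driving the line. line_active is a throwaway helper, run here against the two captured states from the question:

```shell
# Succeed if a /proc/tty/driver/serial line shows any incoming signal
line_active() { grep -qE 'CTS|DSR|CD'; }

connected='0: uart:16550A port:000003F8 irq:4 tx:2722759 rx:2718165 brk:1 RTS|CTS|DTR|DSR|CD'
idle='0: uart:16550A port:000003F8 irq:4 tx:2725233 rx:2720703 brk:1 RTS|DTR'

printf '%s\n' "$connected" | line_active && echo "peer active"
printf '%s\n' "$idle"      | line_active || echo "no incoming signals"
```

On the real box: grep '^0:' /proc/tty/driver/serial | line_active. As noted above, a negative result only means "nobody is driving the line right now", not "the cable is unplugged".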
How do I know if a serial port is actually transmitting data, without opening the device?
1,404,399,176,000
I would like to use HDMI on my graphic card for audio output. ALSA shows it as a card with 4 devices and I can get sound through one of them (the other three are different channels, perhaps? I have only stereo output connected). Although Pulseaudio has the right card set as default, it seems to me that it plays on the wrong device. Pacmd shows that the sink has parameter alsa.device set to the first device listed by ALSA, but I can get sound only from the second one. How can I force Pulseaudio to use another device of the same card as the default output? or How can I force ALSA to switch numbers of the first and the second device on the card?
It seems like I found a solution, at least for this particular case. Since I knew the card and device number assigned by ALSA, I just had to open /etc/pulse/default.pa in an editor and change this line
#load-module module-alsa-sink
into this
load-module module-alsa-sink device=hw:2,7
where 2 and 7 are my particular card and device numbers. This created a new sink connected to the correct device, which was then used as the default output.
Changing default audio device in Pulseaudio
1,404,399,176,000
Days ago I broke my laptop display by accident, the right side of the screen is damaged, but the most part of the left side is usable. I did some research trying to find a way to modify the dimension of the screen to fit into the area with no damage and I found xrandr. I found the next .sh archive but I can't find a way to put the screen into the left side, neither modifying the --transform parameters or the --fb command. #!/bin/bash #change these 4 variables accordingly ORIG_X=1280 ORIG_Y=800 NEW_X=1160 NEW_Y=800 ### X_DIFF=$(($NEW_X - $ORIG_X)) Y_DIFF=$(($NEW_Y - $ORIG_Y)) ORIG_RES="$ORIG_X"x"$ORIG_Y" NEW_RES="$NEW_X"x"$NEW_Y" ACTIVEOUTPUT=$(xrandr | grep -e " connected [^(]" | sed -e "s/\([A-z0-9]\+\) connected.*/\1/") MODELINE=$(cvt $NEW_X $NEW_Y | grep Modeline | cut -d' ' -f3-) xrandr --newmode $NEW_RES $MODELINE xrandr --addmode $ACTIVEOUTPUT $NEW_RES xrandr --output $ACTIVEOUTPUT --fb $NEW_RES --panning $NEW_RES --mode $NEW_RES xrandr --fb $NEW_RES --output $ACTIVEOUTPUT --mode $ORIG_RES --transform 1,0,$X_DIFF,0,1,$Y_DIFF,0,0,1 I also tried to do it without the .sh archive running the next line: xrandr --output LVDS-1 --fb 800x768 --mode 800x768 --transform 1,0,566,0,1,0,0,0,1 The screen took the position I want but after running that command a black border on the left side of the screen appears and I can't remove it. Any idea of what it's going wrong here?
Just set the screen size with xrandr --fb (no --mode, --transform, whatever). $ xrandr --fb 800x768 xrandr will complain about the screen size being too small, but will apply the settings nonetheless. Example: $ xrandr --fb 1520x1080 xrandr: specified screen 1520x1080 not large enough for output VGA-0 (1920x1080+0+0) X Error of failed request: BadMatch (invalid parameter attributes) Major opcode of failed request: 140 (RANDR) Minor opcode of failed request: 29 (RRSetPanning) Serial number of failed request: 43 Current serial number in output stream: 43 # from the xtruss output --- ConfigureNotify(event=w#000004A8, window=w#000004A8, x=0, y=0, width=1520, height=1080, border-width=0, above-sibling=None, override-redirect=False) $ xwininfo -root | grep geo -geometry 1520x1080+0+0 That should probably be a warning rather than an error; there are situations where it makes perfect sense to set the screen size to something smaller than the actual display(s). Update: Multi-head enabled window managers get the info about the screen(s) via the Xrandr(3) and Xinerama(3) extensions, and do not clamp their dimensions inside the root window rectangle. A temporary workaround would be to prevent them from using the Xrandr and Xinerama extensions via a LD_PRELOAD hack. That could be improved by turning the dummy functions into wrappers that trim the returned rectangles. 
This worked for me on vanilla debian 9.5 with the mate desktop environment and either the lightdm or gdm3 display manager: root# apt-get install mate-desktop-environment lightdm root# apt-get install gcc root# cat <<'EOT' | cc -fPIC -x c - -shared -o /etc/X11/no_xrr.so int XineramaIsActive(void *d){ return 0; } void *XineramaQueryScreens(void *dpy, int *n){ *n = 0; return 0; } int XineramaQueryExtension(void *d, int *i, int *j){ return 0; } int XRRQueryExtension(void *d, int *i, int *j){ return 0; } EOT root# cat <<'EOT' >/etc/X11/Xsession.d/98-no_xrr export LD_PRELOAD=/etc/X11/no_xrr.so case $STARTUP in /usr/bin/ssh-agent*) STARTUP="/usr/bin/ssh-agent env LD_PRELOAD=$LD_PRELOAD ${STARTUP#* }";; esac EOT Then, from the session menu of lightdm choose "MATE", and as the logged-in user: $ LD_PRELOAD= xrandr --fb 800x768 I wasn't able to get it to work though with either plasma or gnome3/gnome-shell/mutter yet.
Xrandr problem trying to avoid broken display
1,404,399,176,000
Can I remove the mouse pointer entirely from X? As in removing it and not just hiding it? I don't use the mouse at all. Everything I do is completely keyboard driven, so I hide the mouse pointer and disable my touchpad. However, the cursor still has a position on my screen, which causes applications to fire hover events. This can be extremely annoying, for instance in chrome, if a link happens to intersect the cursor it will display a bright white tooltip in the bottom left of the window.
Configure your X session to start with the argument -nocursor. For example: exec /usr/bin/X -nocursor -nolisten tcp "$@"
Can I remove the mouse pointer entirely from X?
1,404,399,176,000
I have a Ubuntu 12.04 system. On that system I have one USB 3.0 port. In this port I am trying use my USB 2.0 device, but every time when I plug in that USB 2.0 device into the USB 3.0 port, the system gets hung and I try to shutdown but it won't allow me to. If I continue to match Port types to Device speeds (2.0 to 2.0) everything woks fine. So my question is: How can I allow a USB 3.0 port to use a USB 2.0 device or is there any way to convert a USB 3.0 port to a USB 2.0 port like drivers or something? Am I mistaking the issue I'm having for another type of problem? Here is my lsub command output Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 003 Device 002: ID 046d:0823 Logitech, Inc. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 002 Device 003: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB Bus 002 Device 009: ID 046d:0823 Logitech, Inc. Bus 002 Device 008: ID 5555:1110 Epiphan Systems Inc. VGA2USB Bus 002 Device 006: ID 0d8c:0008 C-Media Electronics, Inc. Bus 002 Device 007: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode) and for dmesg command output you can refer this link Suggestions would be appreciated. Thanks in advance.
This issue is related to the kernel. I had the same type of problem. To resolve it, I updated my kernel. For this I referred to this link
USB 2.0 device is not working in USB 3.0 ports
1,404,399,176,000
Mod note: The entire network is pretty against shopping recommendation questions; there was an attempt to edit this one to avoid it, but it seems to have failed. The goal is "how do I decide which printer to buy", not "which printer should I buy". If you're naming a specific model in your answer, you're probably doing it wrong I am interested in purchasing a printer and scanner and would very much like the convenience of using an all-in-one model. The issue is that I am a very strict user of Debian GNU/Linux. I have heard very bad things about all-in-one support. I'm looking for low-end (preferably even store-bought models) that I can safely print and scan with, using free software. If I have to install a non-free binary driver; I would do so, but it wouldn't be my preference. However, I do want to ensure it works with Debian. What resources can I consult before buying to ensure the model I select will work? And if only a few models work, how can I find the needle in the haystack?
This site helps you find Linux-compatible printers: http://linuxdeal.com/printers.php?type=aio This site lets you check whether printers you already have (or want) are Linux-compatible: http://www.openprinting.org/printers
Do any "All In One" Printer/Scanners work on Linux?
1,404,399,176,000
I'm currently reading the book How Linux Works and in chapter 5 it talks about Linux parameters. Curious I started to see what were the parameters passed to my installed kernel when it boots up and noticed: BOOT_IMAGE=/boot/vmlinuz-3.16.0-4-amd64 I have been searching online for an explanation for this parameter but have not been successful. Could anyone point me to the right direction where I could find more information or explain what is this BOOT_IMAGE about? One thing to note is that I'm running a remote Debian server. I know that the serve itself is virtualized, probably with Xen. Does this have to do with Xen and how it boots up instances? UPDATE: So while investigating I noticed that vmlinuz-3.16.0-4-amd64 is the kernel image. Also looking at man bootparam it reads: Most of the sorting goes on in linux/init/main.c. First, the kernel checks to see if the argument is any of the special arguments ’root=’, ’nfsroot=’, ’nfsaddrs=’, ’ro’, ’rw’, ’debug’ or ’init’. The meaning of these special arguments is described below. Anything of the form ’foo=bar’ that is not accepted as a setup function as described above is then interpreted as an environment variable to be set. A (useless?) example would be to use ’TERM=vt100’ as aboot argument. Any remaining arguments that were not picked up by the kernel and were not interpreted as environment variables are then passed onto process one, which is usually the init program. The most common argument that is passed to the init process is the word ’single’ which instructs init to boot the computer in single user mode, and not launch all the usual daemons. Check the manual page for the version of init installed on your system to see what arguments it accepts. 
Running systemctl show-environment will display something like:
[root@localhost ~]# systemctl show-environment
BOOT_IMAGE=/boot/vmlinuz-3.16.0-4-amd64
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
LANG=en_US.UTF-8
So it looks like the location of the Linux image we booted from is being passed as an argument. The only questions left are: which process uses this environment variable, and why?
According to http://homepage.smc.edu/morgan_david/cs40/lilo-readme.txt: LILO always passes the string BOOT_IMAGE=&lt;name&gt; to the kernel, where &lt;name&gt; is the name by which the kernel is identified (e.g. the label). This variable can be used in /etc/rc to select a different behaviour, depending on the kernel. So it was (and remains, on some systems) a way for boot scripts to behave differently depending on the label (or the kernel file name, with other bootloaders). init probably passes this variable down to its scripts.
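The same value is visible in /proc/cmdline, as the question shows. Pulling it back out is a one-line sed job (boot_image is just a throwaway wrapper; on a live system feed it /proc/cmdline):

```shell
# Extract the BOOT_IMAGE= value from a kernel command line on stdin
boot_image() { sed -n 's/.*BOOT_IMAGE=\([^ ]*\).*/\1/p'; }

echo 'BOOT_IMAGE=/boot/vmlinuz-3.16.0-4-amd64 root=/dev/sda1 ro quiet' | boot_image
# Live:  boot_image < /proc/cmdline
```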
What is the BOOT_IMAGE parameter in /proc/cmdline
1,404,399,176,000
I use the following dspcat command on AIX and can dump message catalogs created with the gencat command: dspcat –g /u/is/bin/I18N/l/lib/libca/libcalifornia.117.cat >> /tmp/message.smc I have spent a good solid hour looking for hints on how to dump one of these catalogs on Linux but this command does not seem to be available. Any help would be appreciated.
I found the source code for dspcat.c: http://www.smart.net/~rlhamil/. Specifically in this tarball. I tried compiling it and was missing a variable: $ make cc -O -DSOLARIS dspcat.c -o dspcat dspcat.c: In function ‘format_msg’: dspcat.c:11:23: error: ‘NL_TEXTMAX’ undeclared (first use in this function) static char result[NL_TEXTMAX*2+1]; ^ dspcat.c:11:23: note: each undeclared identifier is reported only once for each function it appears in dspcat.c: In function ‘print_file’: dspcat.c:240:23: error: ‘NL_SETMAX’ undeclared (first use in this function) int setlo=1, sethi=NL_SETMAX, msglo=1, msghi=NL_MSGMAX, x, y; ^ dspcat.c:240:49: error: ‘NL_MSGMAX’ undeclared (first use in this function) int setlo=1, sethi=NL_SETMAX, msglo=1, msghi=NL_MSGMAX, x, y; ^ dspcat.c: In function ‘main’: dspcat.c:338:30: error: ‘NL_MSGMAX’ undeclared (first use in this function) if (msg_nr<1 || msg_nr>NL_MSGMAX) { ^ dspcat.c:353:32: error: ‘NL_SETMAX’ undeclared (first use in this function) if (msg_set<1 || msg_set>NL_SETMAX) { ^ make: *** [dspcat] Error 1 The variable NL_SETMAX does not appear to be defined on my system. I did locate this header file, bits/xopen_lim.h that did have this variable so I added this to the list of headers on a whim. $ make cc -O -DSOLARIS dspcat.c -o dspcat dspcat.c: In function ‘format_msg’: dspcat.c:11:33: warning: integer overflow in expression [-Woverflow] static char result[NL_TEXTMAX*2+1]; ^ dspcat.c:11:16: error: size of array ‘result’ is negative static char result[NL_TEXTMAX*2+1]; ^ dspcat.c:11:16: error: storage size of ‘result’ isn’t constant dspcat.c:15:29: warning: integer overflow in expression [-Woverflow] for (x=0; x < (NL_TEXTMAX*2) && *s != '\0'; s++) ^ make: *** [dspcat] Error 1 If I have more time I'll play with this, but I believe if you statically set that variable within the code directly you may be able to compile this yourself.
Is there a utility like dspcat on Linux?
1,404,399,176,000
Threads/websites I searched but didn't fully help me:

- Split a physical X display into two virtual displays?
- https://chipsenkbeil.com/notes/linux-virtual-monitors-with-xrandr/
- https://askubuntu.com/questions/150066/split-monitor-in-two/998435#998435

Context

I have a screen with a 5120x1440px screen resolution. I want to split this monitor into two virtual screens, so that I can work with this monitor as if it were a dual-monitor set-up. I also want to be able to quickly switch back to using only one screen, so I wanted to do all this in a bash script, but that doesn't matter at the moment. The output of xrandr is the following:

```
Screen 0: minimum 8 x 8, current 5120 x 1440, maximum 32767 x 32767
DP-0 disconnected (normal left inverted right x axis y axis)
DP-1 disconnected (normal left inverted right x axis y axis)
HDMI-0 disconnected (normal left inverted right x axis y axis)
DP-2 disconnected (normal left inverted right x axis y axis)
DP-3 disconnected (normal left inverted right x axis y axis)
DP-4 connected primary 5120x1440+0+0 (normal left inverted right x axis y axis) 1mm x 1mm
   3840x1080    119.97 +  99.96    59.97
   5120x1440    119.97* 100.00    59.98
   2560x1440     59.95
   2560x1080    119.88  100.00    60.00    59.94
   1920x1080    119.88  100.00    60.00    59.94
   1680x1050     59.95
   1600x900      60.00
   1440x900      59.89
   1280x1024     75.02   60.02
   1280x800      59.81
   1280x720      60.00
   1152x864      75.00
   1024x768      75.03   70.07   60.00
   800x600       75.00   72.19   60.32   56.25
   640x480       75.00   72.81   59.94
DP-5 disconnected (normal left inverted right x axis y axis)
USB-C-0 disconnected (normal left inverted right x axis y axis)
```

Implementation

Following the tutorials and posts I found, this is what I would need to do:

```shell
xrandr --setmonitor VIRTUAL-LEFT 2560/0x1440/1+0+0 DP-4
xrandr --setmonitor VIRTUAL-RIGHT 2560/1x1440/1+2560+0 none
```

To explain the numbers:

VIRTUAL-LEFT
- 2560 because that's half of 5120
- 0 because in the examples, the axis (1mm in my case) is divided by 2 and the left display gets the rounded-down number
- 1440 because that's my screen height
- 1 because in the examples, the other axis (also 1mm in my case) is used as is
- +0+0 because that's the same as in the xrandr output
- DP-4 because that's the connected primary

VIRTUAL-RIGHT
- 2560 because see above
- 1 because see above, although this takes the rounded-up number
- 1440 because see above
- 1 because see above
- +2560+0 because that will be the offset from the left and is used in the examples
- none because that's how it's done in every example

Since I don't get a change, I do as suggested in the examples:

```shell
xrandr --fb 5120x1441
xrandr --fb 5120x1440
```

Expected result

I would now expect to have two virtual screens with a ready-to-go desktop.

Actual result

The screens are cut in half; the left screen has my current desktop, but the right screen is entirely black. I can move my mouse over to it, but I cannot configure it, I cannot see it in Displays; I can't do anything with it.

What is the solution here?
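For reference, the commands described so far collect into one small toggle script. This is only a sketch of what was already tried (it does not by itself fix the black right half, and the geometry assumes the 5120x1440 DP-4 panel from the xrandr listing):

```shell
cat > split-monitor.sh <<'EOF'
#!/bin/bash
# Toggle between one physical monitor and two virtual halves.
# Assumes output DP-4 at 5120x1440.
case "${1:-}" in
  split)
    xrandr --setmonitor VIRTUAL-LEFT  2560/0x1440/1+0+0    DP-4
    xrandr --setmonitor VIRTUAL-RIGHT 2560/1x1440/1+2560+0 none
    # nudge the framebuffer so clients notice the new layout
    xrandr --fb 5120x1441
    xrandr --fb 5120x1440
    ;;
  merge)
    xrandr --delmonitor VIRTUAL-LEFT
    xrandr --delmonitor VIRTUAL-RIGHT
    ;;
  *)
    echo "usage: $0 split|merge" >&2
    exit 1
    ;;
esac
EOF
chmod +x split-monitor.sh
```

`xrandr --delmonitor` removes a monitor previously declared with `--setmonitor`, which is what makes the quick switch back to a single screen possible.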
Taking into account the comments, I'd suggest DWM (a dynamic/tiling window manager) with the fake-fullscreen patch. This patch allows apps to "fullscreen" into whatever space is currently given to them: a window, half the screen, or whatever. It would be ideal for your scenario: half the screen for gaming (faked fullscreen) and the rest for other apps. Though using a WM instead of a DE (desktop environment like Xfce, GNOME, etc.) is a more advanced setup.

Keep in mind that in Linux you can start two X servers concurrently with different managers/environments. One can be Xfce, your current setup, and the other DWM. You can switch between them using Ctrl+Alt+F1 through F12.

Regarding the PBP feature of your monitor, you can use two outputs from the same PC: I suppose you have more than one output on your graphics controller, for example a DisplayPort and HDMI or VGA. So you can connect both DisplayPort and HDMI and act as if your PC is connected to two screens. So if your PC has two outputs (most likely), you just need a separate cable, of the proper type of course.
Use xrandr to split display in two virtual screens
1,404,399,176,000
How to display meminfo in megabytes in top? discusses how to change the units of memory (e.g. megabytes, gigabytes, etc.) when using top. Is there a way to do this in htop as well? man htop doesn't address this, neither does it seem evident from the info displayed via F1 when running the program. Alternatively, if there were at least a way to change the threshold at which htop automatically switches from one unit to another, that would work. E.g. right now, it will display memory in terms of 4406M which I find quite difficult to read at a glance in a long line of processes. So, if I could just get it to not go above hundreds of units, and thus automatically switch this to, e.g. 4.4G, when it goes above 999M then that would work too.
I don't think it is possible out of the box, but if you are up for building your own htop, it's not impossible. The code is at GitHub, and if you look at Meter.c you will see the logic used to decide which unit to display.
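Outside of htop itself, the thresholding described in the question is easy to approximate in the shell. This is a sketch of the "switch units above 999" behaviour asked for, not htop's exact logic from Meter.c:

```shell
# Convert a KiB count to a human-readable unit, moving up a unit once the
# value would need four digits -- so 4511744 KiB prints as 4.3G, not 4406M.
humanize() {
  awk -v v="$1" 'BEGIN {
    split("K M G T", unit, " ")
    i = 1
    while (v >= 1000 && i < 4) { v /= 1024; i++ }
    fmt = (v < 100) ? "%.1f%s\n" : "%.0f%s\n"
    printf fmt, v, unit[i]
  }'
}
humanize 4511744
humanize 512
```

A wrapper like this can post-process the memory column of a `ps` or `top -b` listing, even though it cannot change htop's own display.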
htop - change units for memory usage display
1,404,399,176,000
I'm running Arch Linux on my laptop, which is kernel 3.12.9 right now. Something has changed about the way the kernel maps in a dynamically linked executable and I can't figure it out. Here's the example:

```
% /usr/bin/cat /proc/self/maps
...
00400000-0040b000 r-xp 00000000 08:02 1186756  /usr/bin/cat
0060a000-0060b000 r--p 0000a000 08:02 1186756  /usr/bin/cat
0060b000-0060c000 rw-p 0000b000 08:02 1186756  /usr/bin/cat
00d6c000-00d8d000 rw-p 00000000 00:00 0        [heap]
7f29b3485000-7f29b3623000 r-xp 00000000 08:02 1182988  /usr/lib/libc-2.19.so
...
```

My question is: what is the third mapping from /usr/bin/cat? Based on readelf -l /usr/bin/cat, there's a loadable segment of 0x1f8 bytes that should map at 0x400000, and there's a loadable segment of 0xae10 bytes at 0x60ae10. Those two pieces of the file correspond to the 00400000-0040b000 mapping and the 0060a000-0060b000 mapping. But the third mapping, which claims to be at a file offset of 0xb000 bytes, doesn't seem to correspond to any Elf64_Phdr. In fact, the ELF header only has 2 PT_LOAD segments.

I read through fs/binfmt_elf.c in the kernel 3.13.2 source code, and I don't see that the kernel maps in anything other than PT_LOAD segments. If I run strace -o trace.out /usr/bin/cat /proc/self/maps, I don't see any mmap() calls that would map in a piece of /usr/bin/cat, so that third piece is mapped in by the kernel.

I ran the same command (cat /proc/self/maps) on a RHEL server that was running kernel 2.6.18 + RH patches. That only shows 2 pieces of /usr/bin/cat mapped into memory, so this might be new with kernel 3.x.
I finally figured this out. The kernel does map only two segments. The third piece is a portion of one of the two loaded by the kernel. The runtime linker (the program named in the PT_INTERP program header, which is /usr/lib/ld-2.24.so for me right now) changes the permissions on the mappings using mprotect(), so that there are read/write global variables, read-only global variables, and a read/execute text segment. You can see this happen using strace, but it's easy to miss, as it's only a single mprotect() call. It wasn't a kernel change that caused this; it was a GNU libc change.
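The per-file piece count is easy to pull out of a saved maps listing mechanically; a small sketch over the excerpt from the question:

```shell
# Count how many mapped pieces each file-backed region contributes.
cat > maps.sample <<'EOF'
00400000-0040b000 r-xp 00000000 08:02 1186756 /usr/bin/cat
0060a000-0060b000 r--p 0000a000 08:02 1186756 /usr/bin/cat
0060b000-0060c000 rw-p 0000b000 08:02 1186756 /usr/bin/cat
00d6c000-00d8d000 rw-p 00000000 00:00 0 [heap]
EOF
awk '$NF ~ /^\// { n[$NF]++ } END { for (f in n) print f, n[f] }' maps.sample
```

Running the same one-liner against a live /proc/&lt;pid&gt;/maps shows how many pieces ld.so has split each object into after its mprotect() pass.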
/proc/self/maps - 3rd mapped piece of file?
1,404,399,176,000
I'm using RHEL 6. I'm unable to play a video using Movie Player. It shows an error: "Movie Player requires additional plugins to decode this file. The following plugins are required: MPEG-4 AAC decoder and H.264 decoder." But RHEL is asking for a subscription. How do I solve this problem, or can you suggest alternative software to install?
I solved the problem by installing the RPM Fusion repo and then yum install gstreamer-ffmpeg
Movie Player requires additional plugins to decode this file
1,404,399,176,000
See "USB driver bug exposed as 'Linux plug&pwn'", or this link.

Two choices [GNOME, Fedora 14]:

1. Use the gnome-screensaver
2. Use the "switch user" function [GNOME menu -> log out -> switch user]

So the question is: which one is the safer method to lock the screen if a user leaves the PC? Is it true that using method [2] is safer?

The way I see it, gnome-screensaver is just a "process"; it could be killed. But if you use the log out/switch user function, it's "something else". Using the "switch user" function, could there be a problem like with gnome-screensaver? Could someone "kill a process" and presto, the lock is removed? Could the GDM [??] "login window process" get killed and the "lock" get owned?

If method [2] is safer, then how can I put an icon on the GNOME panel to launch the "switch user" action with one click?
Well, your first link is about kernel-mode arbitrary code execution; there is not much you can do against that. Logging out won't help. grsecurity and PaX could prevent this, but I'm not sure. They surely protect against introducing new executable code, but I can't find any evidence that they randomize where the kernel code is located, which means an exploit could use the code already in executable memory to perform arbitrary operations (a method known as return-oriented programming). Since this overflow happens on the heap, compiling the kernel with -fstack-protector-all won't help. Keeping the kernel up to date and people with pendrives away seems to be your best bet.

The second method is the result of a badly written screensaver, which means logging out prevents that particular bug. Even if the attacker kills GDM, he will not get in. Try killing it yourself from SSH: you get a black screen or a text-mode console. Besides, AFAIK GDM runs as root (like login), so the attacker would need root privileges to kill it.

Switching users doesn't have this effect. When you switch user, the screen is locked with the screensaver and GDM is started on the next virtual terminal. You can press Ctrl+Alt+F7 to get back to the buggy screensaver.
Is locking the screen safe?
1,404,399,176,000
I've been building LFS/BLFS for about a month now, with multiple failures and almost no successes, and I've just been informed that there exist Xorg-like window systems that are incredibly tiny; Xorg's LFS build is over 200MB of just source packages. I Googled around the web, but the Wikipedia article on TinyX pointed me to a nonexistent home page for a nice Xorg clone.

I'm looking to build a DSL-like distro (truthfully, it's a faster clone of ChromeOS), and I've got everything ready except an X server. What I was looking for was the following:

- Something that's reasonably small, as I was hoping to get my distro down to 50MB when compressed.
- Something that is fairly compatible with the normal X server (I don't know what I'm talking about, but I was hoping for something that works with any X application).
- Something that will work fully (no hiccups!) with Openbox or Fluxbox (preferably Openbox, as I've almost made my theme for it).
- Something that works with Plymouth, as an epic boot screen makes a bad operating system look good in the eyes of simple users.

Also, as a side question, how do I package my final build? I've built a small rendering system which I wish to distribute, but I can't figure out how to make an ISO out of it, like Ubuntu or DSL.
XFree86 (http://www.xfree86.org/) includes "tiny" X servers in its build. I believe they are video-card-specific, in that there's an MGA server, an ATI server, etc. No loadable modules. I built XFree86 from source a couple of years ago (under Slackware 3.2!) but I don't think I tried the "tiny" servers to see if they worked. The rest of the compile worked fine.

I tried XFree86 under a more modern (2.6.35, I think) Linux kernel and distro this summer, and it would not compile without significant source mods, some of which weren't at all clear to me how to do. So, I can't say whether XFree86 would meet your needs or not.
What is the most compatible tiny X server?
1,404,399,176,000
I am trying to understand the difference in behaviour between FreeBSD ACLs and Linux ACLs, in particular the inheritance mechanism for default ACLs. I used the following on both Debian 9.6 and FreeBSD 12:

```shell
$ cat test_acl.sh
#!/bin/sh
set -xe
mkdir storage
setfacl -d -m u::rwx,g::rwx,o::-,m::rwx storage
touch outside
cd storage
touch inside
cd ..
ls -ld outside storage storage/inside
getfacl -d storage
getfacl storage
getfacl outside
getfacl storage/inside
umask
```

I get the following output from Debian 9.6:

```
$ ./test_acl.sh
+ mkdir storage
+ setfacl -d -m u::rwx,g::rwx,o::-,m::rwx storage
+ touch outside
+ cd storage
+ touch inside
+ cd ..
+ ls -ld outside storage storage/inside
-rw-r--r--  1 aaa aaa    0 Dec 28 11:16 outside
drwxr-xr-x+ 2 aaa aaa 4096 Dec 28 11:16 storage
-rw-rw----+ 1 aaa aaa    0 Dec 28 11:16 storage/inside
+ getfacl -d storage
# file: storage
# owner: aaa
# group: aaa
user::rwx
group::rwx
mask::rwx
other::---
+ getfacl storage
# file: storage
# owner: aaa
# group: aaa
user::rwx
group::r-x
other::r-x
default:user::rwx
default:group::rwx
default:mask::rwx
default:other::---
+ getfacl outside
# file: outside
# owner: aaa
# group: aaa
user::rw-
group::r--
other::r--
+ getfacl storage/inside
# file: storage/inside
# owner: aaa
# group: aaa
user::rw-
group::rwx    #effective:rw-
mask::rw-
other::---
+ umask
0022
```

Notice that the outside and inside files have different permissions. In particular, the outside file has -rw-r--r--, which is the default for this user, and the inside file has -rw-rw----, respecting the default ACLs I assigned to the storage directory.

The output of the same script on FreeBSD 12:

```
$ ./test_acl.sh
+ mkdir storage
+ setfacl -d -m u::rwx,g::rwx,o::-,m::rwx storage
+ touch outside
+ cd storage
+ touch inside
+ cd ..
+ ls -ld outside storage storage/inside
-rw-r--r--  1 aaa aaa   0 Dec 28 03:16 outside
drwxr-xr-x  2 aaa aaa 512 Dec 28 03:16 storage
-rw-r-----+ 1 aaa aaa   0 Dec 28 03:16 storage/inside
+ getfacl -d storage
# file: storage
# owner: aaa
# group: aaa
user::rwx
group::rwx
mask::rwx
other::---
+ getfacl storage
# file: storage
# owner: aaa
# group: aaa
user::rwx
group::r-x
other::r-x
+ getfacl outside
# file: outside
# owner: aaa
# group: aaa
user::rw-
group::r--
other::r--
+ getfacl storage/inside
# file: storage/inside
# owner: aaa
# group: aaa
user::rw-
group::rwx    # effective: r--
mask::r--
other::---
+ umask
0022
```

(Note Debian's getfacl will also show the default ACLs even when not using -d, whereas FreeBSD's does not, but I don't think the actual ACLs for storage are different.)

Here, the outside and inside files also have different permissions, but the inside file does not have the group write permission that the Debian version does, probably because the mask in Debian retained the w whereas the mask in FreeBSD lost the w.

Why did FreeBSD lose the w mask but Debian retained it?
In short, I'd say (assume) they're applying the umask differently. Note that 0022 masks out exactly the group and other write bits, which matches the write permission FreeBSD dropped. You can change the umask to remove the write prohibition and check the result. Citing the Solaris (SunOS) manual, since that seems closely related: "… The umask(1) will not be applied if the directory contains default ACL entries. …" Debian behaves like Solaris here; FreeBSD apparently applies the umask even when default ACL entries exist.
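A quick way to see the umask mechanism on its own (a minimal sketch that leaves ACLs out entirely, so it shows what FreeBSD's result is consistent with rather than reproducing it):

```shell
# touch asks for mode 666; the umask strips bits from that request.
( umask 0022; touch f_umask_022 )   # group/other write stripped -> 644
( umask 0002; touch f_umask_002 )   # only other write stripped  -> 664
stat -c '%a' f_umask_022 f_umask_002
```

With umask 0022 in effect, a newly created file loses exactly the group write bit that the FreeBSD `inside` file is missing.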
Why did FreeBSD lose the w mask but Debian retained it?
1,301,414,593,000
I've compiled and linked a program in C++, and now I have foobar.out. I want to be able to put it into the bin directory and use it like system-wide commands, e.g. ssh, echo, bash, cd... How can I achieve that?
There are two ways of allowing you to run the binary without specifying its path (not including creating aliases or shell functions to execute it with an absolute path for you):

1. Copy it to a directory that is in your $PATH.
2. Add the directory where it is to your $PATH.

To copy the file to a directory in your path, for example /usr/local/bin (where locally managed software should go), you must have superuser privileges, which usually means using sudo:

```shell
$ sudo cp -i mybinary /usr/local/bin
```

Care must be taken not to overwrite any existing files in the target directory (this is why I added -i here).

To add a directory to your $PATH, add a line to your ~/.bashrc file (if you're using bash):

```shell
PATH="$HOME/bin:$PATH"
```

... if the binary is in $HOME/bin. This has the advantage that you don't need superuser privileges or to change/add anything in the base system on your machine. You just need to move the binary into the bin directory of your home directory.

Note that changes to .bashrc take effect the next time the file is sourced, which happens if you open a new terminal, log out and in again, or run source ~/.bashrc manually.
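The $HOME/bin route can be exercised end to end with a stand-in for the binary (the `foobar` script below is a placeholder, not the asker's actual foobar.out):

```shell
mkdir -p "$HOME/bin"
# stand-in for the compiled program
printf '#!/bin/sh\necho hello from foobar\n' > "$HOME/bin/foobar"
chmod +x "$HOME/bin/foobar"
export PATH="$HOME/bin:$PATH"   # what the .bashrc line does for new shells
foobar                          # found without any path prefix now
command -v foobar               # shows where it was resolved from
```

The same two steps (drop the file in $HOME/bin, make sure $HOME/bin is on $PATH) are all a real compiled binary needs.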
Creating a program in bin
1,301,414,593,000
I always boot my GNU/Linux laptop in console mode, but sometimes I need to bring up the GUI, and that always requires entering the root password. So I wrote the following script, gogui.sh, and put it in /usr/bin:

```shell
#!/bin/bash
echo "mypassword" | sudo service lightdm start
```

This is a really stupid idea, as anyone who reads the file can easily see my password. Is there an alternative to this?
Passing a password to sudo in a script is utterly pointless. Instead, add a sudo rule adding the particular command you want to run with the NOPASSWD tag. Take care that the command-specific NOPASSWD rule must come after any general rule.

```
saeid ALL = (ALL:ALL) ALL
saeid ALL = (root) NOPASSWD: service lightdm start
```

But this is probably not useful anyway. lightdm start starts a login prompt, but you only need that if you want to let other users log in graphically. You don't need it if all you want is to start a GUI session. Instead, call startx to start a GUI session from your text-mode session. This does not require any extra privilege. You may need to explicitly specify your window manager or desktop environment, as startx might not pick up the same default session type that lightdm uses.

```shell
startx -- gnome-session
```
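One practical caveat (an addition of mine, not part of the answer above): sudoers wants the command spelled with an absolute path, and putting the rule in a drop-in file keeps it out of the main sudoers file. A hedged sketch, assuming service lives at /usr/sbin/service on your system:

```
# /etc/sudoers.d/lightdm -- edit with "visudo -f /etc/sudoers.d/lightdm"
# so the syntax is checked before it is installed
saeid ALL = (root) NOPASSWD: /usr/sbin/service lightdm start
```

With that in place, `sudo service lightdm start` runs without a password prompt, and nothing is stored in plain text.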
Passing root password safely in a shell script
1,301,414,593,000
Once in a while (often a long while), I execute a command that completely screws up a Linux machine. Most recently, I accidentally re-mounted the root partition (thinking it was the new USB drive I had just formatted), and then proceeded to recursively chown the partition to myself (again, just trying to grant myself user access to the USB drive). As soon as I realized what I had done (in mid-progress), I aborted it, but the damage was done. Many core programs were no longer root-owned, so the machine was essentially in a zombified state. Some user functions (ssh, rsync) still worked, but administration-level stuff was totally locked out. I couldn't mount, umount, reattach to screen sessions, reboot, etc.

If the machine were in the living room here with me, "repairing" it (reinstalling) would have been trivially easy. But it isn't. It is in my brother's house. He's not big on me walking him through repairs/reinstalls, and I understand that. So, I'm going over in a few days to fix the damage I did (and hopefully install something more admin-screwup-resistant).

I say all that to ask the question: what are the recommended ways of hardening an install against admin ham-fistedness?

Things not considered, or considered and dropped quickly:

- Harden the administrator to not execute stupid commands: a great idea, but it won't work, because as a human I will occasionally do things that I realize after the fact were a bad idea. What I'm looking to do is out-think myself in advance, so that when I do something stupid, the machine will refuse, and I'll realize "Oh crap! That could have been Very Bad (TM)! Let's not do that again."

Things I've considered:

1. Mount the root partition read-only: would protect the root from changes, which might have negative effects if parts are expected to be writeable and aren't. It also wouldn't necessarily protect the partition from being mounted again somewhere else as read-write.
2. Use a compressed read-only root image of some sort with a union-like writeable layer above it, so no changes are ever really made to root and a reboot clears any screw-ups: this would be OK/good if no changes ever need to be made to root, and maybe /etc could be reloaded/populated from a persistent file somewhere else.
3. Use btrfs with regular (daily, maybe) snapshots, so that if an error is made, recovery is easier: might still be sub-optimal, as it would require direct user intervention, and I don't know that I could walk someone else through the changes to roll back the oops.
4. Use a more "live"/"embedded" Linux/BSD distro designed with stability/predictability/security in mind instead of a more generic distro.

As things stand now, I'm likely to use option 4 to install a somewhat more limited system than the full Debian install I had been using. But as just a file server and torrent client, it should work fine, and as a remote machine, defending the machine from myself is a pretty big asset.
Run your installation in a virtual machine. Take a snapshot of a known good state. Take snapshots before doing anything risky. Do almost nothing in the host environment. If you screw up, connect to the host environment and restore the snapshot.
What are the recommended ways of defending a remote *nix install from a hamfisted admin?
1,301,414,593,000
I installed Emacs using sudo apt-get install emacs. The problem is that when I launch Emacs from the command line (e.g. emacs main.c) it opens Emacs with a GUI. I prefer the command line version which runs in the terminal emulator. How can I install (or change some default) so that Emacs will open in the command line instead of a GUI?
Installing emacs-nox instead of emacs should do the trick.
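A related tip (mine, not part of the accepted answer): even with the GUI build installed, emacs accepts a flag that forces it into the terminal, so an alias gives much the same day-to-day effect as emacs-nox:

```shell
# -nw ("no window") keeps emacs in the terminal it was started from,
# so `emacs main.c` no longer pops up a GUI frame.
alias emacs='emacs -nw'
# show the definition to confirm the alias took
alias emacs
```

Put the alias line in ~/.bashrc (or your shell's equivalent) to make it permanent.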
How to install/default to the command line version of Emacs?
1,301,414,593,000
I had swap from a swapfile working for quite some time, but for some reason it stopped working.

```shell
sudo fallocate -l 4G /home/.swap/swapfile
sudo chmod 600 /home/.swap/swapfile
sudo mkswap /home/.swap/swapfile
```

```
# /etc/fstab
/home/.swap/swapfile swap swap defaults 0 0
```

```
sudo swapon -a
swapon: /home/.swap/swapfile: swapon failed: Invalid argument
```

I'm running the newest version of Fedora, so is it possible that something changed with an update? What else could be the reason?
Please try replacing

```shell
fallocate -l 4G /home/.swap/swapfile
```

with

```shell
dd if=/dev/zero of=/home/.swap/swapfile bs=1M count=4096
```
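Some background on why this tends to help (my addition, not something the short answer above states): swapon refuses swap files with holes or, on some filesystems, fallocate's unwritten extents, while dd forces every block to actually be written. A small-scale sketch of the same recipe, shrunk so it is safe to run anywhere:

```shell
# 64 KiB stand-in for the real 4 GiB swap file
dd if=/dev/zero of=demo.swap bs=1024 count=64 2>/dev/null
chmod 600 demo.swap
# dd-written files have real blocks behind them: the apparent size and
# the on-disk usage agree, which is what swapon needs
ls -l demo.swap
du -k demo.swap
# the real sequence would continue (as root) with:
#   mkswap /home/.swap/swapfile && swapon -a
```

If the recreated 4 GiB file still fails, the filesystem /home lives on (e.g. btrfs on older kernels) may simply not support swap files.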
Swapfile Swapon invalid argument
1,301,414,593,000
With a bash script, can I read the MAC address of my eth0 interface and print it to a file?
ifconfig will output information about your interfaces, including the MAC address:

```
$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.0.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:289748093 errors:0 dropped:0 overruns:0 frame:0
          TX packets:232688719 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3264330708 (3.0 GiB)  TX bytes:4137701627 (3.8 GiB)
          Interrupt:17
```

The HWaddr field is what you want, so you can use awk to filter it:

```shell
$ ifconfig eth0 | awk '/HWaddr/ {print $NF}'
00:11:22:33:44:55
```

Redirect that into a file:

```shell
$ ifconfig eth0 | awk '/HWaddr/ {print $NF}' > filename
```
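On systems where ifconfig is missing or prints the newer iproute2-style layout (in which case the HWaddr match finds nothing), sysfs exposes the address directly. A sketch, demonstrated on lo since that interface exists on virtually every Linux machine:

```shell
# Every interface's MAC address lives in /sys/class/net/<iface>/address.
iface=lo            # substitute eth0 on the machine in question
cat "/sys/class/net/$iface/address" > mac.txt
cat mac.txt         # lo's address is all zeros: 00:00:00:00:00:00
```

This avoids parsing entirely, so it keeps working regardless of which net-tools or iproute2 version is installed.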
Print mac address to file