While following instructions, I loaded a module which creates an input device "Monitor of Null Output" and an output device "Null Output" using this command:

```shell
pactl load-module module-null-sink sink_name=rmd
```

This is not what I wanted. How do I remove these two devices? `pactl list` shows:

```
Sink #2
	State: IDLE
	Name: rmd
	Description: Null Output
	Driver: module-null-sink.c
	Sample Specification: s16le 2ch 48000Hz
	Channel Map: front-left,front-right
	Owner Module: 24
	Mute: no
	Volume: front-left: 65536 / 100% / 0.00 dB, front-right: 65536 / 100% / 0.00 dB
	        balance 0.00
	Base Volume: 65536 / 100% / 0.00 dB
	Monitor Source: rmd.monitor
	Latency: 1569 usec, configured 40000 usec
	Flags: DECIBEL_VOLUME LATENCY
	Properties:
		device.description = "Null Output"
		device.class = "abstract"
		device.icon_name = "audio-card"
	Formats:
		pcm
```

I tried

```shell
pactl unload-module rmd
pactl unload-module sink_name=rmd
pactl unload-module "Null Output"
```

all of which respond with:

```
Failed to unload module: Module Null Output not loaded
```

etc. I can run `pactl unload-module module-null-sink`, but this removes all devices loaded with that module. How do I remove the device, or unload the module which created the device specified above?
`pactl unload-module` gives a hint:

```
You have to specify a module index or name
```

as does the manpage:

```
unload-module ID|NAME
    Unload the module instance identified by the specified numeric
    index or unload all modules by the specified name.
```

The ID is shown in this line of the `pactl list` output:

```
Owner Module: 24
```

Just run

```shell
pactl unload-module 24
```

to remove the respective devices.
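If you would rather not read the index out of `pactl list` by eye, the Owner Module index can be extracted from pactl's short module listing. A sketch, shown here against a captured sample line so it runs anywhere; the tab-separated index / module-name / arguments format is an assumption about your pactl version:

```shell
# Sample line as printed by:  pactl list short modules
sample='24	module-null-sink	sink_name=rmd'

# The arguments column contains sink_name=rmd, so the index is field 1.
mod=$(printf '%s\n' "$sample" | awk '/sink_name=rmd/ { print $1 }')
echo "$mod"    # -> 24

# On a live system the whole thing is one line:
#   pactl unload-module "$(pactl list short modules | awk '/sink_name=rmd/ { print $1 }')"
```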
Remove PulseAudio device
I am using a redis Database and would like to explore the contents of the RAM the application is using. I feel the explanation of why I want to do this will make more sense then the question I would ask. Redis is a simple key value store that stores binary data. I think it would be a good place to explore things like encoding and it would be interesting to me to do things like skimming over the RAM looking for binary sets of data, doing things like looking for simple patterns; maybe explore the idea of writing a baby query language that searched in RAM. I had gotten this idea after reading the chapter in SICP about query languages. Any thoughts on where to start? Initially, I want to ask "Give me the address space this application is running in, please" to the system.
You can use gdb to access the memory of a process. Also, you should have a look at the "/proc" filesystem - it contains pseudo files for every process; some of them may contain interesting information
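A minimal sketch of the /proc route. The PID would normally come from your redis-server process (e.g. `pidof redis-server`); here the shell inspects itself so the example is self-contained:

```shell
pid=$$    # placeholder: substitute the PID of redis-server

# Each line of maps describes one region: address range, permissions,
# offset, device, inode, and the backing file (or [heap]/[stack]).
grep -E '\[(heap|stack)\]' /proc/"$pid"/maps

# The bytes themselves can then be read from /proc/<pid>/mem at those
# offsets (needs ptrace permission), or interactively with gdb:
#   gdb -p <pid>   then   dump memory out.bin 0xSTART 0xEND
```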
Exploring RAM contents
For example, on OSX, it's even less than 512k. Is there any recommended size, having in mind, that the app does not use recursion and does not allocate a lot of stack variables? I know the question is too broad and it highly depends on the usage, but still wanted to ask, as I was wondering if there's some hidden/internal/system reason behind this huge number. I was wondering, as I intend to change the stack size to 512 KiB in my app - this still sounds like a huge number for this, but it's much smaller than 8MiB - and will lead to significantly decreased virtual memory of the process, as I have a lot of threads (I/O). I also know this doesn't really hurt, well explained here: Default stack size for pthreads
As others have said, and as is mentioned in the link you provide in your question, having an 8MiB stack doesn’t hurt anything (apart from consuming address space — on a 64-bit system that won’t matter).

Linux has used 8MiB stacks for a very long time; the change was introduced in version 1.3.7 of the kernel, in July 1995. Back then it was presented as introducing a limit; previously there wasn’t one:

> Limit the stack by to some sane default: root can always increase this limit if needed.. 8MB seems reasonable.

On Linux, the stack limit also affects the size of program arguments and the environment, which are limited to one quarter of the stack limit; the kernel enforces a minimum of 32 pages for the arguments and environment.

For threads, if the stack limit (RLIMIT_STACK) is unlimited, pthread_create applies its own limits to new threads’ stacks — and on most architectures, that’s less than 8MiB.
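You can inspect and lower the limit yourself with the shell's ulimit builtin; a sketch (the 8192 figure is typical, not guaranteed, and the lowered limit only affects processes started from this shell):

```shell
ulimit -s          # soft stack limit in KiB; commonly 8192 on Linux
ulimit -Hs         # hard limit, often "unlimited"

ulimit -S -s 512   # drop the soft limit to 512 KiB for child processes
ulimit -s          # now reports 512

# Note: individual threads can still override this via
# pthread_attr_setstacksize() before pthread_create().
```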
Why on modern Linux, the default stack size is so huge - 8MB (even 10 on some distributions)
I am unsure if my wifi card supports 802.11ac. How can I find out this information?
If you run `iw list`, look for the lines specifying VHT:

```
VHT Capabilities (0x038071a0):
	Max MPDU length: 3895
	Supported Channel Width: neither 160 nor 80+80
	short GI (80 MHz)
	TX STBC
	SU Beamformee
VHT RX MCS set:
	1 streams: MCS 0-9
	2 streams: MCS 0-9
	3 streams: not supported
	4 streams: not supported
	5 streams: not supported
	6 streams: not supported
	7 streams: not supported
	8 streams: not supported
VHT RX highest supported: 0 Mbps
VHT TX MCS set:
	1 streams: MCS 0-9
	2 streams: MCS 0-9
	3 streams: not supported
	4 streams: not supported
	5 streams: not supported
	6 streams: not supported
	7 streams: not supported
	8 streams: not supported
VHT TX highest supported: 0 Mbps
```

This section will be missing entirely if your card does not support 802.11ac. Hence, on a card that doesn't support 802.11ac:

```shell
$ iw list | grep VHT
```

(no output). On a card that does support 802.11ac:

```shell
$ iw list | grep VHT
		VHT Capabilities (0x038071a0):
		VHT RX MCS set:
		VHT RX highest supported: 0 Mbps
		VHT TX MCS set:
		VHT TX highest supported: 0 Mbps
```
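Since the presence of the VHT block is the whole test, it reduces to a single grep. Shown here against a captured line of iw output so it runs anywhere; the assumption is that "VHT Capabilities" only appears for 802.11ac-capable cards:

```shell
sample='VHT Capabilities (0x038071a0):'

if printf '%s\n' "$sample" | grep -q 'VHT Capabilities'; then
    echo "802.11ac supported"
else
    echo "802.11ac not supported"
fi

# On a live system:
#   iw list | grep -q 'VHT Capabilities' && echo "802.11ac supported"
```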
How can I find if my wifi card supports 802.11ac?
I was experimenting a bit and noticed something strange: setting the setuid bit on a copy of bash located at /usr/bin/bash-test seemed to have no effect. When I ran an instance of bash-test, my home directory was not set to /root and when I ran the whoami command from bash-test, my username was not reported as being root, suggesting that bash-test was not running as root. However, if I set the setuid bit on whoami, I was reported as being root in any shell, as expected. I tried setting the setuid bit on /usr/bin/bash as well and observed the same behavior. Why is bash not running as root when I set the setuid bit on it? Could selinux have something to do with this?
The explanation is kind of annoying: bash itself is the reason.

strace is our friend (it must be SUID root itself for this to work):

```
getuid()  = 1000
getgid()  = 1001
geteuid() = 0
getegid() = 1001
setuid(1000) = 0
setgid(1001) = 0
```

bash detects that it has been started SUID root (UID != EUID) and uses its root power to throw this power away, resetting EUID to UID. And later even FSUID, just to be sure:

```
getuid()       = 1000
setfsuid(1000) = 1000
getgid()       = 1001
setfsgid(1001) = 1001
```

In the end: no chance. You have to start bash with UID root (i.e. sudo).

Edit 1

The man page says this:

> If the shell is started with the effective user (group) id not equal to the real user (group) id, and the -p option is not supplied, no startup files are read, shell functions are not inherited from the environment, the SHELLOPTS, BASHOPTS, CDPATH, and GLOBIGNORE variables, if they appear in the environment, are ignored, and the effective user id is set to the real user id. If the -p option is supplied at invocation, the startup behavior is the same, but the effective user id is not reset.

But this does not work for me. -p isn't even mentioned among the startup options. I also tried --posix; that didn't work either.
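The real/effective UID distinction that bash acts on can be seen with id. A small demonstration sketch only: in a normal shell the two match, while under a setuid-root copy they would differ until bash resets them (do not actually setuid a shell on a real system):

```shell
id -ru   # real UID
id -u    # effective UID; equal here, since this is not a setuid process

# Under a setuid-root bash, geteuid() would return 0 while getuid() stayed
# yours, and bash would immediately call setuid(getuid()) to drop root.
```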
Setuid bit seems to have no effect on bash
I have compiled a custom Linux kernel and use BusyBox for userspace. BusyBox init does not support runlevels. When the kernel boots, it first executes init, which looks for the specified runlevel in /etc/inittab. BusyBox init works just fine without /etc/inittab; when no inittab is found, it has the following default behavior:

```
::sysinit:/etc/init.d/rcS
```

This part is very clear to me, but I would like to know how to manage daemons that start up networking, create serial ports, or start Java processes. I have looked at the scripts that reside in /etc/init.d/, but I don't understand how to manage them. I am looking for a good tutorial or solution so that I can control these services myself, without an automated tool like buildroot. I want to understand how these scripts work and how to create devices in /dev (right now I only have console and ttyAM0).
For buildroot, all your scripts must be placed in $path_to_buildroot/output/target/etc/init.d before building the image. In my case this directory contains rcS and a few scripts named S[0-99]script_name, so you can create your own start/stop scripts.

rcS:

```shell
#!/bin/sh

# Start all init scripts in /etc/init.d
# executing them in numerical order.
#
for i in /etc/init.d/S??* ; do

    # Ignore dangling symlinks (if any).
    [ ! -f "$i" ] && continue

    case "$i" in
        *.sh)
            # Source shell script for speed.
            (
                trap - INT QUIT TSTP
                set start
                . $i
            )
            ;;
        *)
            # No sh extension, so fork subprocess.
            $i start
            ;;
    esac
done
```

and, for example, S40network:

```shell
#!/bin/sh
#
# Start the network....
#
case "$1" in
    start)
        echo "Starting network..."
        /sbin/ifup -a
        ;;
    stop)
        echo -n "Stopping network..."
        /sbin/ifdown -a
        ;;
    restart|reload)
        "$0" stop
        "$0" start
        ;;
    *)
        echo $"Usage: $0 {start|stop|restart}"
        exit 1
esac

exit $?
```
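All of these S-scripts share the same start/stop dispatch shape, and you can test that logic on your build host before copying a script to the target. A self-contained sketch (the service name and the echoed actions are placeholders):

```shell
# Placeholder service implemented as a function so it runs anywhere;
# on the target this would be the body of an /etc/init.d/SNNname script.
svc() {
    case "$1" in
        start)   echo "Starting example..." ;;
        stop)    echo "Stopping example..." ;;
        restart) svc stop; svc start ;;
        *)       echo "Usage: svc {start|stop|restart}"; return 1 ;;
    esac
}

svc restart   # prints the stop line, then the start line
```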
Create and control start up scripts in BusyBox
Consider the shared object dependencies of /bin/bash, which include /lib64/ld-linux-x86-64.so.2 (the dynamic linker/loader):

```shell
$ ldd /bin/bash
        linux-vdso.so.1 (0x00007fffd0887000)
        libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007f57a04e3000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f57a04de000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f57a031d000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f57a0652000)
```

Inspecting /lib64/ld-linux-x86-64.so.2 shows that it is a symlink to /lib/x86_64-linux-gnu/ld-2.28.so:

```shell
$ ls -la /lib64/ld-linux-x86-64.so.2
lrwxrwxrwx 1 root root 32 May  1 19:24 /lib64/ld-linux-x86-64.so.2 -> /lib/x86_64-linux-gnu/ld-2.28.so
```

Furthermore, file reports /lib/x86_64-linux-gnu/ld-2.28.so to itself be dynamically linked:

```shell
$ file -L /lib64/ld-linux-x86-64.so.2
/lib64/ld-linux-x86-64.so.2: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=f25dfd7b95be4ba386fd71080accae8c0732b711, stripped
```

I'd like to know:

1. How can the dynamic linker/loader (/lib64/ld-linux-x86-64.so.2) itself be dynamically linked? Does it link itself at runtime?
2. /lib/x86_64-linux-gnu/ld-2.28.so is documented to handle a.out binaries (man ld.so), but /bin/bash is an ELF executable?

> The program ld.so handles a.out binaries, a format used long ago; ld-linux.so* (/lib/ld-linux.so.1 for libc5, /lib/ld-linux.so.2 for glibc2) handles ELF, which everybody has been using for years now.
Yes, it links itself when it initialises.

Technically the dynamic linker doesn’t need object resolution and relocation for itself, since it’s fully resolved as-is, but it does define symbols and it has to take care of those when resolving the binary it’s “interpreting”, and those symbols are updated to point to their implementations in the loaded libraries. In particular, this affects malloc — the linker has a minimal version built-in, with the corresponding symbol, but that’s replaced by the C library’s version once it’s loaded and relocated (or even by an interposed version if there is one), with some care taken to ensure this doesn’t happen at a point where it might break the linker. The gory details are in rtld.c, in the dl_main function.

Note however that ld.so has no external dependencies. You can see the symbols involved with nm -D; none of them are undefined.

The manpage only refers to entries directly under /lib, i.e. /lib/ld.so (the libc 5 dynamic linker, which supports a.out) and /lib*/ld-linux*.so* (the libc 6 dynamic linker, which supports ELF). The manpage is very specific, and ld.so is not ld-2.28.so. The dynamic linker found on the vast majority of current systems doesn’t include a.out support.

file and ldd report different things for the dynamic linker because they have different definitions of what constitutes a statically-linked binary. For ldd, a binary is statically linked if it has no DT_NEEDED symbols, i.e. no undefined symbols. For file, an ELF binary is statically linked if it doesn’t have a PT_DYNAMIC section (this will change in the release of file following 5.37; it now uses the presence of a PT_INTERP section as the indicator of a dynamically-linked binary, which matches the comment in the code). The GNU C library dynamic linker doesn’t have any DT_NEEDED symbols, but it does have a PT_DYNAMIC section (since it is technically a shared library).
As a result, ldd (which uses the dynamic linker) indicates that it’s statically linked, but file indicates that it’s dynamically linked. It doesn’t have a PT_INTERP section, so the next release of file will instead indicate that it’s statically linked.

```shell
$ ldd /lib64/ld-linux-x86-64.so.2
        statically linked
$ file $(readlink /lib64/ld-linux-x86-64.so.2)
/lib/x86_64-linux-gnu/ld-2.28.so: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=f25dfd7b95be4ba386fd71080accae8c0732b711, stripped
```

(with file 5.35)

```shell
$ file $(readlink /lib64/ld-linux-x86-64.so.2)
/lib/x86_64-linux-gnu/ld-2.28.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), statically linked, BuildID[sha1]=f25dfd7b95be4ba386fd71080accae8c0732b711, stripped
```

(with the currently in-development version of file).
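The two tools' criteria can be reproduced directly with binutils. A sketch, assuming an x86-64 glibc system where the loader lives at the path from the question and readelf/nm are installed:

```shell
ld=/lib64/ld-linux-x86-64.so.2

# ldd's criterion: DT_NEEDED entries in the dynamic section (there are none)
readelf -d "$ld" | grep NEEDED || echo "no DT_NEEDED entries"

# file's criteria: a DYNAMIC program header is present, INTERP is absent
readelf -l "$ld" | grep -E 'DYNAMIC|INTERP'

# and the dynamic symbol table has no undefined ("U") entries
nm -D "$ld" | awk '$1 == "U"'
```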
How can the dynamic linker/loader itself be dynamically linked as reported by `file`?
Recently saw a question that sparked this thought. Couldn't really find an answer here or via the Google machine. Basically, I'm interested in knowing how the kernel I/O architecture is layered. For example, does kjournald dispatch to pdflush or the other way around? My assumption is that pdflush (being more generic to mass storage I/O) would sit at a lower level and trigger the SCSI/ATA/whatever commands necessary to actually perform the writes, and kjournald handles higher level filesystem data structures before writing. I could see it the other way around as well, though, with kjournald directly interfacing with the filesystem data structures and pdflush waking up every now and then to write dirty pagecache pages to the device through kjournald. It's also possible that the two don't interact at all for some other reason. Basically: I need some way to visualize (graph or just an explanation) the basic architecture used for dispatching I/O to mass storage within the Linux kernel.
Before we discuss the specifics regarding pdflush, kjournald, and kswapd, let's first get a little background on the context of what exactly we're talking about in terms of the Linux kernel.

The GNU/Linux architecture

The architecture of GNU/Linux can be thought of as 2 spaces:

- User
- Kernel

Between the User Space and Kernel Space sits the GNU C Library (glibc). This provides the system call interface that connects the kernel to the user-space applications.

The Kernel Space can be further subdivided into 3 levels:

- System Call Interface
- Architecture-Independent Kernel Code
- Architecture-Dependent Code

The System Call Interface, as its name implies, provides an interface between glibc and the kernel. The Architecture-Independent Kernel Code is comprised of logical units such as the VFS (Virtual File System) and the VMM (Virtual Memory Management). The Architecture-Dependent Code is the processor- and platform-specific code for a given hardware architecture.

Diagram of GNU/Linux Architecture [image not reproduced]

For the rest of this article, we'll be focusing our attention on the VFS and VMM logical units within the Kernel Space.

Subsystems of the GNU/Linux Kernel [image not reproduced]

VFS Subsystem

With a high-level concept of how the GNU/Linux kernel is structured, we can delve a little deeper into the VFS subsystem. This component is responsible for providing access to the various block storage devices which ultimately map down to a filesystem (ext3/ext4/etc.) on a physical device (HDD/etc.).

Diagram of VFS [image not reproduced]; the diagram shows how a write() from a user's process traverses the VFS and ultimately works its way down to the device driver, where it's written to the physical storage medium.

This is the first place where we encounter pdflush. This is a daemon which is responsible for flushing dirty data and metadata buffer blocks to the storage medium in the background.
The diagram doesn't show this, but there is another daemon, kjournald, which sits alongside pdflush, performing a similar task: writing dirty journal blocks to disk.

NOTE: Journal blocks are how filesystems like ext4 & JFS keep track of changes to the disk in a file, prior to those changes taking place.

The above details are discussed further in this paper.

Overview of write() steps

To provide a simple overview of the I/O subsystem operations, we'll use an example where the function write() is called by a User Space application.

1. A process requests to write a file through the write() system call.
2. The kernel updates the page cache mapped to the file.
3. A pdflush kernel thread takes care of flushing the page cache to disk.
4. The file system layer puts each block buffer together into a bio struct (refer to 1.4.3, “Block layer” on page 23) and submits a write request to the block device layer.
5. The block device layer gets requests from upper layers, performs an I/O elevator operation, and puts the requests into the I/O request queue.
6. A device driver such as SCSI or another device-specific driver takes care of the write operation.
7. The disk device firmware performs hardware operations like head seeks, rotation, and data transfer to the sector on the platter.

VMM Subsystem

Continuing our deeper dive, we can now look into the VMM subsystem. This component is responsible for maintaining consistency between main memory (RAM), swap, and the physical storage medium. The primary mechanism for maintaining consistency is bdflush. As pages of memory are deemed dirty, they need to be synchronized with the data on the storage medium. bdflush coordinates with the pdflush daemons to synchronize this data with the storage medium.

Diagram of VMM [image not reproduced]

Swap

When system memory becomes scarce or the kernel swap timer expires, the kswapd daemon will attempt to free up pages. So long as the number of free pages remains above free_pages_high, kswapd will do nothing.
However, if the number of free pages drops below that threshold, then kswapd will start the page reclaiming process. After kswapd has marked pages for relocation, bdflush will take care of synchronizing any outstanding changes to the storage medium, through the pdflush daemons.

References & Further Reading

- Conceptual Architecture of the Linux Kernel
- The Linux I/O Stack Diagram - ver. 0.1, 2012-03-06 - outlines the Linux I/O stack as of kernel 3.3
- Local file systems update - specifically slide #7
- Interactive Linux Kernel Map
- Understanding Virtual Memory In Red Hat Enterprise Linux 4
- Linux Performance and Tuning Guidelines - specifically pages 19-24
- Anatomy of the Linux kernel
- The Case for Semantic Aware Remote Replication
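The dirty-data flushing and writeback activity described above can be observed live on any Linux system via /proc/meminfo. A small sketch (the field names are stable across kernels; the values change constantly):

```shell
# Dirty:     pages modified in the page cache, not yet written back
# Writeback: pages currently being written to the storage device
grep -E '^(Dirty|Writeback):' /proc/meminfo
```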
How do pdflush, kjournald, swapd, etc interoperate?
I had to reboot in the middle of a large data import. I only have one mysql database, which has now been corrupted. How can I completely remove mysql and reinstall it? I've tried apt-get purge mysql-server, then removing /var/lib/mysql/* and reinstalling, but apt-get does not prompt me for a root name and password nor does it recreate the /var/lib/mysql files. How can I reinstall?
You should do the following:

```shell
apt-get purge mysql-server mysql-common mysql-client-<version>
rm -rf /var/lib/mysql
rm -rf /etc/mysql*
```

Then you can reinstall in full.
How can I completely reinstall mysql?
I was going through a tutorial on setting up a custom initramfs, where it states:

> The only thing that is missing is /init, the executable in the root of the initramfs that is executed by the kernel once it is loaded. Because sys-apps/busybox includes a fully functional shell, this means you can write your /init binary as a simple shell script (instead of making it a complicated application written in Assembler or C that you have to compile).

and gives an example of init as a shell script that starts with

```shell
#!/bin/busybox sh
```

So far, I was under the impression that init is the main process that is launched, and that all the other user-space processes are eventually children of init. However, in the given example, the first process is actually /bin/busybox sh, from which init is later spawned. Is this a correct interpretation? If I had, for example, an interpreter available at that point, could I write init as a Python script, etc.?
init is not "spawned" (as a child process), but rather exec'd, like this:

```shell
# Boot the real thing.
exec switch_root /mnt/root /sbin/init
```

exec replaces the entire process in place. The final init is still the first process (PID 1), even though it was preceded by those in the initramfs:

1. The initramfs /init, which is a BusyBox shell script with PID 1, execs to BusyBox switch_root (so now switch_root is PID 1); this program changes your mount points so /mnt/root will be the new /.
2. switch_root then again execs to /sbin/init of your real root filesystem; thereby it makes your real init system the first process with PID 1, which in turn may spawn any number of child processes.

Certainly it could just as well be done with a Python script, if you somehow managed to bake Python into your initramfs. Although if you don't plan to include busybox anyway, you would have to painstakingly reimplement some of its functionality (like switch_root, and everything else you would usually do with a simple command). However, it does not work on kernels that do not allow script binaries (CONFIG_BINFMT_SCRIPT=y); in such a case you'd have to start the interpreter directly and make it load your script somehow.
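The "replace in place, keep the PID" behaviour of exec, which is the key to how /init remains PID 1, can be demonstrated with any shell:

```shell
# Both lines print the same PID: the second shell replaced the first
# process image rather than being forked as a child.
sh -c 'echo "pid before exec: $$"; exec sh -c "echo \"pid after exec:  \$\$\""'
```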
Can the init process be a shell script in Linux?
I'm trying to adapt a tutorial into an easy to use script http://qt-project.org/wiki/RaspberryPi_Beginners_guide. I'm trying to reduce the need for sudo/root as much as possible (preferably, only for the final dd step). My questions are: Can I mount the ext4 file system without sudo/root privileges in any way? Can I mount it in a way so that users/groups on the FS are ignored in some way (so I can run make install without sudo)? The Filesystem is currently mounted loopback from an offset of a file (i.e. -o loop,offset=62914560). This file is the image that will be copied onto the SD card used to boot the Raspberry Pi.
You want libguestfs. You can use it via guestfish or guestmount, or use the library directly through its C interface or any of the many language bindings, like Python.
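A hypothetical guestmount session for the image from the question. This requires the libguestfs tools to be installed; "disk.img", the partition device, and the mount point are placeholders to adjust to your image layout:

```shell
mkdir -p ~/mnt

# Mount the second partition of the image read-write, without root:
guestmount -a disk.img -m /dev/sda2 --rw ~/mnt

# Install into it as an ordinary user, e.g.:
#   make install DESTDIR=~/mnt

guestunmount ~/mnt
```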
Mounting and modifying a loopback file without sudo/root, is it possible? [duplicate]
I ran the following to replace a term used in all files in the current working directory:

```shell
$ find . -type f -print0 | xargs -0 sed -i'.bup' -e's/Ms. Johnson/Mrs. Melbin/g'
```

This performed the word substitution, but it also created .bup files for files that never contained the Ms. Johnson string. How do I perform the substitution without creating all these unnecessary backups?
You could check the files' contents to make sure that a substitution will take place before sed operates on them:

```shell
find . \
    -type f \
    -exec grep -q 'Ms. Johnson' {} \; \
    -print0 |
    xargs -0 sed -i'.bup' -e's/Ms. Johnson/Mrs. Melbin/g'
```

If you want to be really clever about it, you can forgo using find at all:

```shell
grep -Z -l -r 'Ms. Johnson' . | xargs -0 sed -i'.bup' -e's/Ms. Johnson/Mrs. Melbin/g'
```
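An end-to-end demo of the grep-gated variant in a scratch directory, assuming GNU grep and sed. grep's -Z with -l prints matching file names NUL-delimited, which is why it pairs safely with xargs -0:

```shell
dir=$(mktemp -d)
cd "$dir"
printf 'Dear Ms. Johnson\n' > a.txt
printf 'unrelated text\n'   > b.txt

grep -Z -l -r 'Ms. Johnson' . | xargs -0 sed -i'.bup' -e 's/Ms. Johnson/Mrs. Melbin/g'

ls   # a.txt  a.txt.bup  b.txt  -- no b.txt.bup, since b.txt never matched
```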
How to perform a sed in-place substitution that only creates backups of files that were changed?
Based on the /etc/shadow(5) documentation for the second (password) field:

> encrypted password
>
> If the password field contains some string that is not a valid result of crypt(3), for instance ! or *, the user will not be able to use a unix password to log in (but the user may log in the system by other means).

My question is whether there is a Linux command to disable the user's password, i.e. set a "*" or a "!" in the password field.
You are looking for passwd -l user. From man passwd:

```
Options:
[...]
-l, --lock
    lock the password of the named account. This option disables a
    password by changing it to a value which matches no possible
    encrypted value (it adds a '!' at the beginning of the password).
```
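What locking does to the shadow entry can be shown against a sample line; on a real system you would run `passwd -l alice` and then inspect /etc/shadow as root ("alice" and the hash here are placeholders):

```shell
# A locked shadow entry: the password field starts with '!'
sample='alice:!$6$salt$hash:18000:0:99999:7:::'

# Field 2 beginning with '!' or '*' means no unix password login is possible
printf '%s\n' "$sample" | awk -F: '$2 ~ /^[!*]/ { print $1 " has no usable password" }'
```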
Disable password on linux user with command
I've been curious lately about the various Linux kernel memory-based filesystems.

Note: As far as I'm concerned, the questions below should be considered more or less optional when compared with a better understanding of that posed in the title. I ask them below because I believe answering them can better help me to understand the differences, but as my understanding is admittedly limited, it follows that others may know better. I am prepared to accept any answer that enriches my understanding of the differences between the three filesystems mentioned in the title.

Ultimately I think I'd like to mount a usable filesystem with hugepages, though some light research (and still lighter tinkering) has led me to believe that a rewritable hugepage mount is not an option. Am I mistaken? What are the mechanics at play here?

Also regarding hugepages:

```shell
$ uname -a
3.13.3-1-MANJARO #1 SMP PREEMPT x86_64 GNU/Linux

$ tail -n8 /proc/meminfo
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:     8223772 kB
DirectMap2M:    16924672 kB
DirectMap1G:     2097152 kB
```

(Here are full-text versions of /proc/meminfo and /proc/cpuinfo.)

What's going on in the above? Am I already allocating hugepages? Is there a difference between DirectMap memory pages and hugepages?

Update

After a bit of a nudge from @Gilles, I've added 4 more lines above and it seems there must be a difference, though I'd never heard of DirectMap before pulling that tail yesterday... maybe DMI or something?

Only a little more...

Failing any success with the hugepages endeavor, and assuming harddisk backups of any image files, what are the risks of mounting loops from tmpfs? Is my filesystem being swapped the worst-case scenario? I understand tmpfs is mounted filesystem cache - can my mounted loopfile be pressured out of memory? Are there mitigating actions I can take to avoid this?

Last - exactly what is shm, anyway? How does it differ from or include either hugepages or tmpfs?
There is no difference between tmpfs and shm. tmpfs is the new name for shm. shm stands for SHared Memory. See: Linux tmpfs.

The main reason tmpfs is even used today is this comment in my /etc/fstab on my gentoo box. BTW Chromium won't build with the line missing:

```
# glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for
# POSIX shared memory (shm_open, shm_unlink).
shm    /dev/shm    tmpfs    nodev,nosuid,noexec    0 0
```

which came out of the Linux kernel filesystems documentation. Quoting:

> tmpfs has the following uses:
>
> 1. There is always a kernel internal mount which you will not see at all. This is used for shared anonymous mappings and SYSV shared memory. This mount does not depend on CONFIG_TMPFS. If CONFIG_TMPFS is not set, the user visible part of tmpfs is not built. But the internal mechanisms are always present.
>
> 2. glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for POSIX shared memory (shm_open, shm_unlink). Adding the following line to /etc/fstab should take care of this:
>
>        tmpfs /dev/shm tmpfs defaults 0 0
>
>    Remember to create the directory that you intend to mount tmpfs on if necessary.
>
>    This mount is not needed for SYSV shared memory. The internal mount is used for that. (In the 2.3 kernel versions it was necessary to mount the predecessor of tmpfs (shm fs) to use SYSV shared memory.)
>
> 3. Some people (including me) find it very convenient to mount it e.g. on /tmp and /var/tmp and have a big swap partition. And now loop mounts of tmpfs files do work, so mkinitrd shipped by most distributions should succeed with a tmpfs /tmp.
>
> 4. And probably a lot more I do not know about :-)
>
> tmpfs has three mount options for sizing:
>
> size: The limit of allocated bytes for this tmpfs instance. The default is half of your physical RAM without swap. If you oversize your tmpfs instances the machine will deadlock since the OOM handler will not be able to free that memory.
>
> nr_blocks: The same as size, but in blocks of PAGE_CACHE_SIZE.
> nr_inodes: The maximum number of inodes for this instance. The default is half of the number of your physical RAM pages, or (on a machine with highmem) the number of lowmem RAM pages, whichever is lower.

From the Transparent Hugepage kernel documentation:

> Transparent Hugepage Support maximizes the usefulness of free memory if compared to the reservation approach of hugetlbfs by allowing all unused memory to be used as cache or other movable (or even unmovable) entities. It doesn't require reservation to prevent hugepage allocation failures to be noticeable from userland. It allows paging and all other advanced VM features to be available on the hugepages. It requires no modifications for applications to take advantage of it.
>
> Applications however can be further optimized to take advantage of this feature, like for example they've been optimized before to avoid a flood of mmap system calls for every malloc(4k). Optimizing userland is by far not mandatory and khugepaged already can take care of long lived page allocations even for hugepage unaware applications that deal with large amounts of memory.

New comment after doing some calculations:

- HugePage Size: 2MB
- HugePages Used: None/Off, as evidenced by the all 0's, but enabled as per the 2MB above.
- DirectMap4k: 8.03GB
- DirectMap2M: 16.5GB
- DirectMap1G: 2GB

Using the paragraph above regarding optimization in THS, it looks as though 8GB of your memory is being used by applications that operate using mallocs of 4k, and 16.5GB has been requested by applications using mallocs of 2M. The applications using mallocs of 2M are mimicking HugePage support by offloading the 2M sections to the kernel. This is the preferred method, because once the malloc is released by the kernel, the memory is released to the system, whereas mounting tmpfs using hugepages wouldn't result in a full cleanup until the system was rebooted.
Lastly, the easy one: you had 2 programs open/running that requested a malloc of 1GB.

For those of you reading who don't know, malloc is the standard C function for Memory ALLOCation. These calculations serve as proof that the OP's correlation between DirectMapping and THS may be correct. Also note that mounting a HUGEPAGE-ONLY fs would only result in gains in increments of 2MB, whereas letting the system manage memory using THS occurs mostly in 4k blocks, meaning that in terms of memory management every malloc call saves the system 2044k (2048k - 4k) for some other process to use.
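The sizing options quoted above can be tried directly. Mounting needs root, so that part is shown commented; checking kernel support and existing instances does not (the mount point is a placeholder):

```shell
grep tmpfs /proc/filesystems       # kernel support for tmpfs
grep -E 'tmpfs|shm' /proc/mounts   # existing instances, e.g. /dev/shm

# A size-capped instance, as root:
#   mount -t tmpfs -o size=1G,nr_inodes=10k tmpfs /mnt/scratch
```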
On system memory... specifically the difference between `tmpfs,` `shm,` and `hugepages...`
Hi, I'm trying to set up an old laptop as a 'server' for testing purposes. As such, I don't want the screen on all day; however, I do want the CPU running 24x7. Can the 'lid close' switch be configured somehow to simply turn off the screen, but otherwise leave the laptop running as normal? FYI: I'm running CoreOS, but I'm willing to switch to another Docker container OS if it makes life easier.
I'm not sure how you missed it in the docs, because when I looked it was plainly there. Place this in logind.conf:

```
HandleLidSwitch=ignore
```
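The full change, sketched against a temporary copy of the file so it runs anywhere; on the real system you would edit /etc/systemd/logind.conf as root and then restart systemd-logind:

```shell
conf=$(mktemp)
printf '[Login]\n#HandleLidSwitch=suspend\n' > "$conf"   # stand-in for logind.conf

# Uncomment/replace the setting (the option lives in the [Login] section):
sed -i 's/^#\{0,1\}HandleLidSwitch=.*/HandleLidSwitch=ignore/' "$conf"

grep HandleLidSwitch "$conf"    # -> HandleLidSwitch=ignore

# On the real file, then apply it:
#   sudo systemctl restart systemd-logind
```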
Configure linux laptop to switch off screen but otherwise remain running when lid closed
I recently downloaded IntelliJ IDEA and start the app by running ./idea.sh. The app appears in the launcher while I'm running it, but for some reason when I right-click on it I don't get a 'Lock to Launcher' option like I do with other apps. How do I attach it to the launcher? Is it because I'm running a script and not an executable directly that disables that option?
There look to be 2 ways you can do this.

Method #1: manually create a .desktop file

Yes, you need to create a custom .desktop launcher for it. Here are the general steps:

Create a *.desktop file in /usr/local/share/applications (or /usr/share/applications depending upon your system).
$ gksudo gedit <insert-path-to-new-file.desktop>

Paste the text below:

[Desktop Entry]
Type=Application
Terminal=false
Name=IntelliJ IDEA
Icon=/path/to/icon/icon.svg
Exec=/path/to/file/idea.sh

Edit Icon=, Exec= and Name=. Also, Terminal=true/false determines whether the application opens a terminal window and displays output or runs in the background.

Put the .desktop file into the Unity Launcher panel. For this step you'll need to navigate in a file browser to where the .desktop file you created in the previous steps is. After locating the file, drag it to the Unity Launcher bar on the side.

After doing this you may need to run the following command to get your system to recognize the newly added .desktop file:
$ sudo update-desktop-database

Method #2: GUI method

Instead of manually creating the .desktop file you can summon a GUI to assist in doing this.

Install gnome-panel:
$ sudo apt-get install --no-install-recommends gnome-panel

Launch the .desktop GUI generator:
$ gnome-desktop-item-edit ~/Desktop/ --create-new

References: How to add a shell script to launcher as shortcut
Ubuntu / Unity attach script to Launcher
1,396,055,476,000
What is the allowed range of characters in Linux network interface names? I've searched around but did not find any definition or clarification. Are uppercase characters allowed? Are uppercase and lowercase letters treated as different?
Trying some experiments with such names as in ip link set XXX name test\\[]{}.,ä@€ (where XXX is the previous/original name of the network interface), it seems as if Linux will happily accept anything, as long as it is not an embedded \0. So there don't seem to be any restrictions on what chars can be used, even with UTF-8 encoding you could store Unicode ... but then, not all tools might properly deal with UTF-8 but instead only see the byte soup.
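One restriction that does always apply, beyond the embedded \0, is length: names are capped at IFNAMSIZ - 1 = 15 bytes, and newer kernels' dev_valid_name() additionally rejects ".", "..", '/', ':' and whitespace. A rough shell approximation of those checks, as a sketch of the kernel logic rather than an official tool:

```shell
# Rough shell approximation of the kernel's dev_valid_name() checks (a
# sketch, not the kernel code). Note ${#name} counts characters while the
# kernel counts bytes, so multibyte UTF-8 names may differ.
valid_ifname() {
    name=$1
    [ -n "$name" ] || return 1            # empty name
    [ "${#name}" -le 15 ] || return 1     # IFNAMSIZ - 1 = 15
    [ "$name" != "." ] || return 1
    [ "$name" != ".." ] || return 1
    case $name in
        */* | *:* | *" "* | *"$(printf '\t')"*) return 1 ;;   # rejected by newer kernels
    esac
    return 0
}

valid_ifname eth0 && echo "eth0: ok"
valid_ifname "a-far-too-long-interface-name" || echo "too long: rejected"
```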
Allowed chars in Linux network interface names?
1,396,055,476,000
I have been using ZFS for a while now without problems. I am still excited about it, and I highly trust it. But from time to time, new questions come to my mind (in particular after having read some documentation, which sometimes increases the number of questions instead of reducing it). In this case, I have added a new vdev (a mirror) to a root pool, and therefore have read the zpool manual (man zpool). At the end of the section zpool add, it states:

-o property=value
Sets the given pool properties. See the "Properties" section for a list of valid properties that can be set. The only property supported at the moment is ashift.

Do note that some properties (among them ashift) are not inherited from a previous vdev. They are vdev specific, not pool specific. That means that the ashift property is not pool specific, but vdev specific. But I have not been able to find any command or option which would allow me to view that property (or any other vdev specific property) per vdev. In other words, for example, if I have a pool which contains one vdev with ashift=12 and one vdev with ashift=10, how can I verify this?
What I have already tried:

root@cerberus:~# zpool list -v -o ashift rpool
ASHIFT
12
  mirror  928G  583G   345G  -  27%  62%
    ata-ST31000524NS_9WK21HDM  -  -  -  -  -  -
    ata-ST31000524NS_9WK21L15  -  -  -  -  -  -
  mirror  928G  74.4M  928G  -   0%   0%
    ata-ST31000524NS_9WK21FXE  -  -  -  -  -  -
    ata-ST31000524NS_9WK21KC1  -  -  -  -  -  -

root@cerberus:~# zpool get all rpool
NAME   PROPERTY                    VALUE                SOURCE
rpool  size                        1.81T                -
rpool  capacity                    31%                  -
rpool  altroot                     -                    default
rpool  health                      ONLINE               -
rpool  guid                        3899811533678330272  default
rpool  version                     -                    default
rpool  bootfs                      rpool/stretch        local
rpool  delegation                  on                   default
rpool  autoreplace                 off                  default
rpool  cachefile                   -                    default
rpool  failmode                    wait                 default
rpool  listsnapshots               off                  default
rpool  autoexpand                  off                  default
rpool  dedupditto                  0                    default
rpool  dedupratio                  1.00x                -
rpool  free                        1.24T                -
rpool  allocated                   583G                 -
rpool  readonly                    off                  -
rpool  ashift                      12                   local
rpool  comment                     -                    default
rpool  expandsize                  -                    -
rpool  freeing                     0                    default
rpool  fragmentation               13%                  -
rpool  leaked                      0                    default
rpool  feature@async_destroy       enabled              local
rpool  feature@empty_bpobj         active               local
rpool  feature@lz4_compress        active               local
rpool  feature@spacemap_histogram  active               local
rpool  feature@enabled_txg         active               local
rpool  feature@hole_birth          active               local
rpool  feature@extensible_dataset  enabled              local
rpool  feature@embedded_data       active               local
rpool  feature@bookmarks           enabled              local
rpool  feature@filesystem_limits   enabled              local
rpool  feature@large_blocks        enabled              local

So neither zpool list nor zpool get show any property in a vdev specific manner. Any ideas?
In order to view the current value of a specific setting like ashift, you will need to use the zdb command instead of the zpool command. Running zdb on its own with no arguments will give you a view of any pools found on the system, and their vdevs, and disks within the vdevs.

root@pve1:/home/tim# zdb
pm1:
    version: 5000
    name: 'pm1'
    state: 0
    txg: 801772
    pool_guid: 13783858310243843123
    errata: 0
    hostid: 2831164162
    hostname: 'pve1'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 13783858310243843123
        children[0]:
            type: 'raidz'
            id: 0
            guid: 13677153442601001142
            nparity: 2
            metaslab_array: 34
            metaslab_shift: 33
            ashift: 9
            asize: 1600296845312
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 4356695485691064080
                path: '/dev/disk/by-id/ata-DENRSTE251M45-0400.C_A181B011241000542-part1'
                whole_disk: 1
                not_present: 1
                DTL: 64
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 14648277375932894482
                path: '/dev/disk/by-id/ata-DENRSTE251M45-0400.C_A181B011241000521-part1'
                whole_disk: 1
                DTL: 82
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 11362800770521042303
                path: '/dev/disk/by-id/ata-DENRSTE251M45-0400.C_A181B011241000080-part1'
                whole_disk: 1
                DTL: 59
                create_txg: 4
            children[3]:
                type: 'disk'
                id: 3
                guid: 10494331395233532833
                path: '/dev/disk/by-id/ata-DENRSTE251M45-0400.C_A181B011241000517-part1'
                whole_disk: 1
                DTL: 58
                create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data

or, for just ashift with some context:

root@pve1:/home/tim# sudo zdb | egrep 'ashift|vdev|type' | grep -v disk
    vdev_children: 1
    vdev_tree:
        type: 'root'
            type: 'raidz'
            ashift: 9

Here is an old blog post about zdb that is still very informative about the origins and intent, and the information that comes out of zdb. A quick google also reveals many posts that may be more specifically relevant to ZFS on Linux.
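To tabulate one ashift per vdev from that kind of output, a small awk sketch can be used; a trimmed sample of zdb-style output is inlined here, but on a live system you would pipe zdb into the same awk program:

```shell
# Sketch: print one "vdev-type ashift" pair per vdev from zdb output.
# On a live system: zdb | awk '...'
awk '$1 == "type:" { type = $2 } $1 == "ashift:" { print type, $2 }' <<'EOF'
    vdev_tree:
        type: 'root'
        children[0]:
            type: 'raidz'
            ashift: 9
        children[1]:
            type: 'mirror'
            ashift: 12
EOF
# prints:
# 'raidz' 9
# 'mirror' 12
```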
With ZFS on Linux, how do I list device (vdev) specific properties?
1,396,055,476,000
Do mainstream Linux distributions typically log system temperature data, e.g. CPU or HDD temperature? If so, where can those logs be found?
I'm not aware of a mainstream Linux distribution which logs this type of information by default. Most mainstream Linux distributions do include various packages which can log temperatures, and some of these packages are set up to log by default. Taking Debian as an example, sensord will periodically log all the information it knows about (system temperatures, voltages etc.) to the system log, but it needs to be configured manually before it can log anything useful; hddtemp can be set up to periodically log hard drive temperatures. Many other tools can retrieve this type of information (using IPMI, SNMP, etc.) but again in most cases they need to be configured, either to be able to access the information in the first place, or to be able to interpret it, or both. This configuration requirement means that it would be difficult to set up a generic distribution which logs temperatures by default in a meaningful way. (Most of the systems I've seen have at least one, invalid, monitoring entry which would set off alarms if it was auto-configured!) Of course it's entirely possible to set up an installer image for your own systems since you know what they are and how they're configured... Once you've configured the various tools required to extract temperature information, you'd be better off using a proper monitoring tool (such as Munin) to log the temperatures instead of relying on the system logs. That way you can also set up alerts to be notified when things start going wrong. Expanding on the sensord example, you can find its output in the system log, with sensord as the process name; so either look for sensord in /var/log/syslog (by default), or run journalctl -u sensord. 
You'll see periodic logs like the following (I've removed the date and hostname):

sensord[2489]: Chip: acpitz-virtual-0
sensord[2489]: Adapter: Virtual device
sensord[2489]: temp1: 27.8 C
sensord[2489]: temp2: 29.8 C
sensord[2489]: Chip: coretemp-isa-0000
sensord[2489]: Adapter: ISA adapter
sensord[2489]: Physical id 0: 33.0 C
sensord[2489]: Core 0: 29.0 C
sensord[2489]: Core 1: 30.0 C
sensord[2489]: Core 2: 26.0 C
sensord[2489]: Core 3: 29.0 C
sensord[2489]: Chip: nct6776-isa-0a30
sensord[2489]: Adapter: ISA adapter
sensord[2489]: in0: +1.80 V (min = +1.60 V, max = +2.00 V)
sensord[2489]: in1: +1.86 V (min = +1.55 V, max = +2.02 V)
sensord[2489]: in2: +3.41 V (min = +2.90 V, max = +3.66 V)
sensord[2489]: in3: +3.39 V (min = +2.83 V, max = +3.66 V)
sensord[2489]: in4: +1.50 V (min = +1.12 V, max = +1.72 V)
sensord[2489]: in5: +1.26 V (min = +1.07 V, max = +1.39 V)
sensord[2489]: in6: +1.04 V (min = +0.80 V, max = +1.20 V)
sensord[2489]: in7: +3.31 V (min = +2.90 V, max = +3.66 V)
sensord[2489]: in8: +3.22 V (min = +2.50 V, max = +3.60 V)
sensord[2489]: fan1: 1251 RPM (min = 200 RPM)
sensord[2489]: fan2: 0 RPM (min = 0 RPM)
sensord[2489]: fan3: 299 RPM (min = 200 RPM)
sensord[2489]: fan4: 1315 RPM (min = 0 RPM)
sensord[2489]: fan5: 628 RPM (min = 200 RPM)
sensord[2489]: SYSTIN: 32.0 C (limit = 80.0 C, hysteresis = 70.0 C)
sensord[2489]: CPUTIN: 33.0 C (limit = 85.0 C, hysteresis = 80.0 C)
sensord[2489]: AUXTIN: 24.0 C (limit = 80.0 C, hysteresis = 75.0 C)
sensord[2489]: PECI Agent 0: 31.0 C (limit = 95.0 C, hysteresis = 92.0 C)
sensord[2489]: PCH_CHIP_CPU_MAX_TEMP: 57.0 C (limit = 95.0 C, hysteresis = 90.0 C)
sensord[2489]: PCH_CHIP_TEMP: 0.0 C
sensord[2489]: PCH_CPU_TEMP: 0.0 C
sensord[2489]: beep_enable: Sound alarm enabled
sensord[2489]: Chip: jc42-i2c-9-18
sensord[2489]: Adapter: SMBus I801 adapter at 0580
sensord[2489]: temp1: 32.8 C (min = 0.0 C, max = 60.0 C)
sensord[2489]: Chip: jc42-i2c-9-19
sensord[2489]: Adapter: SMBus I801 adapter at 0580
sensord[2489]: temp1: 33.5 C (min = 0.0 C, max = 60.0 C)
sensord[2489]: Chip: jc42-i2c-9-1a
sensord[2489]: Adapter: SMBus I801 adapter at 0580
sensord[2489]: temp1: 34.0 C (min = 0.0 C, max = 60.0 C)
sensord[2489]: Chip: jc42-i2c-9-1b
sensord[2489]: Adapter: SMBus I801 adapter at 0580
sensord[2489]: temp1: 33.2 C (min = 0.0 C, max = 60.0 C)

To get this I had to determine which modules were needed (using sensors-detect): by default the system only knew about the ACPI-reported temperatures, which don't actually correspond to anything (they never vary). coretemp gives the CPU core temperatures on Intel processors, nct6776 is the motherboard's hardware monitor, and jc42 is the temperature monitor on the DIMMs. To make it useful for automated monitoring, I should at least disable the ACPI values, re-label the fans, and correct fan4's minimum value. There are many other configuration possibilities; lm_sensors' example configuration file gives some idea.
Does Linux typically log system temperature data?
1,396,055,476,000
I have several Debian Squeeze (6.0.6, up to date) systems used as routers. When a link is down, they send ICMP redirects to local hosts. This is the default behaviour of Debian and several others. So once the link comes back to life, the hosts can't reach it until reboot. I don't want any ICMP redirects to be sent from those routers. I tested

echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects

and

sysctl -w net.ipv4.conf.all.send_redirects=0

and putting net.ipv4.conf.all.send_redirects=0 into /etc/sysctl.d/local.conf. Each of those solutions puts the right value into /proc/sys/net/ipv4/conf/all/send_redirects. But... the kernel keeps sending ICMP redirects, even after a reboot:

$ tcpdump -n -i eth0
00:56:17.186995 IP 192.168.0.254 > 192.168.0.100: ICMP redirect 10.10.13.102 to host 192.168.0.1, length 68

And the routing tables of local hosts (Windows computers) are polluted. I can prevent this with netfilter:

iptables -t mangle -A POSTROUTING -p icmp --icmp-type redirect -j DROP

Any idea why the usual method doesn't work? And how can I prevent ICMP redirects from being sent, without using netfilter?
The right command is:

echo 0 | tee /proc/sys/net/ipv4/conf/*/send_redirects

because you must have 0 on both 'all' and 'interface_name' to disable it. In /etc/sysctl.conf or a similar file, you have to set 'all' + 'default' (or 'all' + 'interface', but the interface may not exist yet when this file is processed).
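For the persistent variant, a sysctl fragment with both the all and default entries covers interfaces created later as well (a sketch; eth0 is only an example interface name):

```
# /etc/sysctl.d/local.conf (sketch)
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
# optionally pin an interface that already exists at boot:
# net.ipv4.conf.eth0.send_redirects = 0
```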
Linux always send ICMP redirect
1,396,055,476,000
I'm planning on writing an app that I would like to be able to run on any Linux installation without having to rewrite any of the code in doing so (except maybe the interface, GNOME vs KDE, etc). I'm not very experienced in the minutiae of the differences between distros, and I also can't provide details about the project as it's only just entered the planning stage, other than that it's going to be poking around deep inside the kernel in order to interact with as much of the computer's hardware as possible.
Some points to keep in mind when developing:

Use a standard build system.
Avoid hard-coding library paths; use tools like pkg-config to find external packages instead.
If your application has a GUI, use a framework like wxWidgets, which can render native UI elements depending on where you run.
Avoid creating dependencies on packages which won't run on other distributions.

The only way to fully ensure your application works on all distributions is to actually run and test it on them. One way you could do this is by creating virtual machines for each distribution. VirtualBox can be used to do this. I have around 8 virtual machines on my box for this kind of testing. I think you can't generalize too much on deploying the application, as each distribution uses a different way of installing packages: Debian uses deb and Fedora rpm.
What do I need to be aware of if I want to write an application that will run on any Linux distro?
1,396,055,476,000
My Debian Wheezy systems use the deadline scheduler. I'm accustomed to using ionice to reschedule the I/O priority of disk-intensive jobs at busy times, and anecdotally this seems to help (but I don't have any hard evidence). The ionice manpage, kernel documentation and this OpenSUSE document all suggest that only the cfq scheduler takes into account ionice interventions. They don't explicitly state that other schedulers ignore it, but the only one they mention is cfq. Do other schedulers, in particular deadline, work with ionice?
No. ionice is a mechanism for specifying priorities. But deadline ignores priorities and instead simply imposes an expiration time on each I/O operation and then ensures that the operation succeeds before the expiration time is met. More information here: the Deadline I/O scheduler The main goal of the Deadline scheduler is to guarantee a start service time for a request. It does so by imposing a deadline on all I/O operations to prevent starvation of requests. It also maintains two deadline queues, in addition to the sorted queues (both read and write). Deadline queues are basically sorted by their deadline (the expiration time), while the sorted queues are sorted by the sector number. Before serving the next request, the deadline scheduler decides which queue to use. Read queues are given a higher priority, because processes usually block on read operations. Next, the deadline scheduler checks if the first request in the deadline queue has expired. Otherwise, the scheduler serves a batch of requests from the sorted queue. In both cases, the scheduler also serves a batch of requests following the chosen request in the sorted queue. By default, read requests have an expiration time of 500 ms, write requests expire in 5 seconds.
Does ionice work with the deadline scheduler?
1,536,964,576,000
I've got a new laptop at work (Lenovo A485) and there are a few issues with it. It prints AMD-Vi: IOAPIC[32] not in IVRS table and then kernel panics. So far I've figured out a few ways to get the system up and running:

noapic - terrible performance and high temperature, so not really a good way to do it
amd_iommu=off - not ideal either
ivrs_ioapic[32]=00:14.0 ivrs_ioapic[33]=00:00.2 - this seems to work fine
iommu=soft

My questions are about iommu=soft. I'm not sure what exactly it does. What are the implications of this mode? What is preferable, overriding the IVRS table or iommu=soft?
iommu=soft tells the kernel to use a software bounce-buffer implementation (swiotlb) to remap memory for devices that can't DMA above the 4GB limit. The kernel documentation for these options is here: https://www.kernel.org/doc/Documentation/x86/x86_64/boot-options.txt What's preferable is a solution that satisfies your expectations for performance, system temperature, battery life, etc. If iommu=soft gives you satisfactory performance, temperature, and battery life, then I would say go with that.
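To make whichever option you settle on persistent across reboots, the usual place on a GRUB-based system is /etc/default/grub (a sketch; your existing variable contents will differ):

```
# /etc/default/grub (sketch)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash iommu=soft"
# or the IVRS override instead:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ivrs_ioapic[32]=00:14.0 ivrs_ioapic[33]=00:00.2"
```

Then regenerate the GRUB config (update-grub on Debian/Ubuntu-style systems, grub2-mkconfig -o /boot/grub2/grub.cfg on Fedora-style ones) and reboot.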
What are the implication of iommu=soft?
1,536,964,576,000
I wrote a service/single-binary app that I'm trying to run on Fedora 24. It runs using systemd, and the binary is deployed to /srv/bot. This service/app needs to create/open/read and rename files in that directory. I first started creating a new policy based on "SELinux: allow a process to create any file in a certain directory", but when my app needed to rename, the output had a warning:

#!!!! WARNING: 'var_t' is a base type.
allow init_t var_t:file rename;

I googled around and found out I should use a more specific SELinux label than a base type, but all the examples online show existing labels from httpd/nginx/etc. Is there a way I can create a custom label just for my own app? My idea is to create something like myapp_var_t, use

semanage fcontext -a -t my_app_var_t '/srv/bot(/.*)?'
restorecon -R -v /srv/bot

and a custom .pp file that will use this custom type. If there is a better way to solve it, that works too. Thanks.

Update: After more searching around I think the proper term for what I want to do is to create new types, which led me to https://docs.fedoraproject.org/en-US/Fedora/13/html/SELinux_FAQ/index.html#id3036916 which basically says: run sepolgen /path/to/binary. That gave me a template that I can then compile into a .pp file and load. I still get some errors, but it looks like I'm closer to what I want to do. If I get it to work, I'll update this post.
With the starting point of running sepolgen /path/to/binary, which gives you:

app.fc  app.sh  app.if  app.spec  app.te

To create a new SELinux file context to apply to a parent directory that holds files your program/daemon will modify, you edit the app.te file and add:

type app_var_t;
files_type(app_var_t)

The first line declares the new type and the second line calls a macro that does some magic and makes it a file type (it turns out you cannot use a process context like app_exec_t on a file or directory); see "SELinux Types Revisited" for more info on the different types.

Once you have the type declared, you need to tell SELinux that your app is allowed to use it; in my case I added

allow app_t app_var_t:dir { add_name remove_name write search };
allow app_t app_var_t:file { unlink create open rename write read };

Those two lines basically say: allow the app_t type, which is the domain of my app, to write/search/etc. directories with the context app_var_t, and allow it to create/open/delete/etc. files with the context app_var_t.

The last part of the puzzle is to somehow tell SELinux which folder(s) and file(s) should get each type. You do this by editing the app.fc file (fc => file context); this file only has two lines in my case:

/srv/bot/app -- gen_context(system_u:object_r:app_exec_t,s0)
/srv/bot(/.*)? gen_context(system_u:object_r:app_var_t,s0)

The first line points straight to the binary as deployed on my servers, so this one gets the app_exec_t context. The second line means: apply app_var_t to the directory /srv/bot and also to all files inside the dir /srv/bot.

Note how the first line has -- between the path and the call to gen_context. -- means: apply this to files only. In the second case we don't have anything (just spaces), which means: apply to all matching directories and files, which is what I wanted; another option is -d to apply to just directories.

I now have a working policy; I can deploy my app with a custom policy and it all works.
(My policy has a lot more entries in the .te file, but those are outside the scope of this question.)

Extra reading material that helped me get to this solution:

Making things easier with sepolgen
Think before you just blindly audit2allow -M mydomain
SELINUX FOR RED HAT DEVELOPERS (long PDF)
An SElinux module (1): types and rules
Sample policy (especially the postgresql one)
Understanding the File Contexts Files
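One step the write-up glosses over is compiling and loading the edited module. Assuming the policy devel Makefile is installed (package selinux-policy-devel on Fedora/RHEL; the paths here are the usual defaults, not taken from the answer), the sequence is roughly this, run as root in the directory holding app.te and app.fc:

```
make -f /usr/share/selinux/devel/Makefile app.pp
semodule -i app.pp
restorecon -R -v /srv/bot
```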
how to create a custom SELinux label
1,536,964,576,000
I have a fully updated Linux Mint 17.3. I need to add a bunch of applications to startup. The problem is, the following dialog doesn't work: it won't add an application from the list, nor will it add any custom command. Anyway, there must be another way I can add those applications manually, probably by editing some startup file?
I found it at: ~/.config/autostart/
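Each entry in that directory is an ordinary .desktop file; a minimal sketch (the file name, Name and Exec values are placeholders, not taken from the answer):

```
# ~/.config/autostart/myapp.desktop (sketch)
[Desktop Entry]
Type=Application
Name=My App
Exec=/path/to/myapp
X-GNOME-Autostart-enabled=true
```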
How to manually add startup applications on Mint 17.3
1,536,964,576,000
I'm new to Linux. I'm trying to build Chromium OS and run it on QEMU. Meanwhile I came across Linux KVM, VirtualBox and VMware. So I basically have two questions about virtualization in Linux: What are the most popular open source virtualization systems used in the industry today? Do I have more choices, for example, when running another distro on top of my Ubuntu box? If someone has experience with virtualization in Linux, would you please share some hints on when to use what? Which ones are used to set up a cloud?
Are there more popular virtualization systems than the ones I mentioned above? You listed almost all the popular virtualization systems, except Xen. When to use what? Since you are using an Ubuntu box, I suggest qemu/kvm for you. You can start with virt-manager, which is a GUI front end for libvirt/qemu/kvm and looks very similar to VirtualBox or VMware Workstation on Windows.
Linux-KVM, QEMU, Virtualbox, VMWare [closed]
1,536,964,576,000
I was perusing the Apache httpd manual online and came across a directive for enabling this. Found a description in the man page for tcp:

TCP_DEFER_ACCEPT (since Linux 2.4)
Allow a listener to be awakened only when data arrives on the socket. Takes an integer value (seconds), this can bound the maximum number of attempts TCP will make to complete the connection. This option should not be used in code intended to be portable.

Then I found this article, but I'm still unclear what kind of workloads this would be useful for. I'm assuming that if httpd has an option specifically for this, it must have some relevance to web servers. I'm also assuming, from the fact it's an option and not just how httpd does network connections, that there are use cases where you want it and others where you don't. Even after reading the article, I'm unclear on what the advantage of waiting for the three-way handshake to complete would be. It would seem advantageous to ensure it's not going to need to swap in the relevant httpd instance by doing so while the handshake is still going on, instead of potentially causing that delay after a connection is formed. From the article, it would also seem to me that no matter the TCP_DEFER_ACCEPT status of a socket, you're still going to need four packets (handshake then data in each case). I don't know how they get the count down to three, nor how that provides a meaningful enhancement. So my question is basically: Is this just an old obsolete option, or is there an actual use case for this option?
(To summarise my comments on the OP.) The three-way handshake they are referring to is part of the TCP connection establishment; the option in question doesn't relate specifically to this. Also note that data exchange is not part of the three-way handshake; the handshake just creates the TCP connection in the open/established state.

Regarding the existence of this option: it is not the traditional behaviour of a socket. Normally the socket handler's thread is woken up when the connection is accepted (which is still after the three-way handshake completes), and for some protocols activity starts here (e.g. an SMTP server sends a 220 greeting line), but for HTTP the first message in the conversation is the web browser sending its GET/POST/etc line, and until this happens the HTTP server has no interest in the connection (other than timing it out). Thus waking up the HTTP process when the socket accept completes is a wasteful activity, as the process will immediately fall asleep again waiting for the necessary data.

There is certainly an argument that waking up idle processes can make them 'ready' for further processing (I specifically remember waking up login terminals on very old machines and having them chug in from swap), but you can also argue that any machine that has swapped out said process is already making demands on its resources, and making further unnecessary demands might overall reduce system performance - even if your individual thread's apparent performance improves (which it also may not: an extremely busy machine would have bottlenecks on disk IO which would slow other things down if you swapped in, and if it's that busy, the immediate sleep might swap it right back out).
It seems to be a gamble, and ultimately the 'greedy' gamble doesn't necessarily pay off on a busy machine, and it certainly causes extra unnecessary work on a machine that already had the process swapped in. Your approach optimises for a machine with a large memory set of processes that are mostly dormant, where swapping one dormancy for another is no big deal; however a machine with a large memory set of active processes will suffer from extra IO, any machine that isn't memory limited suffers, and any CPU-bound machine will be worse off.

My general advice regarding that level of performance tuning would be to not make programmatic decisions about what is best anyway, but to allow the system administrator and operating system to work together to deal with the resource management issues - that is their job and they are much better suited to understanding the workloads of the entire system and beyond. Give options and configuration choices.

To specifically answer the question: the option is beneficial on all configurations, though not to a level you'd ever likely notice except under an extreme load of HTTP traffic, and it's theoretically the "right" way to do it. It's an option because not all Unix (not even all Linux) flavours have that capability, and thus for portability it can be configured not to be included.
Real-World Use of TCP_DEFER_ACCEPT?
1,536,964,576,000
This question, Unix & Linux: permissions 755 on /home/, covers part of my question, but: Default permissions on a home directory are 755 in many instances. However, that lets other users wander into your home folder and look at stuff. Changing the permissions to 711 (rwx--x--x) means they can traverse folders but not see anything. This is required if you have authorized_keys for SSH - without it, SSH gives errors when trying to access the system using a public key. Is there some way to set up the folders / directories so SSH can access authorized_keys, postfix / mail can access the files it requires, and the system can access config files, but without all and sundry walking the system? I can manually make the folder 711 and set ~/.ssh/authorized_keys to 644, but remembering to do that every time for every config is prone to (my) mistakes. I would have thought that by default all files were private unless specifically shared, but with two Ubuntu boxes (admittedly server boxes) everyone can read all newly created files. That seems a little off as a default setting.
As noted in the manual, by default home folders made with useradd copy the /etc/skel folder, so if you change its subfolder rights, all users created afterwards with default useradd will have the desired rights. Same for adduser. Editing "UMASK" in /etc/login.defs will change the rights used when creating home folders. If you want more user security you can encrypt home folders and put ssh keys in /etc/ssh/%u instead of /home/%u/.ssh/authorized_keys.
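The effect of such a umask setting is easy to demonstrate: a directory created under umask 077 comes out mode 700, i.e. private to its owner (a sketch using throwaway directories; GNU stat is assumed for -c '%a'):

```shell
# Sketch: show how the active umask shapes the mode of a new directory,
# using throwaway directories (GNU stat assumed).
( umask 077; d=$(mktemp -d); mkdir "$d/home-demo"; stat -c '%a' "$d/home-demo"; rm -r "$d" )   # prints 700
( umask 022; d=$(mktemp -d); mkdir "$d/home-demo"; stat -c '%a' "$d/home-demo"; rm -r "$d" )   # prints 755
```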
Default permissions on Linux home directories
1,536,964,576,000
I can use a variety of tools to measure the volume of disk I/O currently flowing through the system (such as iotop and iostat), but I'm curious whether it's possible to easily detect if a disk is seeking a lot with only a small amount of I/O. I know it's possible to extract this information using blktrace and then decode it using btt, but these are somewhat unwieldy and I was hoping there was a simpler alternative.
The ratio (rkB/s + wkB/s)/%util of the iostat -x output should give you some insight:

Device:  rrqm/s  wrqm/s  r/s   w/s   rkB/s   wkB/s   avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
sda      0.04    3.65    7.16  6.37  150.82  212.38  53.71     0.03      1.99   0.82     3.31     0.76   1.03

I'm not sure how exactly this ratio corresponds to disk seeking. But the idea is that if the disk is busy and does not have high throughput, it is probably seeking. However, it's not guaranteed. Broken disks sometimes show a high utilisation and have almost no throughput. But it's at least an indicator. You can also provide a number to iostat (e.g. iostat -x 5) to specify the update interval. That way you can monitor continuously.
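The ratio can be computed mechanically with awk; a sketch, fed the sample device line from above (the field numbers $6 rkB/s, $7 wkB/s and $14 %util are assumptions tied to that particular iostat -x layout and can shift between sysstat versions):

```shell
# Sketch: kB/s of throughput per 1% of utilisation, per device; very low
# values while %util is high suggest the disk is mostly seeking.
# On a live system: iostat -x 5 | awk '...'
printf 'sda 0.04 3.65 7.16 6.37 150.82 212.38 53.71 0.03 1.99 0.82 3.31 0.76 1.03\n' |
awk '$14 > 0 { printf "%s %.1f\n", $1, ($6 + $7) / $14 }'
# prints: sda 352.6
```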
How to detect if a disk is seeking?
1,536,964,576,000
I have a directory with log files, and I'm putting logs from scripts launched by users into them. Logging with syslog doesn't seem possible in this case (non-daemon rsync). I want the users to have only write permissions on the log files. The problem is that write permissions must be further restricted, so that users (scripts) can only append to those files. The underlying filesystem is XFS. The following doesn't work:

# chattr +a test.log
chattr: Inappropriate ioctl for device while reading flags on test.log

Is there any other solution for this? Thank you for your hints.
The chattr utility is written for ext2/ext3/ext4 filesystems. It emits ioctls on the files, so it's up to the underlying filesystem to decide what to do with them. The XFS driver in newer Linux kernels supports the same FS_IOC_SETFLAGS ioctl as ext[234] to control flags such as append-only, but you may be running an older kernel where it doesn't (CentOS?). Try using the xfs_io utility instead: echo chattr +a | xfs_io test.log Note that, for XFS like for ext[234], only root can change the append-only flag (more precisely, you need the CAP_LINUX_IMMUTABLE capability).
Restrict file access to append only
1,536,964,576,000
I'm looking for a way to filter/redirect rsync output in a manner where it can be fed to the "dialog --gauge" command, so I can get a nice looking progress bar during file sync. Currently I have only tested it directly at the prompt, but I'm planning to do this in a (bash) shell script. I have looked around the internet and found bits and pieces, but I'm still missing something to make it work. (Disclaimer: This might be a totally wrong approach, and is a redirect/piping monstrosity.)

What I have currently put together:

rsync -avz --progress -e "ssh" user@server:/home/user/data/ /home/user/data | awk -f /home/user/rsync.awk | sed 's/\([0-9]*\).*/\1/' | dialog --title "My Gauge" --gauge "Hi, this is a gauge widget" 20 70

First I have the actual rsync command with the --progress option. Output from rsync is piped into awk, which uses the following awk filter:

{
    if (index($0, "to-check=") > 0)
    {
        split($0, pieces, "to-check=")
        split(pieces[2], term, ")");
        split(term[1], division, "/");
        print (1-(division[1]/division[2]))*100
    }
#    else
#    {
#        print "#"$0;
#    }
    fflush();
}

This filters the rsync output and provides the percentage in the following format:

53.7037
55.5556
57.4074
59.2593
61.1111
62.963

So to get rid of the decimal part, I feed the output to sed:

sed 's/\([0-9]*\).*/\1/'

which gives the following output:

64
66
68
70
72
74
75
77

Those numbers are piped into dialog like this:

dialog --title "My Gauge" --gauge "Hi, this is a gauge widget" 20 70

As far as I know, "dialog --gauge" should accept this, but it just displays progress as 0% until it suddenly reaches 100%. Can someone point me in the right direction here? Am I far away from a working progress bar? Is there a better way to achieve this?
Regards, Christer EDIT: After taking @lynxlynxlynx' answer into account, the correct command line is: rsync -avz --progress -e "ssh" user@server:/home/user/data/ /home/user/data \ | awk -f /home/user/rsync.awk \ | sed --unbuffered 's/\([0-9]*\).*/\1/' \ | dialog --title "My Gauge" --gauge "Hi, this is a gauge widget" 20 70
for i in 10 20 30; do echo $i; sleep 1; done | dialog --title "My Gauge" --gauge "Hi, this is a gauge widget" 20 70 works fine, so @Shadur is right and there is buffering at play. Adding the sed stripper into the mix shows it is the culprit (only shows 0 and 30): for i in 10 20 30; do echo $i; sleep 1; done | sed 's/\([0-9]*\).*/\1/' | dialog --title "My Gauge" --gauge "Hi, this is a gauge widget" 20 70 Now that the problem is known, you have multiple options. The cleanest would be to round/cut the percentage in awk with either math or string manipulation, but since you have GNU sed, just adding -u or --unbuffered should do the trick. However for completeness' sake, a simple test case shows awk also does buffering: for i in 10 20 30; do echo $i; sleep 1; done | awk '{print $0}' | sed -u 's/\([0-9]*\).*/\1/' | dialog --title "My Gauge" --gauge "Hi, this is a gauge widget" 20 70 But you already handle that with fflush, so I don't expect problems.
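For completeness, the rounding can be done inside awk itself, removing the sed stage (and its buffering) entirely. A sketch, fed with fabricated sample lines standing in for real rsync --progress output:

```shell
# Fabricated stand-ins for rsync --progress lines containing "to-check=N/M)":
printf '%s\n' 'xfr#1, to-check=50/100)' 'xfr#2, to-check=25/100)' 'xfr#3, to-check=0/100)' |
awk '{
    if (index($0, "to-check=") > 0) {
        split($0, pieces, "to-check=")
        split(pieces[2], term, ")")
        split(term[1], division, "/")
        # printf "%d" truncates to an integer, so no separate sed stage is needed
        printf "%d\n", (1 - division[1] / division[2]) * 100
        fflush()
    }
}'
```

With the sed stage gone, only awk's own buffering matters, and fflush() already handles that; the resulting integers (50, 75, 100 for the sample above) can be piped straight into dialog --gauge.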
Making a progressbar with "dialog" from rsync output
1,536,964,576,000
I've been reading the Advanced Linux Programming book and it mentions about virtual terminals which, if I understood it correctly, it is a Linux-specific feature (not in Unix) to allow multiple login consoles in a non X11 system. You create virtual terminals by pressing ALT-F2. I'm running Linux Mint and in my /dev folder though I see many tty devices and I don't know what they are for. Here is the grepped output: crw-rw-rw- 1 root tty 5, 0 Jan 2 19:45 tty crw--w---- 1 root tty 4, 0 Jan 2 19:10 tty0 crw-rw---- 1 root tty 4, 1 Jan 2 19:10 tty1 crw--w---- 1 root tty 4, 10 Jan 2 19:10 tty10 crw--w---- 1 root tty 4, 11 Jan 2 19:10 tty11 crw--w---- 1 root tty 4, 12 Jan 2 19:10 tty12 crw--w---- 1 root tty 4, 13 Jan 2 19:10 tty13 crw--w---- 1 root tty 4, 14 Jan 2 19:10 tty14 crw--w---- 1 root tty 4, 15 Jan 2 19:10 tty15 crw--w---- 1 root tty 4, 16 Jan 2 19:10 tty16 crw--w---- 1 root tty 4, 17 Jan 2 19:10 tty17 crw--w---- 1 root tty 4, 18 Jan 2 19:10 tty18 crw--w---- 1 root tty 4, 19 Jan 2 19:10 tty19 crw-rw---- 1 root tty 4, 2 Jan 2 19:10 tty2 crw--w---- 1 root tty 4, 20 Jan 2 19:10 tty20 crw--w---- 1 root tty 4, 21 Jan 2 19:10 tty21 crw--w---- 1 root tty 4, 22 Jan 2 19:10 tty22 crw--w---- 1 root tty 4, 23 Jan 2 19:10 tty23 crw--w---- 1 root tty 4, 24 Jan 2 19:10 tty24 crw--w---- 1 root tty 4, 25 Jan 2 19:10 tty25 crw--w---- 1 root tty 4, 26 Jan 2 19:10 tty26 crw--w---- 1 root tty 4, 27 Jan 2 19:10 tty27 crw--w---- 1 root tty 4, 28 Jan 2 19:10 tty28 crw--w---- 1 root tty 4, 29 Jan 2 19:10 tty29 crw-rw---- 1 root tty 4, 3 Jan 2 19:10 tty3 crw--w---- 1 root tty 4, 30 Jan 2 19:10 tty30 crw--w---- 1 root tty 4, 31 Jan 2 19:10 tty31 crw--w---- 1 root tty 4, 32 Jan 2 19:10 tty32 crw--w---- 1 root tty 4, 33 Jan 2 19:10 tty33 crw--w---- 1 root tty 4, 34 Jan 2 19:10 tty34 crw--w---- 1 root tty 4, 35 Jan 2 19:10 tty35 crw--w---- 1 root tty 4, 36 Jan 2 19:10 tty36 crw--w---- 1 root tty 4, 37 Jan 2 19:10 tty37 crw--w---- 1 root tty 4, 38 Jan 2 19:10 tty38 crw--w---- 1 root tty 4, 39 
Jan 2 19:10 tty39 crw-rw---- 1 root tty 4, 4 Jan 2 19:10 tty4 crw--w---- 1 root tty 4, 40 Jan 2 19:10 tty40 crw--w---- 1 root tty 4, 41 Jan 2 19:10 tty41 crw--w---- 1 root tty 4, 42 Jan 2 19:10 tty42 crw--w---- 1 root tty 4, 43 Jan 2 19:10 tty43 crw--w---- 1 root tty 4, 44 Jan 2 19:10 tty44 crw--w---- 1 root tty 4, 45 Jan 2 19:10 tty45 crw--w---- 1 root tty 4, 46 Jan 2 19:10 tty46 crw--w---- 1 root tty 4, 47 Jan 2 19:10 tty47 crw--w---- 1 root tty 4, 48 Jan 2 19:10 tty48 crw--w---- 1 root tty 4, 49 Jan 2 19:10 tty49 crw-rw---- 1 root tty 4, 5 Jan 2 19:10 tty5 crw--w---- 1 root tty 4, 50 Jan 2 19:10 tty50 crw--w---- 1 root tty 4, 51 Jan 2 19:10 tty51 crw--w---- 1 root tty 4, 52 Jan 2 19:10 tty52 crw--w---- 1 root tty 4, 53 Jan 2 19:10 tty53 crw--w---- 1 root tty 4, 54 Jan 2 19:10 tty54 crw--w---- 1 root tty 4, 55 Jan 2 19:10 tty55 crw--w---- 1 root tty 4, 56 Jan 2 19:10 tty56 crw--w---- 1 root tty 4, 57 Jan 2 19:10 tty57 crw--w---- 1 root tty 4, 58 Jan 2 19:10 tty58 crw--w---- 1 root tty 4, 59 Jan 2 19:10 tty59 crw-rw---- 1 root tty 4, 6 Jan 2 19:10 tty6 crw--w---- 1 root tty 4, 60 Jan 2 19:10 tty60 crw--w---- 1 root tty 4, 61 Jan 2 19:10 tty61 crw--w---- 1 root tty 4, 62 Jan 2 19:10 tty62 crw--w---- 1 root tty 4, 63 Jan 2 19:10 tty63 crw--w---- 1 root tty 4, 7 Jan 2 19:10 tty7 crw--w---- 1 root tty 4, 8 Jan 2 19:10 tty8 crw--w---- 1 root tty 4, 9 Jan 2 19:10 tty9
These are specifically virtual console devices, in Linux terminology. Supporting virtual consoles on the same physical device isn't unique to Linux (for example, BSD calls them “hardware terminal ports”). Linux doesn't have a mechanism for creating console devices on demand. The 63 consoles are not always active (you need to activate ttyN in order to switch to it with (Ctrl+)Alt+FN), but to activate one requires opening the console device (the openvt command does that, as do getty and the X server). So the device entry must exist all the time, or else it has to be created manually before it can be used. Modern Linux systems (with udev or devtmpfs) create device entries for every device that is present on the system. All the virtual consoles are always present (whether they're active or not), so all entries are created. Most users don't need nearly that many — in fact most users never see anything but the virtual console that X is running on. But there are a few who do, and need to patch their kernel to allow more than 63 consoles, because they run large machines with many hardware consoles.
Why are there so many virtual terminal devices?
1,536,964,576,000
I have three questions related to entropy on UNIX systems: I check entropy on Linux using: cat /proc/sys/kernel/random/entropy_avail. Is this the standard place with information about available entropy defined in POSIX? What is the correct amount of available entropy I should expect? I heard that entropy should be equal to or more than 100 and that there may be a problem if entropy is constantly below 100. Is this entropy used by /dev/random, or does it also have anything to do with /dev/urandom?
/dev/random is not standardized. POSIX doesn't provide any way of generating cryptographically secure random data and doesn't have any notion of entropy. The Linux kernel's entropy calculation corresponds to an information-theoretic model of entropy which is not relevant to practical use. The only case where this is relevant is on a new device which has never had time to accumulate entropy (this includes live distributions; installed systems save their entropy from one boot to the next). Apart from this situation, there is always enough entropy, because entropy does not deplete. Since Linux's /dev/random blocks when it thinks it doesn't have enough entropy, use /dev/urandom, which never blocks. Using /dev/urandom is good for everything including generating cryptographic keys (except, as mentioned above, on a freshly minted device). In summary: No, this is not standard. You don't care. Use /dev/urandom. Many, but not all unix systems have /dev/urandom and /dev/random. See the Wikipedia page for a more detailed discussion.
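To make "use /dev/urandom" concrete, here is a minimal sketch of drawing key material from it in the shell (od and tr are only used to hex-encode the bytes):

```shell
# Read 16 random bytes and hex-encode them; /dev/urandom never blocks,
# regardless of what entropy_avail reports.
key=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
printf '%s\n' "$key"    # 32 hex characters
```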
What is the correct available entropy on UNIX systems?
1,536,964,576,000
utility atop shows: ATOP - MyServer 2013/01/07 00:03:57 10 seconds elapsed PRC | sys 2.18s | user 8.33s | #proc 141 | #zombie 0 | #exit 0 | CPU | sys 21% | user 139% | irq 0% | idle 228% | wait 11% | cpu | sys 5% | user 40% | irq 0% | idle 51% | cpu002 w 3% | cpu | sys 5% | user 35% | irq 0% | idle 56% | cpu001 w 3% | cpu | sys 7% | user 30% | irq 0% | idle 61% | cpu000 w 2% | cpu | sys 4% | user 34% | irq 0% | idle 61% | cpu003 w 1% | CPL | avg1 1.00 | avg5 1.12 | avg15 1.25 | csw 389208 | intr 223367 | MEM | tot 23.6G | free 136.3M | cache 6.7G | buff 66.5M | slab 205.1M | SWP | tot 0.0M | free 0.0M | | vmcom 21.8G | vmlim 11.8G | DSK | sdc | busy 12% | read 70 | write 109 | avio 6 ms | DSK | sde | busy 4% | read 37 | write 131 | avio 2 ms | DSK | sdd | busy 3% | read 38 | write 144 | avio 1 ms | NET | transport | tcpi 160 | tcpo 171 | udpi 0 | udpo 0 | NET | network | ipi 188 | ipo 172 | ipfrw 0 | deliv 160 | NET | vnet1 0% | pcki 510 | pcko 442 | si 60 Kbps | so 26 Kbps | NET | eth0 0% | pcki 449 | pcko 527 | si 27 Kbps | so 65 Kbps | NET | vnet0 0% | pcki 0 | pcko 44 | si 0 Kbps | so 3 Kbps | NET | vnet7 0% | pcki 1 | pcko 44 | si 0 Kbps | so 3 Kbps | NET | vnet2 0% | pcki 0 | pcko 43 | si 0 Kbps | so 3 Kbps | NET | vnet3 0% | pcki 0 | pcko 43 | si 0 Kbps | so 3 Kbps | NET | vnet6 0% | pcki 0 | pcko 43 | si 0 Kbps | so 3 Kbps | NET | vnet5 0% | pcki 0 | pcko 5 | si 0 Kbps | so 0 Kbps | NET | vnet4 0% | pcki 0 | pcko 5 | si 0 Kbps | so 0 Kbps | NET | vnet8 0% | pcki 0 | pcko 5 | si 0 Kbps | so 0 Kbps | NET | bond0 ---- | pcki 449 | pcko 527 | si 27 Kbps | so 65 Kbps | NET | br0 ---- | pcki 157 | pcko 126 | si 12 Kbps | so 17 Kbps | NET | lo ---- | pcki 46 | pcko 46 | si 3 Kbps | so 3 Kbps | My questions are following: 1)All is white, only line with SWP is RED. I have 24GB RAM and I don't use swap. How may I fix this? Is it big problem? I'm working on without problems, but who knows if is it bad or not? 2)What does vmcom and vmlim means? CPU is Quad core. 
3 HDDs in RAID5. I have Debian Squeeze x64 and I'm using KVM and MySQL. Thank you for your answer
The answer to the main query is further below - but first a warning regarding Mirra's suggestion: be careful with this. When I put 2 into /proc/sys/vm/overcommit_memory, then even when physical memory was available, all processes requesting memory from the OS while vmcom was greater than vmlim received errors (I got a lot of errors and failures with basic system applications like compiz). And because vmlim = SWAP_size + 0.5 * RAM_size, where 0.5 (50%) is the default value of the /proc/sys/vm/overcommit_ratio parameter, you can easily get a lot of errors like me. Answer to the main question: it is better to revert the overcommit_memory parameter back to its default value (0 for me on Ubuntu 12.04 LTS, but it can be 1 for other OSes). According to the great article we can calculate the memory actually used by processes: MemoryUsed ~ tot - (cache + buff + free) ~ 23.6G - (6.7G + 0.067G + 0.136G) ~ 16.7G. So only 16.7G is actually used by processes (out of 23.6G of installed RAM), and the red line in the atop output may be ignored.
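The MemoryUsed estimate is plain arithmetic; a quick sketch reproducing it from the figures in the question's atop output (buff and free converted from MiB to GiB):

```shell
# tot and cache in GiB, buff (66.5M) and free (136.3M) converted from MiB
awk 'BEGIN {
    tot = 23.6; cache = 6.7; buff = 66.5/1024; free = 136.3/1024
    printf "%.1fG\n", tot - (cache + buff + free)   # prints 16.7G
}'
```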
ATOP shows red line vmcom and vmlim. What does it mean?
1,536,964,576,000
man 4 random has a very vague description of Linux kernel entropy sources: The random number generator gathers environmental noise from device drivers and other sources into an entropy pool. The paper Entropy transfers in the Linux Random Number Generator isn't much more specific, either. It lists: add_disk_randomness(), add_input_randomness(), and add_interrupt_randomness(). These functinos are from random.c, which includes following comment: Sources of randomness from the environment include inter-keyboard timings, inter-interrupt timings from some interrupts, and other events which are both (a) non-deterministic and (b) hard for an outside observer to measure. Further down, there is a function add_hwgenerator_randomness(...) indicating support for hardware random number generators. All those information are rather vague (or, in the case of the source code, require deep knowledge of the Linux kernel to understand). What are the actual entropy sources used, and does the Linux kernel support any hardware random number generators out-of-the-box?
Most commodity PC hardware has a random number generator these days. VIA Semiconductor has put them in their processors for many years; the Linux kernel has the via-rng driver for that. I count 34 source modules in the drivers/char/hw_random/ directory in the latest source tree, including drivers for Intel and AMD hardware, and for systems that have a TPM device. You can run the rng daemon (rngd) to push random data to the kernel entropy pool.
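To check which hardware RNG drivers the kernel has actually bound on a given machine, the hw_random class in sysfs can be inspected — a sketch with a guard, since the path is absent on many VMs and containers:

```shell
# rng_available lists detected hardware RNGs; rng_current shows the one in use
for f in rng_available rng_current; do
    if [ -r "/sys/class/misc/hw_random/$f" ]; then
        printf '%s: %s\n' "$f" "$(cat /sys/class/misc/hw_random/$f)"
    else
        printf '%s: not exposed on this system\n' "$f"
    fi
done
```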
What entropy sources are used by the Linux kernel?
1,536,964,576,000
I love the way Linux & Co. lets users install many packages from different repositories. AFAIK, they also come with source packages, so you can compile them yourself. But why even bother to keep/offer pre-compiled packages when you could just compile them yourself? What is the intention of keeping/offering them? Is it possible to configure Linux to only download source packages and let the OS do the rest (just like a pre-compiled package installation)? Thank you for your answers.
It’s a trade-off: distributions which provide pre-built packages spend the time building them once (in all the configurations they support), and their users can then install them without spending the time to build them. The users accept the distributions’ binaries as-is. If you consider the number of package installations for some of the larger distributions, the time saved by not requiring recompilation everywhere is considerable. There are some distributions which ship source and the infrastructure required to build it, and rely on users to build everything locally; see for example Gentoo. This allows users to control exactly how their packages are built. If you go down this path, even with the time savings you can get by simplifying package builds, you should be ready to spend a lot of time building packages. I don’t maintain the most complex packages in Debian, but one of my packages takes over two hours to build on 64-bit x86 builders, and over twelve hours on slower architectures!
Why are there pre-compiled packages in repositories?
1,536,964,576,000
I ran this command: grep -i 'bro*' shows.csv and got this as output 1845307,2 Broke Girls,2011,138,6.7,89093 1702042,An Idiot Abroad,2010,21,8.3,29759 903747,Breaking Bad,2008,62,9.5,1402577 2249364,Broadchurch,2013,24,8.4,89378 1733785,Bron/Broen,2011,38,8.6,56357 2467372,Brooklyn Nine-Nine,2013,145,8.4,209571 7569592,Chilling Adventures of Sabrina,2018,36,7.6,69041 7221388,Cobra Kai,2018,31,8.7,72993 1355642,Fullmetal Alchemist: Brotherhood,2009,69,9.1,111111 118360,Johnny Bravo,1997,67,7.2,32185 455275,Prison Break,2005,91,8.3,465246 115341,Sabrina the Teenage Witch,1996,163,6.6,33458 1312171,The Umbrella Academy,2019,20,8,140800 3339966,Unbreakable Kimmy Schmidt,2015,51,7.6,61891 Where is bro in breaking bad? In fact, o doesn't even appear in "Breaking bad". I tried it once more, and got the same result. It is not accounting for the last character. Is there something wrong in the way I am writing it? You can download the file shows.csv from https://cdn.cs50.net/2021/x/seminars/linux/shows.csv
In your code o* means "zero or more occurrences of o". It seems you confused regular expressions with glob syntax (where o* means "one o followed by zero or more arbitrary characters"). In Breaking Bad there are exactly zero o characters after Br, so it matches bro* (case-insensitively). grep -i bro shows.csv will do what (I think) you want.
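A small demonstration of the difference, using three titles from the question's output:

```shell
printf '%s\n' 'Breaking Bad' 'Broadchurch' 'Johnny Bravo' > shows.txt

# regex 'bro*': "br" followed by zero or more "o" -- matches all 3 lines
grep -ci 'bro*' shows.txt
# literal 'bro': matches only Broadchurch -- 1 line
grep -ci 'bro' shows.txt
```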
Is there a problem with grep command? I am getting characters that don't match my regular expression
1,536,964,576,000
I want to call a Linux syscall (or at least the libc wrapper) directly from a scripting language. I don't care what scripting language - it's just important that it not be compiled (the reason basically has to do with not wanting a compiler in the dependency path, but that's neither here nor there). Are there any scripting languages (shell, Python, Ruby, etc) that allow this? In particular, it's the getrandom syscall.
Perl allows this with its syscall function: $ perldoc -f syscall syscall NUMBER, LIST Calls the system call specified as the first element of the list, passing the remaining elements as arguments to the system call. If ⋮ The documentation also gives an example of calling write(2): require 'syscall.ph'; # may need to run h2ph my $s = "hi there\n"; syscall(SYS_write(), fileno(STDOUT), $s, length $s); Can't say I've ever used this feature, though. Well, before just now to confirm the example does indeed work. This appears to work with getrandom: $ perl -E 'require "syscall.ph"; $v = " "x8; syscall(SYS_getrandom(), $v, length $v, 0); print $v' | xxd 00000000: 5790 8a6d 714f 8dbe W..mqO.. And if you don't have getrandom in your syscall.ph, then you could use the number instead. It's 318 on my Debian testing (amd64) box. Beware that Linux syscall numbers are architecture-specific.
Call a Linux syscall from a scripting language
1,536,964,576,000
I'm a bit lost with virt-manager / libvirt / KVM. I've got a working KVM VM (Windows XP) which works nicely. The VM is backed by a 4GB file or so (a .img). Now I want to do something very simple: I want to duplicate my VM. I thought "OK, no problem, let's copy the 4GB file and copy the XML" file. But then the libvirt FAQ states in all uppercase: "you SHOULD NOT CARE WHERE THE XML IS STORED" libvirt FAQ OK fine, I shouldn't care. But then how do I duplicate my VM? I want to create a new VM that is a copy of that VM.
virsh will allow you to edit, export, and import the XML definitions for your servers. I would use virt-clone to generate a cloned image file, and export the XML. To be safe I would remove the clone configuration from the original server.
How to create a dupe of a KVM/libvirt/virt-manager VM?
1,536,964,576,000
How do I copy recursively like cp -rf *, but excluding hidden directories (directories starting with .) and their contents?
You could just copy everything with cp -rf and then delete hidden directories at the destination with find -type d -name '.*' -and -not -name '.' -print0 | xargs -0 rm -rf Alternatively, if you have some advanced tar (e.g. GNU tar), you could try to use tar to exclude some patterns. But I am afraid that it is not possible to exclude only hidden directories while still including hidden files. For example something like this: tar --exclude=PATTERN -f - -c * | tar -C destination -f - -x Btw, GNU tar has a zoo of exclude-style options. My favourite is --exclude-vcs
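A self-contained sketch of the first approach (the file and directory names are made up for the demonstration), using -prune so find doesn't descend into directories it is about to delete — a variation on the command above. Note that hidden files survive; only hidden directories are removed:

```shell
mkdir -p src/.git src/sub dest
echo visible  > src/file
echo hidden   > src/.hiddenfile
echo nested   > src/sub/file
echo excluded > src/.git/config

cp -rf src/. dest/
# -prune stops find descending into a matched directory before rm deletes it
find dest -type d -name '.*' -prune -exec rm -rf {} +

ls -A dest   # file, .hiddenfile and sub remain; .git is gone
```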
copy recursively except hidden directory
1,536,964,576,000
Heyo! I'm currently working on a non-lfs system from scratch with busybox as the star. Now, my login says: (none) login: Hence, my hostname is broken. hostname brings me (none) too. The guide I was following told me to throw the hostname to /etc/HOSTNAME. I've also tried /etc/hostname. No matter what I do, hostname returns (none) - unless I run hostname <thename> or hostname -F /etc/hostname. Now obviously, I don't want this to be done every time somebody freshly installed the distro -- so what is the real default file, if not /etc/hostname? Thanks in advance!
The hostname commands in common toolsets, including BusyBox, do not fall back to files when querying the hostname. They report solely what the kernel returns to them as the hostname from a system call, which the kernel initializes to a string such as "(none)", changeable by reconfiguring and rebuilding the kernel. (In systemd terminology this is the dynamic hostname, a.k.a. transient hostname; the one that is actually reported by Linux, the kernel.) There is no "default file". There's usually a single-shot service that runs at system startup, fairly early on, that goes looking in these various files, pulls out the hostname, and initializes the kernel hostname with it. (In systemd terminology this configuration string is the static hostname.) For example: In my toolset I provide an "early" hostname service that runs the toolset's set-dynamic-hostname command after local filesystem mounts and before user login services. The work is divided into stuff that is done (only) when one makes a configuration change, and stuff that is done at (every) system bootstrap: The external configuration import mechanism reads /etc/hostname and /etc/HOSTNAME, amongst other sources (since different operating systems configure this in different ways), and makes an amalgamated rc.conf. The external configuration import mechanism uses the amalgamated rc.conf to configure this service's hostname environment variable. When the service runs, set-dynamic-hostname doesn't need to care about all of the configuration source possibilities and simply takes the environment variable, from the environment configured for the service, and sets the dynamic hostname from it. In systemd this is an initialization action that is hardwired into the code of systemd itself, that runs before service management is even started up. 
The systemd program itself goes and reads /etc/hostname (and also /proc/cmdline, but not /etc/HOSTNAME nor /etc/default/hostname nor /etc/sysconfig/network) and passes that to the kernel. In Void Linux there is a startup shell script that reads the static hostname from (only) /etc/hostname, with a fallback to the shell variable read from rc.conf, and sets the dynamic hostname from its value. If you are building a system "from scratch", then you'll have to make a service that does the equivalent. The BusyBox and ToyBox tools for setting the hostname from a file are hostname -F filename, so you'll have to make a service that runs that command against /etc/hostname or some such file. BusyBox comes with runit's service management toolset, and a simple runit service would be something along the lines of:
#!/bin/sh -e
exec 2>&1
exec hostname -F /etc/hostname
Further reading:
Lennart Poettering et al. (2016). hostnamectl. systemd manual pages. Freedesktop.org.
Jonathan de Boyne Pollard (2017). "set-dynamic-hostname". User commands manual. nosh toolset. Softwares.
Jonathan de Boyne Pollard (2017). "rc.conf amalgamation". nosh Guide. Softwares.
Jonathan de Boyne Pollard (2015). "external formats". nosh Guide. Softwares.
Rob Landley. hostname. Toybox command list. landley.net.
https://unix.stackexchange.com/a/12832/5132
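A read-only illustration of the point that hostname queries the kernel rather than any file: on Linux the same kernel string is also visible through procfs, so these two commands should always agree.

```shell
hostname
cat /proc/sys/kernel/hostname
```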
What's the default file for `hostname`?
1,536,964,576,000
Can I take a Linux kernel and use it with, say, FreeBSD and vice versa (FreeBSD kernel in, say, a Debian)? Is there a universal answer? What are the limitations? What are the obstructions?
No, kernels from different implementations of Unix-style operating systems are not interchangeable, notably because they all present different interfaces to the rest of the system (user space) — their system calls (including ioctl specifics), the various virtual file systems they use... What is interchangeable to some extent, at the source level, is the combination of the kernel and the C library, or rather, the user-level APIs that the kernel and libraries expose (essentially, the view at the layer described by POSIX, without considering whether it is actually POSIX). Examples of this include Debian GNU/kFreeBSD, which builds a Debian system on top of a FreeBSD kernel, and Debian GNU/Hurd, which builds a Debian system on top of the Hurd. This isn’t quite at the level of kernel interchangeability, but there have been attempts to standardise a common application binary interface, to allow binaries to be used on various systems without needing recompilation. One example is the Intel Binary Compatibility Standard, which allows binaries conforming to it to run on any Unix system implementing it, including older versions of Linux with the iBCS 2 layer. I used this in the late 90s to run WordPerfect on Linux. See also How to build a FreeBSD chroot inside of Linux.
Are different Linux/Unix kernels interchangeable?
1,536,964,576,000
Let's say I have a machine (Arago dist) with a user password of 12 alphanumerical characters. When I log myself in via ssh using password authentication, I noticed a couple of days ago, that I can either only input 8 of the password characters or the whole password followed with whatever I'd like. The common outcome in both situations is a successful login. Why is this happening? In this particular case, I don't want to use Public key authentication based on multiple reasons. As an additional info, in this distro the files /etc/shadow and /etc/security/policy.conf are missing. Here the server ssh config: [user@machine:~] cat /etc/ssh/sshd_config # $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $ # This is the sshd server system-wide configuration file. See # sshd_config(5) for more information. # This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin # The strategy used for options in the default sshd_config shipped with # OpenSSH is to specify options with their default value where # possible, but leave them commented. Uncommented options change a # default value. Banner /etc/ssh/welcome.msg #Port 22 #AddressFamily any #ListenAddress 0.0.0.0 #ListenAddress :: # Disable legacy (protocol version 1) support in the server for new # installations. 
In future the default will change to require explicit # activation of protocol 1 Protocol 2 # HostKey for protocol version 1 #HostKey /etc/ssh/ssh_host_key # HostKeys for protocol version 2 #HostKey /etc/ssh/ssh_host_rsa_key #HostKey /etc/ssh/ssh_host_dsa_key # Lifetime and size of ephemeral version 1 server key #KeyRegenerationInterval 1h #ServerKeyBits 1024 # Logging # obsoletes QuietMode and FascistLogging #SyslogFacility AUTH #LogLevel INFO # Authentication: #LoginGraceTime 2m PermitRootLogin no #StrictModes yes #MaxAuthTries 6 #MaxSessions 10 #RSAAuthentication yes #PubkeyAuthentication yes #AuthorizedKeysFile .ssh/authorized_keys # For this to work you will also need host keys in /etc/ssh/ssh_known_hosts #RhostsRSAAuthentication no # similar for protocol version 2 #HostbasedAuthentication no # Change to yes if you don't trust ~/.ssh/known_hosts for # RhostsRSAAuthentication and HostbasedAuthentication #IgnoreUserKnownHosts no # Don't read the user's ~/.rhosts and ~/.shosts files #IgnoreRhosts yes # To disable tunneled clear text passwords, change to no here! #PasswordAuthentication yes #PermitEmptyPasswords no # Change to no to disable s/key passwords #ChallengeResponseAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes #KerberosGetAFSToken no # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. 
#UsePAM no #AllowAgentForwarding yes #AllowTcpForwarding yes #GatewayPorts no #X11Forwarding no #X11DisplayOffset 10 #X11UseLocalhost yes #PrintMotd yes #PrintLastLog yes #TCPKeepAlive yes #UseLogin no UsePrivilegeSeparation no #PermitUserEnvironment no Compression no ClientAliveInterval 15 ClientAliveCountMax 4 #UseDNS yes #PidFile /var/run/sshd.pid #MaxStartups 10 #PermitTunnel no #ChrootDirectory none # no default banner path #Banner none # override default of no subsystems Subsystem sftp /usr/libexec/sftp-server # Example of overriding settings on a per-user basis #Match User anoncvs # X11Forwarding no # AllowTcpForwarding no # ForceCommand cvs server Here the ssh client output: myself@ubuntu:~$ ssh -vvv [email protected] OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 192.168.1.1 [192.168.1.1] port 22. debug1: Connection established. 
debug3: Incorrect RSA1 identifier debug3: Could not load "/home/myself/.ssh/id_rsa" as a RSA1 public key debug1: identity file /home/myself/.ssh/id_rsa type 1 debug1: identity file /home/myself/.ssh/id_rsa-cert type -1 debug1: identity file /home/myself/.ssh/id_dsa type -1 debug1: identity file /home/myself/.ssh/id_dsa-cert type -1 debug1: identity file /home/myself/.ssh/id_ecdsa type -1 debug1: identity file /home/myself/.ssh/id_ecdsa-cert type -1 debug1: identity file /home/myself/.ssh/id_ed25519 type -1 debug1: identity file /home/myself/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.6 debug1: match: OpenSSH_5.6 pat OpenSSH_5* compat 0x0c000000 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "192.168.1.1" from file "/home/myself/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /home/myself/.ssh/known_hosts:26 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],[email protected],aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: 
aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],[email protected],aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none debug2: kex_parse_kexinit: none debug2: kex_parse_kexinit: 
debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: setup hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: setup hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<3072<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: bits set: 1481/3072 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA 91:66:c0:07:e0:c0:df:b7:8e:49:97:b5:36:12:12:ea debug3: load_hostkeys: loading entries for host "192.168.1.1" from file "/home/myself/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /home/myself/.ssh/known_hosts:26 debug3: load_hostkeys: loaded 1 keys debug1: Host '192.168.1.1' is known and matches the RSA host key. debug1: Found key in /home/myself/.ssh/known_hosts:26 debug2: bits set: 1551/3072 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/myself/.ssh/id_rsa (0x802b9240), debug2: key: /home/myself/.ssh/id_dsa ((nil)), debug2: key: /home/myself/.ssh/id_ecdsa ((nil)), debug2: key: /home/myself/.ssh/id_ed25519 ((nil)), debug3: input_userauth_banner debug1: Authentications that can continue: publickey,password,keyboard-interactive debug3: start over, passed a different list publickey,password,keyboard-interactive debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: 
/home/myself/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Trying private key: /home/myself/.ssh/id_dsa debug3: no such identity: /home/myself/.ssh/id_dsa: No such file or directory debug1: Trying private key: /home/myself/.ssh/id_ecdsa debug3: no such identity: /home/myself/.ssh/id_ecdsa: No such file or directory debug1: Trying private key: /home/myself/.ssh/id_ed25519 debug3: no such identity: /home/myself/.ssh/id_ed25519: No such file or directory debug2: we did not send a packet, disable method debug3: authmethod_lookup keyboard-interactive debug3: remaining preferred: password debug3: authmethod_is_enabled keyboard-interactive debug1: Next authentication method: keyboard-interactive debug2: userauth_kbdint debug2: we sent a keyboard-interactive packet, wait for reply debug1: Authentications that can continue: publickey,password,keyboard-interactive debug3: userauth_kbdint: disable: no info_req_seen debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: debug3: authmethod_is_enabled password debug1: Next authentication method: password [email protected]'s password: debug3: packet_send2: adding 64 (len 57 padlen 7 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). Authenticated to 192.168.1.1 ([192.168.1.1]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: fd 3 setting TCP_NODELAY debug3: packet_set_tos: set IP_TOS 0x10 debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. 
debug3: Ignored env XDG_VTNR debug3: Ignored env MANPATH debug3: Ignored env XDG_SESSION_ID debug3: Ignored env CLUTTER_IM_MODULE debug3: Ignored env SELINUX_INIT debug3: Ignored env XDG_GREETER_DATA_DIR debug3: Ignored env COMP_WORDBREAKS debug3: Ignored env SESSION debug3: Ignored env NVM_CD_FLAGS debug3: Ignored env GPG_AGENT_INFO debug3: Ignored env TERM debug3: Ignored env SHELL debug3: Ignored env XDG_MENU_PREFIX debug3: Ignored env VTE_VERSION debug3: Ignored env NVM_PATH debug3: Ignored env GVM_ROOT debug3: Ignored env WINDOWID debug3: Ignored env UPSTART_SESSION debug3: Ignored env GNOME_KEYRING_CONTROL debug3: Ignored env GTK_MODULES debug3: Ignored env NVM_DIR debug3: Ignored env USER debug3: Ignored env LD_LIBRARY_PATH debug3: Ignored env LS_COLORS debug3: Ignored env XDG_SESSION_PATH debug3: Ignored env XDG_SEAT_PATH debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env SESSION_MANAGER debug3: Ignored env DEFAULTS_PATH debug3: Ignored env XDG_CONFIG_DIRS debug3: Ignored env PATH debug3: Ignored env DESKTOP_SESSION debug3: Ignored env QT_IM_MODULE debug3: Ignored env QT_QPA_PLATFORMTHEME debug3: Ignored env NVM_NODEJS_ORG_MIRROR debug3: Ignored env GVM_VERSION debug3: Ignored env JOB debug3: Ignored env PWD debug3: Ignored env XMODIFIERS debug3: Ignored env GNOME_KEYRING_PID debug1: Sending env LANG = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug3: Ignored env gvm_pkgset_name debug3: Ignored env GDM_LANG debug3: Ignored env MANDATORY_PATH debug3: Ignored env IM_CONFIG_PHASE debug3: Ignored env COMPIZ_CONFIG_PROFILE debug3: Ignored env GDMSESSION debug3: Ignored env SESSIONTYPE debug3: Ignored env XDG_SEAT debug3: Ignored env HOME debug3: Ignored env SHLVL debug3: Ignored env GOROOT debug3: Ignored env LANGUAGE debug3: Ignored env GNOME_DESKTOP_SESSION_ID debug3: Ignored env DYLD_LIBRARY_PATH debug3: Ignored env gvm_go_name debug3: Ignored env LOGNAME debug3: Ignored env GVM_OVERLAY_PREFIX debug3: Ignored env COMPIZ_BIN_PATH debug3: Ignored 
env XDG_DATA_DIRS debug3: Ignored env QT4_IM_MODULE debug3: Ignored env DBUS_SESSION_BUS_ADDRESS debug3: Ignored env PrlCompizSessionClose debug3: Ignored env PKG_CONFIG_PATH debug3: Ignored env GOPATH debug3: Ignored env NVM_BIN debug3: Ignored env LESSOPEN debug3: Ignored env NVM_IOJS_ORG_MIRROR debug3: Ignored env INSTANCE debug3: Ignored env TEXTDOMAIN debug3: Ignored env XDG_RUNTIME_DIR debug3: Ignored env DISPLAY debug3: Ignored env XDG_CURRENT_DESKTOP debug3: Ignored env GTK_IM_MODULE debug3: Ignored env LESSCLOSE debug3: Ignored env TEXTDOMAINDIR debug3: Ignored env GVM_PATH_BACKUP debug3: Ignored env COLORTERM debug3: Ignored env XAUTHORITY debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel_input_status_confirm: type 99 id 0 debug2: PTY allocation request accepted on channel 0 debug2: channel 0: rcvd adjust 2097152 debug2: channel_input_status_confirm: type 99 id 0 debug2: shell request accepted on channel 0 Here the sshd server output: debug1: sshd version OpenSSH_5.6p1 debug1: read PEM private key done: type RSA debug1: private host key: #0 type 1 RSA debug1: read PEM private key done: type DSA debug1: private host key: #1 type 2 DSA debug1: rexec_argv[0]='/usr/sbin/sshd' debug1: rexec_argv[1]='-d' Set /proc/self/oom_adj from 0 to -17 debug1: Bind to port 22 on 0.0.0.0. Server listening on 0.0.0.0 port 22. socket: Address family not supported by protocol debug1: Server will not fork when running in debugging mode. 
debug1: rexec start in 4 out 4 newsock 4 pipe -1 sock 7 debug1: inetd sockets after dupping: 3, 3 Connection from 192.168.1.60 port 53445 debug1: Client protocol version 2.0; client software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8 debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug1: list_hostkey_types: ssh-rsa,ssh-dss debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: client->server aes128-ctr hmac-md5 none debug1: kex: server->client aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST received debug1: SSH2_MSG_KEX_DH_GEX_GROUP sent debug1: expecting SSH2_MSG_KEX_DH_GEX_INIT debug1: SSH2_MSG_KEX_DH_GEX_REPLY sent debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: KEX done debug1: userauth-request for user user service ssh-connection method none debug1: attempt 0 failures 0 debug1: userauth_send_banner: sent Failed none for user from 192.168.1.60 port 53445 ssh2 debug1: userauth-request for user user service ssh-connection method publickey debug1: attempt 1 failures 0 debug1: test whether pkalg/pkblob are acceptable debug1: temporarily_use_uid: 0/0 (e=0/0) debug1: trying public key file //.ssh/authorized_keys debug1: Could not open authorized keys '//.ssh/authorized_keys': No such file or directory debug1: restore_uid: 0/0 debug1: temporarily_use_uid: 0/0 (e=0/0) debug1: trying public key file //.ssh/authorized_keys2 debug1: Could not open authorized keys '//.ssh/authorized_keys2': No such file or directory debug1: restore_uid: 0/0 Failed publickey for user from 192.168.1.60 port 53445 ssh2 debug1: userauth-request for user user service ssh-connection method keyboard-interactive debug1: attempt 2 failures 1 debug1: keyboard-interactive devs debug1: auth2_challenge: user=user devs= debug1: kbdint_alloc: devices '' Failed keyboard-interactive for user from 
192.168.1.60 port 53445 ssh2 debug1: Unable to open the btmp file /var/log/btmp: No such file or directory debug1: userauth-request for user user service ssh-connection method password debug1: attempt 3 failures 2 Could not get shadow information for user Accepted password for user from 192.168.1.60 port 53445 ssh2 debug1: Entering interactive session for SSH2. debug1: server_init_dispatch_20 debug1: server_input_channel_open: ctype session rchan 0 win 1048576 max 16384 debug1: input_session_request debug1: channel 0: new [server-session] debug1: session_new: session 0 debug1: session_open: channel 0 debug1: session_open: session 0: link with channel 0 debug1: server_input_channel_open: confirm session debug1: server_input_global_request: rtype [email protected] want_reply 0 debug1: server_input_channel_req: channel 0 request pty-req reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req pty-req debug1: Allocating pty. debug1: session_pty_req: session 0 alloc /dev/pts/1 debug1: server_input_channel_req: channel 0 request env reply 0 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req env debug1: server_input_channel_req: channel 0 request shell reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req shell debug1: Setting controlling tty using TIOCSCTTY. /etc/pam.d/sshd: # PAM configuration for the Secure Shell service # Read environment variables from /etc/environment and # /etc/security/pam_env.conf. auth required pam_env.so # [1] # Standard Un*x authentication. auth include common-auth # Disallow non-root logins when /etc/nologin exists. account required pam_nologin.so # Uncomment and edit /etc/security/access.conf if you need to set complex # access limits that are hard to express in sshd_config. # account required pam_access.so # Standard Un*x authorization. 
account include common-accountt # Standard Un*x session setup and teardown. session include common-session # Print the message of the day upon successful login. session optional pam_motd.so # [1] # Print the status of the user's mailbox upon successful login. session optional pam_mail.so standard noenv # [1] # Set up user limits from /etc/security/limits.conf. session required pam_limits.so # Standard Un*x password updating. password include common-password
In the chat, it turned out the system was using traditional (non-shadow) password storage and the traditional Unix password hashing algorithm. Both are poor choices in today's security environment. Since the traditional password hashing algorithm only stores and compares the first 8 characters of the password, that explains the behavior noticed in the original question. The posted sshd output includes the line:

Could not get shadow information for user

I would assume this means at least sshd (or possibly the PAM Unix password storage library) on this system includes shadow password functionality, but for some reason, the system vendor has chosen not to use it.
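The 8-character truncation mentioned above can be sketched quickly. The legacy_hash helper below is purely illustrative: it substitutes sha256sum for the real DES-based crypt(3) hash, and only the truncation step models the actual mechanism.

```shell
# Illustration only (NOT the real crypt(3) algorithm): traditional
# DES-based crypt uses just the first 8 characters of the password,
# so anything typed after the 8th character is silently ignored.
# sha256sum stands in for the legacy DES-based hash here.
legacy_hash() { printf '%s' "$(printf '%s' "$1" | cut -c1-8)" | sha256sum; }

legacy_hash 'supersecret'
legacy_hash 'supersecXYZ'   # same first 8 characters -> identical "hash"
```

Both invocations print the same digest, which is exactly why a partial password that shares the first 8 characters still logs you in on such a system.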
ssh - why can I login with partial passwords? [duplicate]
1,536,964,576,000
I am using OpenStack Cloud and using LVM on RHEL 7 to manage volumes. As per my use case, I should be able to detach and attach these volumes to different instances. While updating fstab, I have used defaults,nofail for now but I am not sure what exactly I should be using. I am aware of these options: rw, nofail, noatime, discard, defaults. But I don't know how to use them. What should be the ideal configuration for my use case?
As said by @ilkkachu, if you take a look at the mount(8) manpage, all your doubts should go away. Quoting the manpages:

-w, --rw, --read-write
Mount the filesystem read/write. This is the default. A synonym is -o rw.
Means: not needed at all, since rw is the default and is already part of the defaults option.

nofail
Do not report errors for this device if it does not exist.
Means: if the device is not present when you boot and it is mounted via fstab, no errors will be reported and the boot continues. You will need to know whether a disk can safely be ignored when it is not mounted. Pretty useful for USB drives, but I see little point in using it on a server...

noatime
Do not update inode access times on this filesystem (e.g., for faster access on the news spool to speed up news servers).
Means: no read operation is a "pure" read operation on a filesystem. Even if you only cat file, for example, a little write operation will update the time the inode of this file was last accessed. noatime is pretty useful in some situations (like caching servers), but it can be dangerous if used with sync technologies like Dropbox. I'm no one to judge here what is best for you, whether noatime is set or ignored...

discard/nodiscard
Controls whether ext4 should issue discard/TRIM commands to the underlying block device when blocks are freed. This is useful for SSD devices and sparse/thinly-provisioned LUNs, but it is off by default until sufficient testing has been done.
Means: the TRIM feature of SSDs. Take your time to read up on this one, and check whether your SSD supports the feature (pretty much all modern SSDs do). hdparm -I /dev/sdx | grep "TRIM supported" will tell you if TRIM is supported on your SSD. As of today, you could achieve better performance and data health with periodic trimming instead of continuous trimming in your fstab. There is even an in-kernel device blacklist for continuous trimming, since it can cause data corruption due to non-queued operations.

defaults
Use default options: rw, suid, dev, exec, auto, nouser, and async.

tl;dr: on your question, rw can be removed (defaults already implies rw), nofail is up to you, noatime is up to you, and discard depends on your hardware's features.
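Putting the tl;dr together for a detachable cloud volume, an entry could look like the following. The device path and mount point are made-up placeholders; adjust them (and drop noatime if you decide you want access times) for your system:

```
/dev/mapper/vg_data-lv_vol1  /mnt/vol1  ext4  defaults,nofail,noatime  0  2
```

With nofail in place, an instance will still boot cleanly when the volume happens to be detached at the time.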
When and where to use rw,nofail,noatime,discard,defaults?
1,536,964,576,000
AFAIK, the NIC receives all packets from the wire in a Local Area Network but rejects those packets whose destination address is not equal to its IP. I want to develop an application that monitors the internet usage of users. Each user has a fixed IP address. I and some other people are connected to a DES-108 8-Port Fast Ethernet Unmanaged Desktop Switch. As said earlier, I want to capture all the traffic from all users, not only those packets that belong to me. How should I force my NIC or other components to receive all of the packets?
AFAIK, the NIC receives all packets from the wire in a Local Area Network but rejects those packets whose destination address is not equal to its IP.

Correction: it rejects those packets whose destination MAC address is not equal to its MAC address (or a broadcast/multicast address, or any additional addresses in its filter). Packet capture utilities can trivially put the network device into promiscuous mode, which is to say that the above check is bypassed and the device accepts everything it receives. In fact, this is usually the default: with tcpdump, you have to specify the -p option in order to not do it. The more important issue is whether the packets you are interested in are even being carried down the wire to your sniffing port at all. Since you are using an unmanaged Ethernet switch, they almost certainly are not. The switch is deciding to prune packets that don't belong to you from your port before your network device can hope to see them. You need to connect to a specially configured mirroring or monitoring port on a managed Ethernet switch in order to do this.
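The MAC-level accept/drop decision can be modeled as a toy function. This is illustrative only: real filtering happens in the NIC hardware/driver, and multicast handling is more involved than shown here.

```shell
# Toy model of a NIC's layer-2 filter (illustration, not real driver code).
nic_accepts() {  # usage: nic_accepts DEST_MAC OWN_MAC [promisc]
    if [ "$3" = "promisc" ]; then
        echo accept; return       # promiscuous mode: take everything
    fi
    case "$1" in
        "$2")              echo accept ;;   # unicast addressed to our MAC
        ff:ff:ff:ff:ff:ff) echo accept ;;   # broadcast
        *)                 echo drop   ;;   # filtered before the OS sees it
    esac
}

nic_accepts 00:11:22:33:44:55 aa:bb:cc:dd:ee:ff           # -> drop
nic_accepts 00:11:22:33:44:55 aa:bb:cc:dd:ee:ff promisc   # -> accept
nic_accepts ff:ff:ff:ff:ff:ff aa:bb:cc:dd:ee:ff           # -> accept
```

Promiscuous mode simply forces the first branch, which is what tcpdump does by default; but as noted above, it only helps if the switch delivers the frames to your port in the first place.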
How to capture all incoming packets to NIC even those packets are not belonging to me
1,536,964,576,000
I want to alter the TCP RTO (retransmission timeout) value for a connection, and some reading I have done suggests that I could do this, but does not reveal where and how to change it. I have looked at the /proc/sys/net/ipv4 variables, but none of the variables is related to RTO. I would appreciate it if someone can tell me how to alter this value.
The reason you can't alter the RTO specifically is because it is not a static value. Instead (except for the initial SYN, naturally) it is based on the RTT (Round Trip Time) for each connection. Actually, it is based on a smoothed version of RTT and the RTT variance with some constants thrown into the mix. Hence, it is a dynamic, calculated value for each TCP connection, and I highly recommend this article which goes into more detail on the calculation and RTO in general. Also relevant is RFC 6298 which states (among a lot of other things): Whenever RTO is computed, if it is less than 1 second, then the RTO SHOULD be rounded up to 1 second. Does the kernel always set RTO to 1 second then? Well, with Linux you can show the current RTO values for your open connections by running the ss -i command: State Recv-Q Send-Q Local Address:Port Peer Address:Port ESTAB 0 0 10.0.2.15:52861 216.58.219.46:http cubic rto:204 rtt:4/2 cwnd:10 send 29.2Mbps rcv_space:14600 ESTAB 0 0 10.0.2.15:ssh 10.0.2.2:52586 cubic rto:201 rtt:1.5/0.75 ato:40 cwnd:10 send 77.9Mbps rcv_space:14600 ESTAB 0 0 10.0.2.15:52864 216.58.219.46:http cubic rto:204 rtt:4.5/4.5 cwnd:10 send 26.0Mbps rcv_space:14600 The above is the output from a VM which I am logged into with SSH and has a couple of connections open to google.com. As you can see, the RTO is in fact set to 200-ish (milliseconds). You will note that is not rounded to the 1 second value from the RFC, and you may also think that it's a little high. That's because there are min (200 milliseconds) and max (120 seconds) bounds in play when it comes to RTO for Linux (there is a great explanation of this in the article I linked above). So, you can't alter the RTO value directly, but for lossy networks (like wireless) you can try tweaking F-RTO (this may already be enabled depending on your distro). 
There are actually two related options related to F-RTO that you can tweak (good summary here): net.ipv4.tcp_frto net.ipv4.tcp_frto_response Depending on what you are trying to optimize for, these may or may not be useful. EDIT: following up on the ability to tweak the rto_min/max values for TCP from the comments. You can't change the global minimum RTO for TCP (as an aside, you can do it for SCTP - those are exposed in sysctl), but the good news is that you can tweak the minimum value of the RTO on a per-route basis. Here's my routing table on my CentOS VM: ip route 10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 169.254.0.0/16 dev eth0 scope link metric 1002 default via 10.0.2.2 dev eth0 I can change the rto_min value on the default route as follows: ip route change default via 10.0.2.2 dev eth0 rto_min 5ms And now, my routing table looks like this: ip route 10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 169.254.0.0/16 dev eth0 scope link metric 1002 default via 10.0.2.2 dev eth0 rto_min lock 5ms Finally, let's initiate a connection and check out ss -i to see if this has been respected: ss -i State Recv-Q Send-Q Local Address:Port Peer Address:Port ESTAB 0 0 10.0.2.15:ssh 10.0.2.2:50714 cubic rto:201 rtt:1.5/0.75 ato:40 cwnd:10 send 77.9Mbps rcv_space:14600 ESTAB 0 0 10.0.2.15:39042 216.58.216.14:http cubic rto:15 rtt:5/2.5 cwnd:10 send 23.4Mbps rcv_space:14600 Success! The rto on the HTTP connection (after the change) is 15ms, whereas the SSH connection (before the change) is 200+ as before. I actually like this approach - it allows you to set the lower value on appropriate routes rather than globally where it might screw up other traffic. Similarly (see the ip man page) you can tweak the initial rtt estimate and the initial rttvar for the route (used when calculating the dynamic RTO). While it's not a complete solution in terms of tweaking, I think most of the important pieces are there. 
You can't tweak the max setting, but I think that is not going to be as useful generally in any case.
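The RTO arithmetic described earlier in this answer (smoothed RTT, RTT variance, K = 4, plus the Linux-style min/max clamp) can be sketched with awk. This is a simplified model for illustration, not the kernel's actual implementation, which uses fixed-point arithmetic and additional heuristics:

```shell
# Sketch of the RFC 6298 RTO update, values in seconds.
rto_for() {  # usage: rto_for "rtt1 rtt2 ..."
    echo "$1" | awk '{
        alpha = 0.125; beta = 0.25; k = 4
        for (i = 1; i <= NF; i++) {
            r = $i
            if (i == 1) { srtt = r; rttvar = r / 2 }   # first sample seeds state
            else {
                rttvar = (1 - beta) * rttvar + beta * (srtt > r ? srtt - r : r - srtt)
                srtt   = (1 - alpha) * srtt + alpha * r
            }
        }
        rto = srtt + k * rttvar
        if (rto < 0.2) rto = 0.2    # Linux-style floor (~200 ms)
        if (rto > 120) rto = 120    # Linux-style ceiling (120 s)
        printf "%.3f\n", rto
    }'
}

rto_for "0.004"      # 4 ms RTT on a fast LAN -> 0.200 (clamped to the floor)
rto_for "0.1 0.3"    # -> 0.475
```

This is why ss -i shows RTOs around 200 ms for low-latency connections: the computed value would be far smaller, but the minimum bound wins.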
Changing the TCP RTO value in Linux
1,536,964,576,000
I have data in multiple files. I want to find some text that matches in all files. Can I use the grep command for that? If yes then how?
If you do not know where exactly the files are located, but know their names, you can use find: find . \( -name "filename1" -o -name "filename2" \) -exec grep "<grepstatement>" '{}' \; -print Assuming that the files are in this directory somewhere.
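When you do know where the files are, grep itself takes multiple file operands directly; with more than one file it prefixes each match with the file name. The file names and pattern below are made up for the demonstration:

```shell
# Two sample files to search across:
printf 'alpha\nbeta\n'  > file1.txt
printf 'beta\ngamma\n'  > file2.txt

# Search both files at once; matches are prefixed with the file name:
grep 'beta' file1.txt file2.txt
# -> file1.txt:beta
# -> file2.txt:beta

# Search every file under the current directory; -l prints only the
# names of the files that match:
grep -rl 'beta' .
```

grep -r (recursive) is usually the simplest answer when the files live in one directory tree; fall back to find -exec grep, as above, when you need to select files by name first.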
Use "grep" to match text in multiple files
1,536,964,576,000
I read the descriptions of these commands in a book:

passwd: Changes the password for an existing user.
chpasswd: Reads a file of login name and password pairs, and updates the passwords.

It seems these commands do the same job. Is there a difference between them?

EDIT: I want to learn which file(s) change when we use them. Do they change the same file or different files? If they change different file(s), what are they?
From man chpasswd: 'This command is intended to be used in a large system environment where many accounts are created at a single time.' passwd is (in my experience) normally used interactively for a single user. As for the files: both end up updating the same place, the password hash stored in /etc/shadow (or in /etc/passwd on old systems without shadow passwords).
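To make the difference in input interfaces concrete: chpasswd reads "login:password" pairs, one per line, on standard input, so many accounts can be updated in one shot. The user names and passwords below are made up, and the privileged commands are shown only as comments since they need root:

```shell
# Both commands ultimately rewrite the hash field in /etc/shadow;
# they differ in how the input is supplied.
printf 'alice:S3cret1\nbob:S3cret2\n' > newpasswords.txt
cat newpasswords.txt

# As root you would then run:   chpasswd < newpasswords.txt
# passwd, by contrast, prompts interactively for one account:  passwd alice
```

This batch-oriented interface is exactly what the manpage means by "large system environment".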
What's the difference between 'passwd' and 'chpasswd'?
1,536,964,576,000
I tried to run an example Java program using the following command line. However, I do not know what the trailing part < /dev/null & is used for: java -cp /home/weka.jar weka.classifiers.trees.J48 -t train_file >& log < /dev/null &
< /dev/null is used to instantly send EOF to the program, so that it doesn't wait for input (/dev/null, the null device, is a special file that discards all data written to it, but reports that the write operation succeeded, and provides no data to any process that reads from it, yielding EOF immediately). & is a special type of command separator used to background the preceding process. Without knowing the program being called, I do not directly know why it is required to run it in this way.
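You can see both properties of /dev/null directly from the shell, reading returns no bytes (immediate EOF), and writes succeed but the data is discarded:

```shell
# Reading from /dev/null yields EOF immediately, so there are no bytes:
wc -c < /dev/null               # -> 0

# Writing to /dev/null succeeds, but the data vanishes:
echo "discarded" > /dev/null
echo "write exit status: $?"    # -> write exit status: 0
```

So in the original command line, < /dev/null guarantees the backgrounded Java process gets EOF if it ever tries to read stdin, instead of hanging or being stopped while waiting for terminal input.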
the usage of < /dev/null & in the command line
1,536,964,576,000
If you run ls -l on a file that contains one letter, it will list as 2B in size. If your file system is in 4k blocks, I thought it rounded files up to the block size? Is it because ls -l actually reads the byte count from the inode? In what circumstances do you get rounded up to block answers vs actual byte count answers in Linux 2.6 Kernel GNU utils?
I guess you got that one letter into the file with echo a > file or vim file, which means you'll have that letter and an additional newline in it (two characters, thus two bytes). ls -l shows the file size in bytes, not blocks (to be more specific: the file length):

$ echo a > testfile
$ ls -l testfile
-rw-r--r-- 1 user user 2 Apr 28 22:08 testfile
$ cat -A testfile
a$

(note that cat -A displays newlines as $ characters)

In contrast to ls -l, du will show the real size occupied on disk:

$ du testfile
4

(actually, du shows the size in 1kiB units, so here the size is 4×1024 bytes = 4096 bytes = 4 kiB, which is the block size on this file system)

To have ls show this, you'll have to use the -s option instead of/in addition to -l:

$ ls -ls testfile
4 -rw-r--r-- 1 user user 2 Apr 28 22:08 testfile

The first column is the allocated size, again in units of 1kiB. The unit can be changed by specifying --block-size, e.g.

$ ls -ls --block-size=1 testfile
4096 -rw-r--r-- 1 aw aw 2 Apr 28 22:08 testfile
EXT3: If block size is 4K, why does ls -l show file sizes below that?
1,536,964,576,000
I am trying to upgrade my ubuntu server 22.10 Kinetic Kudu to 23.10 mantic. I was able to upgrade my Kinetic release up to the latest updates by using this sources.list: # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://old-releases.ubuntu.com/ubuntu kinetic main restricted # deb-src http://old-releases.ubuntu.com/ubuntu kinetic main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://old-releases.ubuntu.com/ubuntu kinetic-updates main restricted # deb-src http://old-releases.ubuntu.com/ubuntu kinetic-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://old-releases.ubuntu.com/ubuntu kinetic universe # deb-src http://old-releases.ubuntu.com/ubuntu kinetic universe deb http://old-releases.ubuntu.com/ubuntu kinetic-updates universe # deb-src http://old-releases.ubuntu.com/ubuntu kinetic-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://old-releases.ubuntu.com/ubuntu kinetic multiverse # deb-src http://old-releases.ubuntu.com/ubuntu kinetic multiverse deb http://old-releases.ubuntu.com/ubuntu kinetic-updates multiverse # deb-src http://old-releases.ubuntu.com/ubuntu kinetic-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. 
## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://old-releases.ubuntu.com/ubuntu kinetic-backports main restricted universe multiverse # deb-src http://old-releases.ubuntu.com/ubuntu kinetic-backports main restricted universe multiverse deb http://old-releases.ubuntu.com/ubuntu kinetic-security main restricted # deb-src http://old-releases.ubuntu.com/ubuntu kinetic-security main restricted deb http://old-releases.ubuntu.com/ubuntu kinetic-security universe When i execute the do-release-upgrade, i get the following message: An upgrade from 'kinetic' to 'mantic' is not supported with this tool Is it indeed impossible to perform the upgrade path i am trying to do?
Finally I was able to do the upgrade with the following steps:

1. Modify your current /etc/apt/sources.list and replace the URLs starting with http://archive... by http://old-releases
2. Once done, execute sudo apt update, then sudo apt upgrade and sudo apt dist-upgrade
3. Then reboot
4. Modify the /etc/apt/sources.list and put back archive instead of old-releases
5. Modify the /etc/apt/sources.list and replace all occurrences of kinetic by lunar
6. Then perform sudo apt update, sudo apt upgrade and sudo apt dist-upgrade. This will ensure that you upgraded to 23.04
7. Then reboot
8. Then do a sudo do-release-upgrade to upgrade to 23.10
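The codename swap (kinetic to lunar) in the steps above can be done in one sed call. It is shown here on a scratch copy with made-up repository lines; on the real system you would edit /etc/apt/sources.list as root, after taking a backup:

```shell
# Build a scratch sources.list with the old codename:
cat > sources.list.demo <<'EOF'
deb http://archive.ubuntu.com/ubuntu kinetic main restricted
deb http://archive.ubuntu.com/ubuntu kinetic-updates universe
EOF

# Rewrite every occurrence of the release codename in place:
sed -i 's/kinetic/lunar/g' sources.list.demo
cat sources.list.demo
```

On the live file that would be: sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak && sudo sed -i 's/kinetic/lunar/g' /etc/apt/sources.list.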
Upgrading Ubuntu Server 22.10 Kinetic Kudu to 23.10 Mantic
1,536,964,576,000
I am trying to setup a passwordless login from machineA to machineB for my user david which already exits. This is what I did to generate the authentication keys: david@machineA:~$ ssh-keygen -t rsa ........ david@machineB:~$ ssh-keygen -t rsa ........ After that I copied id_rsa.pub (/home/david/.ssh/id_rsa.pub) key of machineA into machineB authorized_keys file (/home/david/.ssh/authorized_keys) key. And then I went back to machineA login screen and ran below command and it worked fine without any issues. So I was able to login into machineB as david user without asking for any password. david@machineA:~$ ssh david@machineB Question: Now I created a new user on machineA and machineB both by running this command only useradd golden. And now I want to ssh passwordless from this golden user into machineB from machineA. I did same exact step as above but it doesn't work. david@machineA:~$ sudo su - golden golden@machineA:~$ ssh-keygen -t rsa ........ david@machineB:~$ sudo su - golden golden@machineB:~$ ssh-keygen -t rsa ........ And then I copied id_rsa.pub key /home/golden/.ssh/id_rsa.pub for golden user from machineA to machineB authorized_keys file /home/golden/.ssh/authorized_keys. And when I try to ssh, it gives me: golden@machineA:~$ ssh golden@machineB Connection closed by 23.14.23.10 What is wrong? It doesn't work only for golden user which I created manually through this command useradd. I am running Ubuntu 14.04. Is there any settings that I need to enable for this manual user which I created? 
In the machineB auth.log file, below is what I am seeing when I run this command from machineA ssh -vvv golden@machineB to login Jan 3 17:56:59 machineB sshd[25664]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key Jan 3 17:56:59 machineB sshd[25664]: pam_access(sshd:account): access denied for user `golden' from `machineA' Jan 3 17:56:59 machineB sshd[25664]: pam_sss(sshd:account): Access denied for user golden: 10 (User not known to the underlying authentication module) Jan 3 17:56:59 machineB sshd[25664]: fatal: Access denied for user golden by PAM account configuration [preauth] Is there anything I am missing? Below is how my directory structure looks like: golden@machineA:~$ pwd /home/golden golden@machineA:~$ ls -lrtha total 60K -rw------- 1 golden golden 675 Nov 22 12:26 .profile -rw------- 1 golden golden 3.6K Nov 22 12:26 .bashrc -rw------- 1 golden golden 220 Nov 22 12:26 .bash_logout drwxrwxr-x 2 golden golden 4.0K Nov 22 12:26 .parallel drwxr-xr-x 2 golden golden 4.0K Nov 22 12:34 .vim drwxr-xr-x 7 root root 4.0K Dec 22 11:56 .. -rw------- 1 golden golden 17K Jan 5 12:51 .viminfo drwx------ 2 golden golden 4.0K Jan 5 12:51 .ssh drwx------ 5 golden golden 4.0K Jan 5 12:51 . -rw------- 1 golden golden 5.0K Jan 5 13:14 .bash_history golden@machineB:~$ pwd /home/golden golden@machineB:~$ ls -lrtha total 56K -rw------- 1 golden golden 675 Dec 22 15:10 .profile -rw------- 1 golden golden 3.6K Dec 22 15:10 .bashrc -rw------- 1 golden golden 220 Dec 22 15:10 .bash_logout drwxr-xr-x 7 root root 4.0K Jan 4 16:43 .. drwx------ 2 golden golden 4.0K Jan 5 12:51 .ssh -rw------- 1 golden golden 9.9K Jan 5 12:59 .viminfo drwx------ 6 golden golden 4.0K Jan 5 12:59 . -rw------- 1 golden golden 4.6K Jan 5 13:10 .bash_history Update: In machineA: cat /etc/passwd | grep golden golden:x:1001:1001::/home/golden:/bin/bash In machineB: cat /etc/passwd | grep golden golden:x:1001:1001::/home/golden:/bin/bash
The issue is with the PAM stack configuration. Your host is configured with pam_access, and the default configuration is not allowing external/SSH access for the new user golden, even though your keys are set up properly. Adding the golden user into /etc/security/access.conf as below fixed the issue.

+:golden:ALL

To see more information, read man access.conf, which explains each field of this file. Look at the examples section to understand the order and meanings of LOCAL, ALL etc.
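For context, access.conf rules are evaluated top to bottom and the first match wins, so the position of the + line matters. A sketch of such a file (the commented deny rule is only an example of the kind of catch-all tail some hardened setups use, not something taken from this system):

```
# /etc/security/access.conf  --  format:  permission : users/groups : origins
+ : golden : ALL             # allow golden from any origin
# - : ALL EXCEPT root : ALL  # example catch-all deny rule; keep allows above it
```

If a deny-all rule like the commented one exists, the +:golden:ALL entry must appear before it to take effect.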
Access denied for a particular user by PAM account configuration
1,536,964,576,000
If I run iostat I get sda0, sda1; I sort of know that those are the "hard disks". Then there are dm-0 and dm-1? I wanted to check the documentation. I checked http://linux.die.net/man/1/iostat; it's not mentioned at all. Also my mount command shows this:

/dev/mapper/VolGroup-lv_root / ext4 usrjquota=quota.user,jqfmt=vfsv0 1 1
UUID=1450c2bf-d431-4621-9e8e-b0be57fd79b6 /boot ext4 defaults 1 2
/dev/mapper/VolGroup-lv_home /home ext4 usrjquota=quota.user,jqfmt=vfsv0 1 2
/dev/mapper/VolGroup-lv_swap swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/usr/tmpDSK /tmp ext3 defaults,noauto 0 0
/dev/sdb1 /home2 auto auto,defaults 0 0
/dev/sdc1 /home3 auto auto,defaults 0 0
/dev/sdd1 /home4 auto auto,defaults 0 0

It looks like dm-0 corresponds to one of /dev/mapper/VolGroup-lv. Not sure which one.
sda0 and sda1 are the partitions of the hard drive (sda) attached to your machine. dm-0 and dm-1 are the Logical Volume Manager's (LVM) logical volumes you would have created while installing or configuring your machine. You can read more about it on Wikipedia.
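To find out which logical volume a given dm-N corresponds to, you can list the device-mapper nodes: the entries under /dev/mapper are symlinks whose targets name the dm devices. The output depends entirely on your system; the fallback message covers machines with no device-mapper devices at all:

```shell
# Each /dev/mapper/<vg>-<lv> symlink points at its ../dm-N node:
ls -l /dev/mapper/ 2>/dev/null || echo "no device-mapper devices present"

# lsblk draws the same mapping as a tree, and "dmsetup ls" (as root)
# queries device-mapper directly.
```

On the system in the question you would expect lines like VolGroup-lv_root -> ../dm-0, which answers the "not sure which one" part directly.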
Where is the documentation for what sda, sdb, dm-0, dm-1 mean
1,536,964,576,000
I have two separate directories. The user loads a file into the first. There's a cronjob running in the background which copies the files every 5 minutes over to the second directory. What happens if the user has not completed his upload and the cronjob copies the files? Note that the two directories are owned by different users; the cronjob is performed as root.
cp does not know about opened files. So if the first user uploads a big file and the cronjob (or any other process) starts copying this file, it will only copy as much as was already written. You can think about it this way: cp makes a copy of what is currently on the disk, no matter whether the file is complete. Otherwise, you could not copy log files, for example.
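You can reproduce the scenario in a shell: the copy contains exactly the bytes that were on disk at the moment cp ran, even though the "upload" was still in progress. File names here are made up:

```shell
printf 'first half,' > upload.dat     # upload in progress...
cp upload.dat copy.dat                # ...cron job copies right now
printf 'second half' >> upload.dat    # ...upload finishes afterwards

cat copy.dat; echo                    # -> first half,
cat upload.dat; echo                  # -> first half,second half
```

The usual fix is to make uploads atomic: upload to a temporary name (or a separate directory) and mv the file into place once it is complete, since a rename on the same filesystem is atomic and the cron job will then only ever see finished files.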
How does "cp" handle open files?
1,536,964,576,000
I am running KVM on RHEL6, and I have created several virtual machines in it. Issuing the ifconfig command on the host system command line shows a list of virbr0, virbr1... and vnet0, vnet2... Are they the IP addresses of the guest OS? What are the differences between virbr# and vnet#?
Those are network interfaces, not IP addresses. A network interface can have packets from any protocol exchanged on it, including IPv4 or IPv6, in which case it can be given one or more IP addresses. virbr are bridge interfaces. They are virtual in that there's no network interface card associated with them. Their role is to act like a real bridge or switch, that is, to switch packets (at layer 2) between the interfaces (real or otherwise) that are attached to them, just like a real Ethernet switch would. You can assign an IP address to such a device, which basically gives the host an IP address on the subnet which the bridge connects to. It will then use the MAC address of one of the interfaces attached to the bridge. The fact that their name starts with vir doesn't make them any different from any other bridge interface; it's just that those have been created by libvirt, which reserves that namespace for bridge interfaces. vnet interfaces are another type of virtual interface, called tap interfaces. They are attached to a process (in this case the process running the qemu-kvm emulator). What the process writes to that interface will appear as having been received on that interface by the host, and what the host transmits on that interface is available for reading by that process. qemu typically uses it for its virtualized network interface in the guest. Typically, a vnet will be added to a bridge interface, which amounts to plugging the VM into a switch.
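To see which vnet interfaces are plugged into which virbr bridge, you can run brctl show (from bridge-utils) or read sysfs directly. A sketch using only /sys, which works for any bridge, libvirt-created or not:

```shell
# For every bridge interface on the host, list the ports (real NICs,
# vnetN tap devices, ...) currently attached to it.
for br in /sys/class/net/*/bridge; do
    [ -d "$br" ] || continue
    dev=${br%/bridge}
    ports=$(ls "$dev/brif" 2>/dev/null)
    printf '%s: %s\n' "${dev##*/}" "${ports:-<no ports attached>}"
done
```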
What is the difference between virbr# and vnet#?
1,536,964,576,000
Is it necessary to defrag drives in Ubuntu? If so, how do I do it and how often should it be done?
Defragmenting is (or was) recommended under Windows because it had a poor filesystem implementation. Simple techniques such as allocating blocks for files in groups rather than one by one keep fragmentation down under Linux. Typical Linux filesystems only gain significantly from defragmentation on a nearly-full filesystem or with unusual write patterns. Most users don't need it, though heavy file sharers could benefit from it (filling a file in little bits in the middle is not the case ext3 was optimized for; if you're concerned about fragmentation and your bittorrent or other file sharing client offers that option, tell it to preallocate all files before starting to download). At the moment, there is no production-ready defragmentation tool for the common filesystems on Linux (ext3 and ext4). If you installed Ubuntu 9.10 or newer, or converted an existing installation, you have an ext4 filesystem, which supports extents, further reducing fragmentation. For those cases where fragmentation does arise, an ext4 defragmentation tool is in the works, but it's not ready yet. Note that in general, the Linux philosophy and especially the Ubuntu philosophy is that common maintenance tasks should happen automatically without your needing to intervene.
How can I de-fragment a drive using Ubuntu?
1,536,964,576,000
When I issue a command to change my password like this: sudo passwd huahsin The system prompts me: Current Kerberos password: I don't know what I have done to the system configuration. How can I eliminate this Kerberos prompt when I change my password?
This issue seems likely to be a problem with the installation of an Active Directory (AD) integration product for authentication called LikeWise. This product is no longer available, to my knowledge. You can read more about it here in this article titled: How to join Linux server into Active Directory on SBS 2008 network. It's also listed here in the Wikipedia page on products that support SMB, as well as here on the Active Directory Wikipedia page. Here are two methods for identifying that this product has been set up. 1. Lsass error messages 20111006152006:ERROR:Lsass Error [ERROR_BAD_NET_NAME] Network name not found.. Failure to lookup a domain name ending in “.local” may be the result of configuring the local system’s hostname resolution (or equivalent) to use Multi-cast DN 2. Modified nsswitch.conf These modifications appear in your /etc/nsswitch.conf file: passwd: compat winbind lsass group: compat winbind lsass shadow: compat Working around? You should be able to safely leave it installed and change your Name Service Switch configuration file (nsswitch.conf) so that it uses just your local files for authentication. passwd: files group: files shadow: files I also dug up this Launchpad bug that covers uninstalling LikeWise-open. There are some things that it doesn't do to revert your system when you uninstall it. They're covered in this bug, along with how to manually undo the install; the bug is titled "Likewise uninstall, Lock login to system".
How could I eliminate Kerberos for passwd?
1,536,964,576,000
I have a service running software that generates some configuration files if they don't exist, and read them if they do exist. The problem I have been facing is that these files sometimes get corrupt, making the software unable to start, and thus making the service fail. In this case I would like to remove these files and restart the service. I tried creating a service that should get executed in case of failure, by doing this: [Service] ExecStart=/bin/run_program OnFailure=software-fail.service where this service is: [Service] ExecStart=/bin/rm /file/to/delete ExecStop=systemctl --user start software.service The problem, however, is that this service doesn't start, even when the service has failed. I tried doing systemctl --user enable software-fail.service but then it starts every time the system starts, just like any other service. My temporary solution is to use ExecStopPost=/bin/rm /file/to/delete but this is not a satisfying way of solving it, as it will always delete the file upon stopping the service, no matter if it was because of failure or not. Output when failing: ● software.service - Software Loaded: loaded (/home/trippelganger/.config/systemd/user/software.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Fri 2018-05-04 09:05:26 CEST; 5s ago Process: 1839 ExecStart=/bin/run_program (code=exited, status=1/FAILURE) Main PID: 1839 (code=exited, status=1/FAILURE) May 04 09:05:26 trippelganger systemd[595]: software.service: Main process exited, code=exited, status=1/FAILURE May 04 09:05:26 trippelganger systemd[595]: software.service: Unit entered failed state. May 04 09:05:26 trippelganger systemd[595]: software.service: Failed with result 'exit-code'. Output of systemctl --user status software-fail.service is: ● software-fail.service - Delete corrupt files Loaded: loaded (/home/trippelganger/.config/systemd/user/software-fail.service; disabled; vendor preset: enabled) Active: inactive (dead)
NOTE: You probably want to use ExecStopPost= instead of OnFailure= here (see my other answer), but this is trying to address why your OnFailure= setup is not working. The problem with OnFailure= not starting the unit might be because it's in the wrong section, it needs to be in the [Unit] section and not [Service]. You can try this instead: # software.service [Unit] Description=Software OnFailure=software-fail.service [Service] ExecStart=/bin/run_program And: # software-fail.service [Unit] Description=Delete corrupt files [Service] ExecStart=/bin/rm /file/to/delete ExecStop=/bin/systemctl --user start software.service I can make it work with this setup. But note that using OnFailure= is not ideal here, since you can't really tell why the program failed, and chaining another start of it in ExecStop= by calling /bin/systemctl start directly is pretty hacky... The solution using ExecStopPost= and looking at the exit status is definitely superior. If you define OnFailure= inside [Service], systemd (at least version 234 from Fedora 27) complains with: software.service:6: Unknown lvalue 'OnFailure' in section 'Service' Not sure if you're seeing that in your logs or not... (Maybe this was added in recent systemd?) That should be a hint of what is going on there.
Proper way to use OnFailure in systemd
1,536,964,576,000
Is there a way to know which cores currently have a process pinned to them? Even processes run by other users should be listed in the output. Or, is it possible to try pinning a process to a core but fail in case the required core already has a process pinned to it? PS: processes of interest must have been pinned to the given cores, not just be currently running on the given core PS: this is not a duplicate; the other question is about how to ensure exclusive use of one CPU by one process. Here we are asking how to detect that a process was pinned to a given core (i.e. that cpuset was used, not how to use it).
Answer to myself: hwloc-bind from Linux (and homebrew for Macs) package hwloc. Cf. https://www.open-mpi.org/projects/hwloc/tutorials/20130115-ComPAS-hwloc-tutorial.pdf for some doc.
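If hwloc isn't installed, the kernel itself reports each process's affinity in /proc/&lt;pid&gt;/status (taskset -cp &lt;pid&gt; prints the same information). A sketch that flags processes whose allowed-CPU list is narrower than the full online set — assuming the shell running it is itself unpinned:

```shell
# "Cpus_allowed_list" is the set of CPUs the scheduler may use for a
# process; a pinned process shows fewer CPUs than the machine has.
full=$(awk '/Cpus_allowed_list/{print $2}' /proc/self/status)
for s in /proc/[0-9]*/status; do
    mask=$(awk '/Cpus_allowed_list/{print $2}' "$s" 2>/dev/null)
    [ -n "$mask" ] || continue          # process may have exited meanwhile
    if [ "$mask" != "$full" ]; then
        pid=${s#/proc/}; pid=${pid%/status}
        echo "pid $pid restricted to CPUs $mask"
    fi
done
echo "full CPU set: $full"
```

Note this shows the current affinity mask, not who set it: a process that merely happens to be running on core 2 still has the full mask, while one pinned there shows only 2.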
Linux: how to know which processes are pinned to which core?
1,390,425,720,000
If an application crashes in Windows we can check the Event Viewer in the Administration tools to see what has crashed. Sometimes it has useful information, sometimes not, but it is a start. In Linux, if an application (any) crashes, how does one start to trace what happened? Is there e.g. some central log or something similar?
Is there e.g. some central log or something similar? The normal place for system logs is /var/log/. What gets put in each log depends on the syslog configuration, but commonly everything except logins goes to /var/log/syslog. This is no guarantee that individual applications will have left any clue there in the event of a problem. But they, or the shell, will likely spit something to the standard out/standard error streams, and if you run a troublesome application in the foreground from a terminal you'll be able to see that stuff.
How can we trace problems of crashing programs in Linux?
1,390,425,720,000
Is it possible to automatically rename a file when it's placed in a specific directory? For example, I have a directory named "dir0". I move or copy a file named "file1" to "dir0". Then "file1" should be renamed to "file1_{current timestamp}".
Usually you would do this programmatically at the time you create or move the file, but it is possible to trigger a script whenever a file gets created or moved to a folder using incron. Set up your tab file using incrontab -e with a line like this, but with your paths of course: /path/to/dir0 IN_MOVED_TO,IN_CREATE /path/to/script $@/$# Then in /path/to/script write a quick rename action. Be aware that the script will also get called for the new file that you create, so it has to test whether the file has been appropriately named already or not. In this example it checks to see if the file has a ten-digit number for seconds from epoch as the last part of the file name, and if it doesn't, it adds it: #!/bin/bash echo $1 | grep -qx '.*_[0-9]\{10\}' || mv "$1" "$1_$(date +%s)" Edit: When I first wrote this up I was short on time and couldn't figure out how to make bash do the pattern matching here. Gilles pointed out how to do this without invoking grep, using ERE matching in bash: #!/bin/bash [[ ! ( $1 =~ _[0-9]{10}$ ) ]] && mv "$1" "$1_$(date +%s)"
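The check can be pulled out into a small helper and exercised on its own, without incron. A POSIX-sh variant of the same grep test (the file names here are made up for illustration):

```shell
# Append "_<seconds since epoch>" only if the name does not already
# end in an underscore followed by ten digits.
stamp() {
    if printf '%s\n' "$1" | grep -q '_[0-9]\{10\}$'; then
        printf '%s\n' "$1"                 # already stamped, leave it alone
    else
        printf '%s_%s\n' "$1" "$(date +%s)"
    fi
}

stamp "file1"                # gets a timestamp appended
stamp "file1_1390425720"     # left untouched
```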
Automatically rename files when they are placed in a specific directory
1,390,425,720,000
On Linux, to create a fake Ethernet dummy interface, we first initialize the dummy interface driver using the command /sbin/modprobe dummy. Then we assign an Ethernet interface alias to the dummy driver we just initialized. But this gives the following fatal error: FATAL: Module dummy not found. Also, under /sys/devices/virtual/net we can see that there are virtual interfaces present by the following names: dummy0/ lo/ sit0/ tunl0/ ifconfig -a dummy0: Link encap:Ethernet HWaddr aa:3a:a6:cd:91:2b BROADCAST NOARP MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) lo: Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:111 errors:0 dropped:0 overruns:0 frame:0 TX packets:111 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:8303 (8.1 KiB) TX bytes:8303 (8.1 KiB) sit0: Link encap:UNSPEC HWaddr 00-00-00-00-FF-00-00-00-00-00-00-00-00-00-00-00 NOARP MTU:1480 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) tunl0: Link encap:IPIP Tunnel HWaddr NOARP MTU:1480 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) So, the modprobe command is not able to load the kernel module. How can we load a kernel module using modprobe or insmod to initialize a dummy interface driver? Can we create multiple dummy interfaces from a single loaded module?
The usual way to add several dummy interfaces is to use iproute2: # ip link add dummy0 type dummy # ip link add dummy1 type dummy # ip link list ... 5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 22:4e:84:26:c5:98 brd ff:ff:ff:ff:ff:ff 6: dummy1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 9e:3e:48:b5:d5:1d brd ff:ff:ff:ff:ff:ff But the error message FATAL: Module dummy not found indicates that you may have a kernel where the dummy interface module is not enabled, so make sure to check your kernel configuration, and recompile the kernel if necessary.
How can we create multiple dummy interfaces on Linux?
1,390,425,720,000
The standard files/tools that report memory seem to have different formats on different Linux distributions. For example, on Arch and Ubuntu. Arch $ free total used free shared buff/cache available Mem: 8169312 3870392 2648348 97884 1650572 4110336 Swap: 16777212 389588 16387624 $ head /proc/meminfo MemTotal: 8169312 kB MemFree: 2625668 kB MemAvailable: 4088520 kB Buffers: 239688 kB Cached: 1224520 kB SwapCached: 17452 kB Active: 4074548 kB Inactive: 1035716 kB Active(anon): 3247948 kB Inactive(anon): 497684 kB Ubuntu $ free total used free shared buffers cached Mem: 80642828 69076080 11566748 3063796 150688 58358264 -/+ buffers/cache: 10567128 70075700 Swap: 20971516 5828472 15143044 $ head /proc/meminfo MemTotal: 80642828 kB MemFree: 11565936 kB Buffers: 150688 kB Cached: 58358264 kB SwapCached: 2173912 kB Active: 27305364 kB Inactive: 40004480 kB Active(anon): 7584320 kB Inactive(anon): 4280400 kB Active(file): 19721044 kB So, how can I portably (across Linux distros only) and reliably get the amount of memory—excluding swap—that is available for my software to use at a particular time? Presumably that's what's shown as "available" and "MemAvailable" in the output of free and cat /proc/meminfo in Arch but how do I get the same in Ubuntu or another distribution?
MemAvailable is included in /proc/meminfo since version 3.14 of the kernel; it was added by commit 34e431b0a. That's the determining factor in the output variations you show. The commit message indicates how to estimate available memory without MemAvailable: Currently, the amount of memory that is available for a new workload, without pushing the system into swap, can be estimated from MemFree, Active(file), Inactive(file), and SReclaimable, as well as the "low" watermarks from /proc/zoneinfo. The low watermarks are the level beneath which the system will swap. So in the absence of MemAvailable you can at least add up the values given for MemFree, Active(file), Inactive(file) and SReclaimable (whichever are present in /proc/meminfo), and subtract the low watermarks from /proc/zoneinfo. The latter also lists the number of free pages per zone, that might be useful as a comparison... The complete algorithm is given in the patch to meminfo.c and seems reasonably easy to adapt: sum the low watermarks across all zones; take the identified free memory (MemFree); subtract the low watermark (we need to avoid touching that to avoid swapping); add the amount of memory we can use from the page cache (sum of Active(file) and Inactive(file)): that's the amount of memory used by the page cache, minus either half the page cache, or the low watermark, whichever is smaller; add the amount of memory we can reclaim (SReclaimable), following the same algorithm. So, putting all this together, you can get the memory available for a new process with: awk -v low=$(grep low /proc/zoneinfo | awk '{k+=$2}END{print k}') \ '{a[$1]=$2} END{ print a["MemFree:"]+a["Active(file):"]+a["Inactive(file):"]+a["SReclaimable:"]-(12*low); }' /proc/meminfo
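Putting the two cases together, a hedged wrapper: read MemAvailable directly on kernels that export it (3.14+), and fall back to the estimate above otherwise.

```shell
# Print estimated available memory in kB, portably across kernels.
if grep -q '^MemAvailable:' /proc/meminfo; then
    awk '/^MemAvailable:/ {print $2 " kB available"}' /proc/meminfo
else
    awk -v low=$(grep low /proc/zoneinfo | awk '{k+=$2}END{print k}') \
        '{a[$1]=$2} END{
            print a["MemFree:"]+a["Active(file):"]+a["Inactive(file):"]+a["SReclaimable:"]-(12*low), "kB available";
        }' /proc/meminfo
fi
```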
How can I get the amount of available memory portably across distributions?
1,390,425,720,000
As everyone knows, we can send a broadcast message to all users on a Linux machine. But how do we send a message only to a specific user? For example: #who rodegc pts/1 2015-05-04 04:23 (10.4.72.1) dwwar pts/3 2015-05-03 00:56 (10.4.72.2) tzcsar pts/5 2015-05-03 22:32 (10.4.72.6) . . . . . In this case, how do we send a broadcast message only to the user rodegc? FROM MAN PAGE: WALL(1) Linux User's Manual WALL(1) NAME wall -- send a message to everybody's terminal. SYNOPSIS wall [-n] [ message ] From the man page, I can't see any option to send to a specific user.
With write: write <user> Some text goes here CTRL-D (eof) Alternative: echo "Some text goes here" | write <user> See man write.
Linux + send wall message only to the specific user
1,390,425,720,000
I can see the names of other users on the remote machine with the who command... I'd also like to know the IP addresses of those users... I tried the commands /sbin/ifconfig and netstat but I could not get positive results... I need this solution to be compatible with both Linux and Unix... Is there a command with that utility? Do I need to write a script or use some kind of pipes?
Try the w command, part of the procps package. $ w 21:12:09 up 6 days, 7:42, 1 user, load average: 0.27, 1.08, 1.64 USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT h3xx pts/11 192.168.1.3 21:12 2.00s 0.04s 0.04s -bash
How I can know the IP address of other users logged at the same remote machine?
1,390,425,720,000
Is there any lightweight X11 alternative suited for old systems? (Say, 1 GHz and 256-314 MB RAM)
The XFree86 implementation of the X server includes TinyX, which is part of many small Linux distributions e.g. Damn Small Linux or embedded Linux distributions. TinyX perfectly fits your requirements.
Lightweight X11 alternative available?
1,390,425,720,000
I have a Server (Debian) that is serving some folders through NFS, and a Client (Debian) that connects to the NFS Server (with NFSv4) and mounts that exported folder. So far everything is fine; I can connect and modify the content of the folders. But the users are completely messed up. From what I understand this is due to NFS using the UIDs to set the permissions, and as the UIDs of the users on the Client and the Server differ, this happens, which is still expected. But from what I understood, by enabling NFSv4, IDMAPD should kick in and use the username instead of the UIDs. The users do exist on both the Server and Client side; they just have different UIDs. But for whatever reason IDMAPD doesn't work or doesn't seem to do anything. So here is what I've done so far: On the Server side: installed nfs-kernel-server, populated /etc/exports with the proper export settings --> /rfolder ip/24(rw,sync,no_subtree_check,no_root_squash) and changed /etc/default/nfs-common to have NEED_IDMAPD=yes On the Client side: installed nfs-common, changed /etc/default/nfs-common to have NEED_IDMAPD=yes, and mounted the folder with "mount -t nfs4 ip:/rfolder /media/lfolder" Rebooted and restarted both several times, but still nothing. When I create a folder from the Server with user A, on the Client I see that the folder owner is some user X. When I create a file from the Client with user A, on the Server side it says it's from some user Y. I checked with htop that the rpc.idmap process is running on the Server, and it is indeed. However, on the Client it doesn't appear to be running. When trying to manually start the service on the Client, I just got an error message stating that IDMAP requires the nfs-kernel-server dependency to run. So I installed it on the Client side, and now I have the rpc.idmap process running on both Client and Server. Restarted both, and the issue still persists. Any idea what is wrong here? Or how to configure this properly?
There are a couple of things to note when using NFSv4 id mapping on mounts which use the default AUTH_SYS authentication (sec=sys mount option) instead of Kerberos. NOTE: With AUTH_SYS idmapping only translates the user/group names. Permissions are still checked against local UID/GID values. Only way to get permissions working with usernames is with Kerberos. On recent kernels, only the server uses rpc.idmapd (documented in man rpc.idmapd). When using idmap, the user names are transmitted in user@domain format. Unless a domain name is configured in /etc/idmapd.conf, idmapd uses the system's DNS domain name. For idmap to map the users correctly, the domain name needs to be same on the client and on the server. Secondly, kernel disables id mapping for NFSv4 sec=sys mounts by default. Setting nfs4_disable_idmapping parameter to false enables id mapping for sec=sys mounts. On server: echo "N" > /sys/module/nfsd/parameters/nfs4_disable_idmapping and on client(s): echo "N" > /sys/module/nfs/parameters/nfs4_disable_idmapping You need to clear idmap cache with nfsidmap -c on clients for the changes to be visible on mounted NFSv4 file systems. To make these changes permanent, create configuration files in /etc/modprobe.d/, on server (modprobe.d/nfsd.conf): options nfsd nfs4_disable_idmapping=N on client(s) (modprobe.d/nfs.conf): options nfs nfs4_disable_idmapping=N
How to get NFSv4 idmap working with sec=sys?
1,390,425,720,000
So, I have a folder with a lot of files; they are generated every second and need to be kept, and I can only delete them after 90 days. So, as you may guess, I generate tons of files, and after 90 days I am able to delete those files that are older than 90 days. But I'm stuck at the part where I search for those files: since I have so many, the system complains that the argument list is too long, and thus I can't remove them. What is the best solution to get past this? The file names are timestamps, so I could start from that, but I want to make sure all files are deleted after some time.... I have tried these methods rm -rf * find /path/to/files/ -type f -name '*.ts' -mtime +90 -exec rm {} \; I have also managed to create a script that deletes by filename, but with this method I have no guarantee that all the files are deleted.
If the files are not modified after initial creation, you could delete if they have not been modified in over 90 days: find /path/to/folder ! -type d -mtime +90 -delete or find /path/to/folder ! -type d -mtime +90 -exec rm -f {} + (for versions of find which do not support the -delete action). As a matter of safety, you should use a non-destructive version of this command first and ensure it will delete exactly what you want deleted, especially if you intend to automate this action via cron or similar, e.g.: find /path/to/folder ! -type d -mtime +90 -print Note that -mtime +90 checks the modification time of the files and returns true if the age, rounded down to an integer number of days (86400 second units) is strictly greater than 90, so matches on files that have been last-modified 91 days ago or earlier.
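You can convince yourself of the +90 boundary with a throwaway directory (GNU touch's -d option is assumed here):

```shell
# Create one file backdated 100 days and one fresh file; only the old
# one should match -mtime +90.
tmp=$(mktemp -d)
touch -d '100 days ago' "$tmp/old.ts"
touch "$tmp/new.ts"
matched=$(find "$tmp" -type f -mtime +90)
echo "would delete: $matched"
rm -r "$tmp"
```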
Best way to delete large amount of files by date
1,390,425,720,000
I want to do stuff like this: echo myserver:~/dir/2/ | rsync -r HERE /local/path/ I want the piped output inserted at a specific position in the command: the echoed string should go where HERE is. What's the easiest way to do this?
You can use xargs for exactly this requirement. Use -I as a placeholder for the input received from the pipeline: echo "myserver:${HOME}/dir/2/" | xargs -I {} rsync -r "{}" /local/path/ Alternatively, write ~ outside double quotes (inside double quotes it does not expand to the HOME directory path): echo myserver:~/dir/2/ | xargs -I {} rsync -r "{}" /local/path/
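The same substitution pattern can be tried locally, with plain cp standing in for rsync (temporary paths only, nothing to do with your server):

```shell
# The string arriving on stdin is substituted wherever {} appears in
# the command line given to xargs.
tmp=$(mktemp -d)
mkdir "$tmp/src" "$tmp/dst"
echo "hello" > "$tmp/src/file"
echo "$tmp/src/file" | xargs -I {} cp "{}" "$tmp/dst/"
copied=$(cat "$tmp/dst/file")
echo "copied contents: $copied"
rm -r "$tmp"
```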
Specify pipe output position
1,390,425,720,000
I have an application which runs as a daemon and is controlled by a script in /etc/init.d Sometimes we need to change some parameters of startup/control of these scripts and then restart the daemon. These scripts only have write permission for the root user, so when editing them I need root privileges. What I was thinking is: should I make a non-root user the owner of those scripts? That way only root and one special user could edit these scripts. Is it acceptable to keep some non-root-owned files under the /etc/init.d directory? Or is it absurd, disturbing the natural order of the system?
What immediately comes to mind is an underprivileged user being able to run things on boot as root, which is desirable to crackers that: Want to escalate privileges of other accounts Want to use your server to host a rogue service Want to start IRC/Spam bots if the server reboots Want to ping a mother ship to say "I'm up again" and perhaps download a new payload Want to clean up their tracks ... other badness. This is possible if your underprivileged user is somehow compromised, perhaps through another service (http/etc). Most attackers will quickly run an ls or find on/of everything in /etc just to see if such possibilities exist; there are shells written in various languages they use that make this simple. If you manage the server remotely, mostly via SSH, there's a very good chance that you won't even see this unless you inspect the init script, because you won't see the output at boot (though, you should be using something that checks hashes of those scripts against known hashes to see if something changed, or version control software, etc) You definitely don't want that to happen; root really needs to own that init script. You could add the development user to the list of sudoers so that it's convenient enough to update the script, but I'd advise not allowing underprivileged write access to anything in init.d
How secure is keeping non-root owned scripts in /etc/init.d?
1,390,425,720,000
What command can be used to force everything in the swap partition back into memory? Presume that I have enough memory.
From this Ask Ubuntu question: You can also clear your swap by running swapoff -a and then swapon -a as root instead of rebooting to achieve the same effect. Thus: $ free -tm ... Swap: 6439 196 6243 ... $ sudo swapoff -a $ sudo swapon -a $ free -tm ... Swap: 6439 0 6439 ... As noted in a comment, if you don't have enough memory, swapoff will result in "out of memory" errors and on the kernel killing processes to recover RAM.
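Since swapoff fails (or triggers the OOM killer) when RAM can't absorb the swapped-out pages, it's worth checking first. A sketch, assuming a kernel new enough to report MemAvailable in /proc/meminfo:

```shell
# Compare the amount of used swap against the memory the kernel says
# is available before deciding to run "swapoff -a".
used_swap=$(awk '/^SwapTotal:/{t=$2} /^SwapFree:/{f=$2} END{print t-f}' /proc/meminfo)
avail=$(awk '/^MemAvailable:/{print $2}' /proc/meminfo)
if [ "$used_swap" -lt "${avail:-0}" ]; then
    echo "looks safe: $used_swap kB in swap, $avail kB available"
else
    echo "not safe: $used_swap kB in swap, only ${avail:-?} kB available"
fi
```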
What command can be used to force release everything in swap partition back to memory?
1,390,425,720,000
I know that when a page cache page is modified, it is marked dirty and requires a writeback, but what happens when: Scenario: The file /apps/EXE, which is an executable file, is paged in to the page cache completely (all of its pages are in cache/memory) and being executed by process P Continuous release then replaces /apps/EXE with a brand new executable. Assumption 1: I assume that process P (and anyone else with a file descriptor referencing the old executable) will continue to use the old, in memory /apps/EXE without an issue, and any new process which tries to exec that path will get the new executable. Assumption 2: I assume that if not all pages of the file are mapped into memory, that things will be fine until there is a page fault requiring pages from the file that have been replaced, and probably a segfault will occur? Question 1: If you mlock all of the pages of the file with something like vmtouch does that change the scenario at all? Question 2: If /apps/EXE is on a remote NFS, would that make any difference? (I assume not) Please correct or validate my 2 assumptions and answer my 2 questions. Let's assume this is a CentOS 7.6 box with some kind of 3.10.0-957.el7 kernel Update: Thinking about it further, I wonder if this scenario is no different than any other dirty page scenario.. I suppose the process that writes the new binary will do a read and get all cache pages since it’s all paged in, and then all those pages will be marked dirty. If they are mlocked, they will just be useless pages occupying core memory after the ref count goes to zero. I suspect when the currently-executing programs end, anything else will use the new binary. Assuming that’s all correct, I guess it’s only interesting when only some of the file is paged in.
Continuous release then replaces /apps/EXE with a brand new executable. This is the important part. The way a new file is released is by creating a new file (e.g. /apps/EXE.tmp.20190907080000), writing the contents, setting permissions and ownership and finally rename(2)ing it to the final name /apps/EXE, replacing the old file. The result is that the new file has a new inode number (which means, in effect, it's a different file.) And the old file had its own inode number, which is actually still around even though the file name is not pointing to it anymore (or there are no file names pointing to that inode anymore.) So, the key here is that when we talk about "files" in Linux, we're most often really talking about "inodes" since once a file has been opened, the inode is the reference we keep to the file. Assumption 1: I assume that process P (and anyone else with a file descriptor referencing the old executable) will continue to use the old, in memory /apps/EXE without an issue, and any new process which tries to exec that path will get the new executable. Correct. Assumption 2: I assume that if not all pages of the file are mapped into memory, that things will be fine until there is a page fault requiring pages from the file that have been replaced, and probably a segfault will occur? Incorrect. The old inode is still around, so page faults from the process using the old binary will still be able to find those pages on disk. You can see some effects of this by looking at the /proc/${pid}/exe symlink (or, equivalently, lsof output) for the process running the old binary, which will show /app/EXE (deleted) to indicate the name is no longer there but the inode is still around. You can also see that the diskspace used by the binary will only be released after the process dies (assuming it's the only process with that inode open.) 
Check output of df before and after killing the process, you'll see it drop by the size of that old binary you thought wasn't around anymore. BTW, this is not only with binaries, but with any open files. If you open a file in a process and remove the file, the file will be kept on disk until that process closes the file (or dies.) Similarly to how hardlinks keep a counter of how many names point to an inode in disk, the filesystem driver (in the Linux kernel) keeps a counter of how many references exist to that inode in memory, and will only release the inode from disk once all references from the running system have been released as well. Question 1: If you mlock all of the pages of the file with something like vmtouch does that change the scenario This question is based on the incorrect assumption 2 that not locking the pages will cause segfaults. It won't. Question 2: If /apps/EXE is on a remote NFS, would that make any difference? (I assume not) It's meant to work the same way and most of the time it does, but there are some "gotchas" with NFS. Sometimes you can see the artifacts of deleting a file that's still open in NFS (shows up as a hidden file in that directory.) You also have some way to assign device numbers to NFS exports, to make sure those won't get "reshuffled" when the NFS server reboots. But the main idea is the same. NFS client driver still uses inodes and will try to keep files around (on the server) while the inode is still referenced.
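The "deleted but still there" behaviour is easy to demonstrate with an ordinary file standing in for the binary:

```shell
# Open a file for reading, unlink its name, and show the data is
# still reachable through the open descriptor -- exactly the state a
# process executing a replaced binary is in.
tmp=$(mktemp -d)
echo "old binary contents" > "$tmp/EXE"
exec 3< "$tmp/EXE"            # like a running process holding the inode
rm "$tmp/EXE"                 # the directory entry is gone...
still_there=$(cat <&3)        # ...but the inode's data is not
echo "read after unlink: $still_there"
exec 3<&-                     # closing the last reference frees the inode
rm -r "$tmp"
```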
What happens when a file that is 100% paged in to the page cache gets modified by another process
1,390,425,720,000
I'd like to know more about the advanced uses of the /proc and /sys virtual filesystems, but I don't know where to begin. Can anyone suggest any good sources to learn from? Also, since I think sys has regular additions, what's the best way to keep my knowledge current when a new kernel is released.
Read this blog post: Solving problems with proc. There are a few tips on what you can do with the proc filesystem. Among other things, there is a tip on how to get back a deleted disk image, and on how to stay ahead of the OOM killer. Don't forget to read the comments, there are good tips, too.
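As a small taste of the "get back a deleted file" trick mentioned above, here is a sketch using /proc/PID/fd (Linux-specific; the fd number 3 and file names are arbitrary):

```shell
tmp=$(mktemp)
echo "precious data" > "$tmp"

exec 3< "$tmp"   # some process still has the file open...
rm "$tmp"        # ...when the name is deleted

# The data is still reachable through the process's fd entry in /proc:
ls -l "/proc/$$/fd/3"             # the symlink shows "... (deleted)"
cp "/proc/$$/fd/3" /tmp/recovered
cat /tmp/recovered                # precious data

exec 3<&-
rm /tmp/recovered
```

The same idea works for any process: find the open descriptor of the deleted file under /proc/PID/fd and copy it out before the process exits.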
How do I learn what I can do with /proc and /sys [closed]
1,390,425,720,000
Suppose a program asks for some memory, but there is not enough free memory left. There are several different ways Linux could respond. One response is to select some other used memory, which has not been accessed recently, and move this inactive memory to swap.

However, I see many articles and comments that go beyond this. They say even when there is a large amount of free memory, Linux will sometimes decide to write inactive memory to swap. Writing to swap in advance means that when we eventually want to use this memory, we do not have to wait for a disk write. They say this is a deliberate strategy to optimize performance.

Are they right? Or is it a myth? Cite your source(s).

Please understand this question using the following definitions:

- swap
- free memory - the "free" memory displayed by the free command. This is the MemFree value from /proc/meminfo. /proc/meminfo is a virtual text file provided by the kernel. See proc(5), or RHEL docs.
- even when there is a large amount of free memory - for the purpose of argument, imagine there is more than 10% free memory.

References

Here are some search terms:

    linux "opportunistic swapping" OR (swap "when the system has nothing better to do" OR "when it has nothing better to do" OR "when the system is idle" OR "during idle time")

In the second-highest result on Google, a StackExchange user asks "Why use swap when there is more than enough free space in RAM?", and copies the results of the free command showing about 20% free memory. In response to this specific question, I see this answer is highly voted:

Linux starts swapping before the RAM is filled up. This is done to improve performance and responsiveness:

Performance is increased because sometimes RAM is better used for disk cache than to store program memory. So it's better to swap out a program that's been inactive for a while, and instead keep often-used files in cache.
Responsiveness is improved by swapping pages out when the system is idle, rather than when the memory is full and some program is running and requesting more RAM to complete a task.

Swapping does slow the system down, of course — but the alternative to swapping isn't not swapping, it's having more RAM or using less RAM.

The first result on Google has been marked as a duplicate of the question above :-). In this case, the asker copied details showing 7GB MemFree, out of 16GB. The question has an accepted and upvoted answer of its own:

Swapping only when there is no free memory is only the case if you set swappiness to 0. Otherwise, during idle time, the kernel will swap memory. In doing this the data is not removed from memory, but rather a copy is made in the swap partition. This means that, should the situation arise that memory is depleted, it does not have to write to disk then and there. In this case the kernel can just overwrite the memory pages which have already been swapped, for which it knows that it has a copy of the data. The swappiness parameter basically just controls how much it does this.

The other quote does not explicitly claim the swapped data is retained in memory as well. But it seems like you would prefer that approach, if you are swapping even at times when you have 20% free memory, and the reason you are doing so is to improve performance.

As far as I know, Linux does support keeping a copy of the same data in both main memory and swap space.

I also noticed the common claim that "opportunistic swapping" happens "during idle time". I understand it's supposed to help reassure me that this feature is generally good for performance. I don't include this in my definition above, because I think it already has enough details to make a nice clear question. I don't want to make this more complicated than it needs to be.

Original motivation

atop shows `swout` (swapping) when I have gigabytes of free memory. Why?
There are a couple of reports like this, of Linux writing to swap when there is plenty of free memory. "Opportunistic swapping" might explain these reports. At the same time, at least one alternative cause was suggested.

As a first step in looking at possible causes: Does Linux ever perform "opportunistic swapping" as defined above?

In the example I reported, the question has now been answered. The cause was not opportunistic swapping.
Linux does not do "opportunistic swapping" as defined in this question.

The following primary references do not mention the concept at all:

- Understanding the Linux Virtual Memory Manager. An online book by Mel Gorman. Written in 2003, just before the release of Linux 2.6.0.
- Documentation/admin-guide/sysctl/vm.rst. This is the primary documentation of the tunable settings of Linux virtual memory management.

More specifically:

10.6 Pageout Daemon (kswapd)

Historically kswapd used to wake up every 10 seconds but now it is only woken by the physical page allocator when the pages_low number of free pages in a zone is reached. [...] Under extreme memory pressure, processes will do the work of kswapd synchronously. [...] kswapd keeps freeing pages until the pages_high watermark is reached.

Based on the above, we would not expect any swapping when the number of free pages is higher than the "high watermark".

Secondly, this tells us the purpose of kswapd is to make more free pages. When kswapd writes a memory page to swap, it immediately frees the memory page. kswapd does not keep a copy of the swapped page in memory. Linux 2.6 uses the "rmap" to free the page. In Linux 2.4, the story was more complex. When a page was shared by multiple processes, kswapd was not able to free it immediately. This is ancient history. All of the linked posts are about Linux 2.6 or above.

swappiness

This control is used to define how aggressive the kernel will swap memory pages. Higher values will increase aggressiveness, lower values decrease the amount of swap. A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone.

This quote describes a special case: if you configure the swappiness value to be 0. In this case, we should additionally not expect any swapping until the number of cache pages has fallen to the high watermark.
In other words, the kernel will try to discard almost all file cache before it starts swapping. (This might cause massive slowdowns. You need to have some file cache! The file cache is used to hold the code of all your running programs :-)

What are the watermarks?

The above quotes raise the question: How large are the "watermark" memory reservations on my system? Answer: on a "small" system, the default zone watermarks might be as high as 3% of memory. This is due to the calculation of the "min" watermark. On larger systems the watermarks will be a smaller proportion, approaching 0.3% of memory. So if the question is about a system with more than 10% free memory, the exact details of this watermark logic are not significant.

The watermarks for each individual "zone" are shown in /proc/zoneinfo, as documented in proc(5). An extract from my zoneinfo:

    Node 0, zone    DMA32
      pages free     304988
            min      7250
            low      9062
            high     10874
            spanned  1044480
            present  888973
            managed  872457
            protection: (0, 0, 4424, 4424, 4424)
    ...
    Node 0, zone   Normal
      pages free     11977
            min      9611
            low      12013
            high     14415
            spanned  1173504
            present  1173504
            managed  1134236
            protection: (0, 0, 0, 0, 0)

The current "watermarks" are min, low, and high. If a program ever asks for enough memory to reduce free below min, the program enters "direct reclaim". The program is made to wait while the kernel frees up memory. We want to avoid direct reclaim if possible. So if free would dip below the low watermark, the kernel wakes kswapd. kswapd frees memory by swapping and/or dropping caches, until free is above high again.

Additional qualification: kswapd will also run to protect the full lowmem_reserve amount, for kernel lowmem and DMA usage. The default lowmem_reserve is about 1/256 of the first 4GiB of RAM (DMA32 zone), so it is usually around 16MiB.

Linux code commits

mm: scale kswapd watermarks in proportion to memory

[...] watermark_scale_factor: This factor controls the aggressiveness of kswapd.
It defines the amount of memory left in a node/system before kswapd is woken up and how much memory needs to be free before kswapd goes back to sleep. The unit is in fractions of 10,000. The default value of 10 means the distances between watermarks are 0.1% of the available memory in the node/system. The maximum value is 1000, or 10% of memory.

A high rate of threads entering direct reclaim (allocstall) or kswapd going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate that the number of free pages kswapd maintains for latency reasons is too small for the allocation bursts occurring in the system. This knob can then be used to tune kswapd aggressiveness accordingly.

proc: meminfo: estimate available memory more conservatively

The MemAvailable item in /proc/meminfo is to give users a hint of how much memory is allocatable without causing swapping, so it excludes the zones' low watermarks as unavailable to userspace. However, for a userspace allocation, kswapd will actually reclaim until the free pages hit a combination of the high watermark and the page allocator's lowmem protection that keeps a certain amount of DMA and DMA32 memory from userspace as well. Subtract the full amount we know to be unavailable to userspace from the number of free pages when calculating MemAvailable.

Linux code

It is sometimes claimed that changing swappiness to 0 will effectively disable "opportunistic swapping". This provides an interesting avenue of investigation. If there is something called "opportunistic swapping", and it can be tuned by swappiness, then we could chase it down by finding all the call-chains that read vm_swappiness. Note we can reduce our search space by assuming CONFIG_MEMCG is not set (i.e. "memory cgroups" are disabled). The call chain goes:

    vm_swappiness
      mem_cgroup_swappiness
        get_scan_count
          shrink_node_memcg
            shrink_node

shrink_node_memcg is commented "This is a basic per-node page freer. Used by both kswapd and direct reclaim". I.e.
this function increases the number of free pages. It is not trying to duplicate pages to swap so they can be freed at a much later time. But even if we discount that:

The above chain is called from three different functions, shown below. As expected, we can divide the call-sites into direct reclaim v.s. kswapd. It would not make sense to perform "opportunistic swapping" in direct reclaim.

    /*
     * This is the direct reclaim path, for page-allocating processes.  We only
     * try to reclaim pages from zones which will satisfy the caller's allocation
     * request.
     *
     * If a zone is deemed to be full of pinned pages then just give it a light
     * scan then give up on it.
     */
    static void shrink_zones

    /*
     * kswapd shrinks a node of pages that are at or below the highest usable
     * zone that is currently unbalanced.
     *
     * Returns true if kswapd scanned at least the requested number of pages to
     * reclaim or if the lack of progress was due to pages under writeback.
     * This is used to determine if the scanning priority needs to be raised.
     */
    static bool kswapd_shrink_node

    /*
     * For kswapd, balance_pgdat() will reclaim pages across a node from zones
     * that are eligible for use by the caller until at least one zone is
     * balanced.
     *
     * Returns the order kswapd finished reclaiming at.
     *
     * kswapd scans the zones in the highmem->normal->dma direction. It skips
     * zones which have free_pages > high_wmark_pages(zone), but once a zone is
     * found to have free_pages <= high_wmark_pages(zone), any page in that zone
     * or lower is eligible for reclaim until at least one usable zone is
     * balanced.
     */
    static int balance_pgdat

So, presumably the claim is that kswapd is woken up somehow, even when all memory allocations are being satisfied immediately from free memory. I looked through the uses of wake_up_interruptible(&pgdat->kswapd_wait), and I am not seeing any wakeups like this.
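To get a rough feel for the watermark sizes discussed earlier, here is a sketch using the commonly documented heuristic min_free_kbytes ≈ 4·sqrt(lowmem_kbytes), clamped to [128, 65536] KiB (this is an assumed simplification: the real kernel computes the watermarks per zone and adds lowmem_reserve on top, so treat the numbers as ballpark only):

```shell
# assumed heuristic: min_free_kbytes = 4 * sqrt(lowmem_kbytes), clamped
wmark_min() {
    awk -v kb="$1" 'BEGIN {
        m = int(4 * sqrt(kb));
        if (m < 128)   m = 128;
        if (m > 65536) m = 65536;
        print m;
    }'
}

for mib in 16 4096 65536; do
    kib=$((mib * 1024))
    min=$(wmark_min "$kib")
    awk -v mib="$mib" -v min="$min" -v kib="$kib" 'BEGIN {
        printf "%6d MiB RAM -> min watermark %5d KiB (%.2f%% of memory)\n",
               mib, min, 100 * min / kib
    }'
done
```

Running it shows the pattern the answer describes: a 16 MiB system reserves about 3% of memory for the min watermark, while bigger systems reserve a much smaller fraction.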
Does Linux perform "opportunistic swapping", or is it a myth?
1,390,425,720,000
I'm calling a url with wget:

    /usr/bin/wget --read-timeout=7200 https://site_url/s

Wget performs a GET request every 15 minutes in this case, despite the timeout being set. Why does this happen? The call should only be made once; how can I set wget to NOT retry?

I know you can set t=n, but 0 is infinite and 1 is 1 more than I want.
Read the man page again:

    -t number
    --tries=number
        Set number of tries to number. Specify 0 or inf for infinite retrying. The
        default is to retry 20 times, with the exception of fatal errors like
        "connection refused" or "not found" (404), which are not retried.

Use -t to define the number of tries (attempts), not retries.
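Applied to the command in the question, a single-attempt invocation would look like this (the URL is kept as the placeholder from the question):

```shell
/usr/bin/wget --tries=1 --read-timeout=7200 https://site_url/s
```

--tries=1 makes wget give up after the first failed attempt instead of retrying up to the default 20 times.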
Wget, abort retrying after failure or timeout
1,390,425,720,000
I would like to try to compile an MMU-less kernel. From what I found in the configuration, there is no option for such a thing. Is it possible?
You can compile a Linux kernel without MMU support on most processor architectures, including x86. However, because this is a rare configuration only for users who know what they are doing, the option is not included in the menu displayed by make menuconfig, make xconfig and the like, except on a few architectures for embedded devices where the lack of MMU is relatively common. You need to edit the .config file explicitly to change CONFIG_MMU=y to CONFIG_MMU=n.

Alternatively, you can make the option appear in the menu by editing the file in arch/*/Kconfig corresponding to your architecture and replacing the stanza starting with config MMU by

    config MMU
        bool "MMU support"
        default y
        ---help---
          Say yes. If you say no, most programs won't run.

Even if you make the option appear in the menus, you may need to tweak the resulting configuration to make it internally consistent.

MMU-less x86 systems are highly unusual. The easiest way to experiment with an MMU-less system would be to run a genuine MMU-less system in an emulator, using the Linux kernel configuration provided by the hardware vendor or with the emulator.

In case this wasn't clear, normal Linux systems need an MMU. The Linux kernel can be compiled for systems with no MMU, but this introduces restrictions that prevent a lot of programs from running. Start by reading No-MMU memory mapping support. I don't think you can use glibc without an MMU, µClibc is usually used instead. Documentation from the µClinux project may be relevant as well (µClinux was the original project for a MMU-less Linux, though nowadays support for MMU-less systems has been integrated into the main kernel tree so you don't need to use µClinux).
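The .config edit described above can be done mechanically with sed (a minimal sketch against a stand-in .config; in a real kernel tree you would typically run make olddefconfig afterwards so dependent options are resolved):

```shell
cfg=$(mktemp)

# stand-in for a real kernel .config
printf 'CONFIG_SMP=y\nCONFIG_MMU=y\n' > "$cfg"

# flip CONFIG_MMU=y to CONFIG_MMU=n in place
sed -i 's/^CONFIG_MMU=y$/CONFIG_MMU=n/' "$cfg"

grep CONFIG_MMU "$cfg"   # CONFIG_MMU=n
rm "$cfg"
```

The same one-line sed works on the real .config at the top of the kernel source tree.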
MMU-less kernel?
1,390,425,720,000
On Ubuntu you can use something like this:

    export DEBIAN_FRONTEND=noninteractive
    sudo -E apt-get update

Which will prevent things that require input (choosing grub versions or conflicts between configuration files, or even prompting for a mysql root password during install).

I checked the man page for yum but didn't see anything related to non-interactive usage other than check-update, which "Returns exit value of 100 if there are packages available for an update".

Does Yum have an equivalent to apt / aptitude's DEBIAN_FRONTEND=noninteractive?
By long-standing convention, RPMs themselves never ask for any interactive input. Batch mode is assumed. Some terrible vendor RPMs may attempt anyway, but since they're not supposed to do that, there's never been much call to have an external feature to work around the bad behavior — just avoid or fix those RPMs. Sometimes yum itself asks for confirmation. For this, you can give -y to tell yum to assume "yes".
Does Yum have an equivalent to apt / aptitude's DEBIAN_FRONTEND=noninteractive?
1,390,425,720,000
How can I check the version of an XFS filesystem on a system, i.e. whether it is V5 or later?
Since version 3.15, the kernel tells you the version of XFS used in each filesystem as it mounts it; dmesg | grep XFS should give you something like

    [1578018.463269] XFS (loop0): Mounting V5 Filesystem

Instead of loop0 on your system you'll get the underlying device, and V5 will be replaced by whatever version your filesystem uses.

Older kernels officially supported XFS version 4 filesystems, but could mount version 5 filesystems (since mid 2013); for the latter, the kernel would print

    Version 5 superblock detected. This kernel has EXPERIMENTAL support enabled!

when the filesystem was mounted.
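If the mount message has already rotated out of the kernel ring buffer, another common check (assuming xfs_info from xfsprogs is installed and the filesystem is mounted) is to look at the crc flag in the metadata section: crc=1 corresponds to a V5 filesystem, crc=0 to V4:

```shell
xfs_info /mount/point | grep -o 'crc=[01]'
```

Replace /mount/point with the mount point of the XFS filesystem you want to inspect.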
How to check XFS filesystem version?
1,390,425,720,000
I'm running CentOS 5.5. We have several cronjobs stored in /etc/cron.daily/ . We would like the email for some of these cronjobs to go to a particular email address, while the rest of the emails in /etc/cron.daily/ should go to the default email address (root@localhost). Cronjobs in /etc/cron.daily/ are run from the /etc/crontab file. /etc/crontab specifies a 'MAILTO' field. Can I override this by setting MAILTO in my /etc/cron.daily/foo cronjob? What's the best way to handle this?
Setting [email protected] in /etc/cron.daily/foo does not work. The script output is not sent to [email protected].

The page at http://www.unixgeeks.org/security/newbie/unix/cron-1.html also suggests a simple solution. The file /etc/cron.daily/foo now contains the following:

    #!/bin/sh
    /usr/bin/script 2>&1 | mailx -s "$0" [email protected]

This will send an email to '[email protected]' with the subject which is equal to the full path of the script (e.g. /etc/cron.daily/foo).

Here's what Unixgeeks.org says about this:

Output from cron

As I've said before, the output from cron gets mailed to the owner of the process, or the person specified in the MAILTO variable, but what if you don't want that? If you want to mail the output to someone else, you can just pipe the output to the command mail. e.g.

    cmd | mail -s "Subject of mail" user

Sometimes, I only want to receive the errors from a cronjob, not the stdout, so I use this trick. The syntax may look wrong at first glance, but rest assured it works. The following cronjob will send STDOUT to /dev/null, and will then handle STDERR via the pipeline.

    doit 2>&1 >/dev/null | mailx -s "$0" [email protected]

Same thing, but send to syslog:

    doit 2>&1 >/dev/null | /usr/bin/logger -t $ME

Also see my answer on ServerFault to Cronjob stderr to file and email
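The 2>&1 >/dev/null redirection order in the trick above is easy to misread; here is a quick sketch showing that it discards stdout and pipes only stderr (echo stands in for the cron job, cat for mailx):

```shell
doit() {
    echo "normal output"    # goes to stdout
    echo "an error" >&2     # goes to stderr
}

# 2>&1 first points stderr at the pipe, THEN >/dev/null redirects
# stdout only -- so the pipe receives nothing but stderr
doit 2>&1 >/dev/null | cat
```

Only "an error" comes out of the pipeline; "normal output" is dropped.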
/etc/cron.daily/foo : Send email to a particular user instead of root?
1,390,425,720,000
Recently, I started using tmux. I find it nice, but I'm still having issues understanding this application. One of the basic questions I have is: How do I know (from the command line) what is the name of the tmux session I'm logged to? If I'm logged to some tmux session, it will tell me its name. But if I'm not logged to a tmux session, it will print either nothing or some sort of an error.
The name of the session is stored in the tmux variable #S; to access it in a terminal, you can do

    tmux display-message -p "#S"

If you want to use it in .tmux.conf, it's simply #S. Note that the -p option will print the message on stdout, otherwise the message is displayed in the tmux status line.

If the above command is called inside a session, it returns the name of the session. If it is called outside any session, it still returns the name of the last still running session. I couldn't find a tmux command to check if one is inside a session or not, so I had to come up with this workaround:

    tmux list-sessions | sed -n '/(attached)/s/:.*//p'

tmux list-sessions shows all sessions; if one is attached, it shows (attached) at the end. With sed we suppress all output (option -n) except where we find the keyword (attached); at this line we cut away everything after a :, which leaves us with the name of the session. This works for me inside and outside a session, as opposed to tmux display-message -p "#S". Of course this works only if there is no : and no (attached) in the name of the session.

As commented by Chris Johnsen, a way to check if one is inside a tmux session is to see if its environment variable is set:

    [[ -n "${TMUX+set}" ]] && tmux display-message -p "#S"
How do I know the name of a tmux session?
1,390,425,720,000
If I choose to allow all traffic on the OUTPUT chain (iptables -P OUTPUT ACCEPT) mail sends fine. As soon as I lock down my server with these rules, outgoing mail stops working. All else works though, which is strange. Does anyone see anything in here that would keep my outgoing mail from sending? I am stumped, have looked at these rules over and over and tried lots of different versions.

    iptables -F
    iptables -P INPUT DROP
    iptables -P FORWARD DROP
    iptables -P OUTPUT DROP
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A OUTPUT -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A OUTPUT -p tcp --sport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A INPUT -p tcp --sport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A OUTPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A OUTPUT -p tcp --sport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A OUTPUT -p tcp --dport 25 -j ACCEPT
    iptables -A OUTPUT -p tcp --dport 587 -j ACCEPT
    iptables -A OUTPUT -p tcp --sport 25 -j ACCEPT
    iptables -A OUTPUT -p tcp --sport 587 -j ACCEPT
    iptables -A OUTPUT -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A INPUT -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
    iptables -A OUTPUT -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A INPUT -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A OUTPUT -o lo -j ACCEPT
    iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
    iptables -A INPUT -p udp --sport 53 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT
    iptables -N LOGGING
    iptables -A INPUT -j LOGGING
    iptables -A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables Packet Dropped: " --log-level 7
    iptables -A LOGGING -j DROP
You have a rule to let the traffic out, but you don't have a rule to let the return traffic in. I'm guessing you meant for these 2 rules to be -A INPUT instead:

    iptables -A OUTPUT -p tcp --sport 25 -j ACCEPT
    iptables -A OUTPUT -p tcp --sport 587 -j ACCEPT

However using the source port as a method of allowing return traffic in is a bad way to secure the system. All someone has to do is use one of these source ports and your firewall ruleset becomes useless.

A much better idea would be to remove all the -A INPUT ... --sport rules and use just this single rule instead:

    iptables -I INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

The way this rule works is that when your system makes an outbound connection, the kernel records the connection in a tracking table. Then when packets from the remote system come back in, it looks to see if those packets are associated with any connections in the tracking table.

The ESTABLISHED bit is the one that allows traffic directly related to the session. This will be TCP packets coming back on the stream.

The RELATED bit lets traffic that's related to the connection, but isn't part of the connection itself, through. This can be things like ICMP packets, such as "ICMP can't fragment". These packets aren't part of the TCP stream, but are vitally important to keeping the stream alive (which is also another thing your ruleset doesn't cover, and without which you will see odd connection issues and loss).

This rule also works for UDP traffic, but because UDP is stateless, it's not quite the same. Instead the kernel has to keep track of UDP packets that go out, and just assumes that when UDP packets come back on the same host/port combination, and it's within a short time frame, that they're related.
How to Allow Outgoing SMTP on iptables Debian Linux
1,390,425,720,000
It seems I am missing something blindingly obvious, but still:

    ps -f -u myuser --ppid 1

Seems to only take a look at parent pid of the process, and returns all the processes that have parent pid of 1 - even when the user is not myuser. The -u alone works correctly (selecting only processes of myuser):

    ps -f -u myuser

What am I missing? Is there some built-in way to filter by several conditions in ps?

EDIT: My current workaround:

    ps -f -p $(join <(ps h --ppid 1 -o pid | sort) <(ps h -u myuser -o pid | sort))
ps is annoying that way. Fortunately, there is pgrep, which has similar selection options, but requires them all to match and then outputs the matching pids. By default it outputs one per line, but it can be asked to use a different delimiter so that it will work with ps:

    ps -f -p"$(pgrep -d, -u $USER -P 1)"
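A quick way to see the combination at work, using this shell's own children instead of myuser's init-spawned processes (a sketch; sleep is just a stand-in child process):

```shell
sleep 5 &                  # create a child we can look for
child=$!

pids=$(pgrep -d, -P $$)    # all children of this shell, comma separated
echo "children: $pids"

ps -f -p "$pids"           # hand the comma-separated list straight to ps

kill "$child"
```

Swap -P $$ for -u myuser -P 1 to get the exact selection from the question.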
Is there a way to select by several conditions in `ps`?
1,390,425,720,000
When developing a solution that requires a real-time operating system, what advantages would an operating system such an QNX or VxWorks have over Linux? Or to put it another way, since these operating system are designed specifically for real-time, embedded use - as opposed to Linux which is a more general system that can be tailored to real-time use - when would you need to use one of these operating systems instead of Linux?
Some embedded systems (a) need to meet difficult real-time requirements, and yet (b) have very limited hardware (which makes it even more difficult to meet those requirements). If you can't change the hardware, then there are several situations where you are forced to rule out Linux and use something else instead:

- Perhaps the CPU doesn't even have a MMU, which makes it impossible to run Linux (except uClinux, and as far as I know uClinux is not real-time).
- Perhaps the CPU is relatively slow, and the worst-case interrupt latency in Linux fails to meet some hard requirement, and some other RTOS tuned for extremely low worst-case interrupt latency can meet the requirement.
- Perhaps the system has very little RAM. A few years ago, a minimal Linux setup required around 2 MB of RAM; a minimal eCos setup (with a compatibility layer letting it run some applications originally designed to run on Linux) required around 20 kB of RAM.
- Perhaps there is no port of Linux to your hardware, and there isn't enough time to port Linux before you need to launch (pun!) your system. Many of the simpler RTOSes take much less time to port to new hardware than Linux.
Advantages of using a RTOS such as QNX or VxWorks instead of Linux?
1,390,425,720,000
On a VM box, I am noticing in the logs that the rsyslogd process gets HUPed. I have found no explanation except a few posts in some forums saying this is due to logrotate. Any ideas how to fix/troubleshoot this?

    messages-20141011:2014-10-10T04:02:02.054134-06:00 udr-oradl01 rsyslogd: [origin software="rsyslogd" swVersion="5.8.12" x-pid="364" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
    messages-20141011:2014-10-11T04:02:02.079917-06:00 udr-oradl01 rsyslogd: [origin software="rsyslogd" swVersion="5.8.12" x-pid="739" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
    messages-20150124:2015-01-24T04:02:01.497596-07:00 udr-oradl01 rsyslogd: [origin software="rsyslogd" swVersion="5.8.12" x-pid="819" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
There's nothing to fix. This is normal operation. When the logfile is rotated by logrotate, it tells rsyslog to close and re-open its logfile file handles by sending it a HUP signal. If rsyslog doesn't do that, then it will continue to write its log output to the OLD logfiles, not to the new ones because the file handles will still connect to the old files. This is true even if the old files are deleted (because, on unix, a file isn't truly deleted until the last file handle holding it open is closed)....which is why files don't always release the space they are using up back to the filesystem immediately if some process has an open file handle to them.
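The "writes keep going to the old file" behaviour behind this is easy to demonstrate in a few lines of shell (a sketch; the file names and fd 3 are arbitrary stand-ins for the daemon's log handle):

```shell
d=$(mktemp -d)

exec 3>> "$d/app.log"             # the "daemon" opens its log file
echo "before rotate" >&3

mv "$d/app.log" "$d/app.log.1"    # logrotate renames the file out of the way

echo "after rotate" >&3           # still lands in app.log.1: the handle
                                  # follows the inode, not the name
cat "$d/app.log.1"                # shows both lines

test -e "$d/app.log" || echo "app.log missing until the daemon reopens it"

exec 3>&-                         # a HUP-triggered reopen is just this:
exec 3>> "$d/app.log"             # close, then open the name again
exec 3>&-
rm -r "$d"
```

Until the close-and-reopen at the end, everything the "daemon" writes goes to the rotated file, which is exactly why logrotate sends rsyslogd the HUP.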
rsyslogd HUPed repeatedly
1,390,425,720,000
I want to do some experiments with buffer overflows on my various virtual machines, including (but not limited to) Debian 6, Ubuntu 12.04, Fedora 16, but every time I try to execute the buffer overflow exploit I get the following message:

    stack smashing detected (core dumped)

After doing my research I read that it is a feature called buffer overflow protection implemented in the compiler. GCC for instance uses GCC Stack-Smashing Protector (ProPolice), Clang/LLVM uses two buffer overflow detectors, SafeCode and AddressSanitizer.

My question is: Since I really want to check out buffer overflow attacks on my machines, is there a way (a compiler flag, perhaps? a linux config file?) to deactivate the buffer overflow protection?
GCC

On gcc (man gcc) the checks are enabled by:

    -fstack-protector
        Emit extra code to check for buffer overflows, such as stack smashing
        attacks. This is done by adding a guard variable to functions with
        vulnerable objects. This includes functions that call alloca, and
        functions with buffers larger than 8 bytes. The guards are initialized
        when a function is entered and then checked when the function exits.
        If a guard check fails, an error message is printed and the program
        exits.

    -fstack-protector-all
        Like -fstack-protector except that all functions are protected.

You can disable both by prepending no- to the option name:

    -fno-stack-protector
    -fno-stack-protector-all

LLVM/Clang

On LLVM/Clang (http://clang.llvm.org/docs/UsersManual.html#commandline) to enable/disable AddressSanitizer:

    -f[no-]address-sanitizer: Turn on AddressSanitizer, a memory error detector.

and SAFECode (http://safecode.cs.illinois.edu/docs/UsersGuide.html):

    -f[no-]memsafety
Is there a way to deactivate Buffer overflow protection on my machine?
1,390,425,720,000
I have a machine running the cgroups v2 (unified) hierarchy, so systemd is responsible for managing all cgroups and delegation to the systemd user instance works. I'd like to perform resource control on a group of processes, so I need them together in a unit — presumably a systemd scope. Normally, systemd-run would do this — but unfortunately these processes are already running and I don't want to restart them. How can I create a systemd scope out of already existing processes? The Control Group Interface documentation tells me it's possible, but I haven't been able to find a way from the command line. Neither systemctl nor systemd-run seem able to do this. Is there a way from the command line? I am running systemd v241 if it matters.
There are various command-line tools to make dbus calls; systemd comes with one called busctl. So you can call StartTransientUnit from the command line.

The command

The syntax is positively annoying, but it looks like this (for one process id, 14460):

    busctl call --user org.freedesktop.systemd1 /org/freedesktop/systemd1 \
      org.freedesktop.systemd1.Manager StartTransientUnit 'ssa(sv)a(sa(sv))' \
      'SCOPE-NAME.scope' fail 1 PIDs au 1 14460 0

Explanation

That is positively opaque (and took some tries to get right, and ultimately using dbus-monitor to see how systemd-run did it — only on the system manager though, systemd-run --user seems not to go through dbus). So an explanation, parameter by parameter:

    busctl call --user                 # use user session dbus, not system
    org.freedesktop.systemd1           # dbus service name
    /org/freedesktop/systemd1          # dbus object in that service
    org.freedesktop.systemd1.Manager   # interface name in that service
    StartTransientUnit                 # method we're going to call
    'ssa(sv)a(sa(sv))'                 # signature of method, see below
    'SCOPE-NAME.scope'                 # first argument, name of scope
    fail                               # second argument, how to handle conflicts (see below)
    1                                  # start of third argument, number of systemd properties for unit
    PIDs                               # name of first property
    au                                 # data type of first property, (a)rray [aka list] of (u)nsigned integers
    1                                  # count of array — that is, number of pids
    14460                              # first pid
    0                                  # fourth argument: array size=0 (unused parameter)

Adding to the command

More properties

To add another systemd property to the unit, you'd increase the number of properties and add it on. Note that each property is at least three additional command-line arguments: the key, the value-type, and the value. As an example, adding a Slice property would go from:

    … fail 1 PIDs au 1 14460 0

to

    … fail 2 PIDs au 1 14460 Slice s whatever.slice 0
           ^              ^^^^^ ^ ^^^^^^^^^^^^^^
           count          key   type value

Type "s" is string.
The list of them can be found in the D-Bus specification’s “Type system" chapter You can of course change the count to 3 and add a third property. Etc. More pids Similar to more properties, but this time it's the count buried inside the "PIDs" property value. An example should make it clearer: … fail 1 PIDs au 1 14460 0 becomes … fail 1 PIDs au 2 14460 14461 0 ^ ^^^^^ internal count second pid if you add PID 14461 as well as 14460. You can add a third, fourth, etc. PID in the same way. Combining them You can of course combine additional properties with additional pids. Just remember that the list of pids is a property value, so it needs to stay together. You can't mix pid arguments with other properties. The right way is to change: … fail 1 PIDs au 1 14460 0 to: … fail 2 PIDs au 2 14460 14461 Slice s whatever.slice 0 (the order doesn't matter, you could put the Slice block before the PIDs block). Where does the signature come from? The signature is obtained either from the systemd dbus API documentation or, probably more easily, by using dbus introspection: $ busctl introspect org.freedesktop.systemd1 /org/freedesktop/systemd1 \ org.freedesktop.systemd1.Manager | grep1 StartTransientUnit NAME TYPE SIGNATURE RESULT/VALUE FLAGS .StartTransientUnit method ssa(sv)a(sa(sv)) o - (for grep1, see https://unix.stackexchange.com/a/279518) There are a lot of methods and dbus-properties listed, over 180 here. So don't omit the grep. What does “fail” handling conflicts mean? What else is there? According to the systemd documentation (look under "CreateUnit"), the useful values are fail and replace. fail means your scope will fail to start if there is some conflict. replace means systemd will get rid of the conflicting unit. Note that this seems to only be for units that are currently starting or scheduled to (it does say "queued") — replace won't, for example, replace an already-running scope with the same name.
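The counting (number of properties, number of PIDs) is easy to get wrong by hand, so here is a hypothetical wrapper that assembles the argument list for the simple one-property case. `make_scope` and its dry-run `echo` are my own invention, not part of systemd; remove the `echo` to actually execute the call.

```shell
# Hypothetical helper (not part of systemd): print the busctl invocation
# for a scope name plus any number of PIDs, filling in the PID count
# automatically. Remove the echo to actually run it.
make_scope() {
  name=$1; shift
  echo busctl call --user org.freedesktop.systemd1 /org/freedesktop/systemd1 \
    org.freedesktop.systemd1.Manager StartTransientUnit 'ssa(sv)a(sa(sv))' \
    "$name.scope" fail 1 PIDs au "$#" "$@" 0
}

make_scope demo 14460 14461
```

Note that the dry-run output loses the quoting around the signature string, so it is meant for eyeballing the argument order, not for piping into another shell.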
How do I create a systemd scope for an already-existing process from the command line?
1,390,425,720,000
I am running a Red Hat Enterprise Linux Server release 7.1 (Maipo) on Intel(R) Xeon(R) CPU X5690 @ 3.47GHz I keep getting this error in abrt-watch-log. root 888 1 0 Aug03 ? 00:00:00 /usr/bin/abrt-watch-log -F BUG: WARNING: at WARNING: CPU: INFO: possible recursive locking detected ernel BUG at list_del corruption list_add corruption do_IRQ: stack overflow: ear stack overflow (cur: eneral protection fault nable to handle kernel ouble fault: RTNL: assertion failed eek! page_mapcount(page) went negative! adness at NETDEV WATCHDOG ysctl table check failed : nobody cared IRQ handler type mismatch Machine Check Exception: Machine check events logged divide error: bounds: coprocessor segment overrun: invalid TSS: segment not present: invalid opcode: alignment check: stack segment: fpu exception: simd exception: iret exception: /var/log/messages -- /usr/bin/abrt-dump-oops -xtD
The process abrt-watch-log takes strings to watch for and then runs a command. So what you're seeing as an error is just the strings to look for in /var/log/messages, which if found, is then sent to /usr/bin/abrt-dump-oops. $ man abrt-watch-log: NAME abrt-watch-log - Watch log file and run command when it grows or is replaced SYNOPSIS abrt-watch-log [-vs] [-F STR] ... FILE PROG [ARGS]
CPU warning - abrt-watch-log
1,390,425,720,000
When I execute route -n, from where exactly (from which structs) is the information displayed retrieved? I tried executing strace route -n, but it didn't help me find the right place where it's stored.
The route and ip utilities get their information from a pseudo filesystem called procfs. It is normally mounted under /proc. There is a file called /proc/net/route, where you can see the kernel's IP routing table. You can print the routing table with cat instead, but the route utility formats the output in a human-readable way, because the IP addresses are stored in hex. That file is not just a normal file: like all files in the proc filesystem, it is generated on the fly at the exact moment it is opened for reading. If you are interested in how that file is written, you need to look at the kernel sources: That function outputs the routing table. You can see at line 2510 that the header of the routing table is printed. The routing table itself appears to live mostly in the struct fib_info that is defined in the header file ip_fib.h, line 98.
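Since the Destination and Gateway columns in /proc/net/route are little-endian hex, decoding one by hand only takes printf. A small sketch (the substring expansion is bash-specific, and 0100A8C0 is just an example value):

```shell
# /proc/net/route stores IPs as little-endian hex: the string 0100A8C0
# holds the bytes 01 00 A8 C0, which read back-to-front is 192.168.0.1.
hex=0100A8C0
printf '%d.%d.%d.%d\n' \
  0x${hex:6:2} 0x${hex:4:2} 0x${hex:2:2} 0x${hex:0:2}   # prints 192.168.0.1
```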
Where is routing table stored internally in the Linux kernel?
1,390,425,720,000
I ran the following command to give the wheel group rwx permissions on new files and subdirectories created: [belmin@server1]$ ls -la total 24 drwxr-sr-x+ 2 guards guards 4096 Aug 27 15:30 . drwxr-xr-x 104 root root 12288 Aug 27 15:19 .. [belmin@server1]$ sudo setfacl -m d:g:wheel:rwX . [belmin@server1]$ getfacl . # file: . # owner: guards # group: guards # flags: -s- user::rwx group::r-x group:wheel:rwx other::r-x default:user::rwx default:group::r-x default:group:wheel:rwx default:mask::rwx default:other::r-x However, when I create a file as root, I am not completely clear how the effective permissions are calculated: [belmin@server1]$ sudo touch foo [belmin@server1]$ getfacl foo # file: foo # owner: root # group: guards user::rw- group::r-x #effective:r-- group:wheel:rwx #effective:rw- group:guards:rwx #effective:rw- mask::rw- other::r-- Can someone elaborate on what this means?
Effective permissions are formed by ANDing the actual permissions with the mask. Since the mask of your file is rw-, all the effective permissions have the x bit turned off.
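You can reproduce the masking with plain shell arithmetic (a toy illustration, not how the kernel computes it): rwx is octal 7, rw- is 6, and the effective bits are the bitwise AND of the two.

```shell
# Toy illustration: group:wheel has rwx (7), the mask is rw- (6);
# the effective permission is the bitwise AND of the two values.
perms=7   # rwx
mask=6    # rw-
echo $(( perms & mask ))   # prints 6, i.e. rw-
```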
How does ACL calculate the effective permissions on a file?
1,390,425,720,000
I want to write an application in Python that you can use with your default keyboard and a specially designed one for the application. I will design it simply by using a small numerical keyboard with stickers to give actions to the different keys. Both keyboards will be attached by USB. However, when these keys are pressed, just their regular signals (numbers, operators and enters) will be sent to Python, and it will not be able to distinguish between the signals from the main keyboard and the special keyboard. Because Python has (as far as I could find) no method for making this distinction, I want to do it in the OS itself. I will be programming it for the Raspberry Pi, so it will be Linux. So, the main question: how can I remap the keys of a specific keyboard to other keycodes? I thought about using the F-keys which I won't use for other purposes, or just some characters that are not present on any keyboard (supposing that there are such). Is this possible in Linux/Unix? And if so, how can I do it?
If you're using Linux, the best way to distinguish between input devices is to use the Linux Event Interface. After a device's hardware-specific input is decoded, it's converted to an intermediate Linux-specific event structure and made available by reading one or more of the character devices under /dev/input/. This is completely independent of the programming language you use, by the way. Each hardware device gets its own /dev/input/eventX device, and there are also aggregates (e.g. /dev/input/mice which represents the motion of all mice in the system). Your system may also have /dev/input/by-path and /dev/input/by-id. There's an ioctl called EVIOCGNAME which returns the name of the device as a human-readable string, or you can use something like /dev/input/by-id/usb-Logitech_USB_Gaming_Mouse-mouse. You open the device, and every time an event arrives from the input hardware, you'll get a packet of data. If you can read C, you can study the file /usr/include/linux/input.h which shows exactly how this stuff works. If you don't, you could read this question which provides all the information you need. The good thing about the event interface is that you just find out what device you need, and you can read input from that input device only, ignoring all others. You'll also get notifications about keys, buttons and controls you normally wouldn't by just reading the ‘cooked’ character stream from a terminal: even dead keys like Shift, etc. The bad thing is that the event interface doesn't return ‘cooked’ characters; it just uses numeric codes for keys (the codes corresponding to each key are found in the aforementioned header file, but also in the Python source of event.py). If your input device has unusual keys/buttons, you may need to experiment a bit till you get the right numbers.
How to distinguish input from different keyboards?
1,390,425,720,000
I wrote a simple script that echo-es its own PID every half a second: #!/bin/sh while true; do echo $$ sleep .5 done I'm running said script (it says 3844 over and over) in one terminal and trying to tail the file descriptor in another one: $ tail -f /proc/3844/fd/1 It doesn't print anything to the screen and hangs until ^C. Why? Also, all of the STD file descriptors (IN/OUT/ERR) link to the same pts: $ ls -l /proc/3844/fd/ total 0 lrwx------ 1 mg mg 64 sie 29 13:42 0 -> /dev/pts/14 lrwx------ 1 mg mg 64 sie 29 13:42 1 -> /dev/pts/14 lrwx------ 1 mg mg 64 sie 29 13:42 2 -> /dev/pts/14 lr-x------ 1 mg mg 64 sie 29 13:42 254 -> /home/user/test.sh lrwx------ 1 mg mg 64 sie 29 13:42 255 -> /dev/pts/14 Is this normal? Running Ubuntu GNOME 14.04.
Make an strace of tail -f; it explains everything. The interesting part: 13791 fstat(3, {st_mode=S_IFREG|0644, st_size=139, ...}) = 0 13791 fstatfs(3, {...}) = 0 13791 inotify_init() = 4 13791 inotify_add_watch(4, "/path/to/file", IN_MODIFY|IN_ATTRIB|IN_DELETE_SELF|IN_MOVE_SELF) = 1 13791 fstat(3, {st_mode=S_IFREG|0644, st_size=139, ...}) = 0 13791 read(4, 0xd981c0, 26) = -1 EINTR (Interrupted system call) What does it do? It sets up an inotify handler on the file, and then waits until something happens to it. If the kernel tells tail through this inotify handler that the file changed (normally, that it was appended to), then tail 1) seeks, 2) reads the changes, 3) writes them out to the screen. /proc/3844/fd/1 on your system is a symbolic link to /dev/pts/14, which is a character device. There is nothing like a "memory map" behind it that could be accessed that way. Thus, there is nothing whose changes could be signalled to inotify, because there is no disk or memory area backing it. This character device is a virtual terminal, which practically works as if it were a network socket. Programs running on this virtual terminal connect to this device (just as if you telnetted into a tcp port), and write what they want to write into it. There are more complex things as well, for example locking the screen, terminal control sequences and such; these are normally handled by ioctl() calls. I think you want to somehow watch a virtual terminal. It can be done on linux, but it is not so simple; it needs some network proxy-like functionality, and a little bit of tricky usage of these ioctl() calls. But there are tools which can do that. Currently I can't remember which debian package has the tool for this goal, but with a little googling you could probably find it easily. Extension: as @Jajesh mentioned here (give him a +1 if you gave me one), the tool is named watch. Extension #2: @kelnos mentioned that a simple cat /dev/pts/14 would also be enough.
I tried that, and yes, it worked, but not correctly. I didn't experiment a lot with it, but it seemed as if output going to that virtual terminal went either to the cat command or to its original location, and never to both. But I am not sure.
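By contrast, on a regular file the inotify mechanism works exactly as described above. A quick throwaway demonstration (file name from mktemp, the 0.3s/1s timings are arbitrary, and GNU sleep/timeout are assumed):

```shell
# Demonstrate that tail -f picks up appends to a regular file via inotify.
f=$(mktemp)
( sleep 0.3; echo hello >> "$f" ) &     # append after tail has started
out=$(timeout 1 tail -n0 -f "$f" || true)   # timeout ends the follow
wait
rm -f "$f"
echo "$out"                              # prints: hello
```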
Why can't I `tail -f /proc/$pid/fd/1`?
1,390,425,720,000
I'm familiar with lshw, /proc/cpuinfo, etc. But is there a method for getting at a CPU's CPUID opcode?
There's a tool called cpuid that one can use to query for much more detailed information than is typically present in lshw or /proc/cpuinfo. On my Fedora 19 system I was able to install the package with the following command: $ sudo yum install cpuid Once installed, cpuid is a treasure trove of details about ones underlying CPU. Multiple versions There are at least 2 versions of a tool called cpuid. On Debian/Ubuntu: $ dpkg -p cpuid Package: cpuid Priority: optional Section: admin Installed-Size: 68 Maintainer: Ubuntu MOTU Developers <[email protected]> Architecture: amd64 Version: 3.3-9 Depends: libc6 (>= 2.5-0ubuntu1) Size: 11044 Description: Intel and AMD x86 CPUID display program This program displays the vendor ID, the processor specific features, the processor name string, different kinds of instruction set extensions present, L1/L2 Cache information, and so on for the processor on which it is running. . Homepage: http://www.ka9q.net/code/cpuid/ Original-Maintainer: Aurélien GÉRÔME <[email protected]> While on CentOS/Fedora/RHEL: $ rpm -qi cpuid Name : cpuid Version : 20130610 Release : 1.fc19 Architecture: x86_64 Install Date: Wed 29 Jan 2014 09:48:17 PM EST Group : System Environment/Base Size : 253725 License : MIT Signature : RSA/SHA256, Sun 16 Jun 2013 12:30:11 PM EDT, Key ID 07477e65fb4b18e6 Source RPM : cpuid-20130610-1.fc19.src.rpm Build Date : Sun 16 Jun 2013 05:39:24 AM EDT Build Host : buildvm-13.phx2.fedoraproject.org Relocations : (not relocatable) Packager : Fedora Project Vendor : Fedora Project URL : http://www.etallen.com/cpuid.html Summary : Dumps information about the CPU(s) Description : cpuid dumps detailed information about x86 CPU(s) gathered from the CPUID instruction, and also determines the exact model of CPU(s). It supports Intel, AMD, and VIA CPUs, as well as older Transmeta, Cyrix, UMC, NexGen, and Rise CPUs. NOTE: The output below will focus exclusively on Todd Allen's implementation of cpuid, i.e. the Fedora packaged one. 
Example The upper section is pretty standard stuff. $ cpuid -1 | less CPU: vendor_id = "GenuineIntel" version information (1/eax): processor type = primary processor (0) family = Intel Pentium Pro/II/III/Celeron/Core/Core 2/Atom, AMD Athlon/Duron, Cyrix M2, VIA C3 (6) model = 0x5 (5) stepping id = 0x5 (5) extended family = 0x0 (0) extended model = 0x2 (2) (simple synth) = Intel Core i3 / i5 / i7 (Clarkdale K0) / Pentium U5000 Mobile / Pentium P4505 / U3405 / Celeron Mobile P4000 / U3000 (Arrandale K0), 32nm miscellaneous (1/ebx): process local APIC physical ID = 0x1 (1) cpu count = 0x10 (16) CLFLUSH line size = 0x8 (8) brand index = 0x0 (0) brand id = 0x00 (0): unknown But the lower sections are much more enlightening. feature information (1/edx): x87 FPU on chip = true virtual-8086 mode enhancement = true debugging extensions = true page size extensions = true time stamp counter = true RDMSR and WRMSR support = true physical address extensions = true machine check exception = true CMPXCHG8B inst. = true APIC on chip = true SYSENTER and SYSEXIT = true memory type range registers = true PTE global bit = true machine check architecture = true conditional move/compare instruction = true page attribute table = true page size extension = true processor serial number = false CLFLUSH instruction = true debug store = true thermal monitor and clock ctrl = true MMX Technology = true FXSAVE/FXRSTOR = true SSE extensions = true SSE2 extensions = true self snoop = true hyper-threading / multi-core supported = true therm. 
monitor = true IA64 = false pending break event = true It'll show you details about your cache structure: cache and TLB information (2): 0x5a: data TLB: 2M/4M pages, 4-way, 32 entries 0x03: data TLB: 4K pages, 4-way, 64 entries 0x55: instruction TLB: 2M/4M pages, fully, 7 entries 0xdd: L3 cache: 3M, 12-way, 64 byte lines 0xb2: instruction TLB: 4K, 4-way, 64 entries 0xf0: 64 byte prefetching 0x2c: L1 data cache: 32K, 8-way, 64 byte lines 0x21: L2 cache: 256K MLC, 8-way, 64 byte lines 0xca: L2 TLB: 4K, 4-way, 512 entries 0x09: L1 instruction cache: 32K, 4-way, 64-byte lines Even more details about your CPU's cache: deterministic cache parameters (4): --- cache 0 --- cache type = data cache (1) cache level = 0x1 (1) self-initializing cache level = true fully associative cache = false extra threads sharing this cache = 0x1 (1) extra processor cores on this die = 0x7 (7) system coherency line size = 0x3f (63) physical line partitions = 0x0 (0) ways of associativity = 0x7 (7) WBINVD/INVD behavior on lower caches = false inclusive to lower caches = false complex cache indexing = false number of sets - 1 (s) = 63 The list goes on. References cpuid - Linux tool to dump x86 CPUID information about the CPU(s) CPUID - Wikipedia sandpile.org - The world's leading source for technical x86 processor information
Is there a way to dump a CPU's CPUID information?
1,390,425,720,000
I'm not sure if these have a name, but on most computers I use the interface prefixes are usually: eth- : Ethernet/Wired wlan- : Wireless/WiFi However, on my ASUS RT-N56U, I have the following: br0 : 'Ethernet' - Bridge? eth2 : 'Ethernet', IPv6 (where are 0 and 1?) eth3 : 'Ethernet', IPv4 (the one with my WAN IP) lo : 'Local Loopback' - What's this for? ra0 : 'Ethernet' - ? rai0 : 'Ethernet' - ? Are there others? What do they mean?
From the ASUS RT-N56U wiki page: What are the existing network interfaces (transcript naming interfaces)? br0 = LAN + WLAN + AP-Client + WDS eth2 = Ethernet interface GMAC1, that connected to the switch (trunk port). eth2.1 = LAN (VLAN VID1) eth2.2 = WAN (VLAN VID2) ra0 = WLAN 5GHz ra1 = WLAN 5GHz Guest rai0 = WLAN 2.4GHz rai1 = WLAN 2.4GHz Guest apcli0 = AP-Client 5GHz apclii0 = AP-Client 2.4GHz wds0-wds3 = WDS 5GHz wdsi0-wdsi3 = WDS 2.4GHz In the no-VLAN firmware eth2 = LAN eth3 = WAN
What are the interface prefix meanings in ifconfig?
1,390,425,720,000
I know that there are several websites that will list the changelog of kernel versions (e.g. what is new in 4.17) (KernelNewbies, heise.de), but where do I find information about a minor change (e.g. 4.17.1 -> 4.17.2)? (I am trying to hunt down a bug that appears in a very old kernel version but not in a slightly newer one, so I'm interested in the changes, but I do not want to crawl the whole Git log.)
The changelogs are on kernel.org. The URLs have a predictable pattern. The current kernel change log is at: https://cdn.kernel.org/pub/linux/kernel/v4.x/ChangeLog-4.17.8 So, to read the changes from 4.17.1, you would go to: https://cdn.kernel.org/pub/linux/kernel/v4.x/ChangeLog-4.17.2
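Since the pattern is predictable, building the URL from a version string is trivial to script (a sketch: the vN.x path component is just the major version with ".x" appended, which holds for the 3.x/4.x/5.x naming):

```shell
# Build the changelog URL for a given stable release from its version number.
ver=4.17.2
major=${ver%%.*}    # "4"
url="https://cdn.kernel.org/pub/linux/kernel/v${major}.x/ChangeLog-${ver}"
echo "$url"
```

You could then fetch it with curl or wget.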
Where to find the Linux changelog of minor versions
1,390,425,720,000
I intend to play with the sudo insults and add a few. However, I could only figure out how to add a single insult, not a list, nor could I find the location of the file that contains the insults.
To edit the list of insults, you will need to edit the source and recompile. The insults are stored in plugins/sudoers/ins_*.h (4 files). If you add a new file, you will need to add its definition to plugins/sudoers/insults.h. That's it.
/etc/sudoers - Insults - How to add a list of insults?
1,390,425,720,000
I have installed mtp-tools on my debian7.8. mtp-files can list all my files on android phone. How can I mount all my files on android phone on the directory /tmp/myphone? root@debian:/home/debian# jmtpfs -l Device 0 (VID=0b05 and PID=0c02) is UNKNOWN. Please report this VID/PID and the device model to the libmtp development team Available devices (busLocation, devNum, productId, vendorId, product, vendor): 1, 8, 0x0c02, 0x0b05, UNKNOWN, UNKNOWN root@debian:/home/debian# jmtpfs /tmp/myphone Device 0 (VID=0b05 and PID=0c02) is UNKNOWN. Please report this VID/PID and the device model to the libmtp development team ignoring usb_claim_interface = -6ignoring usb_claim_interface = -5PTP_ERROR_IO: failed to open session, trying again after resetting USB interface LIBMTP libusb: Attempt to reset device Android device detected, assigning default bug flags fuse: bad mount point `/tmp/myphone': Input/output error The jmptfs can't mount my phone. chmod 777 /tmp/myphone After chmod. root@debian:/home/debian# jmtpfs /tmp/myphone Device 0 (VID=0b05 and PID=0c02) is UNKNOWN. Please report this VID/PID and the device model to the libmtp development team Android device detected, assigning default bug flags root@debian:/home/debian# jmtpfs -l Device 0 (VID=0b05 and PID=0c02) is UNKNOWN. Please report this VID/PID and the device model to the libmtp development team Available devices (busLocation, devNum, productId, vendorId, product, vendor): 1, 5, 0x0c02, 0x0b05, UNKNOWN, UNKNOWN
Install jmtpfs (aptitude install jmtpfs), which allows you to mount MTP devices. Create the directory if it doesn't exist already (mkdir /tmp/myphone). Then, the following will mount your phone: jmtpfs /tmp/myphone jmtpfs will use the first available device. If you've got more than one connected at a time, you can do jmtpfs -l to find out which one is your phone, and use the -device flag to specify it. As an alternative, you can try go-mtpfs instead.
Mount my mtp in my android phone on a directory?
1,390,425,720,000
I compiled a Linux kernel by doing make menuconfig and then make, so now I have a build of the most recent version of Linux. How can I load the kernel into QEMU?
From qemu's help: Linux/Multiboot boot specific: -kernel bzImage use 'bzImage' as kernel image -append cmdline use 'cmdline' as kernel command line -initrd file use 'file' as initial ram disk -dtb file use 'file' as device tree image A quick test here using Arch's kernel/initrd (qemu -kernel /boot/vmlinuz-linux -initrd /boot/initramfs-linux.img) worked (dropped me into a recovery shell since I didn't provide a root device).
Load Linux bzImage in QEMU?
1,390,425,720,000
Commands like ps come with a lot of parameters, especially because they give the use an option to choose between Unix and BSD style flags. I hope you get my point here. So, when there's such an option available, which should I choose for maximum compatibility across all linux systems? (max. compatibility is one of the priorities for instance) I know that Unix style is quite obvious, but BSD commands for some reason include more readable information (column titles for example, CPU column etc). Of course, please correct me if I am wrong, but that's what I felt.
Pretty much all Linuxes use GNU versions of the original core Unix commands like ps, which, as you've noted, supports both BSD and AT&T style options. Since your stated goal is only compatibility among Linuxes, that means the answer is, "It doesn't matter." Embedded and other very small variants of Linux typically use BusyBox instead of the GNU tools, but in the case of ps, it really doesn't affect the answer, since the BusyBox version is so stripped down it can be called neither AT&Tish nor BSDish. Over time, other Unixy systems have reduced the ps compatibility differences. Mac OS X — which derives indirectly from BSD Unix and in general behaves most similarly to BSD Unix still — accepts both AT&Tish and BSDish ps flags. Solaris/OpenIndiana behaves this way, too, though this is less surprising because it has a mixed BSD and AT&T history. FreeBSD, OpenBSD, and NetBSD still hew to the BSD style exclusively. The older a Unix box is, the more likely it is that it accepts only one style of flags. You can paper over the differences on such a box the same way we do now: install the GNU tools, if they haven't already been installed. That said, there are still traps. ps output generally shouldn't be parsed in scripts that need to be portable, for example, since Unixy systems vary in what columns are available, the amount of data the OS is willing to make visible to non-root users, etc. (By the way, note that it's "BSD vs. AT&T", not "BSD vs. Unix". BSD Unix is still UNIX®. BSD Unix shares a direct development history with the original AT&T branch. That sharing goes both ways, too: AT&T and its successors brought BSD innovations back home at several points in its history. This unification over time is partly due to the efforts of The Open Group and its predecessors.)
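To see the two flag styles side by side on a GNU ps (the default column sets differ, but the processes listed are the same), compare the headers:

```shell
# BSD-style (no dash) vs. AT&T/UNIX-style (dash) on the same GNU ps:
ps aux | head -n1    # USER PID %CPU %MEM ... header
ps -ef | head -n1    # UID PID PPID C STIME ... header
```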
Which to choose - BSD or Unix-style commands where available?
1,390,425,720,000
Here is a Linux audio MP3 puzzle that has been bugging me for a while: how do I trim the first few seconds off an MP3 audio file? (I can't get ffmpeg -ss to work with either the 00:01 or the 1.000 format.) So far, to do what I want, I resort to doing it in a GUI, which is maybe slower for a single file, and definitely slower for a batch of files.
For editing mp3's under linux, I'd recommend sox. It has a simple to use trim effect that will do what you ask for (see man sox for details - search (press /) for "trim start"). Example: sox input.mp3 output.mp3 trim 1 5 You didn't mention it, but if your aim is just to remove the silence at the beginning of files, you will find the silence effect much more useful (man sox, search for "above-periods")
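For the batch case mentioned in the question, wrapping the trim call in a loop is enough. Here is a dry-run sketch: it only prints the sox commands, the trimmed_ output naming is my own convention, and you would remove the echo to actually run sox.

```shell
# Dry-run batch trim: print a sox command for every given mp3, cutting
# the first 2 seconds. Remove the echo to actually run sox.
trim_head() {
  for f in "$@"; do
    echo sox "$f" "trimmed_$f" trim 2
  done
}

trim_head a.mp3 b.mp3
```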
How can you trim mp3 files using `ffmpeg`?
1,390,425,720,000
What is currently the best way to rip scratched audio CDs under Linux? What I find complicated is that there are several tools available, but it is not clear whether one tool has better error correction features than another. I mean, there are at least: cdparanoia cdda2wav cdrdao
cdparanoia started as a patch on a cdda2wav from 1997 and never updated the cdda2wav code. Since 2002, there has been no visible activity on the project. cdrdao was a similarly short-lived project, founded in 1998, with no new features since at least 2004. There was never special support for bad media. cdda2wav started in 1993 and is still actively maintained. In 2002, the "lib paranoia" was taken, made portable and enhanced over the years. Libparanoia has been integrated into the maintained cdda2wav since 2002. I recommend using: cdda2wav -vall paraopts=proof speed=4 cddb=0 -B and checking the statistical reports for each extracted track. BTW: if your drive supports reading C2 pointers, use: cdda2wav -vall paraopts=proof,c2check speed=4 cddb=0 -B This does a lot more than the latest cdparanoia version did. Please read the man page to understand the error reports from libparanoia. Note: due to a bug in cdparanoia, there are situations where the error reports from cdparanoia miss problems that are reported by cdda2wav, so do not believe cdparanoia was more successful than cdda2wav just because it reports fewer problems.
How to rip scratched audio cds?