I want to move a large file created by an external process as soon as it's closed. Is this test command correct?

    if lsof "/file/name"
    then
        # file is open, don't touch it!
    else
        if [ 1 -eq $? ]
        then
            # file is closed
            mv /file/name /other/file/name
        else
            # lsof failed for some other reason
        fi
    fi

EDIT: the file represents a dataset and I have to wait until it's complete to move it so another program can act on it. That's why I need to know if the external process is done with the file.
From the lsof man page:

    Lsof returns a one (1) if any error was detected, including the failure to
    locate command names, file names, Internet addresses or files, login names,
    NFS files, PIDs, PGIDs, or UIDs it was asked to list. If the -V option is
    specified, lsof will indicate the search items it failed to list.

So that would suggest that your "lsof failed for some other reason" clause would never be executed.

Have you tried just moving the file while your external process still has it open? If the destination directory is on the same filesystem, then there should be no problem with doing that, since the underlying inode will remain the same, unless you need to access it under the original path from a third process. Otherwise I think mv will fail anyway.

If you really need to wait until your external process is finished with the file, you are better off using a command that blocks instead of repeatedly polling. On Linux, you can use inotifywait for this. E.g.:

    inotifywait -e close_write /path/to/file

If you must use lsof (maybe for portability), you could try something like:

    until err_str=$(lsof /path/to/file 2>&1 >/dev/null); do
        if [ -n "$err_str" ]; then
            # lsof printed an error string, file may or may not be open
            echo "lsof: $err_str" >&2
            # tricky to decide what to do here, you may want to retry a number
            # of times, but for this example just break
            break
        fi
        # lsof returned 1 but didn't print an error string, assume the file is open
        sleep 1
    done

    if [ -z "$err_str" ]; then
        # file has been closed, move it
        mv /path/to/file /destination/path
    fi

Update

As noted by @JohnWHSmith below, the safest design would always use an lsof loop as above, as it is possible that more than one process has the file open for writing (an example case may be a poorly written indexing daemon that opens files with the read/write flag when it should really be read-only). inotifywait can still be used instead of sleep though: just replace the sleep line with inotifywait -e close /path/to/file.
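The claim that a same-filesystem mv leaves the writer undisturbed can be checked directly: the rename only updates directory entries, so the inode number (and any open file handles) are unchanged. A minimal sketch using throwaway temp paths:

```shell
# mv within one filesystem is a rename(2): the inode survives.
tmp=$(mktemp -d)
echo data > "$tmp/a"
before=$(stat -c %i "$tmp/a")
mv "$tmp/a" "$tmp/b"            # same filesystem, so this is just a rename
after=$(stat -c %i "$tmp/b")
[ "$before" = "$after" ] && echo "same inode"
rm -r "$tmp"
```

This prints "same inode": a process that still had the file open would keep writing to the very same inode under its new name.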
Move file but only if it's closed
One can find several threads on the Internet such as this: http://www.gossamer-threads.com/lists/linux/kernel/972619 where people complain they cannot build Linux with -O0, and are told that this is not supported; Linux relies on GCC optimizations to auto-inline functions, remove dead code, and otherwise do things that are necessary for the build to succeed. I've verified this myself for at least some of the 3.x kernels. The ones I've tried exit after a few seconds of build time if compiled with -O0. Is this generally considered acceptable coding practice? Are compiler optimizations, such as automatic inlining, predictable enough to rely on; at least when dealing with only one compiler? How likely is it that future versions of GCC might break builds of current Linux kernels with default optimizations (i.e. -O2 or -Os)? And on a more pedantic note: since 3.x kernels cannot compile without optimizations, should they be considered technically incorrect C code?
You've combined together several different (but related) questions. A few of them aren't really on-topic here (e.g., coding standards), so I'm going to ignore those.

I'm going to start with whether the kernel is "technically incorrect C code". I'm starting here because the answer explains the special position a kernel occupies, which is critical to understanding the rest.

Is the Kernel Technically Incorrect C Code?

The answer is definitely: it's "incorrect".

There are a few ways in which a C program can be said to be incorrect. Let's get a few simple ones out of the way first:

- A program which doesn't follow the C syntax (i.e., has a syntax error) is incorrect. The kernel uses various GNU extensions to the C syntax. Those are, as far as the C standard is concerned, syntax errors. (Of course, to GCC, they are not. Try compiling with -std=c99 -pedantic or similar...)
- A program which doesn't do what it's designed to do is incorrect. The kernel is a huge program and, as even a quick check of its changelogs will prove, surely doesn't. Or, as we'd commonly say, it has bugs.

What Optimization Means in C

[NOTE: This section contains a very loose restatement of the actual rules; for details, see the standard and search Stack Overflow.]

Now for the one that takes more explanation. The C standard says that certain code must produce certain behavior. It also says that certain things which are syntactically valid C have "undefined behavior"; an (unfortunately common!) example is to access beyond the end of an array (e.g., a buffer overflow). Undefined behavior is powerfully so: if a program contains it, even a tiny bit, the C standard no longer cares what behavior the program exhibits or what output a compiler produces when faced with it. But even if the program contains only defined behavior, C still allows the compiler a lot of leeway.
As a trivial example (note: for my examples, I'm leaving out #include lines, etc., for brevity):

    void f()
    {
        int *i = malloc(sizeof(int));
        *i = 3;
        *i += 2;
        printf("%i\n", *i);
        free(i);
    }

That should, of course, print 5 followed by a newline. That's what's required by the C standard.

If you compile that program and disassemble the output, you'd expect malloc to be called to get some memory, the pointer returned stored somewhere (probably a register), the value 3 stored to that memory, then 2 added to that memory (maybe even requiring a load, add, and store), then the memory copied to the stack and also a pointer to the string "%i\n" put on the stack, then the printf function called. A fair bit of work. But instead, what you might see is as if you'd written:

    /* Note that isn't hypothetical; gcc 4.9 at -O1 or higher does this. */
    void f()
    {
        printf("%i\n", 5);
    }

and here's the thing: the C standard allows that. The C standard only cares about the results, not the way they are achieved.

That's what optimization in C is about. The compiler comes up with a smarter (generally either smaller or faster, depending on the flags) way to achieve the results required by the C standard. There are a few exceptions, such as GCC's -ffast-math option, but otherwise the optimization level does not change the behavior of technically correct programs (i.e., ones containing only defined behavior).

Can You Write a Kernel Using Only Defined Behavior?

Let's continue to examine our example program. The version we wrote, not what the compiler turned it into. The first thing we do is call malloc to get some memory. The C standard tells us what malloc does, but not how it does it. If we look at an implementation of malloc aimed at clarity (as opposed to speed), we'd see that it makes some syscall (such as mmap with MAP_ANONYMOUS) to get a large chunk of memory. It internally keeps some data structures telling it which parts of that chunk are used vs. free.
It finds a free chunk at least as large as what you asked for, carves out the amount you asked for, and returns a pointer to it. It's also entirely written in C, and contains only defined behavior. If it's thread-safe, it may contain some pthread calls.

Now, finally, if we look at what mmap does, we see all kinds of interesting stuff. First, it does some checks to see if the system has enough free RAM and/or swap for the mapping. Next, it finds some free address space to put the block in. Then it edits a data structure called the page table, and probably makes a bunch of inline assembly calls along the way. It may actually find some free pages of physical memory (i.e., actual bits in actual DRAM modules)---a process which may require forcing other memory out to swap---as well. If it doesn't do that for the entire requested block, it'll instead set things up so that'll happen when said memory is first accessed. Much of this is accomplished with bits of inline assembly, writing to various magic addresses, etc. Note that it also uses large parts of the kernel, especially if swapping is required.

The inline assembly, writing to magic addresses, etc. is all outside the C specification. This isn't surprising; C runs across many different machine architectures, including a bunch that were barely imaginable in the early 1970s when C was invented. Hiding that machine-specific code is a core part of what a kernel (and to some extent a C library) is for.

Of course, if you go back to the example program, it becomes clear printf must be similar. It's pretty clear how to do all the formatting, etc. in standard C; but actually getting it on the monitor? Or piped to another program? Once again, a lot of magic done by the kernel (and possibly X11 or Wayland). If you think of other things the kernel does, a lot of them are outside C.
For example, the kernel reads data from disks (C knows nothing of disks, PCIe buses, or SATA) into physical memory (C knows only of malloc, not of DIMMs, MMUs, etc.), makes it executable (C knows nothing of processor execute bits), and then calls it as functions (not only outside C, but very much disallowed).

The Relationship Between a Kernel and its Compiler(s)

If you remember from before, if a program contains undefined behavior, as far as the C standard is concerned, all bets are off. But a kernel really has to contain undefined behavior. So there has to be some relationship between the kernel and its compiler, at least enough that the kernel developers can be confident the kernel will work despite violating the C standard. At least in the case of Linux, this includes the kernel having some knowledge of how GCC works internally.

How Likely Is It to Break?

Future GCC versions will probably break the kernel. I can say this pretty confidently, as it's happened several times before. Of course, things like the strict aliasing optimizations in GCC broke plenty of things besides the kernel, too.

Note also that the inlining the Linux kernel depends on is not automatic inlining; it's inlining that the kernel developers have manually specified.

There are various people who have compiled the kernel with -O0 and report that it basically works, after fixing a few minor problems. (One is even in the thread you linked to.) Mostly, it's that the kernel developers see no reason to compile with -O0, requiring optimization as a side effect makes some tricks work, and no one tests with -O0, so it's not supported.

As an example, this compiles and links with -O1 or higher, but not with -O0:

    void f();

    int main()
    {
        int x = 0, *y;
        y = &x;
        if (*y)
            f();
        return 0;
    }

With optimization, gcc can figure out that f() will never be called, and omits it. Without optimization, gcc leaves the call in, and the linker fails because there isn't a definition of f().
The kernel developers rely on similar behavior to make the kernel code easier to read/write.
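You can reproduce that last example from a shell, assuming gcc is installed (the file name here is illustrative):

```shell
# Write the example, then link at -O1 (dead call removed) and -O0 (undefined reference).
cat > linkdemo.c <<'EOF'
void f();
int main(void)
{
    int x = 0, *y = &x;
    if (*y)
        f();
    return 0;
}
EOF
gcc -O1 linkdemo.c -o linkdemo-O1 && echo "O1: links"
gcc -O0 linkdemo.c -o linkdemo-O0 2>/dev/null || echo "O0: link fails"
```

At -O1, gcc constant-propagates *y == 0, deletes the dead call, and the undefined f() never reaches the linker; at -O0 the call survives and linking fails with an undefined reference.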
Linux cannot compile without GCC optimizations; implications? [closed]
Is there a command to display a list of users who modified a file, providing a file history? I know that's possible with svn/git etc., but we have a config file that is not in SVN and someone modified it.
If you have not previously enabled some sort of auditing, there is no tool that can report this after the file has been modified. You can get the date and time of when the file was last modified, but not a revision history.

Moving forward, you could install, set up, and enable the auditd package. From the auditctl man page:

    -w path
        Insert a watch for the file system object at path. You cannot insert a
        watch to the top level directory. This is prohibited by the kernel.
        Wildcards are not supported either and will generate a warning. The way
        that watches work is by tracking the inode internally. If you place a
        watch on a file, it's the same as using the -F path option on a syscall
        rule. If you place a watch on a directory, it's the same as using the
        -F dir option on a syscall rule. The -w form of writing watches is for
        backwards compatibility and the syscall based form is more expressive.
        Unlike most syscall auditing rules, watches do not impact performance
        based on the number of rules sent to the kernel. The only valid options
        when using a watch are the -p and -k. If you need to do anything fancy
        like audit a specific user accessing a file, then use the syscall
        auditing form with the path or dir fields.

There is more discussion about this in the question Logging hidden file creations.
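Without auditing, the most you can recover is the file's timestamps. A minimal illustration on a throwaway temp file (the date is arbitrary):

```shell
# stat shows the last-modification time, but nothing about *who* modified the file.
f=$(mktemp)
touch -d '2001-02-03 04:05:06 UTC' "$f"
stat -c 'modified: %y' "$f"
rm "$f"
```

This reports the mtime you set, and that is the end of the trail: the user who wrote the file is simply not recorded by the filesystem.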
Display a file's history (list of users that have modified a file)
How can I do this in a single line?

    tcp dport 53 counter accept comment "accept DNS"
    udp dport 53 counter accept comment "accept DNS"
With a recent enough nftables, you can just write:

    meta l4proto {tcp, udp} th dport 53 counter accept comment "accept DNS"

Actually, you can do even better:

    set okports {
        type inet_proto . inet_service
        counter
        elements = {
            tcp . 22,  # SSH
            tcp . 53,  # DNS (TCP)
            udp . 53   # DNS (UDP)
        }
    }

And then:

    meta l4proto . th dport @okports accept

You can also write domain instead of 53 if you prefer using port/service names (from /etc/services).
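For context, a complete (hypothetical) ruleset embedding that set might look like the following; the table name, chain, established-connection rule, and the extra SSH port are illustrative, not from the answer:

```
table inet filter {
    set okports {
        type inet_proto . inet_service
        counter
        elements = { tcp . 22, tcp . 53, udp . 53 }
    }
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        meta l4proto . th dport @okports accept
    }
}
```

Saved to a file, this could be loaded with nft -f ruleset.nft.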
How to match both UDP and TCP for given ports in one line with nftables
I recently noticed I am only getting 100Mbit/s of throughput on my gigabit home network. When looking into it with ethtool, I found my Arch Linux box was using 100baseT/Half as link speed instead of 1000baseT/Full, which its NIC and the switch connected to it support. I am not sure why, but the NIC seems to not be advertising its link modes, according to ethtool:

    Settings for enp0s31f6:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 100Mb/s
        Duplex: Half
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: off
        MDI-X: on (auto)
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

When enabling auto-negotiation explicitly by running ethtool --change enp0s31f6 autoneg on, it seems to advertise all its modes to the switch and uses 1000baseT/Full. That only works most of the time, and only for a while, though. When I unplug the cable and plug it back in, autoneg switches off most of the time, but not always. Also, sometimes setting autoneg to on immediately disables it again. Rebooting also disables it again. Note that auto-negotiation does not get disabled when unplugging but when replugging.

dmesg logs this when autoneg was enabled and I plug in a cable:

    [153692.029252] e1000e: enp0s31f6 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
    [153699.577779] e1000e: enp0s31f6 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None
    [153699.577782] e1000e 0000:00:1f.6 enp0s31f6: 10/100 speed: disabling TSO

I am using the Intel NIC of my ASRock motherboard (from ~2015) and an unmanaged switch (Netgear GS208).
After hours of searching I found the solution in the most obvious place: NetworkManager seems to have somehow disabled auto-negotiation right in the settings for my ethernet port. The weird part is that even after knowing NetworkManager can change the ethernet link mode, I cannot find even a single source online detailing that functionality. The only way described in the Google search results I found is setting it via ethtool.
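For reference, NetworkManager does expose this through its 802-3-ethernet setting; a connection keyfile with auto-negotiation forced back on might look like the following (the connection name and file path are illustrative):

```
# /etc/NetworkManager/system-connections/wired.nmconnection (illustrative)
[connection]
id=wired
type=ethernet

[802-3-ethernet]
auto-negotiate=true
```

The same property can also be set from the command line with nmcli connection modify <name> 802-3-ethernet.auto-negotiate yes, followed by reactivating the connection.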
Linux disables ethernet auto-negotiation on plugging-in the cable?
I'm trying to disable some CPUs of my server. I've found this link: https://www.cyberciti.biz/faq/debian-rhel-centos-redhat-suse-hotplug-cpu/linux-turn-on-off-cpu-core-commands/, which offers me a method as below.

Here is what numactl --hardware gave me:

I want to disable all CPUs from 16 to 63, so I wrote a script named opCPUs.sh as below:

    #!/bin/bash
    for i in {16..63}; do
        if [[ "$1" == "enable" ]]; then
            echo 1 > /sys/devices/system/cpu/cpu$i/online
        elif [[ "$1" == "disable" ]]; then
            echo 0 > /sys/devices/system/cpu/cpu$i/online
        else
            echo 'illegal parameter'
        fi
    done

    grep "processor" /proc/cpuinfo

Then I execute it: ./opCPUs.sh disable, and I can see the result of the grep in the script. It seems to work.

Now I think all processes should be on CPUs 0 - 15, because the others have been disabled. So I use the existing dbus processes to verify, as below:

    ps -Lo psr $(pgrep dbus)

I get this:

The psr tells me in which CPU the process is running, right? If so, I have disabled CPU 60, CPU 52, etc.; why are they still here?
Besides @Yves' answer, you can actually use the isolcpus kernel parameter.

To disable the 4th CPU/core (CPU 3) with Debian or Ubuntu, in /etc/default/grub add isolcpus=3 to GRUB_CMDLINE_LINUX_DEFAULT:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash isolcpus=3"

Run sudo update-grub and reboot the server.

    isolcpus — Isolate CPUs from the kernel scheduler.

    Synopsis: isolcpus=cpu_number[,cpu_number,...]

    Description: Remove the specified CPUs, as defined by the cpu_number
    values, from the general kernel SMP balancing and scheduler algorithms.
    The only way to move a process onto or off an "isolated" CPU is via the
    CPU affinity syscalls. cpu_number begins at 0, so the maximum value is
    1 less than the number of CPUs on the system.

    This option is the preferred way to isolate CPUs. The alternative,
    manually setting the CPU mask of all tasks in the system, can cause
    problems and suboptimal load balancer performance.

Interestingly enough, one use of this kernel parameter is setting aside a CPU for later pinning a process to it via the CPU affinity syscalls, thus making sure no other user processes run on that CPU. In addition, it can make the server more stable, guaranteeing that a particular process with a very high load will have its own CPUs to play with. I have seen Meru doing that with their Linux-based controllers before becoming aware of this setup.

The associated command to then assign a process to the fourth CPU (CPU 3) is:

    sudo taskset -cp 3 PID

taskset is used to set or retrieve the CPU affinity of a running process given its PID, or to launch a new COMMAND with a given CPU affinity. CPU affinity is a scheduler property that "bonds" a process to a given set of CPUs on the system. The Linux scheduler will honor the given CPU affinity and the process will not run on any other CPUs.
Note that the Linux scheduler also supports natural CPU affinity: the scheduler attempts to keep processes on the same CPU as long as practical for performance reasons. Therefore, forcing a specific CPU affinity is useful only in certain applications.

SUMMARY

There are several techniques that apply to this question:

- Setting isolcpus=4 in grub and rebooting disables the 5th CPU (CPU 4) permanently for user-land processes;
- echo 0 > /sys/devices/system/cpu/cpu4/online disables the 5th CPU (CPU 4); it will still keep working for the processes that have already been assigned to it, but no new processes will be assigned to CPU 4 anymore;
- taskset -c 3 ./MyShell.sh will force MyShell.sh to be assigned to the 4th CPU (CPU 3), whereas the 4th CPU can still accept other user-land processes if isolcpus is not excluding it from doing that.

PS. Anecdotally, my best example of using isolcpus/taskset in the field was an SSL frontend for a very busy site that kept going unstable every couple of weeks, to the point where Ansible/ssh would not let me in remotely anymore. I applied the techniques discussed above, and it has kept working in a very stable fashion ever since.
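As a quick, unprivileged illustration of taskset (pinning your own process needs no root and no isolcpus):

```shell
# Pin the current shell to CPU 0, then read its affinity back.
taskset -cp 0 $$
taskset -cp $$
```

The second command reports the shell's current affinity list as just "0"; any command started from this shell now inherits that restriction.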
How to disable one CPU
I looked at the Stack Exchange site but couldn't find anything. I looked at the Wikipedia entries on Linux containers (https://en.wikipedia.org/wiki/LXC) and on hypervisors (https://en.wikipedia.org/wiki/Hypervisor), but the explanations are beyond what a person who has not worked with either will understand. I also saw http://www.linux.com/news/enterprise/cloud-computing/785769-containers-vs-hypervisors-the-battle-has-just-begun, but that doesn't explain it either.

I have played with VMs such as VirtualBox. To my limited understanding, one of the starting ideas for virtual machines was perhaps to test software in a sandbox environment (having a Solaris box when you cannot buy/afford the machine, and still having some idea how the software you are developing for that target hardware works). While limited, that had its uses. This is probably one of the ways it made the jump into cloud computing as well.

The questions are broad, so this is how I distill them:

- Can someone explain what a hypervisor and a *nix container are (with analogies if possible)?
- Is a *nix hypervisor the same as a virtual machine, or is there a difference?
A Virtual Machine (VM) is quite a generic term for many virtualisation technologies. There are many variations on virtualisation technologies, but the main ones are:

- Hardware Level Virtualisation
- Operating System Level Virtualisation

qemu-kvm and VMWare are examples of the first. They employ a hypervisor to manage the virtual environments in which a full operating system runs. For example, on a qemu-kvm system you can have one VM running FreeBSD, another running Windows, and another running Linux.

The virtual machines created by these technologies behave like isolated individual computers to the guest. These have a virtual CPU, RAM, NIC, graphics, etc., which the guest believes are the genuine article. Because of this, many different operating systems can be installed on the VMs and they work "out of the box" with no modification needed.

While this is very convenient, in that many OSes will install without much effort, it has a drawback in that the hypervisor has to simulate all the hardware, which can slow things down. An alternative is para-virtualised hardware, in which a new virtual device and driver is developed for the guest, designed for performance in a virtual environment. qemu-kvm provides the virtio range of devices and drivers for this. A downside to this is that the guest OS must be supported; but if it is, the performance benefits are great.

lxc is an example of Operating System Level Virtualisation, or containers. Under this system, there is only one kernel installed - the host kernel. Each container is simply an isolation of the userland processes. For example, a web server (for instance apache) is installed in a container. As far as that web server is concerned, the only installed server is itself. Another container may be running an FTP server. That FTP server isn't aware of the web server installation - only its own.
Another container can contain the full userland installation of a Linux distro (as long as that distro is capable of running with the host system's kernel). However, there are no separate operating system installations when using containers - only isolated instances of userland services. Because of this, you cannot install different platforms in a container - no Windows on Linux.

Containers are usually created by using a chroot. This creates a separate private root (/) for a process to work with. By creating many individual private roots, processes (web servers, or a Linux distro, etc.) run in their own isolated filesystem. More advanced techniques, such as cgroups, can isolate other resources such as network and RAM.

There are pros and cons to both, and many long-running debates as to which is best.

Containers are lighter, in that a full OS isn't installed for each, which is the case for hypervisors. They can therefore run on lower-spec'd hardware. However, they can only run Linux guests (on Linux hosts). Also, because they share the kernel, there is the possibility that a compromised container may affect another.

Hypervisors are more secure and can run different OSes, because a full OS is installed in each VM and guests are not aware of other VMs. However, this utilises more resources on the host, which has to be relatively powerful.
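On modern Linux, the kernel building blocks behind containers are visible from any shell: every process runs inside a set of namespaces listed under /proc, and a container is, roughly, a process tree given fresh copies of these (plus cgroup limits and a private root).

```shell
# Each entry is one namespace (mount, PID, network, ...) this shell lives in.
ls /proc/$$/ns
```

Tools like lxc (and unshare/clone with the CLONE_NEW* flags) create new instances of these namespaces instead of sharing the host's.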
What is a Linux container and a Linux hypervisor?
I understand that /dev/kmem and /dev/mem provide access to the memory (i.e. raw RAM) of the system. I am also aware, that /dev/kmem can be completely disabled in kernel and that access can be restricted for /dev/mem. It seems to me, having raw access to memory can be useful for developers and hackers, but why should I need access to memory through /dev/mem. AFAIK it cannot be disabled in kernel (unlike /dev/kmem). Having access to raw memory that can be potentially abused/exploited seems to me to be just asking for trouble. Is there some practical use for it? Do any user programs require it to work properly?
There's a slide deck from Scale 7x 2009 titled "Undermining the Linux Kernel: Malicious Code Injection via /dev/mem" that contained these 2 bullets:

    Who needs this?
    - X Server (Video Memory & Control Registers)
    - DOSEmu

From everything I've found from searching thus far, it would appear that these 2 are the front-runners for legitimate uses.

References

- Anthony Lineberry on /dev/mem Rootkits - LJ 8/2009 by Mick Bauer
- Who needs /dev/kmem?
kernel: disabling /dev/kmem and /dev/mem
My program creates many small short-lived files. They are typically deleted within a second after creation. The files are in an ext4 file system backed by a real hard disk. I know that Linux periodically flushes (pdflush) dirty pages to disk. Since my files are short-lived, most likely they are not cached by pdflush. My question is, does my program cause a lot of disk writes? My concern is my hard disk's life. Since the files are small, let's assume the sum of their size is smaller than dirty_bytes and dirty_background_bytes. Ext4 has default journal turned on, i.e. metadata journal. I also want to know whether the metadata or the data is written to disk.
A simple experiment using ext4.

Create a 100MB image...

    # dd if=/dev/zero of=image bs=1M count=100
    100+0 records in
    100+0 records out
    104857600 bytes (105 MB) copied, 0.0533049 s, 2.0 GB/s

Make it a loop device...

    # losetup -f --show image
    /dev/loop0

Make a filesystem and mount...

    # mkfs.ext4 /dev/loop0
    # mount /dev/loop0 /mnt/tmp

Make some kind of run with short-lived files. (Change this to any method you prefer.)

    for ((x=0; x<1000; x++))
    do
        (
            echo short-lived-content-$x > /mnt/tmp/short-lived-file-$x
            sleep 1
            rm /mnt/tmp/short-lived-file-$x
        ) &
    done

Umount, sync, unloop.

    # umount /mnt/tmp
    # sync
    # losetup -d /dev/loop0

Check the image contents.

    # strings image | grep short-lived-file | tail -n 3
    short-lived-file-266
    short-lived-file-895
    short-lived-file-909
    # strings image | grep short-lived-content | tail -n 3

In my case it listed all the file names, but none of the file contents. So the file names (metadata) did reach the disk, but the contents never did.
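The timing that makes this work is visible in the VM sysctls: dirty pages only become eligible for writeback once they are older than dirty_expire_centisecs, checked by flusher threads waking every dirty_writeback_centisecs, so files deleted well before that age typically never generate data writes. The defaults are commonly 3000 and 500 (30s and 5s), though your system may differ.

```shell
# Age at which dirty pages become eligible for writeback, and how often the
# flusher threads wake up (both in hundredths of a second).
cat /proc/sys/vm/dirty_expire_centisecs
cat /proc/sys/vm/dirty_writeback_centisecs
```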
Are short-lived files flushed to disk?
From this answer, the solution is:

    modprobe loop max_loop=64

which allows me to use 64 loopback devices. Then:

    mknod -m 660 /dev/loop8 b 7 8

to create the devices. I did this for 8, 9, and 10, and 8 and 9 work but 10 does not. I then tried loopa to loopf and tried to mount an 11th device, and I get the error: Error: Failed to set up a loop device: How do I make >10 loop devices?
Make sure you are running mknod -m 660 /dev/loop10 b 7 10. The format is:

    mknod -m 660 /dev/loop<ID> b 7 <ID>

where both IDs are the same.

Update [07/10/2014]

I also found a good blog post on always having more at boot. See https://yeri.be/xen-failed-to-find-an-unused-loop-device

Update [05/25/2016]

I run a CentOS server, and I found that this post was also helpful when the other methods don't work. This makes my new favorite method:

    MAKEDEV /dev/loop

It creates 256 loop devices (which is the max without modifying the kernel).
How do I set up more than 10 loopback devices?
I'm trying to find a way to safely shutdown a network interface, i.e. without disturbing any processes. For this I need to find out what processes are currently using that interface. Tools like ss, netstat or lsof are helpful showing which processes have open sockets, but they don't show wpa_supplicant, dhcpcd, hostapd and others. Is there a way to detect these processes in a general way? It might not for dhcpcd, as it is just a program opening a socket every now and then, but I'm assuming wpa_supplicant and hostapd would “do something” to that interface which is detectable and perhaps also leads to the relevant PID.
Such programs will be using Netlink sockets to talk to the network hardware's driver directly. lsof version 4.85 added support for Netlink sockets, but in my testing on CentOS 5.8, the feature doesn't appear to work very well. Perhaps it depends on features added in newer kernels.

However, it is possible to make a pretty good guess about when you've run into a Netlink socket. If you cat /proc/net/netlink you get a list of open Netlink sockets, including the PID of processes that have them opened. Then if you lsof -p $THEPID those PIDs, you'll find entries with sock in the TYPE column and can't identify protocol in the NAME column. It's not guaranteed that these are Netlink sockets, but it's a pretty good bet.

You might also infer that a given process is talking directly to an interface if it has files under /sys/class/net/$IFNAME open.

Now, all that having been said, I think your question is wrong-headed. Let's say there is a command I haven't discovered. Call it lsif -i wlan0, and say it returns a list of PIDs accessing the named interface. What would you be able to do with it which would allow you to "not disturb" processes using that interface, as you've requested? Were you planning on killing off all the processes using that interface first? That's pretty disturbing. :)

Maybe you were instead thinking that dropping the interface out from underneath a process using it would somehow be harmful? What, in the end, is so bad about ifconfig wlan0 down? Network interfaces are not storage devices. You don't have to flush data to disk and unmount them gracefully. Not breaking open sockets might be worthwhile, but as you already know, you can figure that out with netstat and lsof.

wpa_supplicant isn't going to sulk if you bounce its interface unceremoniously. (If it does, it's a bug and needs to be fixed; it wouldn't indicate some fault of yours.) Well-written network programs cope with such things as a matter of course. Networks are unreliable.
If a program can't cope with an interface being bounced, it also won't be able to cope with unplugged Ethernet cables, balky DSL modems, or backhoes.
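The /proc/net/netlink inspection described above can be sketched in one line (the exact column layout may vary slightly between kernels):

```shell
# The Pid column names a process holding each open Netlink socket.
head -n 3 /proc/net/netlink
```

Cross-referencing the Pid values with ps or lsof -p then gives a reasonable guess at which daemons (wpa_supplicant, NetworkManager, etc.) are talking to the networking stack.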
Find processes using a network interface
How to disable system beep on Linux? I don't have superuser powers so I cannot recompile kernel/unload module.
For beeps generated in your shell (which seem to be the most annoying ones), add this to ~/.inputrc:

    set bell-style none

Note that this is not terminal- but host-specific. That means that when you log in to another computer via ssh where this isn't set, the beep is back. (I tested on Fedora.)
How to disable system beep for non-privileged user
When I use strace to examine a program, I often have a hard time finding where the syscalls from the dynamic loader end and the syscalls from the program begin. The output from strace ./hello, where hello is a simple hello-world C program, is 36 lines. Here's a sample:

    execve("./hello", ["./hello"], 0x7fffb38f4a30 /* 73 vars */) = 0
    brk(NULL)                               = 0x1304000
    arch_prctl(0x3001 /* ARCH_??? */, 0x7ffe6715fe60) = -1 EINVAL (Invalid argument)
    access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
    openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
    newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=92340, ...}, AT_EMPTY_PATH) = 0
    mmap(NULL, 92340, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f78d9fbd000
    close(3)                                = 0
    openat(AT_FDCWD, "/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
    read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260|\2\0\0\0\0\0"..., 832) = 832
    pread64(3, "\6\0\0\0\4\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0"..., 784, 64) = 784
    pread64(3, "\4\0\0\0 \0\0\0\5\0\0\0GNU\0\2\0\0\300\4\0\0\0\3\0\0\0\0\0\0\0"..., 48, 848) = 48

Is there a way to ignore the dynamic loader syscalls?
On x86_64 the main program starts just after arch_prctl(ARCH_SET_FS) and a couple of mprotect()s, so you can sed 1,/ARCH_SET_FS/d on the strace's output. A trick you can use on all platforms is to LD_PRELOAD a small library which overides __libc_start_main() and does a pointless system call like write(-1, "IT_STARTS_HERE", 14) before calling the original __libc_start_main(). cat >hack.c <<'EOT' #define _GNU_SOURCE #include <dlfcn.h> #include <unistd.h> #include <errno.h> #include <err.h> int __libc_start_main( int (*main)(int,char**,char**), int ac, char **av, int (*init)(int,char**,char**), void (*fini)(void), void (*rtld_fini)(void), void *stack_end) { typeof(__libc_start_main) *next = dlsym(RTLD_NEXT, "__libc_start_main"); write(-1, "IT_STARTS_HERE", 14); errno = 0; return next(main, ac, av, init, fini, rtld_fini, stack_end); } EOT cc -shared -ldl hack.c -o hack.so hack_strace(){ strace -E LD_PRELOAD=./hack.so "$@" 2>&1 >&3 3>&- | sed 1,/IT_STARTS_HERE/d >&2; } 3>&1 # usage hack_strace sh -c 'echo LOL' getuid() = 2000 getgid() = 2000 getpid() = 11443 rt_sigaction(SIGCHLD, {sa_handler=0x55eba5c19380, sa_mask=~[RTMIN RT_1], sa_flags=SA_RESTORER, sa_restorer=0x7fae5c55f840}, NULL, 8) = 0 geteuid() = 2000
Can I skip syscalls made by the dynamic loader in strace?
1,631,465,526,000
I recently noticed that in normal mode when I type Ctrl-i (the jump-list command) it is "confused" with the Tab key. In particular, I have this mapping:

nnoremap <Tab> :tabnext<Enter>
Terminal I/O applications just see the composed characters sent by the terminal, and cannot distinguish amongst specific key chords, while GUI applications can, because GUIs tend to operate in terms of key press and release messages. Most terminals, and most terminal emulators, send a ␉ (U+0009) character down the (virtual) wire to the host system when either ⇥ Tab or ⎈ Control+I are pressed. This is not vim. This is how terminals work, and how the emulators that emulate them work too. Similarly, and oft forgotten nowadays I observe, these terminals and terminal emulators send a ␛ (U+001B) character down the (virtual) wire to the host system when either ⎋ Esc or ⎈ Control+[ are pressed.
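A quick way to see the bytes involved from a shell (this just prints the byte values; it is the terminal that chooses to emit them for those key chords):

```sh
printf '\t' | od -An -tx1      # Tab / Ctrl-I: the single byte 09
printf '\033' | od -An -tx1    # Esc / Ctrl-[: the single byte 1b
```

Since both chords arrive as the same byte, any mapping of <Tab> in a terminal vim necessarily applies to Ctrl-I as well; only a GUI vim (gvim), which sees key events rather than bytes, can tell them apart.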
Conflict Ctrl-I with TAB in normal mode
1,631,465,526,000
I need to find and delete files older than 1 week on the development unit. There are a limited number of utilities available on this unit; find's -mtime predicate is not available. How do I check for all files which are older than x days in this case?
-mtime is a standard predicate of find (contrary to -delete) but it looks like you have a stripped down version of busybox, where the FEATURE_FIND_MTIME feature has been disabled at build time. If you can rebuild busybox with it enabled, you should be able to do: find . -mtime +6 -type f -exec rm -f {} + Or if FEATURE_FIND_DELETE is also enabled: find . -mtime +6 -type f -delete If not, other options could be to use find -newer (assuming FEATURE_FIND_NEWER is enabled) on a file that is set to have a one week old modification time. touch -d "@$(($(date +%s) - 7 * 86400))" ../ref && find . ! -type f -newer ../ref -exec rm -f {} + Or if -newer is not available but sh's [ supports -nt: touch -d "@$(($(date +%s) - 7 * 86400))" ../ref && find . ! -type f -exec sh -c ' for f do [ "$f" -nt ../ref ] || printf "%s\0" "$f" done' sh {} + | xargs -0 rm -f
Finding files older than x days on a system with a stripped down busybox
1,631,465,526,000
Trying to troubleshoot this error, which pertains to microcode: my card from lspci shows

Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 34)

system.log shows

iwlwifi: Detected Intel(R) Centrino(R) Advanced-N 6205 AGN, REV=0xB0

When I run modinfo, I get (a lot of stuff cut off):

description: Intel(R) Wireless WiFi driver for Linux
## lots of stuff...
firmware: iwlwifi-6000g2b-6.ucode
firmware: iwlwifi-6000g2a-6.ucode
firmware: iwlwifi-6050-5.ucode
firmware: iwlwifi-6000-6.ucode
## lots of stuff...
## NO iwlwifi-6205-#.ucode
srcversion: 6BA065AF04F0DFDB8D91DBF

But none of those show 6205. Which .ucode is system.log referring to when it says

iwlwifi: loaded firmware version 18.168.6.1 op_mode iwldvm

There are two that I could assume:

firmware: iwlwifi-6050-5.ucode
firmware: iwlwifi-6000-6.ucode

But is there a way to know for certain?
This is documented at the Linux Wireless wiki:

------------------------------------------------------------------------------
Device            | Kernel   | Module  | Firmware                             |
------------------|----------|---------|--------------------------------------|
Intel® Centrino®  | 2.6.36+  | iwldvm  | 17.168.5.1, 17.168.5.2 or 17.168.5.3 |
Advanced-N 6205   |----------|         |--------------------------------------|
                  | 3.2+     |         | 18.168.6.1                           |
------------------------------------------------------------------------------

This table reflects the minimal version of each firmware hosted in the linux-firmware.git repository that is known to work with that module and kernel version. In your specific case, the file is iwlwifi-6000g2a-6.ucode. modinfo shows all firmware files that can be used by that module (which may also support other hardware). The wireless wiki is a pretty reliable way to get information about hardware and firmware.

dmesg | grep firmware can help you check which firmware file is actually being used.

Example 1 - firmware was loaded correctly:

[ 12.860701] iwlagn 0000:03:00.0: firmware: requesting lbm-iwlwifi-5000-1.ucode
[ 12.949384] iwlagn 0000:03:00.0: loaded firmware version 8.24.2.12

Example 2 - missing firmware file d101m_ucode.bin:

[ 77.481635] e100 0000:00:07.0: firmware: requesting e100/d101m_ucode.bin
[ 137.473940] e100: eth0: e100_request_firmware: Failed to load firmware "e100/d101m_ucode.bin": -2
Where can I find the microcode (ucode) that is being loaded by iwlwifi (Intel 6205)?
1,631,465,526,000
If I have a windows client read a file on a Linux smb share at an interval <= 10 seconds, the windows client will show incorrect (cached?) information of that file. I've reproduced this on multiple systems. Example steps to reproduce: 1) set up linux samba share - for this example, using Debian and installing samba. example: sudo mkdir /test sudo chmod 777 /test smb.conf addition: [test] read only = no locking = no path = /test/ guest ok = yes 2) Map this directory as a drive in a windows client (this test will use L:) 3) create a file with some text on the samba server nano /test/test.txt ORIGINAL 4) create simple batch file on windows machine to view file every 5 seconds: copy con test.bat @echo off cls :1 type L:\test.txt timeout 5 goto 1 5) run batch file, it should show ORIGINAL every 5 seconds. 6) on linux server, change file contents nano /test/test.txt CHANGED 7) view the running batch file on windows, it will still say "ORIGINAL" every five seconds, and not "CHANGED" as the real file has. 8) terminate the batch file and wait ~ 15 seconds, OR change timeout to something > 10 seconds, and it will update properly. Hopefully I've explained and outlined how to test this sufficiently. Can anyone reproduce this behavior and/or suggest how to fix this? . . . NOTES: Linux Client > Linux SMB Host shows the proper file content. Windows Client > Windows SMB Host shows the proper file content. It's specifically Windows Client > Linux SMB Host that does not show the proper file content at a refresh interval of <= 10 seconds. All Windows flavors I've tested with (Win7, Win10, Server2016) exhibit the same behavior. I have also tested different protocols on my samba share 'NT1, SMB2, SMB3', and they do not change the behavior. NOTE: I believe this is most likely a Windows issue, but I have not received any responses on either technet or superuser in a week. This should be fairly easy to test, can anyone confirm this behavior or state otherwise?
I resolved this by placing

oplocks = False

in my smb.conf under my share's settings. See https://www.samba.org/samba/docs/old/Samba3-HOWTO/locking.html#id2615926
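Applied to the example share from the question, the section would read (a sketch; the option goes in the per-share block, per smb.conf's syntax):

```ini
[test]
   read only = no
   locking = no
   oplocks = False
   path = /test/
   guest ok = yes
```

Disabling oplocks stops Windows clients from caching file data locally, which is exactly the trade-off wanted here; it may cost some performance for purely Windows-side workloads.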
Windows clients will not refresh Linux samba file locally if reading file at intervals <= 10 seconds
1,631,465,526,000
Executing (for example) the following command to get the list of memory-mapped pages:

pmap -x `pidof bash`

I got this output: Why are some read-only pages marked as "dirty", i.e. written to, requiring a write-back? If they are read-only, the process should not be able to write to them... (In the provided example the dirty pages are always 4 kB, but I have found other cases with different values.) I also checked /proc/pid/smaps, and those pages are described as "Private Dirty".
A dirty page does not necessarily require a write-back. A dirty page is one that was written to since the kernel last marked it as clean. The data doesn't always need to be saved back into the original file. The pages are private, not shared, so they wouldn't be saved back into the original file. It would be impossible to have a dirty page backed by a read-only file. If the page needs to be removed from RAM, it will be saved in swap. Pages that are read-only, private and dirty, but within the range of a memory-mapped file, are typically data pages that contain constants that need to be initialized at run time, but don't change after they have been initialized. For example, they may contain static data that embeds pointers; the pointer values depend on the address at which the program or library is mapped, so it has to be computed after the program has started, with the page being read-write at this stage. After the pointers have been computed, the contents of the page won't ever change in this instance of the program, so the page can be changed to read-only. See “Hunting Down Dirty Memory Pages” by stosb for an example with code fragments. You may, more rarely, see read-only, executable, private, dirty pages; these happen with some linkers that mix code and data more freely, or with just-in-time compilation.
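You can spot such pages on your own system. The sketch below (field layout assumed as documented in proc(5): a header line per mapping with the permissions in the second field, followed by attribute lines) prints every read-only private mapping of the awk process itself that carries nonzero Private_Dirty — on a typical glibc system these correspond to the relocated-then-sealed data segments described above:

```sh
awk '/^[0-9a-f]+-[0-9a-f]+ / { hdr = ($2 == "r--p") ? $0 : "" }
     /^Private_Dirty:/ && $2 > 0 && hdr { print hdr; hdr = "" }' /proc/self/smaps
```

Each output line is a mapping header such as an address range followed by r--p, typically pointing into a shared library or the main binary.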
Why read-only memory mapped regions have dirty pages?
1,631,465,526,000
How can I do logrotate? I can see no effect when I do logrotate: root@me-Latitude-E5550:/etc/logrotate.d# cd .. root@me-Latitude-E5550:/etc# cd .. root@me-Latitude-E5550:/# logrotate -d /etc/logrotate.conf Ignoring /etc/logrotate.conf because of bad file mode. Handling 0 logs root@me-Latitude-E5550:/# chmod 644 /etc/logrotate.d/* root@me-Latitude-E5550:/# cd /etc/logrotate.d root@me-Latitude-E5550:/etc/logrotate.d# ls apport custom pm-utils speech-dispatcher upstart apt dpkg ppp ufw cups-daemon lightdm rsyslog unattended-upgrades root@me-Latitude-E5550:/etc/logrotate.d# cd .. root@me-Latitude-E5550:/etc# cd .. root@me-Latitude-E5550:/# logrotate -d /etc/logrotate.conf Ignoring /etc/logrotate.conf because of bad file mode. Handling 0 logs root@me-Latitude-E5550:/# cd /var/log root@me-Latitude-E5550:/var/log# ls -larth total 34M drwxrwxrwx 2 root root 4,0K Feb 18 2016 speech-dispatcher drwxrwxrwx 2 root root 4,0K Mai 19 2016 upstart drwxrwxrwx 2 root root 4,0K Jul 19 2016 fsck -rwxrwxrwx 1 root root 31 Jul 19 2016 dmesg -rwxrwxrwx 1 root root 57K Jul 19 2016 bootstrap.log drwxrwxrwx 3 root root 4,0K Jul 19 2016 hp drwxr-xr-x 14 root root 4,0K Jul 19 2016 .. 
drwxrwxrwx 2 root root 4,0K Jun 14 13:17 apt drwxrwxrwx 2 root root 4,0K Jun 14 13:20 installer drwxrwxrwx 2 root root 4,0K Jun 19 09:50 unattended-upgrades -rwxrwxrwx 1 root root 3,8K Jun 19 09:55 fontconfig.log -rwxrwxrwx 1 root root 554 Jun 19 15:02 apport.log.1 drwxrwxrwx 2 root root 4,0K Jun 20 07:35 lightdm -rwxrwxrwx 1 root root 836K Jun 20 07:35 syslog.1 -rwxrwxrwx 1 root root 32K Jun 20 14:31 faillog -rw------- 1 root utmp 768 Jul 14 09:28 btmp drwxrwxrwx 2 root root 4,0K Jul 14 10:38 dist-upgrade -rwxrwxrwx 1 root root 43K Jul 14 10:45 alternatives.log -rwxrwxrwx 1 root root 286K Jul 14 11:04 lastlog drwxrwxrwx 2 root root 4,0K Jul 18 08:59 sysstat -rwxrwxrwx 1 root root 1,8M Jul 18 11:13 dpkg.log drwxrwxrwx 2 root root 4,0K Jul 18 12:19 cups -rw-r----- 1 root adm 14K Jul 18 12:20 apport.log -rw-r--r-- 1 root root 32K Jul 18 18:38 Xorg.0.log.old -rwxrwxrwx 1 root root 1,9K Jul 18 18:44 gpu-manager.log -rw-r--r-- 1 root root 1009 Jul 18 18:44 boot.log drwxrwxr-x 13 root syslog 4,0K Jul 18 18:44 . -rw-rw-r-- 1 root utmp 85K Jul 18 18:45 wtmp -rw-r--r-- 1 root root 29K Jul 18 21:17 Xorg.0.log -rwxrwxrwx 1 root root 5,7M Jul 18 21:18 kern.log -rwxrwxrwx 1 root root 580K Jul 18 21:50 auth.log -rwxrwxrwx 1 root root 24M Jul 18 21:54 syslog root@me-Latitude-E5550:/var/log# logrotate -f /etc/logroate.conf error: cannot stat /etc/logroate.conf: No such file or directory root@me-Latitude-E5550:/var/log# logrotate -f /etc/logrotate.conf root@me-Latitude-E5550:/var/log# ls -larth total 34M drwxrwxrwx 2 root root 4,0K Feb 18 2016 speech-dispatcher drwxrwxrwx 2 root root 4,0K Mai 19 2016 upstart drwxrwxrwx 2 root root 4,0K Jul 19 2016 fsck -rwxrwxrwx 1 root root 31 Jul 19 2016 dmesg -rwxrwxrwx 1 root root 57K Jul 19 2016 bootstrap.log drwxrwxrwx 3 root root 4,0K Jul 19 2016 hp drwxr-xr-x 14 root root 4,0K Jul 19 2016 .. 
drwxrwxrwx 2 root root 4,0K Jun 14 13:17 apt drwxrwxrwx 2 root root 4,0K Jun 14 13:20 installer drwxrwxrwx 2 root root 4,0K Jun 19 09:50 unattended-upgrades -rwxrwxrwx 1 root root 3,8K Jun 19 09:55 fontconfig.log -rwxrwxrwx 1 root root 554 Jun 19 15:02 apport.log.1 drwxrwxrwx 2 root root 4,0K Jun 20 07:35 lightdm -rwxrwxrwx 1 root root 836K Jun 20 07:35 syslog.1 -rwxrwxrwx 1 root root 32K Jun 20 14:31 faillog -rw------- 1 root utmp 768 Jul 14 09:28 btmp drwxrwxrwx 2 root root 4,0K Jul 14 10:38 dist-upgrade -rwxrwxrwx 1 root root 43K Jul 14 10:45 alternatives.log -rwxrwxrwx 1 root root 286K Jul 14 11:04 lastlog drwxrwxrwx 2 root root 4,0K Jul 18 08:59 sysstat -rwxrwxrwx 1 root root 1,8M Jul 18 11:13 dpkg.log drwxrwxrwx 2 root root 4,0K Jul 18 12:19 cups -rw-r----- 1 root adm 14K Jul 18 12:20 apport.log -rw-r--r-- 1 root root 32K Jul 18 18:38 Xorg.0.log.old -rwxrwxrwx 1 root root 1,9K Jul 18 18:44 gpu-manager.log -rw-r--r-- 1 root root 1009 Jul 18 18:44 boot.log drwxrwxr-x 13 root syslog 4,0K Jul 18 18:44 . -rw-rw-r-- 1 root utmp 85K Jul 18 18:45 wtmp -rw-r--r-- 1 root root 29K Jul 18 21:17 Xorg.0.log -rwxrwxrwx 1 root root 5,7M Jul 18 21:18 kern.log -rwxrwxrwx 1 root root 580K Jul 18 21:50 auth.log -rwxrwxrwx 1 root root 24M Jul 18 21:55 syslog root@me-Latitude-E5550:/var/log#
You are changing permissions on the files under /etc/logrotate.d, but the complaint is about /etc/logrotate.conf itself. You need to

chmod 644 /etc/logrotate.conf

and

chown root:root /etc/logrotate.conf

and then it should work.
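To confirm the fix took effect — newer logrotate refuses any config that is group- or world-writable or not owned by root — you can check the octal mode. The snippet uses a scratch file as a stand-in for /etc/logrotate.conf so it is safe to try anywhere:

```sh
conf=$(mktemp)               # stand-in for /etc/logrotate.conf
chmod 644 "$conf"
stat -c '%a' "$conf"         # prints 644; on the real file, '%U:%G' should also report root:root
```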
Ignoring /etc/logrotate.conf because of bad file mode
1,631,465,526,000
I'm still new to Linux and Unix-like systems and I've tried to search the internet about my issue, but unfortunately I haven't found a workable answer so far. My problem is that the console (tty) on my Debian Linux can't display any language other than English, which is inconvenient because I have some folders and files on my disks with names in Chinese. When I try to locate the files in a terminal window of the GNOME desktop, however, it displays the Chinese characters perfectly. How can I get the file names displayed correctly in the console (tty)? Thanks for your help.
Short answer: you can't.

Longer: the Linux console has only a limited ability to display Unicode, supporting just 512 glyphs (a minuscule slice of Chinese). The reason is that it stores the information in (kernel) memory. Furthermore, when doing this, it reduces the number of video attributes available (usually by eliminating "bold"). You can reportedly set up a framebuffer device, though few people discuss this in active use (it may not work well).

Further reading:

2. Display setup (The Unicode HOWTO): In April 2000, Edmund Thomas Grimley Evans implemented an UTF-8 console terminal emulator. It uses Unicode fonts and relies on the Linux frame buffer device.

7.6. Configuring the Linux Console (Linux From Scratch - Version 6.3) mentions the 512-character limit: Due to the use of a 512-glyph LatArCyrHeb-16 font in the previous example, bright colors are no longer available on the Linux console unless a framebuffer is used. If one wants to have bright colors without framebuffer and can live without characters not belonging to his language, it is still possible to use a language-specific 256-glyph font, as illustrated below.

How to display unicode in a Linux virtual terminal?

Linux vconsole with utf-8 character broke when autocomplete #2602

yaft (yet another framebuffer terminal)
Linux console can't display any language other than English while the terminal under Gnome can
1,631,465,526,000
I want to set up a directory where all new files and directories have a certain access mask and also the directories have the sticky bit set (the t one, which restricts deletion of files inside those directories). For the first part, my understanding is that I need to set the default ACL for the parent directory. However, new directories do not inherit the t bit from the parent. Hence, non-owners can delete files in the subdirectories. Can I fix that?
This is a configuration that allows members of a group, acltest, to create and modify group files while disallowing the deletion and renaming of files except by their owner; "others" get nothing. Using the username lev and assuming a umask of 022:

groupadd acltest
usermod -a -G acltest lev

Log out of the root account and the lev account. Log in again and become root or use sudo:

mkdir /tmp/acltest
chown root:acltest /tmp/acltest
chmod 0770 /tmp/acltest
chmod g+s /tmp/acltest
chmod +t /tmp/acltest
setfacl -d -m g:acltest:rwx /tmp/acltest
setfacl -m g:acltest:rwx /tmp/acltest

ACLs cannot set the sticky bit, and the sticky bit is not copied to subdirectories. But you can use inotify or similar software to detect changes in the file system, such as new directories, and then react accordingly. For example, in Debian:

apt-get install inotify-tools

Then make a script for inotify, like /usr/local/sbin/set_sticky.sh:

#!/usr/bin/env bash
inotifywait -m -r -e create /tmp/acltest | while read -r path event file; do
    case "$event" in
        *ISDIR*) chmod +t "$path$file" ;;
    esac
done

Give it execute permission for root: chmod 0700 /usr/local/sbin/set_sticky.sh. Then run it at boot time from, say, /etc/rc.local or whichever RC file is appropriate:

/usr/local/sbin/set_sticky.sh &

Of course, in this example, /tmp/acltest will disappear on reboot. Otherwise, this should work like a charm.
Set sticky bit by default for new directories via ACL?
1,631,465,526,000
If I create a tun interface with ip tuntap add mode tun command and force it administratively up with ip link set dev tun1 up command, then the interface itself is always "physically" down: root@A58:~# ip link show dev tun1 46: tun1: <NO-CARRIER,POINTOPOINT,MULTICAST,NOARP,UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT qlen 500 link/none root@A58:~# This makes sense as there are no applications connected to this interface. However, I also have tun0 in my system which is "physically" up: root@A58:~# ip link show dev tun0 45: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT qlen 100 link/none root@A58:~# Is there a way to find out which process is connected to this tun0 interface? I had no luck with ps -ef | grep tun0 or lsof | grep tun0.
The Linux kernel exposes this info now in /proc/$PID/fdinfo/$FD. For example: # grep ^iff: /proc/*/fdinfo/* /proc/31219/fdinfo/5:iff: tun0 /proc/31235/fdinfo/5:iff: tun1 /proc/31252/fdinfo/5:iff: tun2 /proc/31267/fdinfo/5:iff: tun3 Tested with Debian 5.8.10.
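A small wrapper around that grep (a sketch: it walks every readable fdinfo file, so run it as root to see other users' processes; PIDs that vanish mid-scan are skipped silently):

```sh
tun_owners() {
    for fd in /proc/[0-9]*/fdinfo/*; do
        iff=$(awk '/^iff:/ { print $2 }' "$fd" 2>/dev/null)
        if [ -n "$iff" ]; then
            pid=${fd#/proc/}; pid=${pid%%/*}
            printf '%s\t%s\t%s\n' "$pid" "$(cat "/proc/$pid/comm" 2>/dev/null)" "$iff"
        fi
    done
}
tun_owners
```

Each output line is the PID, the command name, and the tun/tap interface that process holds open.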
How to find out which process keeps tunnel interface(tun) up?
1,631,465,526,000
I have a setup with SFTP-only users:

Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no

I get the following message in my secure.log:

fatal: bad ownership or modes for chroot directory

The ChrootDirectory setting comes with some security requirements: the directory must be owned by root and must be chmod 755 (drwxr-xr-x). That makes it impossible for a user to have write permission to the directory itself, since ssh's security checks require it to be writable only by root and non-writable for groups. Does someone know a good workaround?
I have the same setup on our server, with the same sshd config. Users' home directories are owned by root, and within them there are folders documents and public_html owned by the respective users. Users log in using SFTP and write into those folders (not directly into their home). As shell access over SSH is not allowed for them, this works perfectly. You can adjust which directories are created for new users in /etc/skel/ (at least in openSUSE; I'm not so familiar with other distros). Another possibility would be ACLs (openSUSE documentation) - an ACL can add write permission on the home directory for the respective user.
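The ownership/mode layout can be sketched like this, using a scratch directory instead of a real /home and skipping the chown to root so it is safe to run as any user (on the real system the home would be root-owned and the subdirectories owned by the SFTP user):

```sh
home=$(mktemp -d)                 # stand-in for /home/<user>
mkdir "$home/documents" "$home/public_html"
chmod 755 "$home"                 # chroot target: not writable by the sftp user
chmod 775 "$home/documents" "$home/public_html"   # user-writable subdirectories
stat -c '%a %n' "$home" "$home/documents" "$home/public_html"
```

The key point is simply that the chroot target itself stays 755 and root-owned, while the user's write access lives one level down.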
Chrooted SFTP user write permissions
1,631,465,526,000
I'm trying to secure my authorized_keys file to prevent it from being modified. I run this:

[root@localhost]# chattr +i authorized_keys
chattr: Inappropriate ioctl for device while reading flags on authorized_keys

I think it may be due to the filesystem:

[root@localhost]# stat -f -c %T /home/user/
nfs

Is there a way to make chattr work here?
NFS doesn't have a concept of immutable files, which is why you get the error. I'd suggest that you just remove write access from everyone instead, which is probably close enough for your purposes. $ > foo $ chmod a-w foo $ echo bar > foo bash: foo: Permission denied The main differences between removing the write bit for all users instead of using the immutable attribute: The immutable attribute must be unset by root, whereas chmod can be changed by the user owning the file; The immutable attribute removes the ability to remove the file without removing the immutable attribute, which removing the write bit doesn't do (although you can change the directory permissions to disallow modification, if that is acceptable). If either of these things matter to you when dealing with authorized_keys, you probably have a more fundamental problem with your security model.
`chattr +i` error on NFS
1,631,465,526,000
My desktop has a nasty habit. When I have several high intensity applications running and my CPU is at maximum usage for a period of time, the core temperature rises and my computer auto-shuts off. Is there a way I can monitor (write a script) my CPU temperature in the background and have some sort of warning when it gets above a certain temperature? I'm running Opensuse with dwm as my window manager. I usually use sensors to see my CPU temperature.
You could write a script to display your temperature in dwm's status bar, for example:

temp() {
    awk '{print $4"°C"}' <(acpi -t)
}
xsetroot -name "$(temp)"

Your sensors output may be more complex, depending on your setup; this works on one of my machines:

awk '/temp1/ {print +$2"°C"}' <(sensors)

If you patch in statuscolours, you can additionally have the output change colour as the temperature hits higher values... The Arch Wiki has an introduction to setting up a basic statusbar script and the dwm site includes an .xinitrc example. You can see my dwm-status script for more details: http://beta.intuxication.org/jasonwryan/archer/file/tip/Scripts/dwm-status
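For the warning itself, the threshold check can be factored into a function so it can be exercised without sensor hardware. The 80°C limit, the temp1 label, and the use of plain echo instead of a desktop notifier are all assumptions to adapt:

```sh
warn_if_hot() {  # usage: warn_if_hot <temp-in-C> <limit-in-C>
    if [ "$1" -ge "$2" ]; then
        echo "WARNING: CPU at $1°C"
    fi
}
# Live use (bash, since <( ) is a bashism):
#   while sleep 10; do warn_if_hot "$(awk '/temp1/ {print +$2; exit}' <(sensors))" 80; done
warn_if_hot 85 80   # prints: WARNING: CPU at 85°C
```

Swapping echo for notify-send (or appending the warning to the xsetroot status text) would surface it in the dwm session.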
How to monitor my CPU temperature with a minimal script to show a warning?
1,631,465,526,000
Are there any wgetpaste alternatives? As a clarification... wgetpaste is an extremely simple command-line interface to various online pastebin services. The basic usage is to simply upload a local file to a pastebin-like online service for sharing.
I use an online service called sprunge.us. It lets you post pretty simply like this command | curl -F "sprunge=<-" http://sprunge.us I have curl -F "sprunge=<-" http://sprunge.us | xclip aliased to webshare on my system, so it becomes simply command | webshare. The added xclip at the end gets the url into the X clipboard; it's not on every system, and there are several other tools out there like it.
wgetpaste alternatives?
1,631,465,526,000
While reading through the kernel documentation on ramdisks in ramfs-rootfs-initramfs.txt, I wondered whether the ramdisk explained there is the same as the initrd feature described in the post the-difference-between-initrd-and-initramfs. Could someone clarify this? And if it is the same — I read that it has many disadvantages, yet on my Fedora PC I still see initrd-2.6.29.4-167.fc11.i686.PAE.img in my boot folder. Is it different from the initrd mentioned above?

UPDATE_EDIT: In one of the articles I even saw a command like

# update-initramfs -u all
update-initramfs: Generating /boot/initrd.img-2.6.18-5-amd64

So how is this initramfs linked to initrd.img?
A ramdisk is a set of blocks that gets copied to an allocated chunk of memory, then treated as a block device. A normal filesystem is created on the ramdisk. The initrd (initial ramdisk) is a ramdisk that is mounted during bootup. The initramfs is something different. It's a cpio archive of files that is loaded during bootup. The kernel loads the contents into a virtual filesystem it calls rootfs. Unlike a ramdisk, deleting files directly frees memory, and there's no extra filesystem and block layer involved. Both methods result in files being available to the kernel at boot time before any devices have been loaded, and so in practice you can achieve similar results with both. Older systems use initrd (it was created before initramfs) but modern systems should all be using initramfs. You may still see the word initrd in reference to something that is really an initramfs; it's just naming for compatibility's sake.
Is Ramdisk and initrd the same?
1,631,465,526,000
We've bought a commercial application that works only if its USB dongle is connected to the server. Sometimes, however, the application cannot recognize the dongle and stops working — but if someone physically ejects the dongle from the USB port and attaches it again, it is recognized and works fine. There are 43 modules loaded on the server, and attaching/ejecting the dongle does not change the number of modules. I also have usbmon0, usbmon1 and usbmon2 files in /dev, and the number of files in /dev does not change before/after ejecting/attaching the dongle.

journalctl -f after ejecting the dongle:

Jan 19 18:10:28 iwr kernel: usb 2-2.1: USB disconnect, device number 5

journalctl -f after attaching the dongle:

Jan 19 18:11:11 iwr kernel: usb 2-2.1: new full-speed USB device number 6 using uhci_hcd
Jan 19 18:11:11 iwr kernel: usb 2-2.1: New USB device found, idVendor=0403, idProduct=c580
Jan 19 18:11:11 iwr kernel: usb 2-2.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Jan 19 18:11:11 iwr kernel: usb 2-2.1: Product: HID UNIKEY
Jan 19 18:11:11 iwr kernel: usb 2-2.1: Manufacturer: OEM
Jan 19 18:11:11 iwr kernel: usbhid 2-2.1:1.0: couldn't find an input interrupt endpoint

Can I eject and then reattach it logically? (issue a command, remove a module, etc.)
Many answers found on the Internet (including those in TNW's comment) rely on /sys/bus/usb/devices/2-2/power/level or /sys/bus/usb/devices/2-2/power/control which are both deprecated since 2.6.something kernel. For newer kernels, the suggested procedure is to unbind and rebind its driver, which usually results in a power cycle: # Find out which driver to unbind tree /sys/bus/usb/devices/2-2.1 | grep driver |-- driver -> ../../../../../../bus/usb/drivers/whatever # Unbind the driver echo 2-2.1 > /sys/bus/usb/drivers/whatever/unbind # Rebind the driver echo 2-2.1 > /sys/bus/usb/drivers/whatever/bind
How to logically eject/disconnect & reattach a usb device (dongle)?
1,631,465,526,000
I am handed a path of a directory or a file. Which utility/shell script will reliably give me the UUID of the file system on which is this directory/file located? By UUID of file system I mean the UUID=... entry as shown by e.g. blkid I'm using Redhat Linux. (someone suggested that I should ask this here at unix.stackexchange.com, so I moved it from the original stackexchange.com)
One option is stat + findmnt combo: findmnt -n -o UUID $(stat -c '%m' "$path") Here -n disables header, and -o UUID prints only UUID value. Option -c '%m' of stat is present to output only mountpoint of given path.
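Broken into its two steps (path is a stand-in for whatever file or directory you were handed; note that some filesystems, e.g. tmpfs or overlayfs, have no UUID, in which case findmnt prints nothing):

```sh
path=/                            # substitute the path you were handed
mnt=$(stat -c '%m' "$path")       # mountpoint of the filesystem containing it
findmnt -n -o UUID "$mnt"         # UUID of the filesystem mounted there
```

Splitting it this way also lets you reuse the mountpoint for other findmnt columns, such as SOURCE or FSTYPE.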
How to get UUID of filesystem given a path?
1,631,465,526,000
I'm having some doubts about how to install and configure Linux to correctly read/write an NTFS-formatted hard drive used as a backup target for various machines (Windows included; that's why I need NTFS). I've read some pages, but I have the feeling I need guidance from someone who has already done this step by step, so as not to ruin things here. What I need is to be able to save a Linux file, with its chown and chmod settings, to an NTFS filesystem, and be able to retrieve this information back. What I have today is an NTFS volume that saves all files with the owner:group of whoever mounted the volume, and permissions rwxrwxrwx for all.

I read this article but it is too much information and I could not understand some things when trying to actually implement it:

Is it stable in the current version? Does Ubuntu 10.04 already have everything needed, or do I need to install anything?
What is the relation of POSIX ACLs to this? Do I need to install anything for that, or will ntfs-3g alone do? Where are the Ubuntu packages to install with apt-get?
If I map the users (with usermap) and bring the hard drive to another computer with different users, will I be able to read the files (under Linux/Windows)?

For one thing I noticed, usermap was not ready to use. So I downloaded and compiled (but did not install, because I was afraid of messing things up here) the latest version of ntfs-3g. In the README file it says:

TESTING WITHOUT INSTALLING

Newer versions of ntfs-3g can be tested without installing anything and without disturbing an existing installation. Just configure and make as shown previously. This will create the scripts ntfs-3g and lowntfs-3g in the src directory, which you may activate for testing:

./configure
make

then, as root:

src/ntfs-3g [-o mount-options] /dev/sda1 /mnt/windows

And, to end the test, unmount the usual way:

umount /dev/sda1

But it says nothing about the mount-options that I need to use to get full backups (full == backing up / restoring files, owners, groups and permissions). This faq says:

Why have chmod and chown no effect? By default files on NTFS are owned by root with full access to everyone. To get standard per-file protection you should mount with the "permissions" option. Moreover, if you want the permissions to be interoperable with a specific Windows configuration, you have to map the users.

Also, I used the ntfs-3g.usermap /dev/sdb2 tool to create the map file and got this result:

# Generated by usermap for Linux, v 1.1.4
:carl:S-1-5-21-889330461-3416208041-4118870141-511
:default:S-1-5-21-2592120051-4195220491-4132615201-511
carl:carl:S-1-5-21-889330462-3416208046-4118870148-1000

Now this default was mapped because I answered "default" for one file that was under the default user during the inquiry. I'm not sure if I did that right. I don't care for any users but carl (and root, for that matter), and for any groups but users. I saw the FAQ telling me to answer the group question with the username — shouldn't I give the group as "users" instead? And how can I check, booting Windows, whether this mapping is correct?

Summary: I need rsync to save Linux files and Windows files from various computers to an NTFS external USB HD, without losing file permissions. I don't know how to install and run the ntfs-3g driver to allow chown, chmod and anything else needed to make that possible. What options, and where? All computers have a carl username, but that doesn't guarantee that their SIDs, UIDs or GIDs are the same.
The environment is composed of 18 "documents" folders: 6 of them Linux, 6 of them Win7, 6 of them VirtualBox Win XP. All of them will go into a single "documents" folder on the NTFS external hard drive. Reference: I also read this forum, and maybe it is useful to someone trying to help me here. I also thought of these other three solutions, making the filesystem ext. But the external HD may be used on Windows boxes where I cannot install, or have no rights to install, drivers; so it needs to be easily readable by any Windows, and NTFS is the standard. Everything my Google searches turned up was too technical for me to follow.
You can use ntfs-3g, but make sure you place the mappings file in the right place. Once you do that you should see file ownerships in ../User/name match the unix user. However, if you just want to use it as backup you should probably just save a big tarball onto the ntfs location. If you also want random access you can place an ext2 image file and loop mount it. That will save you from a lot of these headaches.

OK, assuming you will mount NTFS under /ntfs, run:

    ntfs-3g.usermap /dev/sdb1    # or whatever your ntfs partition is

Answer the questions. Then:

    mkdir /ntfs/.NTFS-3G
    cp UserMapping /ntfs/.NTFS-3G/UserMapping

Now put an entry in /etc/fstab:

    /dev/sdb1 /ntfs ntfs-3g defaults 0 0

Then mount /ntfs. The command ls -l /ntfs/Users/Carl should show your Linux user as the owner of files there.
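If you also want the per-file protection that the question's FAQ quote mentions, the "permissions" mount option can go into the fstab options field. A sketch only: the device name and mount point are the ones assumed above, adjust to your system:

```
# /etc/fstab -- hypothetical device and mount point; the "permissions"
# option enables standard per-file ownership/mode checks on NTFS
/dev/sdb1  /ntfs  ntfs-3g  permissions  0  0
```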
Is NTFS under linux able to save a linux file, with its chown and chmod settings?
1,631,465,526,000
Whenever there is high disk I/O, the system tends to be much slower and less responsive than usual. What's the progress in the Linux kernel regarding this? Is this problem being actively worked on?
I think for the most part it has been solved. My performance under heavy IO has improved in 2.6.36 and I expect it to improve more in 2.6.37. See these Phoronix articles:

    Wu Fengguang and KOSAKI Motohiro have published patches this week that they believe will address some of these responsiveness issues, for which they call the "system goes unresponsive under memory pressure and lots of dirty / writeback pages" bug. Andreas Mohr, one of the users that has reported this problem to the LKML and tested the two patches that are applied against the kernel's vmscan reported success. Andreas' problem was the system becoming fully unresponsive (and switching to a VT took 20+ seconds) when making an EXT4 file-system when a solid-state drive was connected via USB 1.1. On his system when writing 300M from the /dev/zero file the problem was even worse.

Here's a direct link to the bug. Also from Phoronix:

    Fortunately, from our testing and the reports of other Linux users looking to see this problem corrected, the relatively small vmscan patches that were published do seem to better address the issue. The user-interface (GNOME in our case) still isn't 100% fluid if the system is sustaining an overwhelming amount of disk activity, but it's certainly much better than before and what's even found right now with the Linux 2.6.35 kernel.

There's also the Phoronix 2.6.36 release announcement. It seems block barriers are going away, and that should also help performance:

    In practice, barriers have an unpleasant reputation for killing block I/O performance, to the point that administrators are often tempted to turn them off and take their risks. While the tagged queue operations provided by contemporary hardware should implement barriers reasonably well, attempts to make use of those features have generally run into difficulties. So, in the real world, barriers are implemented by simply draining the I/O request queue prior to issuing the barrier operation, with some flush operations thrown in to get the hardware to actually commit the data to persistent media. Queue-drain operations will stall the device and kill the parallelism needed for full performance; it's not surprising that the use of barriers can be painful.

There's also this LWN article on fair I/O scheduling.

I would say IO reawakened as a big deal around the time of the release of ext4 in 2.6.28. The following links are to the Linux Kernel Newbies kernel release notes; you should review the Block and Filesystems sections. This may of course be unfair sentiment, or just the time I started watching FS development (I'm sure it's been improving all along), but I feel that some of the ext4 issues caused people to look hard at the IO stack; or it might be that they were expecting ext4 to resolve all the performance issues, and then when it didn't they realized they had to look elsewhere for the problems.

2.6.28, 2.6.29, 2.6.30, 2.6.31, 2.6.32, 2.6.33, 2.6.34, 2.6.35, 2.6.36, 2.6.37
What's the progress regarding improving system performance/responsiveness during high disk I/O?
1,631,465,526,000
Quoting from https://www.kernel.org/doc/Documentation/process/adding-syscalls.rst: At least on 64-bit x86, it will be a hard requirement from v4.17 onwards to not call system call functions in the kernel. It uses a different calling convention for system calls where struct pt_regs is decoded on-the-fly in a syscall wrapper which then hands processing over to the actual syscall function. This means that only those parameters which are actually needed for a specific syscall are passed on during syscall entry, instead of filling in six CPU registers with random user space content all the time (which may cause serious trouble down the call chain). What serious trouble down the call chain is the last parenthesized clause referring to? To me it seems stupid not to load the six registers in the generic leadup to the syscall. Forcing each syscall wrapper to do it makes them larger and the syscall funcs become a new special case, so I'm wondering what the "serious trouble" is with having unintentional user content in unused argument registers.
One of the concerns wasn’t so much with arbitrary register values, but that they get copied to the kernel stack. Unused registers can thus be used to write arbitrary caller-controlled values to the stack, with no checks. These values on the stack could potentially be used in a more complex attack. That’s why removing this possibility seemed like a good idea. Kees Cook’s 4.17 summary also mentions possible influence of these register values on speculative execution.
What is the rationale for the change of syscall calling convention in new Linuxes?
1,631,465,526,000
When I do ls /dev/tty*, I see the following output:

    /dev/tty    /dev/tty12  /dev/tty17  /dev/tty21  /dev/tty26  /dev/tty30  /dev/tty35  /dev/tty4   /dev/tty44  /dev/tty49  /dev/tty53  /dev/tty58  /dev/tty62  /dev/ttyS0
    /dev/tty0   /dev/tty13  /dev/tty18  /dev/tty22  /dev/tty27  /dev/tty31  /dev/tty36  /dev/tty40  /dev/tty45  /dev/tty5   /dev/tty54  /dev/tty59  /dev/tty63  /dev/ttyS1
    /dev/tty1   /dev/tty14  /dev/tty19  /dev/tty23  /dev/tty28  /dev/tty32  /dev/tty37  /dev/tty41  /dev/tty46  /dev/tty50  /dev/tty55  /dev/tty6   /dev/tty7   /dev/ttyS2
    /dev/tty10  /dev/tty15  /dev/tty2   /dev/tty24  /dev/tty29  /dev/tty33  /dev/tty38  /dev/tty42  /dev/tty47  /dev/tty51  /dev/tty56  /dev/tty60  /dev/tty8   /dev/ttyS3
    /dev/tty11  /dev/tty16  /dev/tty20  /dev/tty25  /dev/tty3   /dev/tty34  /dev/tty39  /dev/tty43  /dev/tty48  /dev/tty52  /dev/tty57  /dev/tty61  /dev/tty9

The formatting is fine and I can see all the files in a modest block inside the terminal. But when I run the command under watch, as watch -d -n1 'ls /dev/tty*', I see:

    Every 1.0s: ls /dev/tty*          debian: Wed Jun 30 21:08:06 2021

    /dev/tty
    /dev/tty0
    /dev/tty1
    /dev/tty10
    /dev/tty11
    /dev/tty12
    /dev/tty13
    /dev/tty14
    /dev/tty15
    /dev/tty16
    /dev/tty17
    /dev/tty18
    /dev/tty19
    /dev/tty2
    /dev/tty20
    /dev/tty21
    ...

So the output is listed vertically and doesn't fit my screen. What is the reason? How can I solve this?
What is the reason?

When watch executes commands they are not connected to the terminal. In other words, isatty(3) returns 0. You can use the following isatty.c to check whether a command is connected to the terminal when it is run:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        printf("%d\n", isatty(STDOUT_FILENO));
        return EXIT_SUCCESS;
    }

Compile:

    gcc isatty.c -o isatty

Run it in your terminal emulator:

    $ ./isatty
    1

Run it in watch:

    $ watch ./isatty

    Every 2.0s: ./isatty          darkstar: Wed Jun 30 20:42:51 2021

    0

How can I solve this?

Use the -C option with ls in watch:

    watch -d -n1 'ls -C /dev/tty*'
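The isatty() effect is easy to demonstrate with ls alone: when its stdout is a pipe rather than a terminal, ls falls back to one entry per line, and -C forces columnar output back on. A quick sketch (the temporary directory and file names are just for illustration):

```shell
# Create a directory with a few short file names
dir=$(mktemp -d)
touch "$dir"/a "$dir"/b "$dir"/c

# stdout is a pipe, so ls prints one entry per line: 3 lines
ls "$dir" | wc -l

# -C forces columns even into a pipe: the three names fit on 1 line
ls -C "$dir" | wc -l
```

This is exactly the situation inside watch, which is why adding -C restores the columnar layout.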
ls formatting inside watch command
1,631,465,526,000
I just got a Kensington Slimblade Trackball and I'm trying to configure it. I'm adapting my old Logitech Marble configuration. I want the configuration to be:

Left-Bottom: Left click
Left-Top: Backward
Right-Top: Right click and ball scroll lock
Right-Bottom: Middle click

The configuration I have managed to set so far is:

Left-Bottom: Left click
Left-Top: Middle click
Right-Top: Right click and ball scroll lock
Right-Bottom: Backward

This is my configuration script:

    xinput set-int-prop "Kensington Kensington Slimblade Trackball" "Evdev Middle Button Emulation" 8 1
    xinput set-button-map "Kensington Kensington Slimblade Trackball" 1 2 8 4 5 6 7
    xinput set-int-prop "Kensington Kensington Slimblade Trackball" "Evdev Wheel Emulation" 8 1
    xinput set-int-prop "Kensington Kensington Slimblade Trackball" "Evdev Wheel Emulation Button" 8 8
    xinput set-int-prop "Kensington Kensington Slimblade Trackball" "Evdev Wheel Emulation Axes" 8 6 7 4 5
    xinput set-int-prop "Kensington Kensington Slimblade Trackball" "Evdev Wheel Emulation Timeout" 16 300

Before running this script, xev reports the button numbers as: Left-Bottom: 1, Left-Top: 2, Right-Top: 8, Right-Bottom: 3. After running this script: Left-Bottom: 1, Left-Top: 2, Right-Top: 8, Right-Bottom: 8. So AFAIK, xinput set-button-map changes the button order. On this page, I learned that the 2nd value corresponds to the middle mouse button, and the 8th to Thumb1 (normally related to the backward function). So I thought I should just use number 3 as the 2nd element and 2 as the 8th element, like this:

    xinput set-button-map "Kensington Kensington Slimblade Trackball" 1 3 8 4 5 6 7 2

but now the top-left button has the right-click function and the bottom-left is disabled. xev now reports Left-Bottom: 1, Left-Top: 3, Right-Top: 2, Right-Bottom: 8. Does anyone know how to set the configuration as I intend? I'm using Ubuntu 16.04. Thanks.
A few minutes after I posted the question I found the answer. Here it is in case anyone needs it (configuration for Mint 18/Ubuntu 16.04):

    xinput set-int-prop "Kensington Kensington Slimblade Trackball" "Evdev Middle Button Emulation" 8 0 7 8 9
    xinput set-button-map "Kensington Kensington Slimblade Trackball" 1 8 2 4 5 6 7 3 2
    xinput set-int-prop "Kensington Kensington Slimblade Trackball" "Evdev Wheel Emulation" 8 1
    xinput set-int-prop "Kensington Kensington Slimblade Trackball" "Evdev Wheel Emulation Button" 8 8
    xinput set-int-prop "Kensington Kensington Slimblade Trackball" "Evdev Wheel Emulation Axes" 8 6 7 4 5
    xinput set-int-prop "Kensington Kensington Slimblade Trackball" "Evdev Wheel Emulation Timeout" 16 300

Edit

After upgrading to Mint 19 (at home) and Ubuntu 18.04 (at the office) I found the configuration above doesn't work. 18.04 uses a different library for these kinds of devices (libinput), and even after I reinstalled Evdev some options don't work. After a painful search I found the solution. Create a file with a .conf extension in the /usr/share/X11/xorg.conf.d/ folder. In my case I named it 10-slimblade.conf. Put this configuration inside the file:

    Section "InputClass"
        Identifier "Kensington Kensington Slimblade Trackball"
        MatchProduct "Kensington Kensington Slimblade Trackball"
        MatchIsPointer "on"
        MatchDevicePath "/dev/input/event*"
        Driver "libinput"
        Option "ButtonMapping" "1 8 2 4 5 6 7 3 2"
        Option "ScrollButton" "8"
        Option "ScrollMethod" "button"
        Option "MiddleEmulation" "on"
    EndSection

Restart the session, and that's it.
Configuring Kensington Slimblade in Linux
1,631,465,526,000
I recently installed Ubuntu 14.04 on my computer. To enable the internet connection I needed to change my IP and gateway address. I did the following as the root user:

    # ifconfig eth0 "my ip address here" netmask 255.255.255.0 up
    # route add default gw "gw address here"

It works fine for a couple of minutes but then goes back to the previous settings every time. So, how can I change the IP and the gw addresses permanently?
As stated by jpkotta, network-manager is likely the culprit. You can see its status by running ps -aux | grep network-manager | grep <username>. If you get a result, it is running; otherwise it isn't. It will keep overwriting any changes you make with ifconfig as long as it is running.

Kill network-manager by running sudo service network-manager stop. You can bring it back up any time with sudo service network-manager start. Once it is disabled, use ifconfig to set your static address, OR edit your /etc/network/interfaces file to include something like:

    auto eth0
    iface eth0 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        network 192.168.1.0
        broadcast 192.168.1.255
        gateway 192.168.1.1
        dns-nameservers 8.8.8.8

Finally, run ifup -a to bring up the interfaces you have in your /etc/network/interfaces file.

All of this can be avoided, though, if you'd rather not mess around with killing network manager: just click on its icon in the taskbar and click 'Edit connections'.
How can I change the IP and gateway addresses permanently?
1,631,465,526,000
I am trying to force the capslock led on. xset does not work for me, so I am trying to use setleds. In a graphical console, this command returns:

    > LANG=C setleds -L +caps
    KDGKBLED: Inappropriate ioctl for device
    Error reading current flags setting. Maybe you are not on the console?

In a virtual terminal, it works, however the effect is local to that virtual terminal. From what I understand, running

    > setleds -L +caps < /dev/tty1

from a virtual terminal (my X server is sitting on tty1) should work. However, this requires root access. Is there a way to send a command to the console underlying an X server, be it from the said X server or from another VT, without root?

Edit: From a suggestion from Mark Plotnik, and based on code found here, I wrote and compiled the following:

    #include <X11/Xlib.h>
    #include <X11/XKBlib.h>

    #define SCROLLLOCK 1
    #define CAPSLOCK 2
    #define NUMLOCK 16

    void setLeds(int leds)
    {
        Display *dpy = XOpenDisplay(0);
        XKeyboardControl values;
        values.led_mode = leds & SCROLLLOCK ? LedModeOn : LedModeOff;
        values.led = 3;
        XChangeKeyboardControl(dpy, KBLedMode, &values);
        XkbLockModifiers(dpy, XkbUseCoreKbd, CAPSLOCK | NUMLOCK, leds & (CAPSLOCK | NUMLOCK));
        XFlush(dpy);
        XCloseDisplay(dpy);
    }

    int main()
    {
        setLeds(CAPSLOCK);
        return 0;
    }

From what Gilles wrote about xset, I did not expect it to work, but it does... in some sense: it sets the led, but it also sets the capslock status. I do not fully understand all the code above, so I may have made a silly mistake. Apparently, the line XChangeKeyboardControl... does not change the behavior of the program, and XkbLockModifiers is what sets the led and the capslock status.
In principle, you should be able to do it with the venerable xset command:

    xset led named 'Caps Lock'

or

    xset led 4

to set LED number 4, if your system doesn't recognize the LEDs by name. However, this doesn't seem to work reliably. On my machine, I can only set Scroll Lock this way, and I'm not the only one. This seems to be a matter of XKB configuration. The following user-level work-around should work (for the most part):

Extract your current xkb configuration:

    xkbcomp $DISPLAY myconf.xkb

Edit the file myconf.xkb, replacing !allowExplicit with allowExplicit in the relevant blocks:

    indicator "Caps Lock" {
        allowExplicit;
        whichModState= locked;
        modifiers= Lock;
    };
    indicator "Num Lock" {
        allowExplicit;
        whichModState= locked;
        modifiers= NumLock;
    };

Load the new file:

    xkbcomp myconf.xkb $DISPLAY

Now setting the leds on and off with xset should work. According to the bug report, you will not be able to switch the leds off when they are supposed to be on (for example if CapsLock is enabled).
Change the status of the keyboard leds, from within an X session, without root access
1,631,465,526,000
When I try to find a file using find -name "filename" I get an error that says:

    ./var/named/chroot/var/named' is part of the same file system loop as `./var/named'

I ran the ls -ldi /var/named/chroot/var/named/ /var/named command and the inode numbers are the same. Research indicates the fix is to delete the hard link /var/named/chroot/var/named/ using rm -f and recreate it as a directory, but when I do this I am advised that it can't be deleted because it is already a directory. How do I fix this? I'm running CentOS 6 with Plesk 11. The mount command gives this:

    /dev/vzfs on / type reiserfs (rw,usrquota,grpquota)
    proc on /proc type proc (rw,relatime)
    sysfs on /sys type sysfs (rw,relatime)
    none on /dev type tmpfs (rw,relatime)
    none on /dev/pts type devpts (rw,relatime)
    none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
    /etc/named on /var/named/chroot/etc/named type none (rw,bind)
    /var/named on /var/named/chroot/var/named type none (rw,bind)
    /etc/named.rfc1912.zones on /var/named/chroot/etc/named.rfc1912.zones type none (rw,bind)
    /etc/rndc.key on /var/named/chroot/etc/rndc.key type none (rw,bind)
    /usr/lib64/bind on /var/named/chroot/usr/lib64/bind type none (rw,bind)
    /etc/named.iscdlv.key on /var/named/chroot/etc/named.iscdlv.key type none (rw,bind)
    /etc/named.root.key on /var/named/chroot/etc/named.root.key type none (rw,bind)
named, that is the DNS server, runs in a chroot. To access the configuration files, the startup script uses mount --bind to make the configuration directories visible inside the chroot. This means that /var/named/ is the same as /var/named/chroot/var/named, and /var/named/chroot/var/named/chroot/var/named, and so on. This is a recursive directory structure, so if find tried to traverse it all it would never be able to terminate its execution; instead it realizes that the two directories are actually the same, and prints you that message to warn you. The message means that find won't search inside /var/named/chroot/var/named because it realized it is the same as some other directory already seen before. It is a totally harmless message; you can safely ignore it: after skipping /var/named/chroot/var/named the find operation continues normally.
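The same detection can be reproduced without root or bind mounts by pointing GNU find at a symlink cycle with -L (follow symlinks); the directory names below are arbitrary:

```shell
# Build a directory containing a symlink back to its own parent
dir=$(mktemp -d)
mkdir "$dir/sub"
ln -s .. "$dir/sub/up"

# Following symlinks (-L) makes find hit the cycle: it warns on stderr,
# skips the looping path, and still lists everything else.
# find exits nonzero after reporting a loop, hence the || true.
find -L "$dir" 2>&1 || true
```

As in the bind-mount case, the warning is informational: find keeps going after skipping the repeated subtree.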
find: File system loop detected
1,631,465,526,000
In the following video: Linux HOWTO: Secure Your Data with PGP, Part 2, you are shown how to create a key pair with gpg. At about 1:50, the instructor says the following: While the key is being generated, it is a good idea to move your mouse around a little bit to give it a bit more random number entropy for the creation of the keypair. This seems to me like a myth, especially since command-line tools shouldn't usually be affected by the cursor. On the other hand, I have no clue how Linux's random number generator works, whether it is shared by the GUI or independent from it. Is there any stock in what he claims, or this an example of cargo cult programming?
There is a grain of truth to this, in fact more truth than myth, but nonetheless the statement reflects a fundamental misunderstanding of what's going on. Yes, moving the mouse while generating a key with GPG can be a good idea. Yes, moving the mouse contributes some entropy that makes random numbers random. No, moving the mouse does not make the key more secure.

All good random generators suitable for cryptography, and Linux's is in that category, have two components:

An entropy source, which is non-deterministic. The purpose of the entropy is to bootstrap the random number generator with unpredictable data. The entropy source must be non-deterministic: otherwise, an adversary could reproduce the same computation.

A pseudorandom number generator, which produces unpredictable random numbers in a deterministic fashion from a changing internal state.

Entropy has to come from a source that is external to the computer. The user is one source of entropy. What the user does is mostly not random, but the fine timing of keystrokes and mouse movements is so unpredictable as to be slightly random — not very random, but little by little, it accumulates. Other potential sources of entropy include the timing of network packets and camera or microphone white noise. Different kernel versions and configurations may use a different set of sources. Some computers have dedicated hardware RNG circuits based on radioactive decay or, less impressively, unstable electronic circuits. These dedicated sources are especially useful in embedded devices and servers, which can have pretty predictable behavior on their first boot, without a user to do weird things.

Linux provides random numbers to programs via two devices: /dev/random and /dev/urandom. Reading from either device returns cryptographic-quality random numbers. Both devices use the same internal RNG state and the same algorithm to transform the state and produce random bytes.

They have peculiar limitations which make neither of them quite the right thing:

/dev/urandom can return predictable data if the system has not yet accumulated sufficient entropy.

/dev/random calculates the amount of available entropy and blocks if there isn't enough. This sounds good, except that the calculation is based on theoretical considerations that make the amount of available entropy decrease linearly with each output bit. Thus /dev/random tends to block very quickly.

Linux systems save the internal RNG state to disk and restore it at boot time. Therefore entropy carries over from one boot to the next. The only time when a Linux system may lack entropy is when it's freshly installed. Once there is sufficient entropy in the system, entropy does not decrease; only Linux's flawed calculation decreases. For more explanations of this consideration, read /dev/urandom is suitable to generate a cryptographic key, by a professional cryptographer. See also Can you explain the entropy estimate used in random.c.

Moving the mouse adds more entropy to the system. But gpg can only read from /dev/random, not /dev/urandom (a way to solve this problem is to make /dev/random the same 1:9 device as /dev/urandom), so it is never at risk of receiving not-random-enough random numbers. If you don't move the mouse, the key is as random as can be; but what can happen is that gpg may get blocked in a read from /dev/random, waiting for the kernel's entropy counter to rise.
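Both interfaces are easy to poke at from a shell. A small sketch (Linux-specific paths; note that since kernel 5.6 the entropy accounting has been reworked, so on modern systems the counter mostly sits at its maximum and /dev/random rarely blocks anymore):

```shell
# The kernel's entropy estimate, which /dev/random's blocking was based on
cat /proc/sys/kernel/random/entropy_avail

# Reading /dev/urandom never blocks: grab 16 bytes and print them as hex
head -c 16 /dev/urandom | od -An -tx1
```

Replacing /dev/urandom with /dev/random in the second command is exactly where an older kernel with a drained entropy estimate would stall, which is the situation the mouse-wiggling advice is meant to avoid.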
Adding "random number entropy" for GPG keys?
1,631,465,526,000
I would like to send an email when a file reaches a certain size limit. The only way I thought of doing this is a cron job which checks the file size and sends the email if the file is bigger than the desired size. However, adding a cron job that checks the size of a file every 15-30 minutes seems like a bad solution to me. I was wondering if there is a better way of doing this: automatically detecting when text is appended to the file (an event?) so I could then check the size and do the desired processing.
I can conceive of two approaches to do this. You can either use a while loop which runs a "stat"-style size check at some set frequency, testing whether the file's size has exceeded your desired size; if it has, send an email. This method is OK but can be a bit inefficient, since it's going to run the check at the set time frequency regardless of whether there was an event on the file or not. The other method involves using filesystem events that you can subscribe watchers to, using the command inotifywait.

Method #1 - Every X seconds example

If you put the following into a script, say notify.bash:

    #!/bin/bash

    file="afile"
    maxsize=100    # 100 kilobytes

    while true; do
        actualsize=$(du -k "$file" | cut -f1)
        if [ $actualsize -ge $maxsize ]; then
            echo size is over $maxsize kilobytes
            .... send email ....
            exit
        else
            echo size is under $maxsize kilobytes
        fi
        sleep 1800    # in seconds = 30 minutes
    done

Then run it: it will check the file at the set frequency, and if the file's size exceeds your limit, it will trigger an email to be sent and exit. Otherwise, it will report the current size and continue watching the file.

Method #2 - Only check on accesses example

The more efficient method is to only check the file when there are actual accesses. The types of accesses can vary; for this example I'm illustrating how to watch for just file accesses, but you could watch for other events instead, such as the file being closed. Again we'll name this file notify.bash:

    #!/bin/bash

    file=afile
    maxsize=100    # 100 kilobytes

    while inotifywait -e access "$file"; do
        actualsize=$(du -k "$file" | cut -f1)
        if [ $actualsize -ge $maxsize ]; then
            echo size is over $maxsize kilobytes
            .... send email ....
            exit
        else
            echo size is under $maxsize kilobytes
        fi
    done

Running this script results in the following output:

    $ ./notify.bash
    Setting up watches.
    Watches established.
Generating some activity on the file, the file now reports its size as follows:

    $ seq 100000 > afile
    $ du -k afile
    576     afile

The output of our notification script:

    afile ACCESS
    size is over 100 kilobytes

At which point it exits.

Sending email

To perform this activity you can simply do something like this within the script:

    subject="size exceeded on file $file"
    emailAddr="[email protected]"
    ( echo ""; echo "DATE: $(date)"; ) | mail -s "$subject" "$emailAddr"

Considerations

The second method as it is will work in most situations. One case where it will not is if the file already exceeds $maxsize when the script is invoked and there are no further access events on the file. This can be remedied either with an additional check performed by the script when it's invoked, or by expanding the events that inotifywait acts on.

References

How to execute a command whenever a file changes? How to check size of a file? inotify-tools
Automatically detect when a file has reached a size limit
1,631,465,526,000
I am using sed. I was using a regex that was correct as far as I could see, but sed did not do anything. Turns out that I was using \s+ which sed can not understand, and when I switched to [ ]+ it worked. So to sum up, I made a regex which for it to work I had to escape almost everything and remove the \s for whitespace. Seems that there is a mode to avoid all these which is -r So I wanted to ask: Why isn't -r the default mode for sed? Why did I have to go to so much trouble to escape everything? man says that there is the option --posix and that "POSIX.2 BREs should be supported" What does this refer to? NFA/DFA mode?
Re 1) The answer is the same as for any other tool that was improved over decades. :) You don't want to break existing scripts by changing default behaviour. Re 2) That has nothing to do with the matching engine; it's just a question of which set of regular expressions is supported. POSIX BRE means "basic regular expression".
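To make the BRE/ERE difference concrete: in BRE mode, +, ?, () and {} are ordinary characters unless backslash-escaped, and \s is a GNU extension rather than POSIX; the portable way to match whitespace is the [[:space:]] class. A quick sketch:

```shell
# BRE (default): + is a literal; the POSIX spelling of "one or more spaces"
echo 'a   b' | sed 's/[[:space:]][[:space:]]*/ /g'     # -> a b

# GNU sed's BRE also accepts an escaped \+ as an extension
echo 'a   b' | sed 's/[[:space:]]\+/ /g'               # -> a b

# ERE (-r, or the more widely portable -E): + works unescaped
echo 'a   b' | sed -E 's/[[:space:]]+/ /g'             # -> a b
```

All three commands produce the same result; only the amount of escaping differs, which is exactly what -r/-E buys you.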
Why isn't sed using the extended regex mode by default?
1,631,465,526,000
I'm having problems getting sFTP working, while there are no problems with ssh. I'm basically building zlib, openssl, and openssh for an ARM processor using an existing embedded Linux filesystem. After searching for ideas, this seemed like a common problem, but I haven't made any progress. I only have one user defined, which is root with an empty password. I'm using openssh version 4.7p1, and I modified sshd_config with the following settings:

    PermitRootLogin yes
    PermitEmptyPasswords yes
    UseDNS yes
    UsePrivilegeSeparation no
    SyslogFacility AUTH
    LogLevel DEBUG3
    Subsystem sftp /usr/local/libexec/sftp-server -f AUTH -l DEBUG3

The sftp-server is located in /usr/local/libexec and has the following permissions:

    root@arm:/usr/local/libexec# ls -l
    -rwxr-xr-x 1 root root  65533 Oct  3 22:12 sftp-server
    -rwx--x--x 1 root root 233539 Oct  3 22:12 ssh-keysign

I know sftp-server is being found (the path is set in sshd_config) because if I rename the sftp-server executable, I get the following error:

    auth.err sshd[1698]: error: subsystem: cannot stat /usr/local/libexec/sftp-server: No such file or directory
    auth.info sshd[1698]: subsystem request for sftp failed, subsystem not found

Also, the target's login init-scripts are very simple, consisting of a single file (etc/profile.d/local.sh) which only contains definitions for LD_LIBRARY_PATH, PATH and PYTHONPATH, as shown below:

    #!/bin/sh
    export LD_LIBRARY_PATH="/usr/local/lib"
    export PATH="/usr/local/bin:/usr/local/libexec:${PATH}"
    export PYTHONPATH="/home/root/python"

As you can see, .bashrc, .profile, etc. do not exist in root's home directory:

    root@arm:~# ls -la
    drwxr-xr-x 2 root root 4096 Oct  4 14:57 .
    drwxr-xr-x 3 root root 4096 Oct  4 01:11 ..
    -rw------- 1 root root  120 Oct  4 01:21 .bash_history

Here is the system log output when using FileZilla to connect to the sftp server on the target. From the log it seems that the sftp-server executable is found, but the child process exits immediately.
I am using debug arguments when calling sftp-server in sshd_config (Subsystem sftp /usr/local/libexec/sftp-server -f AUTH -l DEBUG3), but no logs were captured. Oct 4 14:29:45 arm auth.info sshd[2070]: Connection from 192.168.1.12 port 45888 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug1: Client protocol version 2.0; client software version PuTTY_Local:_Mar_28_2012_12:33:05 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug1: no match: PuTTY_Local:_Mar_28_2012_12:33:05 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug1: Enabling compatibility mode for protocol 2.0 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug1: Local version string SSH-2.0-OpenSSH_4.7 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: fd 3 setting O_NONBLOCK Oct 4 14:29:45 arm auth.debug sshd[2070]: debug1: list_hostkey_types: ssh-rsa,ssh-dss Oct 4 14:29:45 arm auth.debug sshd[2070]: debug1: SSH2_MSG_KEXINIT sent Oct 4 14:29:45 arm auth.debug sshd[2070]: debug1: SSH2_MSG_KEXINIT received Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellma1 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: ssh-rsa,ssh-dss Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysr Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysr Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: 
kex_parse_kexinit: none,[email protected] Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: none,[email protected] Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: first_kex_follows 0 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: reserved 0 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellma1 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: ssh-rsa,ssh-dss Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: aes256-ctr,aes256-cbc,[email protected],aes192-ctr,aes192-cbc,aes128-ctr,aes128-cbc,blowfish-ctr,blowfi8 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: aes256-ctr,aes256-cbc,[email protected],aes192-ctr,aes192-cbc,aes128-ctr,aes128-cbc,blowfish-ctr,blowfi8 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: hmac-sha1,hmac-sha1-96,hmac-md5 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: hmac-sha1,hmac-sha1-96,hmac-md5 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: none,zlib Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: none,zlib Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: first_kex_follows 0 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_parse_kexinit: reserved 0 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: mac_setup: found hmac-sha1 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug1: kex: client->server aes256-ctr hmac-sha1 none Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: mac_setup: found hmac-sha1 Oct 4 14:29:45 arm 
auth.debug sshd[2070]: debug1: kex: server->client aes256-ctr hmac-sha1 none Oct 4 14:29:45 arm auth.debug sshd[2070]: debug1: SSH2_MSG_KEX_DH_GEX_REQUEST_OLD received Oct 4 14:29:45 arm auth.debug sshd[2070]: debug1: SSH2_MSG_KEX_DH_GEX_GROUP sent Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: dh_gen_key: priv key bits set: 277/512 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: bits set: 2052/4096 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug1: expecting SSH2_MSG_KEX_DH_GEX_INIT Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: bits set: 2036/4096 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug1: SSH2_MSG_KEX_DH_GEX_REPLY sent Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: kex_derive_keys Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: set_newkeys: mode 1 Oct 4 14:29:45 arm auth.debug sshd[2070]: debug2: cipher_init: set keylen (16 -> 32) Oct 4 14:29:45 arm auth.debug sshd[2070]: debug1: SSH2_MSG_NEWKEYS sent Oct 4 14:29:45 arm auth.debug sshd[2070]: debug1: expecting SSH2_MSG_NEWKEYS Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: set_newkeys: mode 0 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: cipher_init: set keylen (16 -> 32) Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: SSH2_MSG_NEWKEYS received Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: KEX done Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: userauth-request for user root service ssh-connection method none Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: attempt 0 failures 0 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug3: Trying to reverse map address 192.168.1.12. 
Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: parse_server_config: config reprocess config len 302 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: input_userauth_request: setting up authctxt for root Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: input_userauth_request: try method none Oct 4 14:29:46 arm auth.info sshd[2070]: Accepted none for root from 192.168.1.12 port 45888 ssh2 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: Entering interactive session for SSH2. Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: fd 4 setting O_NONBLOCK Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: fd 5 setting O_NONBLOCK Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: server_init_dispatch_20 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: server_input_channel_open: ctype session rchan 256 win 2147483647 max 16384 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: input_session_request Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: channel 0: new [server-session] Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: session_new: init Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: session_new: session 0 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: session_open: channel 0 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: session_open: session 0: link with channel 0 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: server_input_channel_open: confirm session Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: server_input_channel_req: channel 0 request [email protected] reply 0 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: session_by_channel: session 0 channel 0 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: session_input_channel_req: session 0 req [email protected] Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: server_input_channel_req: channel 0 request subsystem reply 1 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: session_by_channel: session 0 channel 0 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: session_input_channel_req: session 0 req 
subsystem Oct 4 14:29:46 arm auth.info sshd[2070]: subsystem request for sftp Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: subsystem: exec() /usr/local/libexec/sftp-server -f AUTH -l DEBUG3 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: fd 3 setting TCP_NODELAY Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: fd 7 setting O_NONBLOCK Oct 4 14:29:46 arm auth.debug sshd[2070]: debug3: fd 7 is O_NONBLOCK Oct 4 14:29:46 arm auth.debug sshd[2073]: debug1: permanently_set_uid: 0/0 Oct 4 14:29:46 arm auth.debug sshd[2073]: debug3: channel 0: close_fds r -1 w -1 e -1 c -1 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: read<=0 rfd 7 len -1 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: read failed Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: close_read Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: input open -> drain Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: ibuf empty Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: send eof Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: input drain -> closed Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: notify_done: reading Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: Received SIGCHLD. 
Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: session_by_pid: pid 2073 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: session_exit_message: session 0 channel 0 pid 2073 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: request exit-status confirm 0 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: session_exit_message: release channel 0 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: write failed Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: close_write Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: output open -> closed Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: send close Oct 4 14:29:46 arm auth.debug sshd[2070]: debug3: channel 0: will not send data after close Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: rcvd close Oct 4 14:29:46 arm auth.debug sshd[2070]: debug3: channel 0: will not send data after close Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: is dead Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: gc: notify user Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: session_by_channel: session 0 channel 0 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: session_close_by_channel: channel 0 child 0 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: session_close: session 0 pid 0 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: gc: user detached Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: is dead Oct 4 14:29:46 arm auth.debug sshd[2070]: debug2: channel 0: garbage collecting Oct 4 14:29:46 arm auth.debug sshd[2070]: debug1: channel 0: free: server-session, nchannels 1 Oct 4 14:29:46 arm auth.debug sshd[2070]: debug3: channel 0: status: The following connections are open:\r\n #0 server-session (t4 r256 i3/0 o3/0 fd 7/7 cfd -1)\r\n Oct 4 14:29:46 arm auth.debug sshd[2070]: debug3: channel 0: close_fds r 7 w 7 e -1 c -1 Oct 4 14:29:46 arm auth.info sshd[2070]: Connection closed by 192.168.1.12 Oct 4 14:29:46 arm 
auth.debug sshd[2070]: debug1: do_cleanup Oct 4 14:29:46 arm auth.info sshd[2070]: Closing connection to 192.168.1.12
While this is more of an alternate solution than a direct answer to your issue, I would try using the internal sftp server instead of an external one. Since this is an embedded system, this probably makes more sense to do anyway. In your sshd_config, just add:

Subsystem sftp internal-sftp

That way you can leave out the sftp binary and save some space.
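If you still want sftp-level logging with the in-process server, internal-sftp accepts the same -f/-l flags as the standalone sftp-server; a possible sshd_config fragment (the flag values here are just an example, not a recommendation):

```
# sshd_config sketch: in-process SFTP with its own logging facility/level.
# Because internal-sftp runs inside sshd, its log lines reach syslog even
# in chrooted setups where the external binary cannot open /dev/log.
Subsystem sftp internal-sftp -f AUTH -l DEBUG3
```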
sFTP server fails to start
1,631,465,526,000
kernel: EDAC MC0: UE page 0x0, offset 0x0, grain 0, row 7, labels ":": i3200 UE All of a sudden today, our CentOS release 6.4 (Final) system started throwing EDAC errors. I rebooted, and the errors stopped. I have been searching for answers, but they fall into two camps, memory or a chipset. I would like some advice on where to search further to narrow this down to chipset or memory.
What you're experiencing is an Error Detection and Correction (EDAC) event. Given the error includes this bit: MC0, you're experiencing a memory error. This part is telling you where specifically the error occurred: MC0 means memory controller #0, i.e. the RAM behind the first memory controller. The rest of that message (page, offset, grain, row) pinpoints the location within that DIMM where the error occurred. Given you're getting just one, I would continue to monitor it but do nothing for the time being. If it continues then you most likely are experiencing a failing memory module. You could also try to test it more thoroughly using memtest86+. This previous question titled: How to blacklist a correct bad RAM sector according to MemTest86+ error indication? will show you how to blacklist the memory if you're interested in that as well.
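If you'd like to keep an eye on the counters rather than grep the logs, the kernel exposes per-controller error totals in sysfs. A minimal sketch, assuming the standard EDAC sysfs layout (the helper name is ours):

```shell
# Print corrected/uncorrected error counts per memory controller.
# EDAC_ROOT is the standard sysfs location; pass a path to override it.
EDAC_ROOT=${EDAC_ROOT:-/sys/devices/system/edac/mc}

edac_counts() {
    root=${1:-$EDAC_ROOT}
    [ -d "$root" ] || return 0          # no EDAC support / module not loaded
    for mc in "$root"/mc*; do
        [ -f "$mc/ce_count" ] || continue
        printf '%s: %s corrected, %s uncorrected\n' \
            "${mc##*/}" "$(cat "$mc/ce_count")" "$(cat "$mc/ue_count")"
    done
}

edac_counts
```

Corrected (CE) counts that creep up point the same way as repeated log lines: a DIMM worth replacing.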
Does kernel: EDAC MC0: UE page 0x0 point to bad memory, a driver, or something else?
1,631,465,526,000
Is there a default program where I can check if my audio devices are in silent? Edit: By silence, I mean that if there is something playing on that (not just activated or opened) Something like this: if [[ device0 is silent ]] ; then radio $RANDOM fi Edit 2: What I'm trying to achieve is a script that plays radio and can keep playing when the player fails, e.g. if the internet connection goes down and the player didn't recovery, I will kill the player and start over again
If you're using PulseAudio (GNOME-based Linux distributions tend to use PulseAudio; you can check if one is running with ps -C pulseaudio) and you want to know whether some applications are sending any data to any "sink", you could do:

pacmd list-sink-inputs | grep -c 'state: RUNNING'

Still with PulseAudio, if you want to check whether your sound output is muted, there might be simpler ways, but you can get the "mute" status of the default "sink" using:

pacmd dump | awk '
  $1 == "set-sink-mute" {m[$2] = $3}
  $1 == "set-default-sink" {s = $2}
  END {print m[s]}'
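Putting that together with the goal in the question (restart the player when it goes silent), here is a sketch. It assumes PulseAudio's pacmd as above; `radio` stands in for whatever player command you actually use:

```shell
# Sketch: restart the player whenever nothing is sending audio.

playing_count() {
    # number of streams currently in state RUNNING (0 means silence)
    pacmd list-sink-inputs 2>/dev/null | grep -c 'state: RUNNING'
}

is_silent() {
    # kept separate from pacmd so the logic is testable: silent iff count is 0
    [ "$1" -eq 0 ]
}

# Main loop (commented out so the sketch is safe to source; "radio" is
# the asker's hypothetical player command):
# while :; do
#     if is_silent "$(playing_count)"; then
#         pkill -f radio 2>/dev/null
#         radio "$RANDOM" &
#     fi
#     sleep 10
# done
```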
Testing if audio devices / sound cards are currently playing?
1,631,465,526,000
On a Linux machine that runs systemd, is there any way to see what or who issued a shutdown or reboot?
Examine the system logs of the previous boot with sudo journalctl -b -1 -e. Examine /var/log/auth.log. Are you sure it's not one of "power interruption/spike", "CPU overheat", ....

On MY system (Ubuntu 16.04.6):

sudo journalctl | grep shutdown
Jan 29 12:58:07 bat sudo[14365]: walt : TTY=pts/0 ; PWD=/home/walt ; USER=root ; COMMAND=/sbin/shutdown now
Feb 12 11:23:59 bat systemd[1]: Stopped Ubuntu core (all-snaps) system shutdown helper setup service.
Feb 19 09:35:18 bat ureadahead[437]: ureadahead:lxqt-session_system-shutdown.png: Ignored relative path
Feb 19 09:35:18 bat ureadahead[437]: ureadahead:gshutdown_gshutdown.png: Ignored relative path
Feb 19 09:35:18 bat ureadahead[437]: ureadahead:mate-gnome-main-menu-applet_system-shutdown.png: Ignored relative path
Feb 27 16:45:40 bat systemd-shutdown[1]: Sending SIGTERM to remaining processes...
Mar 05 17:53:27 bat systemd-shutdown[1]: Sending SIGTERM to remaining processes...
Mar 15 09:57:45 bat systemd[1]: Stopped Ubuntu core (all-snaps) system shutdown helper setup service.
Mar 21 17:40:30 bat systemd[1]: Stopped Ubuntu core (all-snaps) system shutdown helper setup service.
Apr 15 18:16:37 bat systemd[1]: Stopped Ubuntu core (all-snaps) system shutdown helper setup service.
...

The first line shows when user walt did a sudo shutdown now.
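If you'd rather not eyeball the whole journal, a trivial filter helps; `last -x` is another source worth piping through it, since wtmp also records shutdowns and runlevel changes:

```shell
# Sketch: pull shutdown/reboot-related lines out of any log stream.

shutdown_events() {
    grep -Ei 'shutdown|reboot|poweroff'
}

# Typical uses on a real system:
#   journalctl | shutdown_events
#   last -x | shutdown_events
```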
How to find out who/what caused a reboot/shutdown?
1,631,465,526,000
I have a long-running process that is hitting a resource limit, such as the maximum number of open files. I don't want to kill it. Usually, you'd do: (stop service) ulimit -n <new limit> (start service) Is there a way to avoid having to stop and start the service and increase the limits?
I've figured it out. On some kernels (e.g. 2.6.32+), at least on CentOS/RHEL, you can change the resource limits of a running process using /proc/<pid>/limits, e.g.:

$ grep "open files" /proc/23052/limits
Limit                     Soft Limit           Hard Limit           Units
Max open files            1024                 4096                 files

To change the maximum open files to a soft limit of 4096, hard limit of 8192:

echo -n "Max open files=4096:8192" > /proc/23052/limits

This gives:

$ grep "open files" /proc/23052/limits
Limit                     Soft Limit           Hard Limit           Units
Max open files            4096                 8192                 files

Note the -n in echo -n; without that, you'll get an "invalid argument" error.

The above doesn't always work, so another option is the prlimit command, introduced with util-linux 2.21, which allows you to read and change the limits of running processes. This is a followup to the writable /proc/<pid>/limits, which was not integrated in the mainline kernel. This solution should work.

$ prlimit --nofile --output RESOURCE,SOFT,HARD --pid 23052
RESOURCE SOFT HARD
NOFILE   1024 4096

Set the limits:

$ prlimit --nofile=4096:8192 --pid 23052

Confirm:

$ prlimit --nofile --output RESOURCE,SOFT,HARD --pid 23052
RESOURCE SOFT HARD
NOFILE   4096 8192

$ grep "open files" /proc/23052/limits
Limit                     Soft Limit           Hard Limit           Units
Max open files            4096                 8192                 files
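The current values are also readable straight from procfs, which is handy when scripting a check before deciding whether any change is needed; a small sketch (the helper name is ours):

```shell
# Extract the soft/hard "Max open files" limits of any PID from procfs.
nofile_limits() {
    # $1: pid; prints "soft hard"
    awk '/^Max open files/ {print $4, $5}' "/proc/$1/limits"
}

# Example: the limits of the current shell
nofile_limits $$
```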
Change the resource limits (ulimit / rlimit) of a running process
1,631,465,526,000
I have been attempting to install Arch Linux on my Macbook Pro but the wireless and ethernet drivers don't work. Because of this, I cannot access the internet on it. So whilst searching for a solution I downloaded these drivers: http://www.lwfinger.com/b43-firmware/broadcom-wl-5.100.138.tar.bz2 (I got the link for the drivers from this AUR repo: https://aur.archlinux.org/packages/b43-firmware/) The problem is though, is that I have absolutely no idea how to install the drivers from the command line during the install procedure. To make myself absolutely clear, I do not have an internet connection of any sort on said MacBook, nor do I have a working install. So because of this every solution must be able to be done from the installation media command line. NOTE: I have also noticed that during startup I get a brief message about wireless drivers not found but it goes by so fast I cannot properly read it.
From the live CD

You seem to be able to get a working connection on the installation media, so here is one idea: start the Arch live CD and set up your network. Then mount your newly installed partition (for example on /mnt) and chroot into your system using

# arch-chroot /mnt

From there, you will be able to update pacman's database and install the desired packages. For Broadcom, you will need to install from the AUR:

# pacman -Syy base-devel
# pacman -S b43-fwcutter
# curl https://aur.archlinux.org/cgit/aur.git/snapshot/b43-firmware.tar.gz | tar xzf -
# cd b43-firmware
# makepkg --asroot --install

Note: never use --asroot in a normal situation.

Without a network connection

This is a little bit more tricky. Compiling from the AUR will be harder, so if you can first set up the ethernet using official packages, that will be better. The idea is to let pacman prepare a list of downloads, then use another PC and a USB stick to carry the packages to your install.

Mount the USB stick on your fresh install and create a list of packages to download:

# cd /mnt/usbstick
# pacman -Sp your_ethernet_driver > pkgs_list.txt

If you really want to install the Broadcom drivers (or your ethernet card also needs an unofficial package), also issue

# pacman -Sp base-devel b43-fwcutter >> pkgs_list.txt

Unmount the key and find an internet connection on another PC. Download all the packages using for example curl, wget or simply your browser. If you are really unlucky, the pacman database may be too old and you will not find the packages in their indicated version; you will have to search around a little to find the right package. Save all the packages on the stick.

If you go the unofficial way, find the page on the AUR and download the tarball for the package, but also all dependencies and all sources. For Broadcom, for example, download the b43-firmware tarball but also the http://www.lwfinger.com/b43-firmware/broadcom-wl-{xyz}.tar.bz2 source tarball.

Go back to your Arch install, and from your stick run

# pacman -U *.pkg.tar.*

For Broadcom (or similarly for other unofficial packages):

# tar xzf b43-firmware.tar.gz
# cd b43-firmware/
# mv ../broadcom-wl-{xyz}.tar.bz2 .
# makepkg --asroot --install

Note: the third step moves the sources into the build directory so that makepkg finds them locally and does not attempt to download them. And again, do not use --asroot in the normal case.
Install Drivers Offline Arch Linux
1,631,465,526,000
I went through this article, which explains various methods for checking your RAM usage. However, I can't reconcile the different methods and don't know which one is correct. When I first login, I'm greeted with a screen like this:

System information as of Sun Apr 28 21:46:58 UTC 2013
System load:  0.0                 Processes:        76
Usage of /:   15.6% of 7.87GB     Users logged in:  1
Memory usage: 41%                 IP address for eth0:
Swap usage:   0%

This suggests to me that I am using 41% of my RAM, which seems quite high since the server isn't doing much. Or does that number refer to something besides RAM?

Next I try the free -m method:

ubuntu@ip-:~$ free -m
             total       used       free     shared    buffers     cached
Mem:           590        513         76          0         67        315
-/+ buffers/cache:        130        459
Swap:            0          0          0

According to the explanatory graphic in the article, this implies I have 130MB of used RAM and 459MB of free RAM, which suggests I'm using about 22% of my RAM.

Next I run top:

top - 22:14:48 up 195 days, 21:30, 2 users, load average: 0.00, 0.01, 0.05
Tasks: 77 total, 1 running, 76 sleeping, 0 stopped, 0 zombie
Cpu(s): 1.3%us, 0.3%sy, 0.0%ni, 97.7%id, 0.7%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 604376k total, 525692k used, 78684k free, 69124k buffers
Swap: 0k total, 0k used, 0k free, 322740k cached

  PID USER  PR NI  VIRT  RES SHR S %CPU %MEM   TIME+ COMMAND
    1 root  20  0 24332 1864 976 S  0.0  0.3 0:08.75 init
    2 root  20  0     0    0   0 S  0.0  0.0 0:00.00 kthreadd

This is the most confusing, as the summary shows me using 525M of 604M total, and yet when I use the "m" interactive command to sort by top memory, the top process is only using 0.3% of the memory???

Finally, the ps command seems to show very little memory usage as well:

root@ip-:/home/ubuntu# ps -o command,rss
COMMAND            RSS
ps -o command,rss  788
sudo su root      1764
su root           1404
bash              2132

I would love for someone to correct whatever misunderstandings I have that are creating these apparent conflicts. Thanks!
EDIT for Rahul: Output of cat /proc/meminfo:

MemTotal:         604376 kB
MemFree:          157564 kB
Buffers:           49640 kB
Cached:           231376 kB
SwapCached:            0 kB
Active:           290040 kB
Inactive:          97772 kB
Active(anon):     107672 kB
Inactive(anon):     4844 kB
Active(file):     182368 kB
Inactive(file):    92928 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                52 kB
Writeback:             0 kB
AnonPages:        106836 kB
Mapped:            22920 kB
Shmem:              5712 kB
Slab:              42032 kB
SReclaimable:      34016 kB
SUnreclaim:         8016 kB
KernelStack:         688 kB
PageTables:         3584 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:      302188 kB
Committed_AS:     242768 kB
VmallocTotal:   34359738367 kB
VmallocUsed:        7152 kB
VmallocChunk:   34359729008 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      637952 kB
DirectMap2M:           0 kB
You just need to understand the memory concepts involved. From your output of /proc/meminfo, notice these fields:

Buffers: a buffer is something that has yet to be "written" to disk. It represents how much RAM is dedicated to caching disk blocks.

Cached: a cache is something that has been "read" from the disk and stored for later use; it caches pages from file reading. Generally, you can consider the cache area as more "free" RAM, since it will be shrunk gradually if applications demand more memory.

It is enough to understand that both "Buffers" and "Cached" represent the size of the system cache. They dynamically grow or shrink as requested by the internal Linux kernel mechanisms.

At web hosts the cache is sometimes cleared using the command below (mostly configured in cron):

sync && echo 3 > /proc/sys/vm/drop_caches

Quote Link

EDIT for one more requirement, i.e. per-user memory usage:

#!/bin/bash
# Sum %mem per user from ps output
total_mem=0
printf "%-10s%-10s\n" User MemUsage
while read -r u m
do
    if [[ -n $old_user && $old_user != "$u" ]]; then
        printf "%-10s%-0.1f\n" "$old_user" "$total_mem"
        total_mem=0
    fi
    total_mem=$(echo "$m + $total_mem" | bc)
    old_user=$u
done < <(ps --no-headers -eo user,%mem | sort -k1)
# don't forget the last user in the list
[[ -n $old_user ]] && printf "%-10s%-0.1f\n" "$old_user" "$total_mem"
#--EOF

Please check with the above script and let me know if it shows the totals properly.
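The "-/+ buffers/cache" row of free(1) that the asker computed can be reproduced directly from /proc/meminfo; a sketch (the helper name is ours):

```shell
# Reproduce free(1)'s "-/+ buffers/cache" used figure:
# memory really held by applications = Total - Free - Buffers - Cached.
app_used_kb() {
    # stdin: /proc/meminfo-style text; prints the result in kB
    awk '
        $1 == "MemTotal:" {t = $2}
        $1 == "MemFree:"  {f = $2}
        $1 == "Buffers:"  {b = $2}
        $1 == "Cached:"   {c = $2}
        END {print t - f - b - c}
    '
}

app_used_kb < /proc/meminfo
```

Fed the numbers posted in the question (604376 total, 157564 free, 49640 buffers, 231376 cached), this yields 165796 kB, i.e. roughly the 130MB "used" row the asker saw from free -m.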
After researching, still confused about monitoring RAM usage
1,345,313,914,000
I have an windows application (Under Wine) that only works when I change timezone to NewYork's TimeZone. with Any other zone it doesn't start!! So, Is it possible in Linux to run an application with different TimeZone than system configured TimeZone? I'm using Ubuntu 10.04
Generally, set the TZ environment variable: TZ=America/New_York myapplication I don't know if Wine has its own configuration in addition to or overriding the environment variable.
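A quick way to convince yourself it works; for the Wine case you would prefix the wine invocation the same way (the .exe path below is hypothetical):

```shell
# TZ overrides the system zone for just this process tree.
TZ=America/New_York date        # New York local time
TZ=UTC date +%Z                 # prints the zone abbreviation "UTC"

# For the Wine application in the question (path is an example):
# TZ=America/New_York wine "/path/to/app.exe"
```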
Run an application with different TimeZone
1,345,313,914,000
I have a server (SUSE 11.5) that has two disks. There is only one volume group (vg01). How do I determine the physical device on which that vg exists?
I think

# pvdisplay

shows you the physical device(s) corresponding to all your volume groups. Inter alia, my system shows, for example:

--- Physical volume ---
PV Name               /dev/sdc6
VG Name               olddebian
PV Size               186.26 GiB / not usable 638.00 KiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              47683
Free PE               5443
Allocated PE          42240
PV UUID               QcpaYU-GuWX-ssIl-U2i9-26Cq-QhQf-fgOyD4

This is the only one of my VGs that corresponds to a raw partition. The others are on top of software raid devices.
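If you only want the PV-to-VG mapping, pvs is the terse companion of pvdisplay; a sketch with a small filter helper (the helper name is ours, and the real command needs root):

```shell
# Keep only the physical volumes belonging to one VG.
pvs_for_vg() {
    # stdin: "pvname vgname" pairs; $1: the VG to match
    awk -v vg="$1" '$2 == vg {print $1}'
}

# Real use, for the vg01 in the question:
#   pvs --noheadings -o pv_name,vg_name | pvs_for_vg vg01
```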
How do I determine LVM mapping on a physical device?
1,345,313,914,000
If I disable memory overcommit by setting vm.overcommit_memory to 2, by default the system will allow allocating memory up to the size of swap + 50% of physical memory, as explained here. I can change the ratio by modifying the vm.overcommit_ratio parameter. Let's say I set it to 80%, so 80% of physical memory may be used. My questions are:

what will the system do with the remaining 20%?
why is this parameter required in the first place?
why should I not always set it to 100%?
What will the system do with the remaining 20%?

The kernel will use the remaining physical memory for its own purposes (internal structures, tables, buffers, caches, whatever). The memory overcommitment setting handles userland applications' virtual memory reservations; the kernel doesn't use virtual memory but physical memory.

Why is this parameter required in the first place?

The overcommit_ratio parameter is an implementation choice designed to prevent applications from reserving more virtual memory than what will reasonably be available for them in the future, i.e. when they actually access the memory (or at least try to). Setting overcommit_ratio to 50% has been considered a reasonable default value by the Linux kernel developers. It assumes the kernel won't ever need to use more than 50% of the physical RAM. Your mileage may vary, which is the reason why it is a tunable.

Why should I not always set it to 100%?

Setting it to 100% (or any "too high" value) doesn't reliably disable overcommitment, because you cannot assume the kernel will use 0% (or too little) of the RAM. It won't prevent applications from crashing, as the kernel might preempt anyway all the physical memory it demands.
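The kernel's arithmetic behind that default can be sketched directly (this ignores the newer overcommit_kbytes knob and hugetlb reservations):

```shell
# CommitLimit as the kernel derives it in overcommit_memory=2 mode:
# CommitLimit = swap + ram * overcommit_ratio / 100
commit_limit_kb() {
    # $1: RAM in kB, $2: swap in kB, $3: overcommit_ratio in percent
    echo $(( $2 + $1 * $3 / 100 ))
}

# Sample: a swapless box with 604376 kB of RAM at the default ratio of 50
commit_limit_kb 604376 0 50     # prints 302188
```

You can check the result on a live system against the CommitLimit line of /proc/meminfo.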
Where the remaining memory of vm.overcommit_ratio goes?
1,345,313,914,000
I am on Arch Linux where I am trying to create a systemd timer as a cron alternative for hibernating my laptop on low battery. So I wrote these three files:

/etc/systemd/system/battery.service

[Unit]
Description=Preko skripte preveri stanje baterije in hibernira v kolikor je stanje prenizko
[Service]
Type=oneshot
ExecStart=/home/ziga/Dropbox/workspace/operacijski/archlinux/hibernate/hibernatescript
User=nobody
Group=systemd-journal

/etc/systemd/system/battery.timer

[Unit]
Description=Periodical checking of battery status every two minutes
[Timer]
OnUnitActiveSec=2min
[Install]
WantedBy=timers.target

/home/ziga/Dropbox/workspace/operacijski/archlinux/hibernate/hibernatescript

#!/bin/sh
/usr/bin/acpi -b | /usr/bin/awk -F'[,:%]' '{print $2, $3}' | (
    read -r status capacity
    if [ "$status" = Discharging ] && [ "$capacity" -lt 50 ]; then
        /usr/bin/systemctl hibernate
    fi
)

And then to enable the timer I executed:

sudo systemctl enable battery.timer
sudo systemctl start battery.timer

And somehow it isn't working. The script works on its own: if I execute the command below, my computer hibernates just fine.
/home/ziga/Dropbox/workspace/operacijski/archlinux/hibernate/hibernatescript

ADD1: After enabling and starting the timer I ran some checks and this is what I get:

[ziga@ziga-laptop ~]$ systemctl list-timers
NEXT LEFT LAST PASSED UNIT ACTIVATES
n/a n/a n/a n/a battery.timer battery.serv
Tue 2016-06-28 00:00:00 CEST 42min left Mon 2016-06-27 00:01:54 CEST 23h ago logrotate.timer logrotate.se
Tue 2016-06-28 00:00:00 CEST 42min left Mon 2016-06-27 00:01:54 CEST 23h ago shadow.timer shadow.servi
Tue 2016-06-28 00:00:00 CEST 42min left Mon 2016-06-27 00:01:54 CEST 23h ago updatedb.timer updatedb.ser
Tue 2016-06-28 22:53:58 CEST 23h left Mon 2016-06-27 22:53:58 CEST 23min ago systemd-tmpfiles-clean.timer systemd-tmpf

and

[ziga@ziga-laptop ~]$ systemctl | grep battery
battery.timer loaded active elapsed Periodical checking of battery status every two minutes

ADD2: After applying the solution from Alexander T my timer starts (check the code below), but the script doesn't hibernate my laptop, whereas it does hibernate it if I execute it directly.

[ziga@ziga-laptop ~]$ systemctl list-timers
NEXT LEFT LAST PASSED UNIT ACTIVATES
Tue 2016-06-28 19:17:30 CEST 1min 43s left Tue 2016-06-28 19:15:30 CEST 16s ago battery.timer battery.service
An answer to this question is to swap User=nobody not with User=ziga but with User=root in /etc/systemd/system/battery.service. Somehow, even though user ziga has all the privileges of using the sudo command, it can't execute systemctl hibernate inside of the bash script. I really don't know why this happens. So the working files are as follows:

/etc/systemd/system/battery.service

[Unit]
Description=Preko skripte preveri stanje baterije in hibernira v kolikor je stanje prenizko
[Service]
Type=oneshot
ExecStart=/home/ziga/Dropbox/workspace/operacijski/archlinux/hibernate/hibernatescript
User=root
Group=systemd-journal

/etc/systemd/system/battery.timer

[Unit]
Description=Periodical checking of battery status every two minutes
[Timer]
OnBootSec=2min
OnUnitActiveSec=2min
[Install]
WantedBy=battery.service

/home/ziga/Dropbox/workspace/operacijski/archlinux/hibernate/hibernatescript

#!/bin/sh
/usr/bin/acpi -b | /usr/bin/awk -F'[,:%]' '{print $2, $3}' | (
    read -r status capacity
    if [ "$status" = Discharging ] && [ "$capacity" -lt 7 ]; then
        /usr/bin/systemctl hibernate
    fi
)

I tried it and it also works with User=ziga or User=nobody, but then we need to change /usr/bin/systemctl hibernate into sudo /usr/bin/systemctl hibernate in the last script. So it looks like the User variable somehow doesn't even matter... Oh, and you can as well remove the absolute paths from the last script and change the first line from #!/bin/sh to #!/bin/bash. I also changed WantedBy=timers.target to WantedBy=battery.service in /etc/systemd/system/battery.timer. There you go. The best cron alternative to hibernate laptops on low battery. =)
using systemd timers instead of cron
1,345,313,914,000
When I want Linux to consider newly created partitions without rebooting, I have several tools available to force a refresh of the kernel "partition cache": partx -va /dev/sdX kpartx -va /dev/sdX hdparm -z /dev/sdX blockdev --rereadpt /dev/sdX sfdisk -R /dev/sdX (deprecated) partprobe /dev/sdX ... I'm not sure about the difference between these techniques, but I think they don't use the same ioctl, like BLKRRPART or BLKPG. So, what is the difference between those ioctl?
BLKRRPART tells the kernel to reread the partition table (see man 4 sd). With BLKPG you can create, add, delete partitions as you please (from the kernel's point of view, not on disk, of course). You have to tell the kernel the offset and size of each individual partition, which implies that you must have parsed the partition table yourself beforehand. See the Linux kernel source: /include/uapi/linux/blkpg.h. I personally use partprobe (part of parted), which uses the latter approach, probably to support partition tables not supported by the kernel.
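For reference, both request numbers come from the kernel's _IO() macro for argument-less ioctls; a sketch of the arithmetic:

```shell
# _IO(type, nr) packs to (type << 8) | nr for ioctls taking no argument.
# From <linux/fs.h>:    BLKRRPART = _IO(0x12, 95)
# From <linux/blkpg.h>: BLKPG     = _IO(0x12, 105)
BLKRRPART=$(( (0x12 << 8) | 95 ))
BLKPG=$(( (0x12 << 8) | 105 ))
printf 'BLKRRPART=0x%x BLKPG=0x%x\n' "$BLKRRPART" "$BLKPG"

# blockdev --rereadpt and hdparm -z issue the whole-disk BLKRRPART;
# partprobe issues BLKPG per partition, as described above.
```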
Forced reread of partition table: difference between BLKRRPART and BLKPG ioctl? (Linux)
1,345,313,914,000
I have a utility that has a nasty habit of going quiet and staying there, I already know how long into the process it does this so I am using timeout to fight this, but sometimes it does it before that time. Is there a tool similar to timeout that will kill the process if it stops directing output to stdout?
With zsh, you could do:

zmodload zsh/system
coproc your-command
while :; do
  sysread -t 10 -o 1 <&p && continue
  if (( $? == 4 )); then
    echo "Timeout" >&2
    kill $!
  fi
  break
done

The idea being to use the -t option of sysread to read from your-command output with a timeout. Note that it makes your-command's output a pipe. It may be that your-command starts buffering its output when it doesn't go to a terminal, in which case you may find that it doesn't output anything in a while, but only because of that buffering, not because it's hung somehow. You could work around that by using stdbuf -oL your-command to restore line-buffering (if your-command uses stdio) or use zpty instead of coproc to fake a terminal output.

With bash, you'd have to rely on dd and GNU timeout if available:

coproc your-command
while :; do
  timeout 10 dd bs=8192 count=1 2> /dev/null <&${COPROC[0]} && continue
  if (($? == 124)); then
    echo Timeout >&2
    kill "$!"
  fi
done

Instead of coproc, you could also use process substitution:

while :; do
  timeout 10 dd bs=8192 count=1 2> /dev/null <&3 && continue
  if (($? == 124)); then
    echo Timeout >&2
    kill "$!"
  fi
done 3< <(your-command)

(That won't work in zsh or ksh93 because $! doesn't contain the pid of your-command there.)
1,345,313,914,000
I have a USB key that contains my keepass2 password database and I'd like to perform some actions when it is plugged into my computer, namely: Auto-mount it to some specific location When the mounting is done properly, launching keepass2 on the password database file Simple tasks I guess, but I can't find how to do that. I'm using Ubuntu 12.10, and it auto-mounts the device as a "media usb-key" and tries to open the images on it (even though there are none). What is the best way to do that and to disable the ubuntu auto-mounting (so it doesn't conflict) ?
When a new device appears, udev is notified. It normally creates a device file under /dev based on built-in rules¹. You can override these rules to change the device file location or run an arbitrary program. Here is a sample such udev rule:

KERNEL=="sd*", ATTRS{vendor}=="Yoyodine", ATTRS{serial}=="123456789", NAME="keepass/s%n", RUN+="/usr/local/sbin/keepass-drive-inserted /dev/%k%n"

The NAME= directive changes the location of the device file; I included it for illustration purposes but it is probably not useful for your use case. The ATTRS rules identify the device; run udevinfo -a -n /dev/sdz when the drive is available as /dev/sdz to see what attributes it has. Beware that you can only use ATTRS rules from a single section of the udevinfo output (in addition, you can use ATTR rules from the initial section). See Understand output of `udevadm info -a -n /dev/sdb` for more background.

This rule goes into a file called something like /etc/udev/rules.d/local-storage-keypass.rules. Put the commands you want to run in the script given in the RUN directive. Something like:

#!/bin/sh
set -e
if [ -d /media/keypass-drive ]; then
  [ "$(df -P /media/keypass-drive | awk 'NR==2 {print $1}')" = "$(df -P /media | awk 'NR==2 {print $1}')" ]
else
  mkdir /media/keypass-drive
fi
mount "$1" /media/keypass-drive
su ereon -c 'keypass2' &

If you're having trouble running a GUI program from a script triggered from udev, see Can I launch a graphical program on another user's desktop as root?

¹ Not on modern systems where /dev is on devtmpfs.
Triggering an action when a specific volume is connected
1,345,313,914,000
I have a home partition which is shared by multiple distros on the same box. I'm using bind mounts from fstab. Each Linux install has something like this:

UUID=[...] /mnt/data ext4 nodev,nosuid 0 2
/mnt/data/arch /home none defaults,bind 0 0
/mnt/data/files /files none defaults,bind 0 0

The disadvantage is, of course, that /mnt/data/arch and /mnt/data/files are now mounted twice. On a hunch, I tried umount /mnt/data, which seems to work as I had hoped: according to mount, the device is now only mounted to /home and /files. My questions are: Is this safe, or am I overlooking something? Is it possible to get the same effect as umount /mnt/data using only fstab? Or could I do it in rc.local?
It's safe to unmount one of the bind-mounted copies. After you run mount --bind /foo /bar, the kernel doesn't keep track of which of /foo or /bar came first, they're two mount points for the same filesystem (or part of a filesystem). Note that if /foo is a mount point but /foo/wibble isn't, mount --bind /foo/wibble /bar makes /bar point to a part of the filesystem that's mounted on /foo. It's still ok to unmount /foo. So if you mount /mnt/data, then bind parts of it to /home and /files, and unmount /mnt/data, you end up with no access to the parts of /mnt/data outside arch and files. If that doesn't bother you, go for it. You can't achieve that through fstab: it only supports mounting filesystems. Bind mounts get in through a hack (the bind mount option is turned into a --bind option to the mount command internally). mount --move and unmounting can't be specified in fstab. You can use /etc/rc.local to call umount.
Umount device after bind mounting directories: is it safe?
1,345,313,914,000
I am trying to identify NICs on ~20 remote servers (2-6 NICs on every server). To begin with, I want to identify which ones are in use and which are free. How can I check the state of the physical media? I know some ways, including ifconfig | grep RUNNING, ethtool, and cat /sys/class/net/eth0/carrier, but they all require that the interface is up. I don't want to bring ALL interfaces up. Not sure why, but I don't like having enabled but unconfigured interfaces on the network. Is there a way I can avoid this? Or am I just wrong, and there's nothing bad about all interfaces being up (and not configured), even if they are plugged in?
ip link show, by default, shows all the interfaces; use ip link show up to show only the running ones. You can compare the two lists to get the difference, i.e. the interfaces that are down.
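A sketch of that comparison, assuming iproute2's one-line `-o` output format (interface names will of course vary per machine):

```shell
# Interfaces that exist but are not up: subtract the "up" list from the full list.
# "ip -o link" prints one interface per line as "N: name: <FLAGS> ...",
# so the second ': '-separated field is the interface name.
comm -23 \
  <(ip -o link show    | awk -F': ' '{print $2}' | sort) \
  <(ip -o link show up | awk -F': ' '{print $2}' | sort)
```

Note that `<(...)` is bash process substitution, so this needs bash rather than plain sh.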
Check whether network cable is plugged in without bringing interface up
1,345,313,914,000
Okay, it's easy to create an SSH key pair with ssh-keygen, but how do I generate a key pair which allows me to use AES-256-CBC? The default is always AES-128-CBC. I already tried different parameters, like:

ssh-keygen -b 4096 -t rsa -Z aes-256-cbc

But they didn't work. Any idea how to do so?
You do not generate the key used by aes when you use ssh-keygen. Since aes is a symmetric cipher, its keys do not come in pairs. Both ends of the communication use the same key. The key generated by ssh-keygen uses public key cryptography for authentication. From the ssh-keygen manual: ssh-keygen generates, manages and converts authentication keys for ssh(1). ssh-keygen can create RSA keys for use by SSH protocol version 1 and DSA, ECDSA, Ed25519 or RSA keys for use by SSH protocol version 2. From the ssh manual: Public key authentication works as follows: The scheme is based on public-key cryptography, using cryptosystems where encryption and decryption are done using separate keys, and it is unfeasible to derive the decryption key from the encryption key. The idea is that each user creates a public/private key pair for authentication purposes. The server knows the public key, and only the user knows the private key. ssh implements public key authentication protocol automatically, using one of the DSA, ECDSA, Ed25519 or RSA algorithms. The problem with public key cryptography is that it is quite slow. Symmetric key cryptography is much faster and is used by ssh for the actual data transfer. The key used for the symmetric cryptography is generated on the fly after the connection was established (quoting from the sshd manual): For protocol 2, forward security is provided through a Diffie-Hellman key agreement. This key agreement results in a shared session key. The rest of the session is encrypted using a symmetric cipher, currently 128-bit AES, Blowfish, 3DES, CAST128, Arcfour, 192-bit AES, or 256-bit AES. The client selects the encryption algorithm to use from those offered by the server. Additionally, session integrity is provided through a cryptographic message authentication code (hmac-md5, hmac-sha1, umac-64, umac-128, hmac-ripemd160, hmac-sha2-256 or hmac-sha2-512). 
If you wish to use aes256-cbc, you need to specify it on the command line using the -c option; in its most basic form this would look like this:

$ ssh -c aes256-cbc user@host

You can also specify your preferred selection of ciphers in ssh_config, using a comma-separated list. Tinkering with the defaults is, however, not recommended, since this is best left to the experts. There are lots of considerations and years of experience that went into the OpenSSH developers' choice of defaults.
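If you want the preference to persist, a per-host entry in ~/.ssh/config can set it. This is a sketch; the host name and cipher list below are examples, and you should check `ssh -Q cipher` to see which ciphers your build actually supports:

```
# ~/.ssh/config
Host example-host
    HostName host.example.com
    Ciphers aes256-cbc,aes256-ctr
```

SSH will try the listed ciphers in order and use the first one the server also offers.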
Generate a SSH pair with AES-256-CBC
1,345,313,914,000
I'm building BusyBox and iptables for an embedded device, and one of the dependencies for them is the kernel headers. I have searched the whole file system for *.ko files and found none, so I concluded the apps aren't creating any loadable kernel modules. What other cases are there for a user-space application to require kernel headers?
Because those programs are built to use things defined in the kernel headers:

busybox-1.22.1]$ egrep -RHn '^#include <linux'
modutils/modutils-24.c:194:#include <linux/elf-em.h>
include/fix_u32.h:17:#include <linux/types.h>
libbb/loop.c:11:#include <linux/version.h>
console-tools/openvt.c:23:#include <linux/vt.h>
console-tools/kbd_mode.c:23:#include <linux/kd.h>
console-tools/showkey.c:19:#include <linux/kd.h>
util-linux/blockdev.c:36:#include <linux/fs.h>
util-linux/mkfs_ext2.c:50:#include <linux/fs.h>
util-linux/mkfs_vfat.c:28:#include <linux/hdreg.h> /* HDIO_GETGEO */
util-linux/mkfs_vfat.c:29:#include <linux/fd.h> /* FDGETPRM */
....

For each specific tool, you'd need to read the source of the tool and the relevant kernel header to figure out exactly what. You can see a few things are commented to make it easy. For example, mkfs_vfat includes linux/fd.h to get FDGETPRM:

$ egrep -RHn FDGETPRM util-linux/mkfs_vfat.c
util-linux/mkfs_vfat.c:29:#include <linux/fd.h> /* FDGETPRM */
util-linux/mkfs_vfat.c:351: int not_floppy = ioctl(dev, FDGETPRM, &param);

You could probably remove the relevant #include and watch for compiler errors to make it easier; you'll get warnings that some things are not defined. Those things likely come from the kernel headers.
Why do user space apps need kernel headers?
1,345,313,914,000
I'm running Ubuntu 13.10, and since I upgraded to kernel 3.12.8 (built from source, including the Ubuntu patches) on Ivy Bridge graphics, the boot splash screen was flickering and messing up. So I googled around and tried adding the i915.modeset=1 parameter to GRUB (without really knowing what I was doing), and magically the splash screen was fixed. I also noticed much smoother scrolling of window contents (e.g. a web page in Chrome). So I would just like to know more about i915.modeset=1.
You are using what's called Kernel Mode Setting (KMS) to make sure that your Intel graphics driver is loaded early in the boot process, therefore making the "fancy" boot screen display correctly.

Kernel mode-setting (KMS) shifts responsibility for selecting and setting up the graphics mode from X.org to the kernel. When X.org is started, it then detects and uses the mode without any further mode changes. This promises to make booting faster, more graphical, and less flickery.

https://askubuntu.com/questions/1080/what-is-kernel-mode-setting

Also see https://wiki.archlinux.org/index.php/Kernel_Mode_Setting#Early_KMS_start
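On Ubuntu, a kernel parameter like this is usually made persistent via GRUB's defaults file. A sketch (the rest of the GRUB_CMDLINE_LINUX_DEFAULT line will vary on your system):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i915.modeset=1"
```

After editing, run sudo update-grub to regenerate the boot configuration.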
What is i915.modeset=1 for?
1,345,313,914,000
I'm trying to implement a mechanism of automated backup using udev rules and systemd. The idea is to launch a backup routine upon hot-plugging a specific storage device, quite similar to this question, for which I provided an answer myself by the way, but here I'm interested in discussing some further tweaks. Namely, I want the device to be unmounted after the backup service finishes. Some background: so far I got it to work using udev to start up a systemd service which itself runs a backup routine. The relevant files follow:

backup.service

[Unit]
Description=<DESCRIPTION HERE>
BindsTo=<STORAGE DEVICE UNIT HERE>.device mnt-backup.mount
After=<STORAGE DEVICE UNIT HERE>.device mnt-backup.mount

[Service]
ExecStart=<CALL TO BACKUP SCRIPT HERE>

mnt-backup.mount

[Unit]
DefaultDependencies=no
Conflicts=umount.target
Before=umount.target

[Mount]
What=/dev/disk/by-uuid/<DEVICE UUID HERE>
Where=/mnt/backup
Type=<FILESYSTEM HERE>

90-backup.rules

KERNEL=="sd*", ATTRS{serial}=="<HD SERIAL HERE>", TAG+="systemd", ENV{SYSTEMD_WANTS}+="backup.service"

The question: now I want mnt-backup.mount to be stopped as soon as backup.service finishes. According to the documentation, ExecStartPost= is executed after the command in ExecStart=, so I tried adding

ExecStartPost=/usr/bin/systemctl stop mnt-backup.mount

to backup.service. I realise, though, that this stops mnt-backup.mount, to which backup.service is itself bound, which, as far as I understand, effectively requires backup.service to be stopped before mnt-backup.mount for a graceful stop, hence creating a cyclic dependency. When testing this, it worked a couple of times before I experienced a kernel panic, the first I've seen on my machine, so it got me wondering if this was somehow the cause. In any case, is my approach correct?
While I'm not sure whether the previous approach is guaranteed to work, there's an alternative which certainly looks preferable. The magic property is called StopWhenUnneeded. It should be set to true under [Unit] in the mount file, which becomes:

mnt-backup.mount

[Unit]
DefaultDependencies=no
Conflicts=umount.target
Before=umount.target
StopWhenUnneeded=true

[Mount]
What=/dev/disk/by-uuid/<DEVICE UUID HERE>
Where=/mnt/backup
Type=<FILESYSTEM HERE>

As simple as that. The great advantage of this approach is that it is explicitly supported by systemd and thus guaranteed to work. I would also strongly suggest setting RefuseManualStart to true in the service unit, which forbids users from starting the service manually. The idea is precisely to automate the backup mechanism, so the user shouldn't be able to start it explicitly; better to leave this responsibility exclusively to udev.
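The RefuseManualStart suggestion is a one-line addition to the [Unit] section of backup.service:

```
[Unit]
RefuseManualStart=true
```

With this set, systemctl start backup.service is refused, while activation through the udev-provided SYSTEMD_WANTS dependency still works.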
systemd - umount device after service which depends on it finishes
1,345,313,914,000
I have a MacBook Pro and I am loving it, though I still miss my Linux box; there are many things I need which are not completely compatible with Mac OS X. I've heard many stories about installing Linux on a Mac: some say it's not a problem, but others tend to say differently. My question is: is it fine or not to install Linux on a Mac machine? What are the pros and cons? I am very well aware of virtual machines, but let's be honest, they do not run quite as well as on physical hardware.
tl;dr: it's doable but you will have to work just a little bit. If you don't have the ability to use Ethernet, and are installing from netinst media, you're basically screwed (although if you're really determined you can make it work). When I originally wrote this answer, I'd only done this once, but now I'm doing it again on a different Mac, so I've split the post into two. Debian Jessie on a MacBook Pro I have successfully installed Debian Jessie (currently aka Debian Testing) on my MacBook Pro, early 2011. I'm going to say this right away: If you have a MacBook Air and/or no Ethernet cord, you are largely screwed if you use a distro that uses a network-based installation (such as Arch Linux, or the recommended Debian image, or one of the Ubuntu alternate CDs). You will basically have to download all the firmware files, boot the installation media in such a way that it is prevented from doing network configuration, install the firmware manually, and then try to get it to pick up the firmware. Then have it do network configuration. To be perfectly honest, I never got that to work and am not entirely sure that it's a sound plan. Other than that, installation went smoothly. If you intend to keep OS X, you should use OS X's built-in Disk Utility to resize, as GNU/Linux doesn't currently have write support for the default Mac filesystem configuration (HFS+ with journaling, for those curious; write support only works without journaling). Note that you don't have to boot into the Recovery partition to do this - HFS+ can do online resizing - but you may see Disk Utility or your entire computer freeze. Don't worry, this has happened to me a couple of times and you just have to let it do its thing, but you won't be able to use the Mac while the process is taking place. I have heard that Disk Utility has bugs when creating an empty partition (which you will have to do for Disk Utility to let you resize). 
Therefore, I'd recommend creating a FAT filesystem on the new partition. You're welcome to try with the "none" option selected, but I played it safe. Since I used the Debian Installer, I'm not really sure how it installed GRUB (I'm going to replace Debian with Arch soon, so I'll edit this answer with my results). It appears to have installed to the EFI partition in the Mac, but I'm not sure if it did any magic aside from that. Presumably not, but who knows. After installing GRUB, you need to reboot into Mac OS X. Open a terminal, mount the EFI partition (use diskutil list to dump information about disks; it's like OS X's version of blkid or lsblk), and muck around with the bless utility until you get to the GRUB menu on reboot. (I don't know exact steps for this, because I tried a bunch of things at the same time because I didn't want to wait through OS X's long reboot time). See man bless in OS X for the details of this utility. Note that yes, upon success you will go directly to the GRUB boot menu (assuming you're using GRUB). I'm not sure the internals of how it works, especially with Apple's moon-man EFI implementation, but here's how you choose the OS to boot from: If you want GNU/Linux, do nothing. The GRUB boot menu will appear (again, assuming you're using GRUB). If you want Mac OS X, wait for the startup tone, then hold Option until you get the disk chooser menu. Two disk options should appear: Macintosh HD and EFI Boot. Select Macintosh HD. Note: the Mac OS X option in GRUB appears to do nothing but hang. If you want Mac OS X Recovery, wait for the startup tone, then hold Option until you get the disk chooser menu. It's the exact same thing as booting regular OS X, except you choose EFI Boot instead of Macintosh HD. The touchpad driver in Xorg is extremely lacking. Xorg will choose the Synaptics driver for you, which is a piece of crap on an Apple touchpad. 
Therefore, Google around until you find a decent driver, then override the Synaptics driver with it in your xorg.conf (or xorg.conf.d, depending on distro), although I never could find a driver that could actually do right-click on the Apple trackpad, which is kind of a pain in the neck. I would tell you the exact details of my configuration, but I have an initial Time Machine backup running and can't be bothered to reboot into Debian. I'll edit this answer when I do, though. The biggest thing besides the wireless (which needs firmware but is easy to bootstrap as long as you have an Ethernet cable) was that if I closed the lid, the screen failed to wake up. The keyboard backlight would turn on, but never the screen. Preliminary Googling says that this is a kernel bug, but I haven't looked into exact fixes. I've started experimenting with the pm-* family of utilities (e.g. pm-suspend) but haven't done anything in-depth. A workaround for this issue is to switch to a virtual console, to "defocus" Xorg. This way, when you close the lid, your computer won't try to suspend at all. Note that this means that the Apple logo on the back will continue to be lit, although turning down the screen brightness also affects the Apple logo. Note, though, that you can only use the function keys when Xorg is "focused". Which brings me neatly to my next topic... The keyboard basically acts normally. Option works exactly as you would expect alt to. Command is the superkey. The only thing that tripped me up - although not for long - is that the function keys not needing Fn pressed is a hardware thing, not a software thing. Therefore pressing e.g. brightness up works the same as in OS X - when you press F2, it turns up the brightness, and when you press Fn+F2, it sends the F2 key. The final thing that I should mention is that I never got 3D acceleration to work. The GLX Gears demo worked with (I think) mesa, but I got booted to GNOME Fallback, so clearly true acceleration isn't working. 
The solution that I found hung me at boot (see the last post about the Debian installation in my blog), so I don't think there actually is a solution, at least until the linux-firmware-nonfree package is split up even more. If you're interested in all the gory details, you should read my blog posts on the matter (just click next until you reach the one called "I FIXED EVERYTHING"). They also probably mention some details that I can't remember off the top of my head (like the name of that touchpad driver!). Arch Linux (September 2013 image) on an iMac I allocated space for the Arch install from OS X (see the beginning of the Debian section for the reasoning behind this), creating a ~100 GB partition for /home and ~100GB partition for /. The CD boots fine - just hold down option, and then select the CD icon labelled "EFI Boot". The keyboard works fine up until you hit enter on the "boot Arch" option, at which point presumably Arch takes over from EFI, and hence the EFI Bluetooth keyboard driver. Therefore you'll need a USB keyboard to actually go through the installation. The first thing I did after booting was to connect to the internet with wifi-menu, which surprisingly worked without a hitch. Next I messed with the sizing of the partitions that I'd allocated for Arch using cgdisk, since I'd changed my mind - this is apparently OK and I was able to reboot into OS X without a problem. One problem that I ran into is that I made a partition too small, and wanted to cut into the OS X partition to expand it. However, when I went to Disk Utility to shrink the OS X partition, it said "preparing to partition..." and then never got any further. Tried doing it from the recovery partition (with Macintosh HD both mounted and unmounted): same result. So the moral of the story is: be sure about your partition layout before you install! From then on the install went without a problem. 
When I got to bootloader installation, I installed the grub, efibootmgr and dosfstools packages from Arch, as recommended by the wiki. I additionally installed os-prober, although according to the package description this is only for BIOS systems. I mounted the EFI system partition on /boot/efi (following the wiki, I'll refer to this as $esp below). Note that (at least on my computer) the EFI system partition is the first partition, making it /dev/sda1 under GNU/Linux and /dev/disk0s1 under OS X/Darwin. I installed GRUB using the following command:

grub-install --target=x86_64-efi --efi-directory=$esp --bootloader-id=grub --recheck --debug

If you can't be bothered to look, this is pretty much verbatim what the wiki recommends for the easy install (not keeping everything in the EFI partition, so some stuff goes in /boot). At the end it said "EFI variables are not supported on this system", but it still seems to have installed OK (as ls /boot/efi/EFI returns "grub" in addition to "APPLE"). Next, I generated grub.cfg:

grub-mkconfig -o /boot/grub/grub.cfg

I'll note that it seems to have found OS X on the correct partition, although given my experience in Debian I bet the menu item won't work. We'll see. Next, I rebooted into OS X - I seem not to have broken anything, although the EFI firmware seems to take slightly longer to get to the Apple logo as opposed to just the grey screen (it might be just me, not sure). In preparation for using bless I mounted the EFI partition in OS X:

sudo mkdir /mnt
sudo mount -t msdos /dev/disk0s1 /mnt
cd /mnt

Next I did this exact sequence of commands, rebooting in between each one to check if it worked (and remounting every time I rebooted):

sudo bless --folder /mnt/ --bootefi EFI/grub/grubx64.efi

This yielded different, and arguably better results than my attempt from Debian did.
What happened this time was that "EFI Boot" is now offered as an option when you hold Option, along with "Macintosh HD" and "Recovery-$YOUR_INSTALLED_OS_X_VERSION". GRUB successfully loaded Arch, but I got dropped to an initrd shell. This was because I had misconfigured it so that the LUKS devices never got created, though, not due to a Mac-specific issue. This is as far as I've gotten, but I'll be back with more edits later.
What should I be aware of when installing Linux on a Mac?
1,345,313,914,000
I was about to diff a backup from its source to manually verify that the data is correct. Some chars, like åäö, are not shown correctly in the original data, but as the clients (over Samba) interpret them correctly, it's nothing to worry about. The data restored from backup shows the chars correctly, leading diff to consider them not the same files (with differences), but rather completely different files. md5 sums: same file, but different name.

# md5sum /original/iStock_000003637083Large-barn*
e37c34968dd145a0e25692e1cb7fbdb1 /original/iStock_000003637083Large-barn p? strand.jpg
# md5sum /frombackup/iStock_000003637083Large-barn*
e37c34968dd145a0e25692e1cb7fbdb1 /frombackup/iStock_000003637083Large-barn på strand.jpg

Mount options and filesystems:

/dev/sdb1 on /original type ext4 (rw,noatime,errors=remount-ro)
/dev/sdc1 on /frombackup type ext4 (rw)

Locale:

LANG=sv_SE.UTF-8
LANGUAGE=
LC_CTYPE="sv_SE.UTF-8"
LC_NUMERIC="sv_SE.UTF-8"
LC_TIME="sv_SE.UTF-8"
LC_COLLATE="sv_SE.UTF-8"
LC_MONETARY="sv_SE.UTF-8"
LC_MESSAGES="sv_SE.UTF-8"
LC_PAPER="sv_SE.UTF-8"
LC_NAME="sv_SE.UTF-8"
LC_ADDRESS="sv_SE.UTF-8"
LC_TELEPHONE="sv_SE.UTF-8"
LC_MEASUREMENT="sv_SE.UTF-8"
LC_IDENTIFICATION="sv_SE.UTF-8"
LC_ALL=

od -c:

# ls "/original/iStock_000003637083Large-barn p� strand.jpg" | od -c
0000000 / v a r / w w w / m e d i a b a
0000020 n k e n _ i m a g e s / k u n d
0000040 i d 8 0 / _ B a r n / i S t o c
0000060 k _ 0 0 0 0 0 3 6 3 7 0 8 3 L a
0000100 r g e - b a r n p 345 s t r a
0000120 n d . j p g \n
0000127
# ls "/frombackup/iStock_000003637083Large-barn på strand.jpg" | od -c
0000000 / d a t a / v a r / w w w / m e
0000020 d i a b a n k e n _ i m a g e s
0000040 / k u n d i d 8 0 / _ B a r n /
0000060 i S t o c k _ 0 0 0 0 0 3 6 3 7
0000100 0 8 3 L a r g e - b a r n p 303
0000120 245 s t r a n d . j p g \n
0000135
Unix filesystems tend to be locale-agnostic in the sense that file names consist of bytes and it's the application's business to decide what those bytes mean if they fall outside the ASCII range. The convention on unix today is to encode filenames and everything else in UTF-8, apart from some legacy environments (mostly Asian). Windows filesystems, on the other hand, tend to have an encoding that is specified in the filesystem properties. If you need to work with filenames in a different encoding, create a translated view of that filesystem with convmvfs. See working with filenames in a different encoding over ssh It appears that your original system has filenames encoded in latin-1. Your current system uses UTF-8, and the one-byte sequence representing å in latin-1 (\345) is an invalid sequence in UTF-8 which ls prints as ?. Your backup process has somehow resulted in filenames encoded in UTF-8. Samba translates filenames based on its configuration. To access the original files with your native encoding, make a recoded view: mkdir /original-recoded convmvfs -o icharset=LATIN1,ocharset=UTF8 /original /original-recoded diff -r /original-recoded /frombackup (You may need other options depending on what permissions and ownership you want to obtain.)
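The two byte sequences seen in the question's od -c output can be reproduced directly (a sketch; it assumes iconv is available and the script itself is saved as UTF-8):

```shell
# "å" is one byte (octal 345) in latin-1 but two bytes (octal 303 245) in UTF-8,
# which is exactly the difference between the two filenames in the question.
printf 'på' | od -An -c                              # UTF-8 bytes: p 303 245
printf 'på' | iconv -f UTF-8 -t LATIN1 | od -An -c   # latin-1 bytes: p 345
```

The lone 345 byte is an invalid UTF-8 sequence, which is why ls on the UTF-8 system prints it as ?.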
Same file, different filename due to encoding problem?
1,345,313,914,000
Does the latest version of the Linux kernel (3.x) still use the Completely Fair Scheduler (CFS) for process scheduling which was introduced in 2.6.x ? If it doesn't, which one does it use, and how does it work? Please provide a source.
That's still the default, yes, though I would not call it the same, as it is constantly in development. You can read how it works with links to the code at http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git;a=blob;f=Documentation/scheduler/sched-design-CFS.txt
Does Linux kernel 3.x use the CFS process scheduler?
1,345,313,914,000
I am on a CrunchBang machine, trying to write a script that needs the OS install date as a reference. I searched and found this command:

ls -lct /etc | tail -1 | awk '{print $6, $7, $8}'

It prints:

Mar 31 21:24

I did not understand the tail -1 part, but was able to figure out that $6, $7 and $8 are the 6th, 7th and 8th fields of the last line that the command references. However, I realized that the year cannot be included, as the year was not displayed in the ls -lct output. Some people suggested finding the date /etc was created, and some suggested checking /var/log/syslog, etc. I thought these might be a little specific to the distro. What is your recommendation for a truly distro-agnostic way to find the OS install date?
If the assumption is that you have an ext{2,3,4} filesystem, and you formatted the root filesystem when you installed the OS (and didn't do upgrades from another OS without a wipe), you can use dumpe2fs: % dumpe2fs -h /dev/mapper/vg_desktop-lv_root 2>&1 |grep 'Filesystem created' Filesystem created: Sat Jul 23 04:28:07 2011
What is a distro-agnostic way determine the OS install date?
1,345,313,914,000
I want to learn SELinux to a high level, being able to understand the intricacies of domains, types and switching. What is the best way to go about this? I considered starting with Fedora and a good manual, although as Fedora ships with so many pre-written policies I found it somewhat overwhelming. Is there a good tutorial or learning distro suited to this purpose?
Fedora's SELinux documentation is a good place to start. While referring to Fedora 13, the SELinux User Guide has plenty of information about how SELinux works. I also recommend reading Dan Walsh's Blog, where he talks about SELinux and related issues. Lastly, drop into #fedora-selinux on the FreeNode IRC network, there are often people there who can provide some input.
What is the best way to learn SELinux? [closed]
1,345,313,914,000
Predictable network interface names are not supposed to change when hardware is added or removed. Isn't that the whole point of the naming scheme??? My wireless interface was named wlp3s0. I installed an ASUS Xonar DX 7.1 Channels PCI Express x1 Interface Sound Card in a free PCI slot and my wireless interface name changed to wlp5s0. The wireless card is in the same PCI slot that it was in before the sound card was installed, so why would the interface name change?! The mobo is a GIGABYTE GA-970A-UD3, and the wireless card is an ASUS PCE-N15. The system is running Arch Linux with a stock kernel. I'm looking for a reasonable explanation of why the interface name would change in this scenario. If there is not a good reason why the interface name would change, where do I file a bug report/who do I complain to? It's not a big deal and the only config I needed to change was my network profile for netctl. I just think if a "predictable" network interface name isn't predictable then they completely failed at their job and this naming scheme is useless garbage! /rant
Predictable network interface names are not supposed to change when hardware is added or removed. Isn't that the whole point of the naming scheme???

Long story short, this is nothing new; it's expected/intended. Therefore, you don't need to file a bug, unless you want to ask your PC maker to support Linux better (BIOS) or the hardware manufacturer (drivers). Some options if you'd like to improve the situation for hot-plugging devices and/or go back to the old naming scheme:

- Disable the new naming scheme for network devices with net.ifnames=0 on the kernel command line
- Add biosdevname=1 to the kernel command line to incorporate BIOS-provided index numbers into names
- Create or edit udev rules for custom names or altered naming schemes

To disable the assignment of fixed names, so that the unpredictable kernel names are used again, simply mask udev's .link file for the default policy:

ln -s /dev/null /etc/systemd/network/99-default.link

If you're using systemd and/or udev, the "predictable naming scheme" argument might be different than before. Based on the naming scheme of the WiFi interface, though, I am assuming that you are using a system with systemd. You can try appending the following boot parameter to the kernel command line to use the "old" naming convention of network devices. However, I'm not entirely certain what, if any, additional effects this may have other than retaining the naming scheme for network devices.

net.ifnames=0

Adding it to /etc/default/grub can facilitate the persistence and reuse of this parameter; again, assuming you're using grub2:

GRUB_CMDLINE_LINUX="net.ifnames=0"

If udev uses device firmware, location and other options when determining device names, then perhaps the location or something else may have changed internally, depending on how the relevant devices interact with each other. This seems not as relevant here, as the devices are a WiFi adapter and a soundcard.
Nevertheless, it may be related to the underlying bus structure; which does seem relevant, as the devices are both connected to PCI slots.

Additional info from the Fedora docs, "8.1. Naming Schemes Hierarchy":

By default, systemd will name interfaces using the following policy to apply the supported naming schemes:

Scheme 1: Names incorporating firmware- or BIOS-provided index numbers for on-board devices (example: eno1) are applied if that information from the firmware or BIOS is applicable and available, else falling back to scheme 2.
Scheme 2: Names incorporating firmware- or BIOS-provided PCI Express hotplug slot index numbers (example: ens1) are applied if that information from the firmware or BIOS is applicable and available, else falling back to scheme 3.
Scheme 3: Names incorporating the physical location of the connector of the hardware (example: enp2s0) are applied if applicable, else falling directly back to scheme 5 in all other cases.
Scheme 4: Names incorporating the interface's MAC address (example: enx78e7d1ea46da) are not used by default, but are available if the user chooses.
Scheme 5: The traditional unpredictable kernel naming scheme is used if all other methods fail (example: eth0).

This policy, the procedure outlined above, is the default. If the system has biosdevname enabled, it will be used. Note that enabling biosdevname requires passing biosdevname=1 as a command-line parameter, except in the case of a Dell system, where biosdevname will be used by default as long as it is installed. If the user has added udev rules which change the name of the kernel devices, those rules will take precedence.

Additional resources:

Q&A on AskUbuntu
Writing udev Rules: Examples
ArchWiki: udev
Systemd: Predictable Network Interface Names
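If you prefer to pin the name yourself, a minimal systemd .link file can do it. The MAC address and the chosen name below are placeholders; use your card's actual address as shown by ip link:

```
# /etc/systemd/network/10-wifi.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=wlan0
```

Because it matches on the MAC address rather than the bus position, the name survives reshuffling of PCI slots when other hardware is added.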
Why did the interface name of my wireless card change when I added a sound card?
1,345,313,914,000
I am experiencing quite heavy audio skipping when streaming audio to my bluetooth speaker (Sony SRS-X3) using pulseaudio and Arch Linux on a T430. I think it is related to a known bug [1]. The speaker works flawlessly with Android. $ sudo lspci -nnk | grep -iA2 net > Network controller [0280]: Intel Corporation Centrino Ultimate-N 6300 [8086:4238] (rev 3e) > Subsystem: Intel Corporation Centrino Ultimate-N 6300 3x3 AGN [8086:1111] > Kernel driver in use: iwlwifi $ sudo lsusb | grep Blue > 0a5c:21e6 Broadcom Corp. BCM20702 Bluetooth 4.0 [ThinkPad] Does anyone have an idea on how to reduce/prevent the skipping? Information that helps me understand the problem is also appreciated. I suspect it is related to interference with WiFi. There is less skipping with WiFi off or deep at night (less traffic). How does Android handle this? My research turned up the Linux Frequency Broker [2]. Is it implemented? [1] https://bugs.launchpad.net/ubuntu/+source/pulseaudio/+bug/405294 [2] https://wireless.wiki.kernel.org/en/developers/frequencybroker
It may help to disable the Bluetooth coexistence parameter of the iwlwifi module to see if conditions improve. Open a terminal window and enter echo "options iwlwifi bt_coex_active=0" | sudo tee -a /etc/modprobe.d/iwlwifi.conf Reboot
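After rebooting, the live value of the parameter can be checked under /sys. This is a sketch; on a machine without the Intel wireless driver loaded the file simply won't exist, which the fallback message covers:

```shell
# "0" means Bluetooth coexistence is disabled for iwlwifi.
cat /sys/module/iwlwifi/parameters/bt_coex_active 2>/dev/null \
    || echo "iwlwifi not loaded on this machine"
```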
How to prevent bluetooth audio skipping with the A2DP profile on Arch Linux?
1,345,313,914,000
I'm running a Laptop with Arch Linux, X.org and i3. Due to a broken LCD panel, I would like to disable/ignore the left ~228 Pixels of the screen until I have time to get it repaired. So far, I've tried using a non-standard resolution and then adding an offset, but had no success. Is there any simple solution for this?
You can use xrandr. I have tested this briefly on a single monitor. First look at the current resolution and subtract 228 from the horizontal value: with the old resolution x by y, use X=x-228 and Y=y below. (Note that in the command below the lower-case x in XxY is a literal x.) Run xrandr to get the output name. Then xrandr --fb XxY --output OUTPUT_NAME --transform 1,0,-228,0,1,0,0,0,1
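To make the arithmetic concrete, here is a sketch assuming a 1920x1080 panel and an output named LVDS1 (both are assumptions; substitute the values xrandr reports for your screen). The xrandr command is only printed, not executed:

```shell
W=1920; H=1080; DEAD=228        # assumed panel size; 228 dead pixels on the left
NEWW=$(( W - DEAD ))
echo "new framebuffer: ${NEWW}x${H}"
echo "xrandr --fb ${NEWW}x${H} --output LVDS1 --transform 1,0,-${DEAD},0,1,0,0,0,1"
```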
How can I disable a part of the screen in X.Org
1,345,313,914,000
I am reading the man pages of mount and clone. I understand that mount is used to add a directory hierarchy to a mount point (a directory). In clone's man page, under the CLONE_NEWNS section, they refer to mounts as the file hierarchy as seen by a process. My question is: is the term 'mount' being used to refer to the individual directories in the directory hierarchy seen by a process, and 'mount points' used to refer to the directories where file systems can be mounted?
I'd express it like this: "mount points": locations in the file hierarchy where file systems have been mounted to "mounts": the set of mounted file systems / the set of locations in the file hierarchy where file systems have been mounted to "to mount": the action of mounting a file system into the file hierarchy The view a process has of the file hierarchy does include the mounts insofar as it sees the file hierarchy. This includes those parts where file systems have been mounted into that hierarchy.
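The distinction is directly visible in /proc/self/mounts, the per-process view of the mounts: each line is one mounted file system, and the second field is its mount point:

```shell
# Fields per line: source  mount-point  fstype  options  dump  pass
head -n 3 /proc/self/mounts
```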
Confusion regarding the term 'mount' in Linux
1,345,313,914,000
Are there any man pages on the /sys/ directory and how devices are setup? I'm hoping that there may be something similar to man proc, but can't really find anything to push me in the right direction.
How devices are "set up", in general, has nothing to do with /sys. Most likely you are looking for information about udev or another hotplugging daemon. You can find authoritative information about /sys (for which, the underlying filesystem is called sysfs) in the kernel documentation.
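A quick way to get oriented is simply to browse sysfs itself; each top-level directory (devices, bus, class, module, ...) corresponds to a section of the kernel's sysfs documentation:

```shell
ls /sys
```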
/sys/ documentation?
1,345,313,914,000
If I do the following command on my standard Linux Mint installation: comp ~ $ ps -eo rtprio,nice,cmd RTPRIO NI CMD ... 99 - [migration/0] 99 - [watchdog/0] 99 - [migration/1] - 0 [ksoftirqd/1] 99 - [watchdog/1] I get some of the processes with realtime priority of 99. What is the meaning of rtprio in a non real time Linux? Does this mean that if I just run a program with rtprio 99 it runs real time? Where do real time OSes fall in this story?
"Real time" means processes that must be finished by their deadlines, or Bad Things (TM) happen. A real-time kernel is one in which the latencies introduced by the kernel are strictly bounded (subject to possibly misbehaving hardware which just doesn't answer on time), and in which almost any activity can be interrupted to let higher-priority tasks run. In the case of Linux, the vanilla kernel isn't set up for real time (it has a cost in performance, and the realtime patches floating around depend on some hacks that the core developers consider gross). Besides, running a real-time kernel on a machine that just can't keep up (most personal machines) makes no sense. That said, the vanilla kernel handles real-time priorities, which gives those tasks higher priority than normal ones, and such tasks will generally run until they voluntarily yield the CPU. This gives better response to those tasks, but means that other tasks get held off.
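You can watch which tasks hold real-time priorities with ps, and start one yourself with chrt; the chrt invocation is shown only as a comment here, since assigning a real-time priority needs root or CAP_SYS_NICE:

```shell
# RTPRIO is "-" for ordinary SCHED_OTHER tasks, a number for real-time ones.
ps -eo rtprio,ni,comm | head -n 5
# To launch a task under SCHED_FIFO at priority 50 (requires privileges):
#   chrt -f 50 some_command
```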
Real time priorities in non real time OS
1,345,313,914,000
I have a linux DHCP server running on my network. I recently found out that I can assign specific IP addresses to clients based on their MAC address by modifying the dhcpd.conf file. Now is there something I can do from the server side that would invalidate a specific client's lease, forcing it to get a new one from the server (after I have added entries in dhcpd.conf), without releasing/renewing on the client side?
The answer to this depends on how you previously configured the DHCP server. Normal DHCP behaviour is this: The lease is given a lease time, perhaps 7 days. The client machine starts requesting a new lease halfway through the current lease period. The client machine only stops using the IP address when it either gets a new lease from the same DHCP server or the lease has expired. The consequence of this is that you need to start planning your network maintenance. When you are going to make a change that will require new IP settings, about one "lease time" ahead of the change you need to reduce the lease time down to a more dynamic setting (e.g. 30 minutes). That way changes in DHCP will be rolled out smoothly, and then when you are ready, you increase the lease time back to a more sensible value. Do not leave it at 30 minutes, as that would mean that should the DHCP server fail, half your machines would be connectionless within 15 minutes. You can force through a change in lease by asking everyone to reboot their computers (or, for the more technically capable, by releasing and then renewing their leases).
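For reference, the relevant ISC dhcpd.conf knobs look roughly like this. All values, and the MAC/IP pair, are made-up examples, not taken from the question:

```
default-lease-time 1800;   # 30 minutes while rolling out changes
max-lease-time 3600;

host printer1 {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.1.50;
}
```

Remember to restart dhcpd after editing, and to raise the lease times again once the migration is done.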
Force dhcp client to get a new lease
1,345,313,914,000
From the man page, I know you can use raw sockets, but I don’t understand what is meant by “bind to any address for transparent proxying”. I know there’s another capability required to bind to privileged ports, so I know you can’t bind to any port. Is there a way to tell Linux that you’re binding on an address for proxying?
Quoting from this Security SE Answer: CAP_NET_RAW: Any kind of packet can be forged, which includes faking senders, sending malformed packets, etc., this also allows to bind to any address (associated to the ability to fake a sender this allows to impersonate a device, legitimately used for "transparent proxying" as per the manpage but from an attacker point-of-view this term is a synonym for Man-in-The-Middle),
What does CAP_NET_RAW do?
1,345,313,914,000
I have a chroot setup and I've been running graphical applications from it with no problem. The only setup I've done is set DISPLAY=:0 and it works. However I always thought Unix domain sockets were used for X11 so I couldn't figure out why this was working. I did a little digging and it turns out I was right. My X.org server is launched with the -nolisten tcp flag and I have a unix domain socket in /tmp/.X11-unix yet somehow my chroot can launch graphical applications on that X11 display without any socket. I never hard linked the socket to the chroot, in fact they're not even on the same file system. /tmp/.X11-unix is completely empty on the chroot. How is it possible that my chroot can launch graphical applications on my X11 display?
The X server also supports abstract sockets, which work identically to UNIX sockets, and have pathnames similar to UNIX sockets, but the pathnames start with a NUL character. See the documentation for "abstract" in the unix(7) manpage. An abstract socket effectively exists in all filesystem namespaces and chroots; you don't have to link anything into the chroot or namespace to use it. Perhaps the X server and client are both using an abstract socket to communicate? X clients using the standard X client libraries will automatically attempt to use an abstract socket, before they try to use the default UNIX socket. In libxcb, see _xcb_open and _xcb_open_abstract in src/xcb_util.c.
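Abstract sockets can be listed from userspace; in /proc/net/unix their path field starts with @. On a desktop system you would typically see @/tmp/.X11-unix/X0 here; in a minimal environment the list may be empty, which the `|| true` allows for:

```shell
head -n 1 /proc/net/unix            # column headers
grep '@' /proc/net/unix || true     # abstract-namespace sockets, if any
```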
X.org working with no socket in chroot?
1,345,313,914,000
Is it possible to disable L1 and/or L2 cache on Ubuntu 14.04 (preferably in a higher level language like Python)? If so, how? In addition, will disabling the cache differ significantly between different architectures? If so, I'm more interested in an ARM Cortex-A15. EDIT While researching how to disable the cache, I did find out about the "drop_caches" file in /proc/sys/vm/ from the kernel.org documentation "Writing to this will cause the kernel to drop clean caches, as well as reclaimable slab objects like dentries and inodes. Once dropped, their memory becomes free." ... "This file is not a means to control the growth of the various kernel caches (inodes, dentries, pagecache, etc...) These objects are automatically reclaimed by the kernel when memory is needed elsewhere on the system." This does not seem like what I'm looking for as not only does this not seem like it would disable the cache, I thought that virtual memory resides within the operating system and not on the hardware. My goal is to disable the cache so the desired memory must be sought elsewhere, such as within the RAM. EDIT To clarify, I understand what disabling the cache will do to the system. However, it is a common technique used in space applications to increase reliability for safety-critical applications. Here are some resources that document this phenomenon: Reducing embedded software radiation-induced failures through cache memories Guideline for Ground Radiation Testing of Microprocessors in the Space Radiation Environment There are even books on the topic: Ionizing Radiation Effects in Electronics: From Memories to Imagers
You can not do it directly in Python, as you need a kernel module to do that (and root rights to load that module). See http://lxr.free-electrons.com/source/arch/arm/mm/cache-v7.S#L21 for what it takes to invalidate the L1 cache (invalidate, not disable). Different CPU architectures (e.g. x86 vs ARM) require different assembly code (CPU instructions) to disable the cache. I'm not sure whether the Linux kernel has any facility to disable the L1/L2/L3/L4 caches, and if it did, I believe it would be used only internally for short periods of time, as the CPU is slow without these caches. See Is there a way to disable CPU cache (L1/L2) on a Linux system? for a link on how you can disable the cache on an x86/x64 system (you need to change the cr0 register). For ARM check Cache disabled behavior. I'm not sure that you completely understand what the CPU caches do. Can you please elaborate why you want to cripple the performance of your system?
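While disabling the caches needs kernel/assembly code, you can at least inspect the cache geometry from userspace before experimenting; `getconf -a` is a glibc extension, so this sketch assumes a glibc-based system:

```shell
# Prints LEVEL1_ICACHE_SIZE, LEVEL1_DCACHE_LINESIZE, etc. (0 if unknown)
getconf -a | grep -i cache | head
```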
How to disable processor's L1 and L2 caches?
1,345,313,914,000
I have an 8G usb stick (I'm on linux Mint), and I'm trying to copy a 5.4G file into it, but getting No space left on device The filesize of the copied file before failing is always 3.6G An output of the mounted stick shows.. df -T /dev/sdc1 ext2 7708584 622604 6694404 9% /media/moo/ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9fe df -h /dev/sdc1 7.4G 608M 6.4G 9% /media/moo/ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9fe du -h --max-depth=1 88K ./.ssh ls -h myfile -rw-r--r-- 1 moo moo 5.4G May 26 09:35 myfile So a 5.4G file, won't seem to go on an 8G usb stick. I thought there wasn't issues with ext2, and it was only problems with fat32 for file sizes and usb sticks ? Would changing the formatting make any difference ? Edit: Here is an report from tunefs for the drive sudo tune2fs -l /dev/sdd1 Filesystem volume name: Last mounted on: /media/moo/ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9fe Filesystem UUID: ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9fe Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: ext_attr resize_inode dir_index filetype sparse_super large_file Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: not clean with errors Errors behavior: Continue Filesystem OS type: Linux Inode count: 489600 Block count: 1957884 Reserved block count: 97894 Free blocks: 970072 Free inodes: 489576 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 477 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8160 Inode blocks per group: 510 Filesystem created: Mon Mar 2 13:00:18 2009 Last mount time: Tue May 26 12:12:59 2015 Last write time: Tue May 26 12:12:59 2015 Mount count: 102 Maximum mount count: 26 Last checked: Mon Mar 2 13:00:18 2009 Check interval: 15552000 (6 months) Next check after: Sat Aug 29 14:00:18 2009 Lifetime writes: 12 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Default 
directory hash: half_md4 Directory Hash Seed: 249823e2-d3c4-4f17-947c-3500523479fd FS Error count: 62 First error time: Tue May 26 09:48:15 2015 First error function: ext4_mb_generate_buddy First error line #: 757 First error inode #: 0 First error block #: 0 Last error time: Tue May 26 10:35:25 2015 Last error function: ext4_mb_generate_buddy Last error line #: 757 Last error inode #: 0 Last error block #: 0
Your 8GB stick has approximately 7.5 GiB and even with some file system overhead should be able to store the 5.4GiB file. You can use tune2fs to check the file system status and properties: tune2fs -l /dev/<device> By default 5% of the space is reserved for the root user. Your output lists 97894 reserved blocks, which corresponds to approximately 382 MiB and seems to be the default value. You might want to adjust this value using tune2fs if you don't need that much reserved space. Nevertheless, even with those 382 MiB the file should fit on the file system. Your tune2fs output shows an unclean file system with errors, so please run fsck on the file system. This will fix the errors and possibly place some files in the lost+found directory. You can delete them if you're not intending to recover the data. This should fix the file system, and copying the file will succeed.
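As a sanity check on the numbers in the tune2fs dump (4096-byte blocks), simple shell arithmetic gives the sizes; note the free-block count was read after the errors occurred, so treat it with suspicion until fsck has run:

```shell
bs=4096
echo "reserved for root: $(( 97894  * bs / 1024 / 1024 )) MiB"
echo "reported free:     $(( 970072 * bs / 1024 / 1024 )) MiB"
```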
Unable to copy large file onto ext2 usb stick [closed]
1,345,313,914,000
Is there currently a generic command that will "pivot" input. e.g. #labeled.file name: bob title: code monkey name: joe title: pointy haired is converted to: name title bob code monkey joe pointy haired and vice-versa
I'm not sure there's something in coreutils that can do this, but it seems your question has been asked before by people not necessarily interested in an existing tool like you seem to be. The following links may be interesting to you as a last resort in case you can't find a tool that already does this. Transpose a file in Bash (from Stack Overflow) Transposing rows and columns (from this site) For what it's worth, you may want to take a look at the GNU coreutils manual, especially the 4th section
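In the absence of a dedicated coreutils tool, a small awk script is the usual workaround. This sketch transposes whitespace-separated cells, so multi-word values like "code monkey" in your example would need a different field separator (e.g. awk -F ': '):

```shell
printf 'a b\nc d\n' | awk '
    { for (i = 1; i <= NF; i++) cell[i, NR] = $i; nf = (NF > nf ? NF : nf) }
    END {
        for (i = 1; i <= nf; i++)
            for (j = 1; j <= NR; j++)
                printf "%s%s", cell[i, j], (j < NR ? " " : "\n")
    }'
# prints:
# a c
# b d
```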
Command to transpose (swap rows and columns of) a text file [duplicate]
1,345,313,914,000
I am experimenting with mount options for a program I am writing. I am running Linux Mageia 2. I added the following line to /etc/fstab /dev/sr0 /mem auto user,noauto, 0 0 and I removed all other entries regarding /dev/sr0, which is the device for my DVD drive. Then, acting as a normal user, I can successfully $ mount /dev/sr0 but I then get an error message ("Only root can ...") for $ umount /dev/sr0 Of course, the device is not busy: I do nothing between mount and umount. Added after solving: If you are interested only in solving that problem, you can skip the rest of the question and go directly to the accepted answer. The rest of the question is about my work to find a solution or better document the problem. However, there is a post-mortem section at the very end of the question that complements the answer with my own remarks. Ownership of files: $ ls -ld /mem /dev/sr0 brw-rw----+ 1 root cdrom 11, 0 mai 14 01:01 /dev/sr0 drwxr-xr-x 12 root root 4096 janv. 21 22:34 /mem/ And I am a member of the group "cdrom". I have the very same problem when mounting a file system image with a loop device. However, everything works fine when I replace "user" with the option "users", which seems to indicate that the system is confused when remembering who mounted the file system. The first reply by Rahul Patil does not bring further insight since it is essentially equivalent to what I used, if my understanding of the umounting procedure is correct. However it led me to think further about this process (hence one upvote) and to get more details. This was even more supported by the comment of Hauke Laging. As I understand it, summarily, the umount command takes its argument (a device or a mount point), tries to identify applicable entries in /etc/mtab, and then checks with /etc/fstab whether it can execute the request. According to the man page for mount(8), the name of the mounting user [should be] written to mtab so that he can unmount the filesystem again. 
When I check /etc/mtab after mounting, there is no such information written that I can find. I do not know how it is supposed to be stored and what it should look like. Hence the problem is really with mount rather than with umount. To be absolutely sure that the problem is not that another /etc/fstab entry is used for mounting (which would explain ignorance of the "user" option), I deleted all other entries in /etc/fstab, keeping only the single line at the beginning of my question. Then I repeated the mount-umount sequence, unfortunately with the same result. $ grep sr0 /etc/mtab /dev/sr0 /mem udf ro,nosuid,nodev,noexec,relatime,utf8 0 0 $ mount | grep sr0 /dev/sr0 on /mem type udf (ro,nosuid,nodev,noexec,relatime,utf8) Hauke Laging asked for ls -l /etc/mtab, which I thought was an error, assuming he was really asking for cat /etc/mtab. But I did it anyway ... $ ls -l /etc/mtab lrwxrwxrwx 1 root root 12 juin 25 2012 /etc/mtab -> /proc/mounts $ ls -l /proc/mounts lrwxrwxrwx 1 root root 11 mai 19 13:21 /proc/mounts -> self/mounts $ ls -l /proc/self/mounts -r--r--r-- 1 myself mygroup 0 mai 19 13:22 /proc/self/mounts This last information surprised me. Though I am essentially the only user on that computer, I see no reason why this file should belong to me, or to any other user than root itself. Many thanks Hauke, but why did you ask that question? Actually the file does not belong to me. I guess it must be a virtual file. I repeated the request as user "friend", and then as "root": $ ls -l /proc/self/mounts -r--r--r-- 1 friend users 0 mai 19 14:10 /proc/self/mounts # ls -l /proc/self/mounts -r--r--r-- 1 root root 0 mai 19 14:10 /proc/self/mounts I would welcome any suggestion as to what the problem might be, or for experiments to attempt. Thanks Post-mortem: here are some final remarks after the problem was solved by Hauke Laging. I followed on the web the lead of Hauke's explanation. Apparently this is an old issue. 
It is explained in an old document from October 2000, mentioning some problems with other options, but not user. Hopefully some of the kernel reliability issues are now corrected. The issue is briefly mentioned in the bug section of the mount man page, but not in enough detail, especially regarding alternative setups and the effect on options. However, lost in that very long man page is the following information: When the proc filesystem is mounted (say at /proc), the files /etc/mtab and /proc/mounts have very similar contents. **The former has somewhat more information, such as the mount options used**, but is not necessarily up-to-date (cf. the -n option below). It is possible to replace /etc/mtab by a symbolic link to /proc/mounts, and especially when you have very large numbers of mounts things will be much faster with that symlink, but **some information is lost that way, and in particular using the "user" option will fail**. It would certainly have been useful to have a hint about this where the option user is described, which is the first place I looked.
The problem is that your /etc/mtab is not a file but a symlink to /proc/mounts. This has advantages but also the disadvantage that user does not work. You already guessed right the reason for that: "the system is confused when remembering who mounted the file system". This information is written to mtab, cannot be written there in your case though. The kernel doesn't care (doesn't even know) about user mounts (this is a userspace feature). Thus this info is not contained in /proc/mounts. Do this: cd /etc cp mtab mtab.file rm mtab mv mtab.file mtab umount as user should work after you have mounted the volume again.
Option "user" work for mount, not for umount
1,345,313,914,000
I'm developing for a specific TI ARM processor with custom drivers that made it to the kernel. I'm trying to migrate from 2.6.32 to 2.6.37, but the structure changed so much I will have weeks of work to upgrade my code. For example, my chip is the dm365, which comes with video processing drivers. Now most of the old drivers which were directly exposed to me go through v4l2, which might make more sense. TI provides very little information for those upgrades. How am I supposed to keep up with the changes? When I google for specific file names, I seldom get a few patches with fewer comments on what changed and why and how old relates to new.
If you select a kernel to track, be sure to select one that is tagged for long-term support. But sooner or later you will have to move on...
How am I supposed to keep up with kernels as a developer?
1,345,313,914,000
Is it OK for two or more processes concurrently read/write to the same unix socket? I've done some testing. Here's my sock_test.sh, which spawns 50 clients each of which concurrently write 5K messages: #! /bin/bash -- SOC='/tmp/tst.socket' test_fn() { soc=$1 txt=$2 for x in {1..5000}; do echo "${txt}" | socat - UNIX-CONNECT:"${soc}" done } for x in {01..50}; do test_fn "${SOC}" "Test_${x}" & done I then create a unix socket and capture all traffic to the file sock_test.txt: # netcat -klU /tmp/tst.socket | tee ./sock_test.txt Finally I run my test script (sock_test.sh) and monitor on the screen all 50 workers doing their job. At the end I check whether all messages have reached their destination: # ./sock_test.sh # sort ./sock_test.txt | uniq -c To my surprise there were no errors and all 50 workers have successfully sent all 5K messages. I suppose I must conclude that simultaneous writing to unix sockets is OK? Was my concurrency level too low to see collisions? Is there something wrong with my test method? How then I test it properly? EDIT Following the excellent answer to this question, for those more familiar with python there's my test bench: #! 
/usr/bin/python3 -u # coding: utf-8 import socket from concurrent import futures pow_of_two = ['B','KB','MB','GB','TB'] bytes_dict = {x: 1024**pow_of_two.index(x) for x in pow_of_two} SOC = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) SOC.connect('/tmp/tst.socket') def write_buffer( char: 'default is a' = 'a', sock: 'default is /tmp/tst.socket' = SOC, step: 'default is 8KB' = 8 * bytes_dict['KB'], last: 'default is 2MB' = 2 * bytes_dict['MB']): print('## Dumping to the socket: {0}'.format(sock)) while True: in_memory = bytearray([ord(char) for x in range(step)]) msg = 'Dumping {0} bytes of {1}' print(msg.format(step, char)) sock.sendall(bytes(str(step), 'utf8') + in_memory) step += step if last % step >= last: break def workers(concurrency=5): chars = concurrency * ['a', 'b', 'c', 'd'] with futures.ThreadPoolExecutor() as executor: for c in chars: executor.submit(write_buffer, c) def parser(chars, file='./sock_test.txt'): with open(file=file, mode='rt', buffering=8192) as f: digits = set(str(d) for d in range(0, 10)) def is_digit(d): return d in digits def printer(char, size, found, junk): msg = 'Checking {}, Expected {:8s}, Found {:8s}, Junk {:8s}, Does Match: {}' print(msg.format(char, size, str(found), str(junk), size == str(found))) char, size, found, junk = '', '', 0, 0 prev = None for x in f.read(): if is_digit(x): if not is_digit(prev) and prev is not None: printer(char, size, found, junk) size = x else: size += x else: if is_digit(prev): char, found, junk = x, 1, 0 else: if x==char: found += 1 else: junk += 1 prev = x else: printer(char, size, found, junk) if __name__ == "__main__": workers() parser(['a', 'b', 'c', 'd']) Then in the output you may observe lines like the following: Checking b, Expected 131072 , Found 131072 , Junk 0 , Does Match: True Checking d, Expected 262144 , Found 262144 , Junk 0 , Does Match: True Checking b, Expected 524288 , Found 219258 , Junk 0 , Does Match: False Checking d, Expected 524288 , Found 219258 , Junk 0 , Does 
Match: False Checking c, Expected 8192 , Found 8192 , Junk 0 , Does Match: True Checking c, Expected 16384 , Found 16384 , Junk 0 , Does Match: True Checking c, Expected 32768 , Found 32768 , Junk 610060 , Does Match: True Checking c, Expected 524288 , Found 524288 , Junk 0 , Does Match: True Checking b, Expected 262144 , Found 262144 , Junk 0 , Does Match: True You can see that payload in some cases (b, d) is incomplete, however missing fragments are received later (c). Simple math proves it: # Expected b + d = 524288 + 524288 = 1048576 # Found b,d + extra fragment on the other check on c b + d + c = 219258 + 219258 + 610060 = 1048576 Therefore simultaneous writing to unix sockets is OK NOT OK.
That is a very short test line. Try something larger than the buffer size used by either netcat or socat, and sending that string in multiple times from the multiple test instances; here's a sender program that does that: #!/usr/bin/env expect package require Tcl 8.5 set socket [lindex $argv 0] set character [string index [lindex $argv 1] 0] set length [lindex $argv 2] set repeat [lindex $argv 3] set fh [open "| socat - UNIX-CONNECT:$socket" w] # avoid TCL buffering screwing with our results chan configure $fh -buffering none set teststr [string repeat $character $length] while {$repeat > 0} { puts -nonewline $fh $teststr incr repeat -1 } And then a launcher to call that a bunch of times (25) using different test characters of great length (9999) a bunch of times (100) to hopefully blow well past any buffer boundary: #!/bin/sh # NOTE this is a very bad idea on a shared system SOCKET=/tmp/blabla for char in a b c d e f g h i j k l m n o p q r s t u v w x y; do ./sender -- "$SOCKET" "$char" 9999 100 & done wait Hmm, I don't have a netcat hopefully nc on Centos 7 will suffice: $ nc -klU /tmp/blabla > /tmp/out And then elsewhere we feed data to that $ ./launcher Now our /tmp/out will be awkward as there are no newlines (some things buffer based on newline so newlines can influence test results if that is the case, see setbuf(3) for the potential for line-based buffering) so we need code that looks for a change of a character, and counts how long the previous sequence of identical characters was. #include <stdio.h> int main(int argc, char *argv[]) { int current, previous; unsigned long count = 1; previous = getchar(); if (previous == EOF) return 1; while ((current = getchar()) != EOF) { if (current != previous) { printf("%lu %c\n", count, previous); count = 0; previous = current; } count++; } printf("%lu %c\n", count, previous); return 0; } Oh boy C! Let's compile and parse our output... 
$ make parse cc parse.c -o parse $ ./parse < /tmp/out | head 49152 b 475136 a 57344 b 106496 a 49152 b 49152 a 38189 r 57344 b 57344 a 49152 b $ Uh-oh. That don't look right. 9999 * 100 should be 999,900 of a single letter in a row, and instead we got...not that. a and b got started early, but it looks like r somehow got some early shots in. That's job scheduling for you. In other words, the output is corrupt. How about near the end of the file? $ ./parse < /tmp/out | tail 8192 l 8192 v 476 d 476 g 8192 l 8192 v 8192 l 8192 v 476 l 16860 v $ echo $((9999 * 100 / 8192)) 122 $ echo $((9999 * 100 - 8192 * 122)) 476 $ Looks like 8192 is the buffer size on this system. Anyways! Your test input was too short to run past buffer lengths, and gives a false impression that multiple client writes are okay. Increase the amount of data from clients and you will see mixed and therefore corrupt output.
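A related point of comparison: POSIX does guarantee atomicity for writes to pipes and FIFOs up to PIPE_BUF bytes, but no such guarantee exists for SOCK_STREAM UNIX sockets, which is exactly why the long writes above interleave once they exceed the buffer size:

```shell
getconf PIPE_BUF /     # POSIX guarantees at least 512; Linux typically reports 4096
```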
Concurrently reading/writing to the same unix socket?
1,472,722,116,000
With stat 8.13 on a Debian based Linux - among many others - the following FORMAT directives (--format=) are offered: In combination with --file-system (-f): %s Block size (for faster transfers) %S Fundamental block size (for block counts) Question(s): What exactly is meant? My best guess is that %s, %S equals %b (display in Blocks) and %B (display block's size), where the latter are for files and the first two are for file systems. Is that correct?
%S fundamental block size (for block counts) tells you how big each block is on the file system. On most file systems, this is the smallest amount of space any file can take up. Each file uses a multiple of this. For example, $ echo > a # create a file containing a single byte $ du -h a # see how much disk space it's using 4.0K a $ stat -f -c '%S' . # see what stat thinks the block size is 4096 $ tune2fs -l /dev/mydrive | grep '^Block size' 4096 I'm not 100% sure it always works like this. For example, I expect it could also decide to print 512 or 1024, even if the underlying block size is different, provided stat -c %b FILE * stat -f -c %S FILE = du --block-size=1 FILE. The exact implementation would depend on the file system. %s block size (for faster transfers) suggests how many bytes you should read at a time if you're copying large files, for example what you should use as the bs (blocksize) parameter when using dd. But on the systems I checked, it always prints 4096, even where larger values might be faster. See Is there a way to determine the optimal value for the bs parameter to dd? for more discussion of that. Technically, this information (and all the information from stat -f) comes from the statvfs system call. %s corresponds to the f_bsize field, and %S is f_frsize. So you could look into their precise meanings starting with the statvfs man page unsigned long f_bsize; /* Filesystem block size */ unsigned long f_frsize; /* Fragment size */
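For a quick side-by-side of the two fields on your own machine (on many Linux file systems both print 4096):

```shell
stat -f -c 'fundamental=%S transfer=%s' .
```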
stat file system sizes
1,472,722,116,000
We have a company RDS (Remote Desktop Server) TSG (Terminal Services Gateway) server, which allows employees to connect to an RDS session from home, so they can see a work RDS desktop from home. This works fine on their home computers using windows 7 with the following settings:                                                   However, some users have Linux at home and are trying to use freerdp 1.2.0. I've tested this on a laptop connected to the internal company LAN using the following command and it works fine: $ xfreerdp /f /rfx /cert-ignore /v:farm.company.com /d:company.com /u:administrator /p: However, if I try to use that command on a laptop, which is not using the company LAN connection, i.e. a home connection, I get this: freerdp_set_last_error 0x2000C Error: protocol security negotiation or connection failure So I'm now trying to use some of the new TSG commands in freerdp 1.2.0 as follows, but that also doesn't work. I can only see 4 TSG related commands: /g:<gateway>[:port] Gateway Hostname /gu:[<domain>&#93;<user> or <user>[@<domain>] Gateway username /gp:<password> Gateway password /gd:<domain> Gateway domain I read somewhere that I only really need to use /g in my particular scenario, I may have read that incorrectly. So when I try: $ xfreerdp /f /rfx /cert-ignore /v:farm.company.com /d:company.com /g:rds.company.com /u:administrator /p: That will give me: Could not open SAM file! Could not open SAM file! Could not open SAM file! Could not open SAM file! rts_connect: error! Status Code: 401 HTTP/1.1 401 Unauthorized Content-Type: text/plain Server: Microsoft-IIS/7.5 WWW-Authenticate: Negotiate WWW-Authenticate: NTLM WWW-Authenticate: Basic realm="rds.company" X-Powered-By: ASP.NET Date: Wed, 02 Jul 2014 12:36:41 GMT Content-Length: 13 Considering the original command: $ xfreerdp /f /rfx /cert-ignore /v:farm.company.com /d:company.com /u:administrator /p: This works on a Linux laptop, which is connected to the network within the company LAN. 
Why can't I use a similar command (with the extra TSG parameters) on the same Linux laptop, which is connected to the internet at home? Am I not using the new TSG switches correctly?
You need to make sure that the layout of the command you are typing is correct. If one thing is out of place or in the wrong position, you will get an error no matter what you try. Instead of the command you tried to run:

$ xfreerdp /f /rfx /cert-ignore /v:farm.company.com /d:company.com /g:rds.company.com /u:administrator /p:

you need to type the command like this:

xfreerdp /cert-ignore /v:WORKSTATION /d:DOMAIN /u:USERNAME /p:PASSWORD /g:GATEWAY

Now, if you are not using the same account for the terminal server as for the RD gateway, then you will have to run this instead:

xfreerdp /v:WORKSTATION /d:DOMAIN /u:USERNAME /p:PASSWORD /g:GATEWAY /gd:GATEWAYDOMAIN /gu:GATEWAYUSERNAME /gp:GATEWAYPASSWORD

The reason this is needed is that if the connecting user is not already saved in the RD gateway as an authorized account, the gateway will refuse to connect that user. The main gateway account, which has full access rights, can then force the connection through, and that should complete the full connection. Also make sure that the router you are using at the company is configured to allow remote connections from outside the office; a router that was never set up, or was configured incorrectly, will also cause connection failures.
Can't connect to an external RDS TSG server from home
1,472,722,116,000
I want to create a fixed size Linux ramdisk which never swaps to disk. Note that my question is not "why" I want to do this (let's say, for example, that it's for an educational purpose or for research): the question is how to do it. As I understand it, ramfs cannot be limited in size, so it doesn't fit my requirement of having a fixed size ramdisk. It also seems that tmpfs may be swapped to disk, so it doesn't fit my requirement of never swapping to disk. How can you create a fixed size Linux ramdisk which never swaps to disk? Is it possible, for example, to create a tmpfs inside a ramfs (would such a solution fit both my requirements), and if so, how? Note that performance is not an issue, and the ramdisk getting full and triggering "disk full" errors isn't an issue either.
This is just a thought and has more than one downside, but it might be usable enough anyway. How about creating an image file and a filesystem inside it on top of ramfs, then mounting the image as a loop device? That way you could limit the size of the ramdisk by simply limiting the image file size. For example:

$ mkdir -p /ram/{ram,loop}
$ mount -t ramfs none /ram/ram
$ dd if=/dev/zero of=/ram/ram/image bs=2M count=1
1+0 records in
1+0 records out
2097152 bytes (2.1 MB) copied, 0.00372456 s, 563 MB/s
$ mke2fs /ram/ram/image
mke2fs 1.42 (29-Nov-2011)
/ram/ram/image is not a block special device.
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
256 inodes, 2048 blocks
102 blocks (4.98%) reserved for the super user
First data block=1
Maximum filesystem blocks=2097152
1 block group
8192 blocks per group, 8192 fragments per group
256 inodes per group

Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done

$ mount -o loop /ram/ram/image /ram/loop
$ dd if=/dev/zero of=/ram/loop/test bs=1M count=5
dd: writing `/ram/loop/test': No space left on device
2+0 records in
1+0 records out
2027520 bytes (2.0 MB) copied, 0.00853692 s, 238 MB/s
$ ls -l /ram/loop
total 2001
drwx------ 2 root root   12288 Jan 27 17:12 lost+found
-rw-r--r-- 1 root root 2027520 Jan 27 17:13 test

In the (somewhat too long) example above the image file is created to be 2 megabytes, and when trying to write more than 2 megabytes to it, the write simply fails because the filesystem is full. One obvious downside to all this is of course that there is much added complexity, but at least for academic purposes this should suffice.
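The load-bearing detail above is that the backing image's size is the hard cap for everything stored inside it. That part can be sanity-checked without root by sizing a throwaway file the same way (a mktemp path is used here instead of /ram/ram/image, purely for illustration):

```shell
# Create a 2 MiB backing image exactly as in the example above, then confirm
# its size; a filesystem built inside it can never hold more than this.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=2 2>/dev/null
stat -c %s "$img"
# prints: 2097152
rm -f "$img"
```

The mkfs/mount steps from the answer then just turn that fixed-size file into a mountable filesystem.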
How to create a fixed size Linux ramdisk which never swaps to disk?
1,472,722,116,000
I recently installed Fedora 14 on my home PC so I have a dual boot system running Windows and Linux. I would probably use Linux primarily on that machine as it's older and Linux manages its resources MUCH better than Windows does, BUT I'm a bit of a Netflix junkie and from what I've read there isn't currently a solution that allows Netflix to work on Linux. Evidently Moonlight (which as I understand it is supposed to be like Silverlight) is missing a key piece of functionality. So is there really no solution?
There is an easy way to install Netflix now; see How to install Netflix on Ubuntu, Linux Mint and Fedora. On Ubuntu and Linux Mint (the PPA commands below are apt-based; Fedora users need the equivalent packages described in that guide):

$ sudo apt-add-repository ppa:ehoover/compholio
$ sudo apt-get update && sudo apt-get install netflix-desktop
How can I watch Netflix on Linux?
1,472,722,116,000
Under Linux, is it possible to view error messages that show up on the text mode terminal while in GUI mode, instead of having to press Ctrl+Alt+F1 or Ctrl+Alt+F2 to view the messages every time and then switching back to GUI mode by pressing Ctrl+Alt+F7? Thank you.
You can see the current contents of the text console /dev/tty1 in the file /dev/vcs1 (where 1 is the number in Ctrl+Alt+F1). (If you try to read from /dev/tty1, you'll compete with the program running there for keyboard input.) The vcs devices are normally only readable by root. You get a snapshot; there's no convenient way to get content as it comes. The ttysnoop program allows you to watch the traffic on a console from another terminal (including an X terminal), but this is something you have to set up in advance.

Instead of trying to catch the messages when they've been output on the text console, arrange to have the messages directed to a different location. Most such console output will end up in the system logs, in files under /var/log. Under X (i.e. in graphical mode), you can catch these messages with xconsole, which is part of the standard X distribution. If xconsole doesn't show the messages you want, edit your question to mention where these messages are coming from. If you can't get xconsole to show any message, edit your question to include your exact operating system, any configuration steps you've taken, and any error message you saw.

If the messages are not coming from the system logging facility, but from a program you started in the text mode console, you'll be better served by using redirection. Arrange to start the program like this:

mv ~/.myprogram.log ~/.myprogram.log.old
myprogram --with arguments >~/.myprogram.log 2>&1

(the file redirection must come before 2>&1, otherwise stderr is not captured in the log). Then you can read the output from the program from anywhere by looking in the file ~/.myprogram.log. In particular, to watch the file grow in real time, run

tail -n +1 -f ~/.myprogram.log

If the program is started by your X startup scripts, it would be better to redirect the output from the whole X startup sequence to a file. In fact many distributions do this automatically.
If you're using a .xinitrc or .xsession file, put the following line near the beginning of the file to redirect the output from subsequent programs:

exec >"$HOME/.xsession-$DISPLAY.log" 2>&1
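As a side note, the relative order of the two redirections determines what ends up in the log. This throwaway sketch (the file name is just a temp file, nothing here is specific to any particular program) shows the form that records both streams:

```shell
# '>file 2>&1' first points stdout at the file, then points stderr at
# whatever stdout now is (the file) -- so both streams are captured.
log=$(mktemp)
{ echo "to stdout"; echo "to stderr" >&2; } >"$log" 2>&1
cat "$log"
# prints both lines: "to stdout" then "to stderr"
rm -f "$log"
```

With the order reversed (2>&1 >file), stderr is duplicated from the original stdout (the terminal) before stdout is redirected, so the error lines never reach the log.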
Viewing system console messages in GUI
1,472,722,116,000
I am trying to figure out why the following device is not bound to its driver on my Creator CI20. For reference I am using a Linux kernel v4.13.0 and doing the compilation locally:

make ARCH=mips ci20_defconfig
make -j8 ARCH=mips CROSS_COMPILE=mipsel-linux-gnu- uImage

From the running system I can see:

ci20@ci20:~# find /sys | grep rng
/sys/firmware/devicetree/base/jz4780-cgu@10000000/rng@d8
/sys/firmware/devicetree/base/jz4780-cgu@10000000/rng@d8/compatible
/sys/firmware/devicetree/base/jz4780-cgu@10000000/rng@d8/name
/sys/bus/platform/drivers/jz4780-rng
/sys/bus/platform/drivers/jz4780-rng/bind
/sys/bus/platform/drivers/jz4780-rng/unbind
/sys/bus/platform/drivers/jz4780-rng/uevent

So the device is seen by the kernel at runtime; the missing piece is why the driver is never bound. I would have expected something like this:

/sys/bus/platform/drivers/jz4780-rng/100000d8.rng

I did find some other posts explaining how to debug a running system, such as:

https://stackoverflow.com/questions/28406776/driver-binding-using-device-tree-without-compatible-string-in-the-driver
https://stackoverflow.com/questions/35580862/device-tree-mismatch-probe-never-called
https://stackoverflow.com/questions/41446737/platform-device-driver-autoloading-mechanism
Is it possible to get the information for a device tree using /sys of a running kernel?

While the information in those posts is accurate, it is not very helpful for me. Since I am building my kernel locally (I added printk calls in the probe function of the jz4780-rng driver), my question is instead: what option should I turn on at compile time so that the kernel prints accurate information on its failure to call the probe function for the jz4780-rng driver? In particular, how do I print the complete list of the tested bus/driver pairs for driver_probe_device? I am OK with adding printk anywhere in the code to debug this. The question is rather: which function is traversing the device tree and calling the probe/init function?
For reference: $ dtc -I fs -O dts /sys/firmware/devicetree/base | grep -A 1 rng rng@d8 { compatible = "ingenic,jz4780-rng"; }; compatible string is declared as: cgu: jz4780-cgu@10000000 { compatible = "ingenic,jz4780-cgu", "syscon"; reg = <0x10000000 0x100>; clocks = <&ext>, <&rtc>; clock-names = "ext", "rtc"; #clock-cells = <1>; rng: rng@d8 { compatible = "ingenic,jz4780-rng"; }; }; And in the driver as: static const struct of_device_id jz4780_rng_dt_match[] = { { .compatible = "ingenic,jz4780-rng", }, { }, }; MODULE_DEVICE_TABLE(of, jz4780_rng_dt_match); static struct platform_driver jz4780_rng_driver = { .driver = { .name = "jz4780-rng", .of_match_table = jz4780_rng_dt_match, }, .probe = jz4780_rng_probe, .remove = jz4780_rng_remove, }; module_platform_driver(jz4780_rng_driver); Update1: When I build my kernel with CONFIG_DEBUG_DRIVER=y, here is what I can see: # grep driver_probe_device syslog Sep 6 10:08:07 ci20 kernel: [ 0.098280] bus: 'platform': driver_probe_device: matched device 10031000.serial with driver ingenic-uart Sep 6 10:08:07 ci20 kernel: [ 0.098742] bus: 'platform': driver_probe_device: matched device 10033000.serial with driver ingenic-uart Sep 6 10:08:07 ci20 kernel: [ 0.099209] bus: 'platform': driver_probe_device: matched device 10034000.serial with driver ingenic-uart Sep 6 10:08:07 ci20 kernel: [ 0.106945] bus: 'platform': driver_probe_device: matched device 1b000000.nand-controller with driver jz4780-nand Sep 6 10:08:07 ci20 kernel: [ 0.107282] bus: 'platform': driver_probe_device: matched device 134d0000.bch with driver jz4780-bch Sep 6 10:08:07 ci20 kernel: [ 0.107470] bus: 'platform': driver_probe_device: matched device 16000000.dm9000 with driver dm9000 Sep 6 10:08:07 ci20 kernel: [ 0.165618] bus: 'platform': driver_probe_device: matched device 10003000.rtc with driver jz4740-rtc Sep 6 10:08:07 ci20 kernel: [ 0.166177] bus: 'platform': driver_probe_device: matched device 10002000.jz4780-watchdog with driver jz4740-wdt Sep 6 10:08:07 
ci20 kernel: [ 0.170930] bus: 'platform': driver_probe_device: matched device 1b000000.nand-controller with driver jz4780-nand But only: # grep rng syslog Sep 6 10:08:07 ci20 kernel: [ 0.166842] bus: 'platform': add driver jz4780-rng Sep 6 10:08:42 ci20 kernel: [ 54.584451] random: crng init done As a side note, the rng toplevel node: cgu is not referenced here, but there is a jz4780-cgu driver. Update2: If I move the rng node declaration outside the toplevel cgu node, I can at least see some binding happening at last: # grep rng /var/log/syslog Sep 6 10:30:57 ci20 kernel: [ 0.167017] bus: 'platform': add driver jz4780-rng Sep 6 10:30:57 ci20 kernel: [ 0.167033] bus: 'platform': driver_probe_device: matched device 10000000.rng with driver jz4780-rng Sep 6 10:30:57 ci20 kernel: [ 0.167038] bus: 'platform': really_probe: probing driver jz4780-rng with device 10000000.rng Sep 6 10:30:57 ci20 kernel: [ 0.167050] jz4780-rng 10000000.rng: no pinctrl handle Sep 6 10:30:57 ci20 kernel: [ 0.167066] devices_kset: Moving 10000000.rng to end of list Sep 6 10:30:57 ci20 kernel: [ 0.172774] jz4780-rng: probe of 10000000.rng failed with error -22 Sep 6 10:31:32 ci20 kernel: [ 54.802794] random: crng init done Using: rng: rng@100000d8 { compatible = "ingenic,jz4780-rng"; }; I can also verify: # find /sys/ | grep rng /sys/devices/platform/10000000.rng /sys/devices/platform/10000000.rng/subsystem /sys/devices/platform/10000000.rng/driver_override /sys/devices/platform/10000000.rng/modalias /sys/devices/platform/10000000.rng/uevent /sys/devices/platform/10000000.rng/of_node /sys/firmware/devicetree/base/rng@100000d8 /sys/firmware/devicetree/base/rng@100000d8/compatible /sys/firmware/devicetree/base/rng@100000d8/status /sys/firmware/devicetree/base/rng@100000d8/reg /sys/firmware/devicetree/base/rng@100000d8/name /sys/bus/platform/devices/10000000.rng /sys/bus/platform/drivers/jz4780-rng /sys/bus/platform/drivers/jz4780-rng/bind /sys/bus/platform/drivers/jz4780-rng/unbind 
/sys/bus/platform/drivers/jz4780-rng/uevent
A working solution to get the driver to bind to the device is:

cgublock: jz4780-cgublock@10000000 {
    compatible = "simple-bus", "syscon";
    #address-cells = <1>;
    #size-cells = <1>;
    reg = <0x10000000 0x100>;
    ranges;

    cgu: jz4780-cgu@10000000 {
        compatible = "ingenic,jz4780-cgu";
        reg = <0x10000000 0x100>;
        clocks = <&ext>, <&rtc>;
        clock-names = "ext", "rtc";
        #clock-cells = <1>;
    };

    rng: rng@d8 {
        compatible = "ingenic,jz4780-rng";
        reg = <0x100000d8 0x8>;
    };
};

This was found by staring at other examples. I would prefer a solution that gives a proper diagnosis of why the previous attempt is incorrect.
How to debug a driver failing to bind to a device on Linux?
1,472,722,116,000
I'd like to set up my laptop so that if a wrong password is entered when the screen is locked, a picture is taken using the laptop's webcam. I examined xlock (from xlockmore package), but there is no option to run a customized action when a wrong password is entered. There is a similar question on SuperUser, but only targets Windows: Taking a picture after entering wrong password. (For those who like funny cat photos: My laptop is set up to take a picture after 3 incorrect password attempts.)
Copied from this post on Ask Ubuntu by gertvdijk, pointed out by mazs in the comments, in the effort of closing this question. Based on this post on the Ubuntu Forums by BkkBonanza. This is an approach using PAM and will work for all failed login attempts - via SSH, a virtual terminal or the regular login screen, it doesn't matter, as everything is handled by PAM in the end.

Install ffmpeg; we're going to use this as a command-line way of grabbing the webcam images. Update: ffmpeg is removed when you upgrade to Ubuntu 14.04. We can use avconv in place of ffmpeg in the script below; no need to install anything separately.

Create a small script somewhere, e.g. /usr/local/bin/grabpicture, with the following content:

#!/bin/bash
ts=`date +%s`
ffmpeg -f video4linux2 -s vga -i /dev/video0 -vframes 3 /tmp/vid-$ts.%01d.jpg
exit 0 #important - has to exit with status 0

Replace /dev/video0 with the actual video device of your webcam and choose a path where the pictures are saved - I just chose /tmp. In newer versions of Ubuntu use avconv instead of ffmpeg (sudo apt-get install libav-tools).

Make it executable, e.g. chmod +x /usr/local/bin/grabpicture.

Test it by just calling it: /usr/local/bin/grabpicture. Check if you see files appearing in /tmp/vid....jpg.

Configure PAM to call this on every failed attempt. Note: do this carefully - if this fails you'll not be able to gain access to your system again in a regular way.

Open a terminal window with root access (sudo -i) and leave it open - just in case you screw up in the next steps. Open /etc/pam.d/common-auth in your favourite editor, e.g. by doing gksudo gedit /etc/pam.d/common-auth. Keep in mind for the following steps that the order of lines in this file matters. Locate the line below. By default there's one line before the one with pam_deny.so.
On my 12.04 system it looks like this:

auth [success=1 default=ignore] pam_unix.so nullok_secure

In this line change success=1 to success=2 to have it skip our script on success. This is an important step. Right below it, add a new line to call the actual script:

auth [default=ignore] pam_exec.so seteuid /usr/local/bin/grabpicture

Save and close the file. No need to restart anything. Test it: in a new terminal window, as a regular user, try su -l username to log in as another user with username username (replace with an actual one, of course). Deliberately enter the wrong password and check whether this results in a new picture. Then do the same, but enter the correct password; check that you log in and that no picture is taken. If the tests have succeeded you can log out from your DE (Unity/KDE/...) and you should see the same behaviour when entering a wrong password from the login screen.
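Putting the two edited lines in context: assuming the stock Ubuntu common-auth layout (pam_unix followed by pam_deny and pam_permit - check your own file, since extra modules change the skip count), the edited stretch of /etc/pam.d/common-auth would look roughly like this; success=2 makes a correct password jump over both the grab script and pam_deny:

```
auth    [success=2 default=ignore]   pam_unix.so nullok_secure
auth    [default=ignore]             pam_exec.so seteuid /usr/local/bin/grabpicture
auth    requisite                    pam_deny.so
auth    required                     pam_permit.so
```

This is a sketch, not a drop-in file - if your common-auth lists additional auth modules between pam_unix and pam_deny, the success=N count must be adjusted accordingly.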
Taking a picture with a laptop webcam after entering an incorrect password
1,472,722,116,000
After 30 minutes of uptime using Ubuntu 14.04 with a hybrid SSD I see many processes blocking IO using iotop. This is during disk writes; for example, if I open and close an empty file in gedit it can take 2 seconds to close down due to dconf writing settings. This affects other apps in a similar way, slowing the whole system down quite severely. Using strace I managed to trace this back to an fsync call and from there managed to reproduce it using the sync command. So to recap, simply running sync from the terminal repeatedly can take on the order of 1 - 2 seconds, but ONLY after 30 minutes uptime. To prove this I made a script that outputs uptime in seconds against time taken to execute sync, and ran it every second:

while true; do cat /proc/uptime | awk '{printf "%f ",$1}'; /usr/bin/time -f '%e' sync; sleep 1; done;

I ran the above script, waited around an hour (the system was left idle) and then plotted the results in gnuplot (y = time in seconds to execute sync, x = uptime in seconds): The point in time where the graph spikes is around 1780 (1780/60 = roughly 30 minutes). Nothing should be writing to the disk at this time apart from the script, so there should be next to nothing in the page cache after the first sync; each subsequent sync will be writing exactly what's being written by the script, which will be roughly 100 bytes or so. This issue persists after reboots; for example, if I wait 30 minutes for the slowdown then reboot, the slowdown will still be there. If I power down then reboot, the issue disappears until 30 minutes later. Another curiosity is that when I examined the above graph and zoomed in on an area where the slowdown is occurring I got this: The peaks and troughs repeat - this occurs almost exactly every 10 seconds from trough to trough, and the peak kinks as it comes down.
I've also run hdparm tests (hdparm -t /dev/sda and hdparm -T /dev/sda) before the slowdown:

/dev/sda: Timing cached reads: 23778 MB in 2.00 seconds = 11900.64 MB/sec
/dev/sda: Timing buffered disk reads: 318 MB in 3.01 seconds = 105.63 MB/sec

and during the slowdown:

/dev/sda: Timing cached reads: 2 MB in 2.24 seconds = 915.50 kB/sec
/dev/sda: Timing buffered disk reads: 300 MB in 3.01 seconds = 99.54 MB/sec

This shows that actual disk reads aren't being affected but cached reads are; could that mean that this has to do with the system bus and not the HD after all? Here are the solutions I've tried:

Changed the spindown settings of the HD in case the HD was going into power-savings mode: hdparm /dev/sda -S252 #(set it to 5 hours before spindown)
Changed the filesystem's journalling type to writeback rather than ordered so that we get performance improvements - this isn't solving the problem though, as it doesn't explain the 30 minutes of slowdown-free uptime.
Disabled CRON as the slowdown seems to be occurring after around 30 minutes.
CPU usage is fine and completely idle, so no processes can be blamed; however, I've tried shutting down every service including the session manager (lightdm) - this does nothing, as I believe the issue is lower level.
Analysing any new processes coming in at 30 minutes indicates no changes - I've diffed the output of ps before and after and there's no difference.

This only started occurring about 2 weeks ago; nothing was installed and no updates were done around that time. I'm thinking this issue is much lower level, so I would really appreciate some help here as I'm clueless; even pointing me in the right direction would be helpful - for example, is there a way to examine what's being flushed out of the page cache? Write caching is enabled on the disk in question, and I've also tried disabling write barriers. SMART data on the HD indicates no problems with the HD itself; however, I have my suspicions it's the HD doing something mysterious, as it persists after reboots.
EDIT: I've done : watch -n 1 cat /proc/meminfo ... to see how the memory changes particularly looking at the dirty row and the writeback row which I believe is the HDs disk buffer. They all stay at zero for the most part highest being probably 300kb. Calling sync flushes these as expected back to 0 but during the slowdown calling sync when there is zero dirty pages and zero kb in the disk buffer still locks IO. What else could sync be doing if there's nothing to flush out the page cache and write cache?
The symptoms are very consistent with a mostly saturated IO system; however, having for the most part ruled out IO load from the OS/userspace side, another possibility is the drive running self-tests on itself, which may include reading from all the sectors. This should be queryable/tunable with smartctl (at least one place being smartctl -c for querying). As for why it's coming and going and started suddenly now:

The drive has passed a certain stage in its life (number of sectors written, time spun up, etc.) and the firmware on the drive has triggered one of these scans.
I believe this can also be triggered via smartctl, so it's possible some automated process triggered it.
Once one of these scans has been triggered and flagged as in progress or started, it is re-triggered - either from the beginning or resuming where it left off - whenever the drive has spent a certain amount of time powered on.
Calls to sync/fsync slow down after 30 minutes uptime
1,472,722,116,000
If I open hexdump without any argument in the terminal: hexdump When I type something in the terminal and press Enter, hexdump will display the hex values for whatever characters I type. But hexdump will only display the hex values if I type 16 characters, for example: Here, I typed the character a 15 times and pressed Enter (so hexdump received 16 characters (15 a + \n)). But if I type less than 16 characters, for example: Here, I typed the character a 14 times and pressed Enter (so hexdump received 15 characters (14 a + \n)). And in this case hexdump did not display anything. Can I make hexdump display the hex values for whatever number of characters it receives, instead of it waiting for 16 characters to be received? Note: I do not want to "use options both for hexdump and xxd to display one byte as hex per line" (as suggested in a comment here). What I want, basically, is for example to know what the hex value for A is without having to type an extra 15 characters to get it.
Try hexdump -v -e '/1 "%02X\n"'. That displays one hex byte per line, so the line output buffering won't stop the line from being displayed. Then you only have to type A and return to know the hex value for A. You still have to type return, because the terminal also does line buffering on the input. man ascii also works. :-)
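As a non-interactive sanity check of that format string (assuming a hexdump that supports -e, such as the util-linux/BSD one), you can pipe a single byte in:

```shell
# -v disables the '*' duplicate-line squeezing; -e '/1 "%02X\n"' formats
# each input byte as two uppercase hex digits on its own line.
printf 'A' | hexdump -v -e '/1 "%02X\n"'
# prints: 41
```

The same pipeline works for any short input, so you never have to pad out to 16 characters.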
How to make hexdump not wait for 16 characters from stdin to display their hex values?
1,472,722,116,000
My computer runs Windows Server 2008 R2. It hosts a Hyper-V virtual machine running Ubuntu 12.04 as the guest OS. I want to copy text from Ubuntu and paste this text in Windows (and copy text in Windows and paste it in Ubuntu). How can I do this?
You can use ncat - which also has a Windows port - to transfer data over the network. On one system you run it in "listen" mode, where it binds to some port; on the other system you connect to that port on the other machine. This creates a bi-directional pipe. On Linux you can choose from more variants (GNU netcat, BSD netcat, socat...). Apart from the obvious man page, you can also have a look at the Wikipedia netcat article. Note: on both systems you run these in the terminal (Windows command line, Unix shell) - the copy-paste has to happen twice: on one machine you copy from the source and paste it into the terminal which is running ncat. The data is transferred to the other machine, where you copy it from the terminal to its final destination. Another option is to exchange files over a Windows share (Samba on Linux).
Copy-paste between Hyper-V guest and host
1,472,722,116,000
I am interested in setting environmental variables of one shell instance from another. So I decided to do some research. After reading a number of questions about this I decided to test it out. I spawned two shells A and B (PID 420), both running zsh. From shell A I ran the following. sudo gdb -p 420 (gdb) call setenv("FOO", "bar", 1) (gdb) detach From shell B when I run env I can see the variable FOO is indeed set with a value of bar. This makes me think that FOO has been successfully initialised in the environment of shell B. However, if I try to print FOO I get an empty line implying it is not set. To me, it feels like there is a contradiction here. This was tested on both my own Arch GNU/Linux system and an Ubuntu VM. I also tested this on bash where the variable didn't even show up in env. This although disappointing for me, makes sense if the shell caches a copy of its environment at spawn time and only uses that (which was suggested in one of the linked questions). This still doesn't answer why zsh can see the variable. Why is the output of echo $FOO empty? EDIT After the input in the comments I decided to do a bit more testing. The results can be seen in the tables below. In the first column is the shell which the FOO variable was injected into. The first row contains the command whose output can be seen below it. The variable FOO was injected using: sudo gdb -p 420 -batch -ex 'call setenv("FOO", "bar", 1)'. The commands specific to zsh: zsh -c '...' were also tested using bash. The results were identical, their output was omitted for brevity. 
Arch GNU/Linux, zsh 5.3.1, bash 4.4.12(1)

|      | env | grep FOO | echo $FOO | zsh -c 'env | grep FOO' | zsh -c 'echo $FOO' | After export FOO                  |
|------|----------------|-----------|-------------------------|--------------------|-----------------------------------|
| zsh  | FOO=bar        |           | FOO=bar                 | bar                | No Change                         |
| bash |                | bar       |                         |                    | Value of FOO visible in all tests |

Ubuntu 16.04.2 LTS, zsh 5.1.1, bash 4.3.48(1)

|      | env | grep FOO | echo $FOO | zsh -c 'env | grep FOO' | zsh -c 'echo $FOO' | After export FOO                  |
|------|----------------|-----------|-------------------------|--------------------|-----------------------------------|
| zsh  | FOO=bar        |           | FOO=bar                 | bar                | No Change                         |
| bash |                | bar       |                         |                    | Value of FOO visible in all tests |

The above seems to imply that the results are distribution agnostic. This doesn't tell me much more than zsh and bash handle setting of variables differently. Furthermore, export FOO has very different behaviour in this context depending on the shell. Hopefully these tests can make something clear to somebody else.
Most shells don't use the getenv()/setenv()/putenv() API. Upon start-up, they create shell variables for each environment variable they received. Those will be stored in internal structures that need to carry other information like whether the variable is exported, read-only... They can't use the libc's environ for that. Similarly, and for that reason, they won't use execlp(), execvp() to execute commands but call the execve() system call directly, computing the envp[] array based on the list of their exported variables. So in your gdb, you'd need to add an entry to that shells internal table of variables, or possibly call the right function that would make it interpret a export VAR=value code for it to update that table by itself. As to why you see a difference between bash and zsh when you call setenv() in gdb, I suspect that's because you're calling setenv() before the shell initialises, for instance upon entering main(). You'll notice bash's main() is int main(int argc, char* argv[], char* envp[]) (and bash maps variables from those env vars in envp[]) while zsh's is int main(int argc, char* argv[]) and zsh gets the variables from environ instead. setenv() does modify environ but cannot modify envp[] in-place (read-only on several systems as well as the strings those pointers point to). In any case, after the shell has read environ upon startup, using setenv() would be ineffective as the shell no longer uses environ (or getenv()) afterwards.
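A quick way to see that internal export table at work - a child process only receives the variables the shell has flagged as exported, regardless of what sits in the libc environ (DEMO_VAR is just an illustrative name for this sketch):

```shell
DEMO_VAR=bar                       # plain shell variable, not flagged for export
sh -c 'echo "${DEMO_VAR-unset}"'   # prints: unset
export DEMO_VAR                    # now the shell includes it when building envp[]
sh -c 'echo "${DEMO_VAR-unset}"'   # prints: bar
```

The gdb setenv() call modifies environ behind the shell's back, which is exactly the layer this mechanism bypasses.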
Why can't I print a variable I can see in the output of env?
1,472,722,116,000
A month or two ago, I installed the latest version of Puppy Linux on an old Eee PC which I hardly use any more. Well, I'm on it now! But I can't figure out how to update it. It uses a weird Puppy package manager which only seems to have options for installing and uninstalling things. I found an option to update the database, but that didn't actually update any of the software on my system. I've looked through the menus several times and don't see anything that says update. How do I update Puppy Linux?
Please go through this blog, as I think it describes exactly what you need: how-to-update/upgrade-kernel-for-puppy-linux. I think this site could also help you: flash-puppy. Another link which might help you is: Update from 4.1.2 to 4.2. Note: also take a look at this site: installing-puppy-linux-to-your-hard-drive. Thanks, Sen
How to update Puppy Linux?
1,472,722,116,000
I'm trying to understand the Completely Fair Scheduler (CFS). According to Robert Love in Linux Kernel Development, 3rd edition(italics his, bold mine): Rather than assign each process a timeslice, CFS calculates how long a process should run as a function of the total number of runnable processes. Instead of using the nice value to calculate a timeslice, CFS uses the nice value to weight the proportion of processor a process is to receive: Higher valued (lower priority) processes receive a fractional weight relative to the default nice value, whereas lower valued (higher priority) processes receive a larger weight. Each process then runs for a “timeslice” proportional to its weight divided by the total weight of all runnable threads. To calculate the actual timeslice, CFS sets a target for its approximation of the “infinitely small” scheduling duration in perfect multitasking. This target is called the targeted latency....Let’s assume the targeted latency is 20 milliseconds and we have two runnable tasks at the same priority. Regardless of those task’s priority, each will run for 10 milliseconds before preempting in favor of the other. If we have four tasks at the same priority, each will run for 5 milliseconds. If there are 20 tasks, each will run for 1 millisecond.... Now, let’s again consider the case of two runnable processes, except with dissimilar nice values—say, one with the default nice value (zero) and one with a nice value of 5. These nice values have dissimilar weights and thus our two processes receive different proportions of the processor’s time. In this case, the weights work out to about a 1/3 penalty for the nice-5 process. If our target latency is again 20 milliseconds, our two processes will receive 15 milliseconds and 5 milliseconds each of processor time, respectively. The first bolded sentence says that tasks have the same timeslice regardless of priority, while the second says that the timeslice depends on nice value. 
Which is correct, or what am I missing?
The two sentences are just explaining two instances of how CFS works - the former when the two tasks have the same nice value, and the latter when the two tasks have different nice values. In general, the timeslice calculated for each task boils down to this formula: timeslice = (weight/total_weight)*target_latency weight is the weight of the current task, which depends on the nice value assigned to the task. total_weight is the sum of the weights of all tasks in the run queue. target_latency is the time interval within which CFS will attempt to schedule every task in the run queue once. Going back to the original formula: when two tasks have the same nice value, they will also have the same weight value. Treating weight as a constant w (so total_weight is N*w for N tasks), the weights cancel and the formula reduces to: timeslice = target_latency/N As you see, the timeslice of each task in the run queue no longer depends on its weight value, and thus each task will receive the same timeslice. This is the first case the book mentions. In the second case the nice values differ, so the weight values differ, and each task receives its timeslice accordingly.
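To make the proportional split concrete, here is a small arithmetic sketch of the book's second example. The weight values 1024 (nice 0) and 335 (nice 5) are taken from the Linux kernel's prio-to-weight table; the integer-millisecond results approximate the roughly 15 ms / 5 ms split the book describes.

```shell
# Sketch: compute CFS-style timeslices for two tasks with different
# nice values, using kernel weight-table values as constants.
target_latency_ms=20
w_nice0=1024              # weight for nice 0 (kernel prio-to-weight table)
w_nice5=335               # weight for nice 5
total=$(( w_nice0 + w_nice5 ))

# timeslice = (weight / total_weight) * target_latency
ts_nice0=$(( target_latency_ms * w_nice0 / total ))
ts_nice5=$(( target_latency_ms * w_nice5 / total ))

echo "nice 0 task: ~${ts_nice0} ms"   # ~15 ms
echo "nice 5 task: ~${ts_nice5} ms"   # ~5 ms (integer division rounds down to 4)
```

With equal nice values the weights cancel, so each of the N runnable tasks simply gets target_latency / N, which matches the first bolded sentence.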
Does the timeslice depend on process priority or not under Completely Fair Scheduling?
1,472,722,116,000
I'm aware that this article exists: Why are hard links only valid within the same filesystem? But it unfortunately didn't click with me. https://www.kernel.org/doc/html/latest/filesystems/ext4/directory.html I'm reading Operating System Concepts by Galvin and found some great resources like the Linux kernel documentation. There can be many directory entries across the filesystem that reference the same inode number--these are known as hard links, and that is why hard links cannot reference files on other filesystems. The author says this in the very beginning, but I don't understand the reason behind it. Information contained in an inode: Mode/permission (protection) Owner ID Group ID Size of file Number of hard links to the file Time last accessed Time last modified Time inode last modified https://www.grymoire.com/Unix/Inodes.html Now, since the inode contains this information, what's the problem with letting hard links reference files on other filesystems? What problem would occur if a hard link referenced a file on another filesystem? About hard links: The term "hard link" is misleading, and a better term is "directory entry". A directory is a type of file that contains (at least) a pair consisting of a file name and an inode. Every entry in a directory is a "hard link", including symbolic links. When you create a new "hard link", you're just adding a new entry to some directory that refers to the same inode as the existing directory entry. This is how I visualize what a directory looks like in an operating system. Each entry is a hard link according to the above quoted text. The only problem that I can see is that multiple filesystems could have the same range of inode numbers (but I don't think so, as inodes are limited within an operating system). Also, wouldn't it be nice to add information about the filesystem to the inode itself? Wouldn't that be really convenient?
A "hard link" is just the circumstance that two (or more) entries in the hierarchy of your file system refer to the same underlying data structure. Your figure illustrates that quite nicely! That's it; that's all there is to it. It's as if you had a cookbook with an index at the end, and the index says "Bread: see page 3", and "Bakery: see page 3". Now there are two names for what is on page 3. You can have as many index entries pointing to the same page as you want. What does not work is having an index entry for something in another book. The other book simply doesn't exist within your current book, so referring to pages in it just can't work, especially because different versions of the other book could number pages differently over time. Because a single filesystem can only guarantee consistency for itself, you cannot refer to "underlying storage system details" like inodes of other filesystems without it breaking all the time. So, if you want to refer to a directory entry that's stored on a different file system, you'll have to do that by the path. UNIX helps you with that through the existence of symlinks. The only problem that I can see is that multiple filesystems could have the same range of inode numbers (but I don't think so, as inodes are limited within an operating system). That's both untrue and illogical: I can ship you my hard drive, right? How would I ensure that the file system on my hard drive has no inode numbers you already used in one of the many file systems that your computer might have? Also, wouldn't it be nice to add information about the filesystem to the inode itself? Wouldn't that be really convenient? No. Think of a file system as an abstraction of "bytes on storage media": a file system in itself is an independent data structure containing data organized into files; it must not depend on any external data to be complete.
Breaking that will just lead to inconsistencies, because independence means that I can change inode numbers on file system A without having to know about file system B. Now, if B depended on A, it would be broken afterwards.
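To see the "two directory entries, one inode" point in practice, here is a quick shell session (assuming GNU coreutils; on BSD, stat -f %i replaces stat -c %i):

```shell
# Create a file and a second directory entry (hard link) for it,
# then show that both names resolve to the same inode.
dir=$(mktemp -d)
echo 'hello' > "$dir/original"
ln "$dir/original" "$dir/hardlink"

ino1=$(stat -c %i "$dir/original")   # GNU stat; BSD: stat -f %i
ino2=$(stat -c %i "$dir/hardlink")
[ "$ino1" = "$ino2" ] && echo "same inode: $ino1"

# The link count shown by 'ls -l' is now 2 for both names.
ls -l "$dir/original"
rm -r "$dir"
```

Try the same ln across two different mounted filesystems and it fails with "Invalid cross-device link", for exactly the reasons above.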
Why can't hard links reference files on other filesystems?
1,472,722,116,000
I have a Dell Inspiron 15R N5110 Laptop (Core i5 2nd Gen/4 GB/500 GB/Windows 7). I previously installed Windows 10 on my system, but my computer was very slow, so I decided to install Linux on it. It is currently running Windows 7. My problem is: the only drivers I have for my laptop are Windows 7's and I can't find Linux drivers for it. How can I download my laptop's drivers for Linux? And does my laptop support Linux?
It is very unlikely that you will need any additional device drivers other than those that already come with most popular Linux distributions, especially on laptops that are not brand new. The only exception concerns GPUs used for gaming, such as NVidia and AMD Radeon GPUs. In such cases, some manufacturers provide their own device drivers but, even then, most are also supported by the Linux community. Anyway, a possible lack of Linux support from the manufacturer should not prevent you from installing Linux: you can install the system and later install any manufacturer-provided device driver (if available/necessary). Then again, this is very rarely needed on laptops. If you are not very familiar with Linux, I suggest you choose a distribution with a friendly, intuitive interface, such as Linux Mint Cinnamon Edition - that would definitely be my pick, if you ask me for a recommendation. You can also try Ubuntu, Pop!_OS, Elementary OS, DeepIn or Fedora. Hope this helps.
Do I need drivers to install Linux on my old laptop? Am I likely to face any problems if I install it? [closed]
1,472,722,116,000
I have a plain text file (not containing source code). I often modify it (adding lines, editing existing lines, or any other possible modification). For any modification, I would like to automatically record: what has been modified (the diff information); the date and time of the modification. (Ideally, I would also like to be able to obtain the version of my file at a specific time, but this is a plus, not essential.) This is surely possible with Git, but it's too powerful and complex. I don't want to deal with add, commit messages, push, etc. each time. I would simply like to edit the file with vi (or equivalent), save it, and automatically record the modification as above (its diff and its time). Is there a tool to accomplish this in Linux? Update: Thanks for all the suggestions and the several solutions that have been introduced. I have nothing against git, but I explicitly wished to avoid it (for several reasons, last but not least the fact that I don't know it well enough). The tool closest to the above requirements (no git, no commit messages, little or no overhead) is RCS. It is file-based and it is exactly what I was looking for. It even avoids the use of a script, provides the previous versions of the file, and avoids any customization of vi. The requirements of the question were precise; many opinions have been given, but the question is not - per se - that much opinion-based. Then, obviously, the same goal can be achieved through a tool or through a script, but this applies to many other cases as well.
You could try the venerable RCS (package "rcs") as @steeldriver mentioned, a non-modern version control system that works on a per-file basis with virtually no overhead or complication. There are multiple ways to use it, but one possibility: Create an RCS subdirectory, where the version history will be stored. Edit your file Check in your changes: ci -l -m. -t- myfile Repeat If you store this text in your file: $RCSfile$ $Revision$ $Date$ then RCS will populate those strings with information about your revision and its datestamp, once you check it in (technically, when you check it out). The file stored in RCS/ will be called myfile,v and will contain the diffs between each revision. Of course there's more to learn about RCS. You can look at the manpages for ci, co, rcs, rcsdiff and others. Here's some more information: If you skip creating the RCS/ directory, then the archive will appear in the same directory as your file. You "check in" a file with ci to record a version of it in the archive (the *,v file in the RCS/ directory). Check-in has the weird side effect of removing your file, leaving your data only present in the *,v archive. To avoid this side effect, use -l or -u with the ci command. You "check out" a file with co to reconstitute it from the archive. You "lock" a file to make it writable and prevent others from writing to it, which would create a "merge" situation. In your case, with only one user modifying the file, "locked" means writable and "unlocked" means read-only. If you modify an "unlocked" file (by forcing a write to it), ci will complain when you try to check the changes in (so, avoid doing that). Since you're the only one editing your file, you have a choice of scenarios: you can keep your file read-only (unlocked) or writable (locked). I use unlocked mode for files that I don't expect to change often, as that prevents me from accidentally modifying them, because they're read-only, even for me.
I use locked mode for files that I'm actively modifying, when I want to keep a revision history of the contents. Using -l with ci or co will lock it, leaving it writable. Without -l it will be read-only with co or it will be removed altogether with ci. Use ci -u to leave the file in read-only mode after checking its contents into the archive. Using -m. (a lone dot as the message) will prevent ci from asking for a revision message. Using -t- will prevent ci from asking for an initial description (when the archive file is first created). Using -M with ci or co will keep the timestamp of a file in sync with the timestamp of the file at the time of check-in. co -r1.2 -p -q myfile will print revision 1.2 of myfile to stdout. Without the -p option, and assuming that myfile is "unlocked" (read-only), then co -r1.2 myfile will overwrite myfile with a read-only copy of revision 1.2 of myfile. -q disables the informational messages. You can create "branches", with revisions like 1.3.1.1. I don't recommend this as it gets confusing fast. I prefer to keep with a linear flow of revisions. So, if you prefer to keep your file always writable, you could use ci -l -M -m. -t- myfile. You can use rcsdiff myfile to see the differences between the current contents of myfile and the most recent checked-in version. You can use rcsdiff -r1.2 -r1.4 myfile to see the differences between revisions 1.2 and 1.4 of myfile. The archive file is just a text file, whose format is documented in man rcsfile. However, don't attempt to edit the archive file directly. IMO, the text-based archive file, the absolute minimal extra baggage (only a single archive file), and keyword substitution are RCS's biggest strengths and what makes it a great tool for local-only, single-user, single-file-at-a-time versioning. If I were redesigning RCS, I would remove the complications beyond this scenario (e.g. multi-user, branching), which I think are better handled by more modern distributed version control systems.
As with any command, there are some quirks; you should play around with test files until you understand the workflow you want for yourself. Then, for best results, embed your favorite options into a script so you don't have to remember the likes of -t-, for example.
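Along those lines, here is a minimal wrapper sketch (the function name checkin is my own invention; it assumes the rcs package is installed and, optionally, an RCS/ subdirectory next to the file):

```shell
# checkin FILE - check FILE into RCS without any interactive prompts.
#   -l  keep the file locked (writable) after check-in
#   -m. use a dot as the revision message, so ci never prompts for one
#   -t- use an empty initial description, so ci never prompts for one
checkin() {
    ci -l -m. -t- "$1"
}
```

Usage would simply be checkin myfile after each editing session; add -M inside the function if you want the file's timestamp preserved.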
Keep a history of all the modifications to a text file
1,472,722,116,000
I am trying to set up OpenVPN but I am getting this error: #./build-ca grep: /etc/openvpn/easy-rsa/2.0/openssl.cnf: No such file or directory pkitool: KEY_CONFIG (set by the ./vars script) is pointing to the wrong version of openssl.cnf: /etc/openvpn/easy-rsa/2.0/openssl.cnf The correct version should have a comment that says: easy-rsa version 2.x I have OpenSSL installed. Do I need to set a location?
It's hard to tell without more information. Anyhow, you have either not properly configured your installation via the vars file, or you haven't activated the vars file by running source vars prior to running ./build-ca. The vars file contains (among other things) the definition of the KEY_CONFIG variable. The default (on my Debian system) is to call a wrapper script which will try to find the correct default openssl.cnf file for you: export KEY_CONFIG=`$EASY_RSA/whichopensslcnf $EASY_RSA` (On my system I have OpenSSL 1.0.1e 11 Feb 2013 installed, so KEY_CONFIG evaluates to .../openssl-1.0.0.cnf.) If this doesn't work for you, you can manually set KEY_CONFIG to a value that matches your installation.
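If the wrapper script cannot locate a config matching your OpenSSL version, one manual workaround is to pin KEY_CONFIG directly to the file shipped with your copy of easy-rsa, e.g. in your vars file (the paths and file name below are illustrative Debian defaults; adjust them to your installation):

```shell
# Illustrative override: bypass the whichopensslcnf wrapper and pin
# KEY_CONFIG to a concrete easy-rsa openssl config file.
export EASY_RSA="/etc/openvpn/easy-rsa/2.0"        # adjust to your path
export KEY_CONFIG="$EASY_RSA/openssl-1.0.0.cnf"    # pick the file matching your OpenSSL
```

Remember to run source vars again after editing, so the new values take effect in your current shell.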
KEY_CONFIG pointing to the wrong version of openssl.cnf
1,472,722,116,000
On Linux, is there a way for a shell script to check if its standard input is redirected from the null device (1, 3) *, ideally without reading anything? The expected behavior would be: ./checkstdinnull -> no ./checkstdinnull < /dev/null -> yes echo -n | ./checkstdinnull -> no EDIT mknod secretunknownname c 1 3 exec 6<secretunknownname rm secretunknownname ./checkstdinnull <&6 -> yes I suspect I "just" need to read the maj/min number of the input device. But I can't find a way of doing that from the shell. *No necessary just /dev/null, but any null device even if manually created with mknod.
On linux, you can do it with: stdin_is_dev_null(){ test "`stat -Lc %t:%T /dev/stdin`" = "`stat -Lc %t:%T /dev/null`"; } On a linux without stat(1) (eg. the busybox on your router): stdin_is_dev_null(){ ls -Ll /proc/self/fd/0 | grep -q ' 1, *3 '; } On *bsd: stdin_is_dev_null(){ test "`stat -f %Z`" = "`stat -Lf %Z /dev/null`"; } On systems like *bsd and solaris, /dev/stdin, /dev/fd/0 and /proc/PID/fd/0 are not "magical" symlinks as on linux, but character devices which will switch to the real file when opened. A stat(2) on their path will return something different than a fstat(2) on the opened file descriptor. This means that the linux example will not work there, even with GNU coreutils installed. If the versions of GNU stat(1) is recent enough, you can use the - argument to let it do a fstat(2) on the file descriptor 0, just like the stat(1) from *bsd: stdin_is_dev_null(){ test "`stat -Lc %t:%T -`" = "`stat -Lc %t:%T /dev/null`"; } It's also very easy to do the check portably in any language which offers an interface to fstat(2), eg. in perl: stdin_is_dev_null(){ perl -e 'exit((stat STDIN)[6]!=(stat "/dev/null")[6])'; }
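For completeness, a quick demonstration of the Linux variant (this assumes GNU stat, a /proc-backed /dev/stdin, and the conventional 1:3 device numbers for the null device):

```shell
# Compare the major:minor device numbers of whatever is on fd 0
# with those of /dev/null.
stdin_is_dev_null() {
    test "$(stat -Lc %t:%T /dev/stdin)" = "$(stat -Lc %t:%T /dev/null)"
}

stdin_is_dev_null </dev/null && r1=yes || r1=no   # redirected from the null device
echo x | stdin_is_dev_null && r2=yes || r2=no     # stdin is a pipe, not /dev/null
echo "$r1 $r2"                                    # yes no
```

Note that nothing is read from stdin here; only the file descriptor's metadata is inspected, which satisfies the "ideally without reading anything" requirement.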
How to check if stdin is /dev/null from the shell?
1,472,722,116,000
I started using sed recently. One handy way I use it is to ignore unimportant lines of a log file: tail -f example.com-access.log | sed '/127.0.0.1/d;/ELB-/d;/408 0 "-" "-"/d;' But when I try to use it similarly with find, the results aren't as expected. I am trying to ignore any line that contains "Permission denied" like this: find . -name "openssl" | sed '/Permission denied/d;' However, I still get a whole bunch of "Permission denied" messages in stdout. EDIT As mentioned in the correct answer below, the "Permission denied" messages are appearing in stderr and NOT stdout.
The problem is that the error output is printed to stderr, so the sed command never sees it on its stdin. The simple solution is to redirect stderr to stdout: find . -name "openssl" 2>&1 | sed '/Permission denied/d;'
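The underlying point can be demonstrated without find at all: a pipe carries only stdout, so anything written to stderr bypasses sed entirely. A toy sketch:

```shell
# Only the stdout line travels through the pipe into sed;
# the stderr line is discarded here just to keep the output clean.
out=$( { echo "on stdout"; echo "on stderr" >&2; } 2>/dev/null | sed 's/^/sed saw: /' )
echo "$out"    # sed saw: on stdout
```

Alternatively, if you never want the error messages at all, find . -name "openssl" 2>/dev/null simply discards them instead of filtering.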
How can I filter those "Permission denied" from find output? [duplicate]
1,472,722,116,000
May I know the maximum partition size supported by a Linux system? And how many logical and primary partitions can we create on a disk in a system with Linux installed?
How Many Partitions I believe other, faster and better people have already answered this perfectly. :) There Is Always One More Limit For the following discussion, always remember that limits are theoretical. Actual limitations are often less than the theoretical limits, because: other theoretical limits constrain things (PCs are very, very complex things indeed these days); and there are always more bugs (this answer not excluded). When Limits are Violated What happens when these limits are violated isn't simple, either. For instance, back in the days of 10GB disks, you could have multi-gigabyte partitions, but some machines couldn't boot code stored after the 1,024th cylinder. This is why so many Linux installers still insist on a separate, small /boot partition in the beginning of the disk. Once you managed to boot, things were just fine. Size of partitions: MS-DOS Partition Table (MBR) MS-DOS stores partitions in a (start,size) format, each of which is 32 bits wide. Each number used to encode cylinder-head-sector co-ordinates in the olden days. Now it simply contains an arbitrary sector number (the disk manages the translation from that to medium-specific co-ordinates). The kernel source for the ‘MS-DOS’ partition type suggests partition sizes are 32 bits wide, in sectors. Which gives us 2^32 * 512, or 2^41 bytes, or 2^21 binary Megabytes, or 2,097,152 Megabytes, or 2,048 Gigabytes, or 2 Terabytes (minus one sector). GUID Partition Table (GPT) If you're using the GUID Partition Table (GPT) disk label, your partition table is stored as a (start,end) pair. Both are 8 bytes long (64 bits), which allows for quite a lot more than you're likely to ever use: 2^64 512-byte sectors, or 2^73 bytes (8 binary zettabytes), or 2^33 terabytes. If you're booting off a UEFI ROM rather than the traditional CP/M-era BIOS, you've already got GPT. If not, you can always choose to use GPT as your disk label. If you have a newish disk, you really should.
Sector Sizes A sector has been 512 bytes for a long while. This is set to change to 4,096 bytes. Many disks already have this, but emulate 512-byte sectors. When the change comes to the foreground and the allocation unit becomes 4,096-byte sectors, and LBAs address 4,096-byte sectors, all the sizes above will change by 3 binary orders of magnitude: multiply them all by 8 to get the new, scary values. Logical Volume Manager If you use LVM, whatever volume you make must also be supported by LVM, since it sits between your partitions and filesystems. According to the LVM2 FAQ, LVM2 supports up to 8EB (exabytes) on Linux 2.6 on 64-bit architectures; 16TB (terabytes) on Linux 2.6 running on 32-bit architectures; and 2TB on Linux 2.4. Filesystem Limits Of course, these are the size limits per partition (or LVM volume), which is what you're asking. But the point of having partitions is usually to store filesystems, and filesystems have their own limits. In fact, what types of limits a filesystem has depends on the filesystem itself! The only global limits are the maximum size of the filesystem and the maximum size of each file in it. EXT4 allows files of up to 16TB and volumes of up to 1EB (exabyte). However, it uses 32-bit block numbers, so to go beyond 16TB per volume you'd need to increase the default 4,096-byte block size. This may not be possible on your kernel and architecture, so 16TB per volume may be more realistic on a PC. ZFS allows 16EB files and 16EB volumes, but doubtless it has its own other, unforeseen limits too. Wikipedia has a very nice table of these limits for most filesystems known to man. In Practice If you're using Linux 2.6 or newer on 64-bit machines and GPT partitions, it looks like you should only worry about the choice of filesystem and its limits. Even then, it really shouldn't worry you that much. You probably shouldn't be creating single files of 16TB anyway, and 1 exabyte (1,048,576 TB) will be a surreal limitation for a while.
If you're using MBR and need more than 2 binary terabytes, you should switch to UEFI and GPT, because you're operating under a 2TB-per-partition limit (this may be less than trivial on an already deployed computer). Please note that I'm an old fart, and I use binary units when I'm calculating multiples of powers of two. Disk manufacturers like to cheat (and have convinced us they always did this, even though we know they didn't) by using decimal units. So the largest ‘2TB’ disk is still smaller than 2 binary terabytes, and you won't have trouble. Unless you use Logical Volume Manager or RAID-0.
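The MBR arithmetic above can be sanity-checked in any shell with 64-bit integer arithmetic (which includes bash and dash on 64-bit systems):

```shell
# 32-bit sector count times the sector size gives the MBR ceiling.
sectors=$(( 1 << 32 ))                  # 2^32 addressable sectors
tib_512=$(( (sectors * 512) >> 40 ))    # 512-byte sectors -> 2 TiB
tib_4k=$(( (sectors * 4096) >> 40 ))    # 4,096-byte sectors -> 16 TiB
echo "MBR limit: ${tib_512} TiB (512 B sectors), ${tib_4k} TiB (4 KiB sectors)"
```

The 4 KiB figure shows the "multiply by 8" effect mentioned under Sector Sizes.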
What is the max partition supported in linux?