Is there a way to configure the Linux WireGuard module to only listen on a specific IP address for incoming connections, instead of its default of listening on all available addresses? I cannot find any documentation for this.
WireGuard's Linux kernel module has no option to choose the IP address the interface will use for the tunnel. In particular, following OP's comment about wanting it not to bind with IPv4 but only IPv6, it will always use IPv4, as seen in the external compat module or the upstreamed module:

int wg_socket_init(struct wg_device *wg, u16 port)
{
	struct socket *new4 = NULL, *new6 = NULL;
	struct udp_port_cfg port4 = {
		.family = AF_INET,
		.local_ip.s_addr = htonl(INADDR_ANY),
		.local_udp_port = htons(port),
		.use_udp_checksums = true
	};
#if IS_ENABLED(CONFIG_IPV6)
	int retries = 0;
	struct udp_port_cfg port6 = {
...
	ret = udp_sock_create(net, &port4, &new4);
	if (ret < 0) {
		pr_err("%s: Could not create IPv4 socket\n", wg->dev->name);
		goto out;
	}

The IPv4 socket creation (as well as IPv6 when available) is mandatory, and is also always done using INADDR_ANY. To address the title of the question, the code would have to be amended in several places: select which protocol is used or disabled, select which address to use instead of INADDR_ANY and IN6ADDR_ANY_INIT/&in6addr_any (with possible interactions with the previous point), and of course alter all parts of the code expecting differently. Then, for cross-OS compatibility, this would also have to be done in the userspace variants of WireGuard, and in other kernels' variants (such as FreeBSD's).

Meanwhile, there are workarounds for some use cases:

- To allow the WireGuard tunnel envelope to be reachable only at a single address, a single interface, or a single IP version (eg IPv6 only): use a firewall to limit access to that address, interface, or IP family only.

- To allow multiple different WireGuard interfaces to appear to use the same port on different interfaces: set them on different ports and use NAT rules (typically both DNAT in prerouting for the initial-ingress case plus SNAT in postrouting for the initial-egress case) to have the visible ports match the actual ports in use.
- To avoid binding the UDP port on the host: hide WireGuard in its own namespace, at least at its initial creation, where the WireGuard port will stay. Then add an additional layer of routing (plus possibly NAT rules, but then it's almost the same as binding a port) to reach the tunnel envelope (ie this UDP port, wherever it is), and, if it is left in another namespace, yet more layers of routing for the tunneled payload.
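As a concrete illustration of the firewall workaround above, here is a minimal sketch in iptables syntax. The port and address are placeholders chosen for the example (51820 is WireGuard's customary default port; 192.0.2.1 is a documentation address), not values from the question.

```shell
# Accept WireGuard traffic only when it arrives for one chosen local
# address; drop it for every other address. Run as root.
iptables -A INPUT -p udp --dport 51820 ! -d 192.0.2.1 -j DROP

# Variant for the "IPv6 only" use case: drop the IPv4 side entirely
# (the module still binds it, but nothing reaches it).
# iptables -A INPUT -p udp --dport 51820 -j DROP
```

An nftables equivalent would be a similar `udp dport ... drop` rule; the point is only that filtering happens in front of the socket, since the socket itself cannot be told which address to bind.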
Wireguard specify listen address
I thought I could do something like:

$ sudo unshare -T bash -c 'date -s "$1" && foobar' sh "$(date -d -1day)"

so foobar would see a different system time from the rest of the system. However, it seems the change of system time is not contained. It changes the system time of the whole system. This LWN article seems to suggest this namespace was meant for the use I tried to give it:

System calls that adjust the system time will, when called outside of the root time namespace, adjust the namespace-specific offsets instead.

Looking at strace date -s ..., I see among other output:

clock_settime(CLOCK_REALTIME, {tv_sec=1619044910, tv_nsec=0}) = 0

However, reading time_namespaces(7):

This affects various APIs that measure against these clocks, including: clock_gettime(2), clock_nanosleep(2), nanosleep(2), timer_settime(2), timerfd_settime(2), and /proc/uptime.

I see it doesn't mention clock_settime(2). The wording "including" tells me this is perhaps not the complete list, but maybe it is. I also don't understand --boottime/--monotonic. Looking at clock_settime(2), I see:

CLOCK_MONOTONIC
A nonsettable system-wide clock that represents monotonic time since—as described by POSIX—"some unspecified point in the past". On Linux, that point corresponds to the number of seconds that the system has been running since it was booted.

CLOCK_BOOTTIME (since Linux 2.6.39; Linux-specific)
A nonsettable system-wide clock that is identical to CLOCK_MONOTONIC, except that it also includes any time that the system is suspended.
However, when trying them, they don't seem to change the uptime:

$ uptime -s
2021-04-10 10:30:45
$ sudo unshare -T --boottime 1000000000 uptime -s
2021-04-10 10:30:45
$ sudo unshare -T --monotonic 1000000000 uptime -s
2021-04-10 10:30:45
$ sudo unshare -T --boottime -100000 uptime -s
2021-04-10 10:30:45
$ sudo unshare -T --monotonic -100000 uptime -s
2021-04-10 10:30:45

I see from strace uptime that it reads /proc/uptime instead of calling clock_gettime(2), and /proc/uptime doesn't seem to be affected by the unshare calls and their offsets, despite the documentation at time_namespaces(7) saying that it affects /proc/uptime, as I quoted above. How is this namespace supposed to be used? I can't seem to find any command that would be affected by unshare --time.
I see three main points in your reasoning that need clarification.

The first one is that unsharing a time-namespace affects the children spawned from then on by the process that called the unshare(2). The calling process itself is unaffected. This is a bit like PID namespaces and unlike the other namespace types so far. However, the calling process may still enter that newly created time-namespace; it's just that if it wants to do so then it also has to setns(2) (i.e. nsenter(1) in CLI parlance) itself into it.

All this means that the unshare -T commands you've been running never really moved those commands into the newly made time-namespace. You can just add the -f option to unshare(1) to make it run the specified command as its child instead of execve(2)-ing itself into it. That way the specified command will live in that time-namespace. Naturally, as you've been doing correctly, you also want to specify the --boottime and/or --monotonic options to "warp" that time-namespace's vision of those clocks, otherwise the child time-namespace would simply have the same vision as its parent. So, to cap it all, taking your attempt as an example, on my machine:

$ sudo unshare -T --boottime 1000000000 uptime -s
2021-04-23 11:07:10
$ sudo unshare -fT --boottime 1000000000 uptime -s
1989-08-15 09:20:30
$

Alternatively to using those convenient options, you may set the /proc/self/timens_offsets file manually, as long as you do it prior to spawning any child. The "manual" equivalent of the above would be something like:

$ sudo unshare -T dash -c 'echo "boottime 1000000000 0" > /proc/self/timens_offsets; uptime -s'
1989-08-15 09:20:30

Here I'm using dash just to use a leaner shell that certainly does not spawn children for its own bootstrap and that also has a builtin echo (so as not to spawn a child for it) to go setting its own timens_offsets file. From then on, all subsequent commands run by dash will see a "warped" boottime.
But not if you rather do exec uptime -s, because that would replace dash with uptime, hence still living in the parent time-namespace, unless you also nsenter(1) beforehand. Consider:

$ sudo unshare -T dash -c 'echo "boottime 1000000000 0" > /proc/self/timens_offsets; exec uptime -s'
2021-04-23 11:07:10
$ sudo unshare -T dash -c 'echo "boottime 1000000000 0" > /proc/self/timens_offsets; exec nsenter --time=/proc/self/ns/time_for_children uptime -s'
1989-08-15 09:20:30
$

The second point that needs clarification is about the date command specifically. Note that as of kernel v5.11 (as well as the current latest v5.12-rc8) only CLOCK_BOOTTIME and CLOCK_MONOTONIC can be "warped" in a new time-namespace, as is also stated in the NOTES of time_namespaces(7):

Note that time namespaces do not virtualize the CLOCK_REALTIME clock. Virtualization of this clock was avoided for reasons of complexity and overhead within the kernel.

However, date specifically acts on the CLOCK_REALTIME clock, as you can notice from your strace date -s ... command. This means that, at the time I'm writing this, the date command is still quite unaffected by the time-namespace it lives in.

A quick example of a command that is indeed affected by the time-namespace, because it refers to CLOCK_MONOTONIC, is dmesg. Try something like:

$ sudo unshare -fT --monotonic 1000000000 dmesg -T

The third point is about this quote:

System calls that adjust the system time will, when called outside of the root time namespace, adjust the namespace-specific offsets instead.

Admittedly that is a bit misleading, because (currently) it is clearly not possible to arbitrarily change a time-namespace's vision of time afterwards, simply because both CLOCK_MONOTONIC and CLOCK_BOOTTIME are in fact unchangeable. Those two clocks are meant to be unchangeable.
The only operation allowed is thus to "bootstrap" them to some offset (related to the initial time namespace) before any process has joined that time-namespace, so that no process will ever experience jumps (not even forwards) of those two clocks. This is why the unshare(2) does not move the calling process into the newly created time-namespace: that way it (or some other process) has the opportunity to specify the "bootstrapping" offsets, which in fact cannot be changed after any one process entered the time-namespace. Computing the correct offset is obviously a delicate operation, and that is a job for a "namespace manager".
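Since the offset can only be "bootstrapped" once, it helps to compute it deliberately rather than pick a round number. The arithmetic is simply target uptime minus current uptime; a small sketch (the helper name is invented here for illustration):

```shell
# Offset to pass to `unshare --boottime` so the new time-namespace's
# uptime appears to be TARGET seconds: offset = target - current uptime.
boottime_offset() {
    target=$1
    current=$(awk '{print int($1)}' /proc/uptime)
    echo $(( target - current ))
}

# e.g.: sudo unshare -fT --boottime "$(boottime_offset 60)" uptime -p
boottime_offset 60
```

On a long-running host the result is a large negative number, which `unshare --boottime` accepts: the namespace's CLOCK_BOOTTIME is simply warped backwards before any process joins it.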
How are time namespaces supposed to be used?
I've been examining the Ubuntu 20.04 and Fedora 32 live images, and saw that the first (ISO 9660) partition is set to cover the entire image (at least in the MBR's partition table; I didn't check the GPT yet). For Ubuntu this is around 2.7 GB; for Fedora it's 1.3 GB. However, after copying these ISOs to a USB stick using dd, gparted shows that the ISO 9660 partition covers the entire 32 GB stick. Is this a gparted bug? The partition layout is a bit complicated, since the ISO 9660 partition is set to start at LBA 0, effectively covering even the MBR itself. I'm still not sure why this partition must cover the entire image though; I guess it's because when burning it to a DVD, the only filesystem you can have is ISO 9660.
We can say that it is a bug in gparted (and a corresponding bug in parted). These tools 'do not understand' the partition structure of ISO files when cloned to USB pendrives (and other mass storage devices). You can look at the drive with modern versions of fdisk and lsblk and get better results.

You can create a partition 'behind' the head of the drive and the image of the ISO file. This partition can be used to store data, and even to serve as a partition for persistence in a persistent live system, for example with Ubuntu 20.04 LTS and Debian 10 live. You can do it yourself with fdisk and mkfs, or more easily with mkusb-plug. The mkusb-plug tools may not work in/with Fedora.

Example where lsblk and fdisk see a cloned live USB drive with Lubuntu:

$ lsblk -o model,name,size,fstype,label,mountpoint /dev/sdc
MODEL          NAME    SIZE FSTYPE  LABEL                     MOUNTPOINT
Voyager GT 3.0 sdc    29,5G iso9660 Lubuntu 20.04.1 LTS amd64
               ├─sdc1  1,7G iso9660 Lubuntu 20.04.1 LTS amd64 /media/sudodus/Lubuntu 20.04.1 LTS amd64
               └─sdc2  3,9M vfat    Lubuntu 20.04.1 LTS amd64

$ LANG=C sudo fdisk -lu /dev/sdc
Disk /dev/sdc: 29,5 GiB, 31641829376 bytes, 61800448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2d846e8c

Device     Boot   Start     End Sectors Size Id Type
/dev/sdc1  *          0 3576319 3576320 1,7G  0 Empty
/dev/sdc2       3541360 3549295    7936 3,9M ef EFI (FAT-12/16/32)
Linux live USB - Why does ISO 9660 partition cover the entire USB stick?
Is it possible to change a file "Birth date" (according to the stat file "Birth" field)? I can change the modification/access time with touch -t 200109110846 file, but can't find the corresponding option for "Birth".
Like the last change time, the birth time isn’t externally controllable. On file systems which support it, the birth timestamp is set when a file is created, and never changes after that. If you want to control it, you need to change the system’s notion of the current date and time, and create a new file.
Change file "Birth date" for ext4 files?
I know that envsubst replaces declared environment variables in the input:

$ echo 'Hello $USER' | envsubst
Hello myusername

What I want is a way to replace the environment variable if it exists; otherwise envsubst (or any other command) should leave the variable string as it is. What I get is:

$ echo 'Hello $USER $UNDEFINED_VARIABLE' | envsubst
Hello myusername

What I want is:

$ echo 'Hello $USER $UNDEFINED_VARIABLE' | somecommand
Hello myusername $UNDEFINED_VARIABLE
If you pass an argument like $USER$PATH to envsubst, then it expands only those variables that are referenced in that argument. So one way could be to pass it all the currently defined environment variables in that format.

With zsh:

echo 'Hello $USER ${USER} $UNDEFINED_VARIABLE' | envsubst \$${(kj:$:)parameters[(R)*export*]}

- $parameters is a special associative array that maps variable names to their type
- $parameters[(R)*export*] expands to all the elements of the associative array whose value contains export
- with the k parameter expansion flag, the key instead of the value is returned
- j:$: joins those elements with $ in between, and we add one at the start

With other shells, you can always resort to perl to get that list:

echo 'Hello $USER ${USER} $UNDEFINED_VARIABLE' | envsubst "$(perl -e 'print "\$$_" for grep /^[_a-zA-Z]\w*$/, keys %ENV')"

Beware both disclose your environment variable names in the output of ps.

Instead, you could also do the whole thing in perl:

perl -pe 's{(?|\$\{([_a-zA-Z]\w*)\}|\$([_a-zA-Z]\w*))}{$ENV{$1}//$&}ge'

Beware it has the same limitations as envsubst in that it won't expand things like ${VAR:-x}, and would expand $HOME in things like \$HOME or $$HOME, which a shell wouldn't.
Replace environment variables in text if they exist
I have been testing Linux 4.18.16-200.fc28.x86_64. My system has 7.7G total RAM, according to free -h. I have default values for the vm.dirty* sysctls: dirty_background_ratio is 10, and dirty_ratio is 20.

Based on everything I've read, I expect Linux to begin writeout of dirty cache when it reaches 10% of RAM: 0.77G. And buffered write() calls should block when dirty cache reaches 20% of RAM: 1.54G.

I ran dd if=/dev/zero of=~/test bs=1M count=2000 and watched the dirty field in atop. While the dd command was running, the dirty value settled at around 0.5G. This is significantly less than the dirty background threshold (0.77G)! How can this be? What am I missing?

dirty_expire_centisecs is 3000, so I don't think that can be the cause. I even tried lowering dirty_expire_centisecs to 100, and dirty_writeback_centisecs to 10, to see if that was limiting dirty. This did not change the result.

I initially wrote these observations as part of this investigation: Why were "USB-stick stall" problems reported in 2013? Why wasn't this problem solved by the existing "No-I/O dirty throttling" code?

I understand that half-way between the two thresholds - 15% = 1.155G - write() calls start being throttled (delayed) on a curve. But no delay is added when underneath this ceiling; the processes generating dirty pages are allowed "free run". As I understand it, the throttling aims to keep the dirty cache somewhere at or above 15%, and to prevent hitting the 20% hard limit. It does not provide a guarantee for every situation, but I'm testing a simple case with one dd command; I think it should simply ratelimit the write() calls to match the writeout speed achieved by the device.

(There is not a simple guarantee because there are some complex exceptions. For example, the throttle code limits the delay it will impose to a maximum of 200ms. But not if the target ratelimit for the process is less than one page per second; in that case it will apply a strict ratelimit.)
References:

- Documentation/sysctl/vm.txt -- Linux v4.18
- No-I/O dirty throttling -- 2011 LWN.net
- "(dirty_background_ratio + dirty_ratio)/2 dirty data in total ... is an amount of dirty data when we start to throttle processes" -- Jan Kara, 2013
- "Users will notice that the applications will get throttled once crossing the global (background + dirty)/2=15% threshold, and then balanced around 17.5%. Before patch, the behavior is to just throttle it at 20% dirtyable memory" -- commit 143dfe8611a6, "writeback: IO-less balance_dirty_pages()"
- "The memory-management subsystem will, by default, try to limit dirty pages to a maximum of 15% of the memory on the system. There is a 'magical function' called balance_dirty_pages() that will, if need be, throttle processes dirtying a lot of pages in order to match the rate at which pages are being dirtied and the rate at which they can be cleaned." -- Writeback and control groups, 2015 LWN.net
- balance_dirty_pages() in Linux 4.18.16
Look at Documentation/sysctl/vm.txt:

dirty_ratio

Contains, as a percentage of total available memory that contains free pages and reclaimable pages, the number of pages at which a process which is generating disk writes will itself start writing out dirty data.

The total available memory is not equal to total system memory. The available memory is calculated in global_dirtyable_memory(). It is equal to the amount of free memory plus the page cache. It does not include swappable pages (i.e. anonymous memory allocations, memory which is not backed by a file). This behaviour applies since Linux 3.14 (2014). Before this change, swappable pages were included in the global_dirtyable_memory() total.

Example statistics while running the dd command:

$ while true; do grep -E '^(Dirty:|Writeback:|MemFree:|Cached:)' /proc/meminfo | tr '\n' ' '; echo; sleep 1; done
MemFree: 1793676 kB Cached: 1280812 kB Dirty:      4 kB Writeback:      0 kB
MemFree: 1240728 kB Cached: 1826644 kB Dirty: 386128 kB Writeback:  67608 kB
MemFree: 1079700 kB Cached: 1983696 kB Dirty: 319812 kB Writeback: 143536 kB
MemFree:  937772 kB Cached: 2121424 kB Dirty: 312048 kB Writeback: 112520 kB
MemFree:  755776 kB Cached: 2298276 kB Dirty: 389828 kB Writeback:  68408 kB
...
MemFree:  136376 kB Cached: 2984308 kB Dirty: 485332 kB Writeback:  51300 kB
MemFree:  101340 kB Cached: 3028996 kB Dirty: 450176 kB Writeback: 119348 kB
MemFree:  122304 kB Cached: 3021836 kB Dirty: 552620 kB Writeback:   8484 kB
MemFree:  101016 kB Cached: 3053628 kB Dirty: 501128 kB Writeback:  61028 kB

The last line shows about 3,150,000 kB of "available" memory, and a total of 562,000 kB of data either being written back or waiting for writeback. That makes it 17.8%. Although the proportion fluctuated above and below that level, it was more often closer to 15%.

EDIT: although these figures look closer, please do not trust this method. It is still not the right calculation and it could give very wrong results. See the followup here.
I found this the hard way: I noticed there is a tracepoint in balance_dirty_pages(), which can be used for "analyzing the dynamics of the throttling algorithms". So I used perf:

$ sudo perf list '*balance_dirty_pages'

List of pre-defined events (to be used in -e):

  writeback:balance_dirty_pages                      [Tracepoint event]
...
$ sudo perf record -e writeback:balance_dirty_pages dd if=/dev/zero of=~/test bs=1M count=2000
$ sudo perf script

It showed that dirty (measured in 4096-byte pages) was lower than I expected, because setpoint was low. I traced the code; it meant there must be a similarly low value for freerun in the tracepoint definition, which is set to (thresh + bg_thresh) / 2 ... and I worked my way back to global_dirtyable_memory().
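The rough percentage used in the meminfo table earlier can be computed directly from /proc/meminfo. This is only the answer's own approximation, (Dirty + Writeback) over (MemFree + Cached); as the EDIT above warns, it is not the real global_dirtyable_memory() calculation:

```shell
# Integer percentage of (Dirty + Writeback) over (MemFree + Cached),
# mirroring the rough arithmetic in the answer -- not the kernel's math.
dirty_pct() {
    awk '/^MemFree:/{f=$2} /^Cached:/{c=$2} /^Dirty:/{d=$2} /^Writeback:/{w=$2}
         END{print int(100 * (d + w) / (f + c))}' /proc/meminfo
}

dirty_pct
```

On an idle machine this prints something near 0; while a large dd is running it should hover around the throttling band the answer describes.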
Writeback cache (`dirty`) seems to be limited to even less than dirty_background_ratio. What is it being limited by? How is this limit calculated?
This is the result of looking at the virtual memory of a process in gdb; I have some questions regarding this:

- Why are some parts of the virtual memory repeated? For example, our program (stack6) and the libc library are repeated 4 times; if they have been partitioned into different parts, why? Why not just put them all together?
- Is the top path (/opt/pro...) the instruction section (text section) of our virtual memory, and does it contain only the instructions?
- Why are the sizes of the 4 libc entries different?
- What's the deal with the offset? If we already have the size and starting address, then what is the offset for?
- Where are the data, bss, kernel and heap sections, and why do some parts of the above picture have no info about them? Is there any better option in gdb that actually shows all the parts?
- Is there any better program than gdb that shows the virtual memory of our process? I just want to have a good visual of actual virtual memory; which debugging program provides the best result?

The sections that I mentioned:
There’s one important piece of information missing from gdb’s output: the pages’ permissions. (They’re shown on Solaris and FreeBSD, but not on Linux.) You can see those by looking at /proc/<pid>/maps; the maps for your Protostar example show:

$ cat /proc/.../maps
08048000-08049000 r-xp 00000000 00:0f 2925    /opt/protostar/bin/stack6
08049000-0804a000 rwxp 00000000 00:0f 2925    /opt/protostar/bin/stack6
b7e96000-b7e97000 rwxp 00000000 00:00 0
b7e97000-b7fd5000 r-xp 00000000 00:0f 759     /lib/libc-2.11.2.so
b7fd5000-b7fd6000 ---p 0013e000 00:0f 759     /lib/libc-2.11.2.so
b7fd6000-b7fd8000 r-xp 0013e000 00:0f 759     /lib/libc-2.11.2.so
b7fd8000-b7fd9000 rwxp 00140000 00:0f 759     /lib/libc-2.11.2.so
b7fd9000-b7fdc000 rwxp 00000000 00:00 0
b7fe0000-b7fe2000 rwxp 00000000 00:00 0
b7fe2000-b7fe3000 r-xp 00000000 00:00 0       [vdso]
b7fe3000-b7ffe000 r-xp 00000000 00:0f 741     /lib/ld-2.11.2.so
b7ffe000-b7fff000 r-xp 0001a000 00:0f 741     /lib/ld-2.11.2.so
b7fff000-b8000000 rwxp 0001b000 00:0f 741     /lib/ld-2.11.2.so
bffeb000-c0000000 rwxp 00000000 00:0f 0       [stack]

(The Protostar example runs in a VM which is easy to hack, presumably to make the exercises tractable: there’s no NX protection, and no ASLR.)

You’ll see above that what appears to be repeated mappings in gdb actually corresponds to different mappings with different permissions. The text segment is mapped read-only and executable; the data segment, BSS and the heap are mapped read-write. Ideally, the data segment, BSS and heap would not be executable, but this example lacks NX support, so they are. Each shared library gets its own mappings for its text segment, data segment and BSS. The fourth mapping is a non-readable, non-writable, non-executable segment typically used to guard against buffer overflows (although given the age of the kernel and C library used here this might be something different).
The offset, when given, indicates the offset of the data within the file, which doesn’t necessarily have much to do with its position in the address space. When loaded, this is subject to alignment constraints; for example, libc-2.11.2.so’s program headers specify two “LOAD” headers:

Type  Offset   VirtAddr   PhysAddr   FileSiz  MemSiz   Flg Align
LOAD  0x000000 0x00000000 0x00000000 0x13d2f4 0x13d2f4 R E 0x1000
LOAD  0x13e1cc 0x0013f1cc 0x0013f1cc 0x027b0  0x0577c  RW  0x1000

(Use readelf -l to see this.) These can result in multiple mappings at the same offset, with different virtual addresses, if the sections mapped to the segments have different protection flags. In stack6’s case:

Type  Offset   VirtAddr   PhysAddr   FileSiz MemSiz  Flg Align
LOAD  0x000000 0x08048000 0x08048000 0x00604 0x00604 R E 0x1000
LOAD  0x000604 0x08049604 0x08049604 0x00114 0x00128 RW  0x1000

(This also explains the small size shown by proc info mappings for stack6: each header requests less than 4KiB, with a 4KiB alignment, so it gets two 4KiB mappings with the same offset at different addresses.)

Blank mappings correspond to anonymous mappings; see man 5 proc for details. You’d need to break on mmap in gdb to determine what they correspond to. You can’t see the kernel mappings (apart from the legacy vsyscall on some architectures) because they don’t matter from the process’s perspective (they’re inaccessible).

I don’t know of a better gdb option; I always use /proc/$$/maps. See How programs get run: ELF binaries for details of the ELF format as read by the kernel, and how it maps to memory allocations; it has pointers to lots more reference material.
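The "two 4KiB mappings" arithmetic can be checked directly: the loader rounds each LOAD segment's start down, and its end up, to the alignment. A small sketch using stack6's numbers (4KiB pages assumed; the helper names are made up here):

```shell
# Round an address down/up to a power-of-two alignment, as the loader does.
align_down() { echo $(( $1 & ~($2 - 1) )); }
align_up()   { echo $(( ($1 + $2 - 1) & ~($2 - 1) )); }

# stack6's first LOAD header: FileSiz 0x604, Align 0x1000 ->
# the segment occupies exactly one 4KiB page:
align_down 0 4096             # start of mapping: 0
align_up   $(( 0x604 )) 4096  # end of mapping: 4096
```

The same rounding applied to the second header (VirtAddr 0x08049604, MemSiz 0x128) also yields a single page, which is why gdb shows two small same-offset mappings.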
Why do some libraries and other parts get repeated in Linux virtual memory as shown by gdb?
On the Ubuntu server I run, I added customisations in /etc/bash.bashrc and as .sh files in /etc/profile.d, to add some useful aliases and functionality for my users. All of them run correctly for my account. However, when logging into other accounts (including ones in the same groups as mine), be it via su or ssh, these customisations aren't loaded. Manually running . /etc/profile does the trick, but afaik this should happen automatically at startup in interactive login shells. Running echo $0 from all accounts returns -bash, so I assume they effectively are in such shells. Why else could this be happening, and how can I fix it?
Anything in ~/.profile and ~/.bashrc is run after /etc/profile and /etc/bash.bashrc. As such, any aliases or variables set in the former will supersede those set in the latter if they share the same name.

For whoever this might help: the specific issue I was facing arose because I'm migrating servers, and I asked my users to back up any important files they had and put them back on the new server. When they did this, they included ~/.bashrc, because it had "the trick that makes python work" (i.e. it was setting the PATH variable to include the anaconda directory), as well as some of the customisations I had made for them on the previous server. This was in conflict with new aliases I'm setting (e.g. the alias for source activate, which became conda activate); also, as of conda 4.4, conda.sh should be added to /etc/profile.d/ rather than manually setting the PATH variable.
/etc/profile not sourced for users
lspci gives me the following information: $ lspci|grep VGA 01:00.0 VGA compatible controller: NVIDIA Corporation GF104 [GeForce GTX 460] (rev a1) This is all correct, but this is generic name of the GPU. But Driver Manager — KDE Control Module — gives me much more interesting information: above all the options of drivers to install it says NVIDIA Corporation N460GTX Cyclone 1GD5/OC This is exactly the name the vendor (MSI) gave it. How can I find out such names without using KDE utilities? I'd prefer a console-based solution. In other words, where does the KCM take this name from?
You can use udevadm to get this information. For example, on my system lspci gives me:

# lspci|grep VGA
01:00.0 VGA compatible controller: NVIDIA Corporation GK106 [GeForce GTX 650 Ti Boost] (rev a1)

Querying udev instead, I get:

# udevadm info -q property -p /sys/bus/pci/devices/0000:01:00.0
DEVPATH=/devices/pci0000:00/0000:00:02.0/0000:01:00.0
DRIVER=nvidia
ID_MODEL_FROM_DATABASE=GK106 [GeForce GTX 650 Ti Boost] (GeForce GTX 650 Ti Boost TwinFrozr II OC)
ID_PCI_CLASS_FROM_DATABASE=Display controller
ID_PCI_INTERFACE_FROM_DATABASE=VGA controller
ID_PCI_SUBCLASS_FROM_DATABASE=VGA compatible controller
ID_VENDOR_FROM_DATABASE=NVIDIA Corporation
MODALIAS=pci:v000010DEd000011C2sv00001462sd00002874bc03sc00i00
PCI_CLASS=30000
PCI_ID=10DE:11C2
PCI_SLOT_NAME=0000:01:00.0
PCI_SUBSYS_ID=1462:2874
SUBSYSTEM=pci
USEC_INITIALIZED=22791556

The ID_MODEL_FROM_DATABASE property gives a more detailed description of the card. As for how to know the value to use for the -p argument, use the first part of the lspci output. For example, if lspci showed 12:34.5, you would use /sys/bus/pci/devices/0000:12:34.5.
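The slot-to-path construction and the property filter can be combined into one step. A small sketch (the helper names are invented here; the second function requires udev and real hardware):

```shell
# Build the sysfs path from an lspci-style slot such as "01:00.0".
pci_syspath() { echo "/sys/bus/pci/devices/0000:$1"; }

# Extract the detailed model string from the hardware database.
pci_model() {
    udevadm info -q property -p "$(pci_syspath "$1")" \
        | sed -n 's/^ID_MODEL_FROM_DATABASE=//p'
}

# pci_model 01:00.0   # e.g. "GK106 [GeForce GTX 650 Ti Boost] (...)"
```

Note that the detailed name only appears when the subsystem vendor/device IDs (PCI_SUBSYS_ID above) are present in the hwdata/udev database; otherwise only the generic name is returned.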
How do I get vendor-given name of my video card?
On a device I get, among others, the following strange entries from mount:

none on net:[4026532603] type proc (rw,relatime)
none on net:[4026532424] type proc (rw,relatime)

Any idea what this could be for? It is the first time I see procfs used for anything but /proc. And what's this "net:"? Something like sockets or pipes? I am running a 3.8 rt kernel on an embedded device with some form of BusyBox-based Linux.

Potentially relevant entries from /proc/mounts:

rootfs / rootfs rw 0 0
none /proc proc rw,relatime 0 0
none net:[4026532603] proc rw,relatime 0 0
none net:[4026532424] proc rw,relatime 0 0
mgmt /sys sysfs rw,relatime 0 0

Update: Thanks to @VenkatC's answer, I now know that it has something to do with namespaces, as the following output confirms:

$ ls -l /proc/$$/ns
total 0
lrwxrwxrwx 1 root root 0 Nov  3 18:59 ipc -> ipc:[4026531839]
lrwxrwxrwx 1 root root 0 Nov  3 18:59 mnt -> mnt:[4026532733]
lrwxrwxrwx 1 root root 0 Nov  3 18:59 net -> net:[4026532603]
lrwxrwxrwx 1 root root 0 Nov  3 18:59 pid -> pid:[4026531836]
lrwxrwxrwx 1 root root 0 Nov  3 18:59 uts -> uts:[4026531838]
These entries are related to network namespaces. From namespaces(7):

The /proc/[pid]/ns/ directory

Each process has a /proc/[pid]/ns/ subdirectory containing one entry for each namespace that supports being manipulated by setns(2):

$ ls -l /proc/$$/ns
total 0
lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 ipc -> ipc:[4026531839]
lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 mnt -> mnt:[4026531840]
lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 net -> net:[4026531956]
lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 pid -> pid:[4026531836]
lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 user -> user:[4026531837]
lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 uts -> uts:[4026531838]

As you see above, the net entry refers to a network namespace. I understand the device in question could be running different processes in multiple namespaces. I was able to create a test namespace and see similar mounts in /proc/mounts:

[cv@cent2 ~]$ ip netns list
netns1
[cv@cent2 ~]$ grep net: /proc/mounts
proc net:[4026532238] proc rw,nosuid,nodev,noexec,relatime 0 0
proc net:[4026532238] proc rw,nosuid,nodev,noexec,relatime 0 0
Strange mount entries, procfs on net:
I recently acquired an old netbook which has no backslash/pipe key. I have successfully remapped the caps lock to backslash using loadkeys. I would like to map SHIFT+CAPS LOCK to the pipe key in a similar way, in particular not using anything like xkb as I want all this to work on my VTs. Is it possible to do this using loadkeys or some other tool?
Found my own answer in the keymaps man page. On my keyboard, CAPS LOCK has keycode 41. To remap it, you need the following keymap line:

keycode 41 = backslash bar

This will map CAPS LOCK to the backslash character, and SHIFT + CAPS LOCK to the bar (pipe) character.
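Putting it together, the remap can be kept in a small keymap file and loaded with loadkeys. The file path here is an arbitrary choice, and the keycode may differ on your keyboard (check it with showkey on a VT):

```shell
# Write the one-line keymap from the answer to a file...
printf 'keycode 41 = backslash bar\n' > /tmp/caps-backslash.map
cat /tmp/caps-backslash.map

# ...then load it on a virtual terminal (needs root and a real VT):
# sudo loadkeys /tmp/caps-backslash.map
```

To make it permanent, most distributions let you point the console keymap setting (e.g. in /etc/vconsole.conf or the distribution's console-setup configuration) at a full keymap that includes this line.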
Can I remap SHIFT+CAPS LOCK in Linux console?
I'm designing an app that will be deployed/installed on a Linux machine (perhaps Archlinux, but it could be any distro; in fact I would actually prefer something lightweight and inside the Debian family if at all possible). This machine will not be a "general purpose" or multiple-application machine: its sole purpose is to run my app at startup and close my app at shutdown. No other end-user apps will be installed on this machine.

I'm looking for a way to ensure that:

- When the user powers on the machine, instead of the normal Ubuntu/OS startup --> BIOS --> splash screen --> login process, they just see a splash screen for my app (while the system and app boot up), and then my app loads with its own look and feel.
- While using the machine, they cannot access any other apps, shells or other parts of the operating system; all they have access to is my app.
- When running, the app takes up the entire screen, and its window cannot be minimized or resized.
- The app (as software) cannot be turned off or killed while running, except...
- Turning off the machine (physically powering it down) shuts down the app gracefully and shuts down the underlying OS as well.

Hence, the end user never knows that the machine is running on top of Linux; to them, the app is the only thing "living" on the machine. This has to be possible, seeing that Android is just a wrapper around Linux, and there are thousands of other devices that just run a single app and nothing more. This will likely be a C binary that launches a Java desktop application as the actual app. Any ideas as to how I can accomplish the items mentioned above?
I would strongly recommend Archlinux for this task. It manages to strike the delicate balance between installing very few "end-user" applications by default and still leaving a sensible system upon which you can build. As for the steps to take to accomplish your goal: after you have Arch installed, fine-tune what services you want to run at startup (off the top of my head, it sounds like you might want fewer ttys). After that, install and configure X (that link also has a link to starting X at login, which you will want). If you want a splash screen at boot, you'll need to set up something like Plymouth. And, finally, systemd tends to handle physical shutdowns (pressing a power button on consumer hardware a single time, for example) pretty gracefully. However, it may be worth considering adding a shutdown function to the app you'll be running. Your $HOME/.xinitrc might be very basic if you do not need a lot of functionality. E.g.:

exec /path/to/your/program/here
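If you would rather have systemd supervise the application directly (for example, so that a crash relaunches it automatically), a minimal unit sketch might look like the following; the unit name and binary path are hypothetical, and the X/display plumbing is left out:

```ini
# /etc/systemd/system/kiosk-app.service  (hypothetical name and path)
[Unit]
Description=Single-purpose kiosk application
After=network.target

[Service]
# Hypothetical launcher binary for the app
ExecStart=/usr/local/bin/your-app
# Relaunch automatically if the app dies
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable kiosk-app.service so it starts at boot.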
Dedicated-purpose, single application linux boxes [duplicate]
1,306,122,705,000
I have two NICs and they are in the same subnet; the output of the ip route command is shown above. The kernel was built WITHOUT policy routing. My question is: when receiving packets, whether on eth0 or eth1, why is all the traffic coming in on eth0? (I use "ifconfig" to watch the RX bytes.) Is it because the kernel doesn't have policy routing, so the local routing table doesn't work? Or is there something I didn't notice? Thanks!
There are several settings which control this behavior. In short, the solution is:

sysctl -w net.ipv4.conf.eth0.arp_announce=2
sysctl -w net.ipv4.conf.eth1.arp_announce=2
sysctl -w net.ipv4.conf.eth0.arp_filter=1
sysctl -w net.ipv4.conf.eth1.arp_filter=1
sysctl -w net.ipv4.conf.eth0.arp_ignore=1
sysctl -w net.ipv4.conf.eth1.arp_ignore=1

The kernel documentation covers what these parameters do:

arp_filter - BOOLEAN

1 - Allows you to have multiple network interfaces on the same subnet, and have the ARPs for each interface be answered based on whether or not the kernel would route a packet from the ARP'd IP out that interface (therefore you must use source based routing for this to work). In other words it allows control of which cards (usually 1) will respond to an arp request.

0 - (default) The kernel can respond to arp requests with addresses from other interfaces. This may seem wrong but it usually makes sense, because it increases the chance of successful communication. IP addresses are owned by the complete host on Linux, not by particular interfaces. Only for more complex setups like load-balancing does this behaviour cause problems.

arp_filter for the interface will be enabled if at least one of conf/{all,interface}/arp_filter is set to TRUE; it will be disabled otherwise.

arp_announce - INTEGER

Define different restriction levels for announcing the local source IP address from IP packets in ARP requests sent on interface:

0 - (default) Use any local address, configured on any interface

1 - Try to avoid local addresses that are not in the target's subnet for this interface. This mode is useful when target hosts reachable via this interface require the source IP address in ARP requests to be part of their logical network configured on the receiving interface. When we generate the request we will check all our subnets that include the target IP and will preserve the source address if it is from such subnet. If there is no such subnet we select source address according to the rules for level 2.

2 - Always use the best local address for this target. In this mode we ignore the source address in the IP packet and try to select local address that we prefer for talks with the target host. Such local address is selected by looking for primary IP addresses on all our subnets on the outgoing interface that include the target IP address. If no suitable local address is found we select the first local address we have on the outgoing interface or on all other interfaces, with the hope we will receive reply for our request and even sometimes no matter the source IP address we announce.

The max value from conf/{all,interface}/arp_announce is used. Increasing the restriction level gives more chance for receiving answer from the resolved target while decreasing the level announces more valid sender's information.

arp_ignore - INTEGER

Define different modes for sending replies in response to received ARP requests that resolve local target IP addresses:

0 - (default): reply for any local target IP address, configured on any interface

1 - reply only if the target IP address is local address configured on the incoming interface

2 - reply only if the target IP address is local address configured on the incoming interface and both with the sender's IP address are part from same subnet on this interface

3 - do not reply for local addresses configured with scope host, only resolutions for global and link addresses are replied

4-7 - reserved

8 - do not reply for all local addresses

The max value from conf/{all,interface}/arp_ignore is used when ARP request is received on the {interface}.
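To make these settings persist across reboots, the same values can go into a sysctl configuration file; the file name below is arbitrary:

```
# /etc/sysctl.d/90-arp-dual-nic.conf  (arbitrary file name)
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.conf.eth1.arp_announce = 2
net.ipv4.conf.eth0.arp_filter = 1
net.ipv4.conf.eth1.arp_filter = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.eth1.arp_ignore = 1
```

The file can be loaded without a reboot via sysctl --system.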
Linux routing for receiving packet
1,306,122,705,000
I'm using Fedora 18. I'm trying to configure my synaptics touchpad; I need tapping and horizontal scrolling inside the Awesome Window Manager. I've created a file at /etc/X11/xorg.conf.d/50-synaptics.conf with the following contents:

Section "InputDevice"
        Identifier "touchpad"
        Driver "synaptics"
        MatchIsTouchpad "on"
        Option "HorizEdgeScroll" "on"
EndSection

But when I start the system, it hangs at different points. One message that I saw frequently is:

Failed to start Wait for Plymouth Boot Screen to Quit.
See 'systemctl status plymouth-quit-wait.service' for details.

If you want a log file or something, just tell me how to get it.
Change the first line in 50-synaptics.conf to Section "InputClass" InputDevice was used to define rules and options for a specific device and I'm not sure if it's still supported. InputClass is a newer section that allows for matching a number of connected devices depending on various match rules. Because you have the line MatchIsTouchpad you should be using InputClass. This way you're telling xorg to match these rules to all touchpads. You can see the Fedora documentation for more details.
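With that one-line change, the whole file becomes:

```
Section "InputClass"
        Identifier "touchpad"
        Driver "synaptics"
        MatchIsTouchpad "on"
        Option "HorizEdgeScroll" "on"
EndSection
```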
X Server won't load when I add a 50-synaptics.conf file inside the xorg.conf.d directory
1,306,122,705,000
How do I, from the command line, confirm on the host that the wireless network connection uses WPA2? The wireless router is set to use WPA2 Personal (WPA2 with a pre-shared key) and AES on the network, and I have added wpa-ssid, wpa-psk and wpa-proto RSN to /etc/network/interfaces, but iwconfig prints Encryption key:off. I am running Debian Wheezy/7.0. I checked the system logs but saw nothing of relevance, and the only current wpa_supplicant.conf on my system is one for D-Bus.
You can check what the access point is broadcasting in its beacons by doing this (you'll need the wireless-tools package):

$ sudo iwlist wlan0 scanning

The output varies by device, and will display every SSID the interface can see. My WPA2 access point gives this (from iwlist's very verbose output):

IE: IEEE 802.11i/WPA2 Version 1
    Group Cipher : TKIP
    Pairwise Ciphers (2) : CCMP TKIP
    Authentication Suites (1) : PSK

You can also interrogate wpa_supplicant directly, which might be more what you're after:

$ sudo wpa_cli status
Selected interface 'wlan0'
bssid=c8:d7:19:01:02:03
ssid=whatever-SSID-you-are-using
id=0
mode=station
pairwise_cipher=CCMP
group_cipher=TKIP         <-- cipher
key_mgmt=WPA2-PSK         <-- key mode
wpa_state=COMPLETED
ip_address=10.20.30.4
address=88:53:2e:01:02:03
How to confirm/verify WiFi is WPA2?
1,306,122,705,000
A server was set to accept three login attempts. The ssh client is checking three identity files before choosing the correct one. The ssh command is as follows: ssh -i ~/.ssh/username [email protected] -v The identity files being tried are as follows:

debug2: key: /path/to/.ssh/identity1
debug2: key: /path/to/.ssh/identity2
debug2: key: /path/to/.ssh/identity3
debug2: key: /path/to/.ssh/username

How can I remove the three incorrect identity files? I have already tried deleting them from the directory, and I also tried updating ~/.ssh/config as follows:

Host xx.xx.xx.xxx
    User username
    IdentityFile ~/.ssh/username

How can I get ssh to use the correct identity file?
See if the identity file is listed:

ssh-add -l

If not, add it:

ssh-add ~/.ssh/username

I was then able to select the proper identity file.
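If ssh still offers agent keys or other default identities before the right one, the standard OpenSSH way to force the configured key is the IdentitiesOnly option in ~/.ssh/config (this is stock ssh_config behaviour, not something from the steps above):

```
Host xx.xx.xx.xxx
    User username
    IdentityFile ~/.ssh/username
    IdentitiesOnly yes
```

With IdentitiesOnly yes, ssh offers only the IdentityFile(s) listed for that host, so the server never sees the incorrect keys and you stay under the attempt limit.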
Changing the order of private keys passed via ssh login
1,306,122,705,000
I was trying to study debugging of the kernel using QEMU. I tried initially and failed because there was no virtual file system. The answers to this post suggest that there should be a virtual file system, but they don't talk about how to create a virtual FS for kernel debugging and how to pass it over to qemu. Can you help me out?
Depending on the distribution you'd like to use, there are various ways to create a file system image, e.g. this article walks you through the laborious way to a "Linux from Scratch" system. In general, you'd either create a QEMU image using qemu-img, fetch some distribution's installation media and use QEMU with the installation medium to prepare the image (this page explains the process for Debian GNU/Linux) or use an image prepared by someone else. This section of the QEMU Wikibook contains all the information you need. Edit: As Gilles' answer to the linked question suggests, you don't need a full-blown root file system for testing, you could just use an initrd image (say, Arch Linux's initrd like here)
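As a rough sketch of that workflow (the image name, sizes, and kernel/initrd paths below are placeholders, not taken from the linked pages):

```shell
# One-time setup: create a disk image and install a distro into it
qemu-img create -f qcow2 rootfs.qcow2 8G
qemu-system-x86_64 -m 2G -hda rootfs.qcow2 -cdrom distro-installer.iso

# Debugging: boot your own kernel against that image, frozen at startup;
# -s starts a gdb stub on tcp::1234, -S waits for the debugger to attach
qemu-system-x86_64 -m 2G -hda rootfs.qcow2 \
    -kernel arch/x86/boot/bzImage \
    -append "root=/dev/sda1 console=ttyS0" -s -S

# Then, from another terminal:
#   gdb vmlinux
#   (gdb) target remote :1234
```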
Debugging Linux Kernel with QEMU
1,306,122,705,000
I have Windows 7 installed on one of my disks. I want to boot this Windows system in my Xen HVM, but I do not want the changes to be permanent. So I want to create a snapshot of my Windows 7 partition. I checked with LVM, but it seems LVM can only create snapshots of LVs it created. I am looking for a more general snapshot method.
You can use the lower level dmsetup command to direct the kernel device mapper to create a snapshot. If you are otherwise using LVM aside from the Windows partition, then create a logical volume to use as the backing store of the snapshot:

lvcreate -n store -L 10g vg
echo 0 `blockdev --getsz /dev/sda1` snapshot-origin /dev/sda1 | dmsetup create origin
echo 0 `blockdev --getsz /dev/sda1` snapshot /dev/mapper/origin /dev/mapper/vg-store N 128 | dmsetup create snap

Now you can point xen to /dev/mapper/snap instead of /dev/sda1 and any changes it makes will be discarded when you finish and use dmsetup remove to remove the snap and origin devices, and lvremove to remove the store volume. If you aren't using LVM then you will need another partition or loop device to use as the backing store instead.
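If you aren't using LVM, a sparse file attached to a loop device can serve as the backing store; the file name and size here are arbitrary:

```shell
# Create a sparse 10G file and attach it to the first free loop device
truncate -s 10G /var/tmp/snapstore
LOOP=$(losetup --show -f /var/tmp/snapstore)

# Then use $LOOP in place of /dev/mapper/vg-store in the snapshot command above
echo 0 `blockdev --getsz /dev/sda1` snapshot /dev/mapper/origin $LOOP N 128 | dmsetup create snap
```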
How to create a snapshot of a physical disk?
1,306,122,705,000
I'm looking to build a medium sized (6TB, mini ITX board) server for personal use. Most importantly, it will serve as a seedbox and will store a whole lot of data. I will be accessing the data over my home network on a regular basis. I was looking around to decide which OS to use and fell upon FreeNAS. It looks pretty cool, but I was wondering if it would be able to do everything a regular server distro could do (package manager, easy updating, web server, etc). Is FreeNAS really only good for an actual NAS setup and not a server? (Bonus points if you also include FreeBSD as a possible server OS).
One of the greatest things about FreeNAS is that it uses ZFS. ZFS has a powerful feature called snapshots: you can take file system snapshots extremely fast, which makes backups easy and lets you take them more often. Also, I am not sure why you would need package managers and a web server on a dedicated storage server; FreeNAS already has web-based administration tools. And I really don't recommend installing anything besides the OS on a storage server, unless it isn't doing mission-critical work. Also read this: http://www.freenas.org/about/news/item/freenas-803-release-notes With FreeNAS you just install it and use it, with nice web administration tools. BUT: personally, I am only now starting to use FreeNAS for more serious things; before that I was just playing with it, so I don't know about hidden pitfalls when using FreeNAS. With Linux you would have more flexibility, but you would also need to configure everything yourself. You have a choice.
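To illustrate the snapshot workflow mentioned above (the pool and dataset names are made up):

```shell
# Take an instantaneous, read-only snapshot of a dataset
zfs snapshot tank/media@before-upgrade

# List existing snapshots, and roll back if something went wrong
zfs list -t snapshot
zfs rollback tank/media@before-upgrade
```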
FreeNAS versus a regular (CentOS/Ubuntu) linux server?
1,306,122,705,000
I just installed VirtualBox 4.1 on my Windows system, and then added a Solaris 11 Express guest and an Oracle Linux 6.1 guest. Both installs went smoothly. But while the Solaris 11 guest has network access, the Oracle Linux box can't connect to the network. Both guests are using the same default network settings (NAT). I'm at a loss -- not sure what I need to configure on the OL6.1 side. To test basic network connectivity, I tried ping www.google.com. No problems with the Solaris guest. On the OL6.1 guest:

# ping www.google.com
ping: unknown host www.google.com
# ping 209.85.175.99
connect: Network is unreachable

Is there some sort of network setup that's required on OL6.1 that wasn't required on Solaris 11? Thanks in advance. Output from ifconfig -a:

# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 08:00:27:8E:A1:42
          inet6 addr: fe80::a00:27ff:fe8e:a142/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:328 (328.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
Given that Oracle Linux is heavily based on Red Hat Enterprise Linux, the network configuration is probably the same. If you didn't need to enter any network parameters during the installation of Solaris, then you're picking up a network address through DHCP. There isn't much call for doing anything else in a NATted virtual machine anyway. To configure a DHCP client on RHEL, edit the file /etc/sysconfig/network-scripts/ifcfg-eth0 to contain the following lines:

DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes

Or you can use Network Manager instead (it'll give you the same kind of network configuration through a desktop icon that Solaris has, and in fact I believe it is more powerful than Solaris's, not that you really need that in a VM).
Oracle Linux 6.1 guest on Virtualbox 4.1 can't connect to network
1,306,122,705,000
I have a single-boot Kali Linux installation on MacBook Air 2018 hardware. After solving some issues to get everything working, I'm stuck on this: I'm trying to disable the startup sound that plays before boot, the typical chime of a MacBook. I found in Apple's docs that it's possible to modify the sound by running, from the terminal, sudo nvram SystemAudioVolume=%80 but the nvram command is not available on Linux; there is, however, another program called nvramtool. Reading the man page of nvramtool, it should be possible to get all coreboot parameters by running nvramtool -a, but the output of the command is:

nvramtool: coreboot table not found. coreboot does not appear to be installed on this system.

So, after investigating a little bit, I found a program called efivar that permits modifying EFI variables. Typing efivar -l | grep -i SystemAudioVolume I get the variable indicated by Apple (SystemAudioVolume) under this name:

7c436110-ab2a-4bbb-a880-fe41995c9f82-SystemAudioVolume

Now, typing efivar --print --name 7c436110-ab2a-4bbb-a880-fe41995c9f82-SystemAudioVolume I get this kind of output:

GUID: 7c436110-ab2a-4bbb-a880-fe41995c9f82
Name: "SystemAudioVolume"
Attributes: Non-Volatile
            Boot Service Access
            Runtime Service Access
Value:
00000000 69 |i |

EDIT I tried creating a bootable USB key of macOS Mojave. Turning on the Mac, inserting the key and holding alt, I can go into the installation process, from where I can get an instance of Terminal.app, so I can try to run nvram from there. But I think, as suggested by Apple's docs, that administrator permissions are needed.
Trying to execute nvram -p I get a list of all variables; executing nvram -p | grep -i SystemAudioVolume I get

7c436110-ab2a-4bbb-a880-fe41995c9f82-SystemAudioVolume=i

Typing nvram SystemAudioVolume=%80 and rerunning nvram -p | grep -i SystemAudioVolume I get

7c436110-ab2a-4bbb-a880-fe41995c9f82-SystemAudioVolume=%80

but after rebooting the sound is still there, and on returning to the installation process and running nvram -p | grep -i SystemAudioVolume I again get

7c436110-ab2a-4bbb-a880-fe41995c9f82-SystemAudioVolume=i

Do you know how to modify the value (if possible)? PS. I can't create the tag efivar because I have less than 300 reputation, but I think it should be added.
Per this article, Disabling MacBook Startup Sound in Linux, several Internet sources suggest that writing EFI variables from Linux may sometimes corrupt your Apple firmware. I didn't research this any further. If you happen to figure out how to successfully write to these variables under Linux, please let everyone know in the comments (in case OS X recovery mode goes missing, you know). Their solution was to simply use nvram to disable the sound via the following command:

nvram SystemAudioVolume=%00

They also used recovery mode to do this, by holding Cmd+Option+R. Another option is to simply write to the variable using printf, a method discussed in the comments of the blog. Note: this method is potentially dangerous; it is advised to try the previous method first.

# Ensure efivars are mounted
mount | grep efivars
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,relatime)

# Remove immutable bit, allows modification
chattr -i /sys/firmware/efi/efivars/SystemAudioVolume-7c436110-ab2a-4bbb-a880-fe41995c9f82

# Set volume to 00
printf "\x07\x00\x00\x00\x00" > /sys/firmware/efi/efivars/SystemAudioVolume-7c436110-ab2a-4bbb-a880-fe41995c9f82

# Display new value
efivar -n 7c436110-ab2a-4bbb-a880-fe41995c9f82-SystemAudioVolume -p
GUID: 7c436110-ab2a-4bbb-a880-fe41995c9f82
Name: "SystemAudioVolume"
Attributes: Non-Volatile
            Boot Service Access
            Runtime Service Access
Value:
00000000 00
Linux - Modify an efi var with efivar
1,306,122,705,000
I am trying to get Blender to work over a setup where Blender itself runs on a remote machine and its UI is presented to a local machine via X11. Detailed information about that is available here. This seems to be a frequently required use case and Blender itself will work, through the blender-softwaregl executable that is provided along the zip archive download option from blender.org but only up to version 2.79. On version 2.80, the same executable seems to be trying to setup a shared memory "object" which requires the MIT-SHM X11 extension. Specifically, Blender's executable complains (in the remote machine terminal) with: error code: 159, request code: 143, minor code: 34, error text: 159 and finally concludes with: Xlib: extension "MIT-SHM" missing on display "localhost:10.0". After this, the X11 window on the local machine remains open, as if the software runs without problems but displays nothing of Blender's GUI. At the same time, as the mouse gets dragged along the local X11 window, the remote terminal still produces XLib: extension "MIT-SHM"... errors. I have tried to find out more information about working with the MIT-SHM (installing, configuring, enabling / disabling, etc) but apart from this, this and this passing reference I have not had much luck. While I am still working on this, I would appreciate anyone's help with the MIT-SHM as I suspect that Blender is not the only piece of software that might make use of it. It seems like a cool X11 feature but I do not think that I have full control over it on my Ubuntu bionic 18.04 that runs on the server of my setup. How can I enable the MIT-SHM so that it shows up in the xdpyinfo listing? Is there a specific set of libraries I should have installed for it to fully work? Is there anything else that is implied by its use? (For example, do I need any extra ports enabled for this functionality to work?)
You cannot use MIT-SHM from a remote X11 client. Just think about its acronym: SHM = shared memory. If the client and the server are running on different machines, they cannot share memory. The extension is meant to speed up X11 requests which transfer a lot of data, by using the SysV shared memory API instead of writing the data through a socket (e.g. XPutImage -> XShmPutImage). Its benefits on modern computers are debatable, IMHO.
Working with the MIT-SHM X11 extension on Linux
1,306,122,705,000
My question is: why does setting Link Aggregation Groups on the smart switch lower the bandwidth between two machines? I have finally achieved higher throughput (bandwidth) between two machines (servers running ubuntu 18.04 server) connected via 2 bonded 10G CAT7 cables through a TP-LINK T1700X-16TS smart switch. The cables are connected to a single intel X550-T2 NIC in each machine (the card has 2 RJ45 ports), which is plugged into a PCI-E x8 slot. The first thing I did was to use the switch's configuration to create static LAG groups containing the two ports that each machine was connected to. This ended up being my first mistake. On each box, I created a bond which contains the two ports on the intel X550-T2 card. I am using netplan (and networkd). E.g.:

network:
  ethernets:
    ens11f0:
      dhcp4: no
      optional: true
    ens11f1:
      dhcp4: no
      optional: true
  bonds:
    bond0:
      mtu: 9000 #1500
      dhcp4: no
      interfaces: [ens11f0,ens11f1]
      addresses: [192.168.0.10/24]
      parameters:
        mode: balance-rr
        transmit-hash-policy: layer3+4 #REV: only good for xor ?
        mii-monitor-interval: 1
        packets-per-slave: 1

Note the 9000-byte MTU (for jumbo packets) and balance-rr. Given these settings, I can now use iperf (iperf3) to test bandwidth between the machines:

iperf3 -s (on machine1)
iperf3 -c machine1 (on machine2)

I get something like 9.9 Gbits per second (very close to the theoretical max of a single 10G connection). Something is wrong though. I'm using round-robin, and I have two 10G cables between the machines (theoretically). I should be able to get 20G bandwidth, right? Wrong. Weirdly, I next deleted the LAG groups from the smart switch. Now, on the linux side I have bonded interfaces, but to the switch there are no bonds (no LAG).
Now I run iperf3 again:

[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.77 GBytes  15.2 Gbits/sec  540    952 KBytes
[  4]   1.00-2.00   sec  1.79 GBytes  15.4 Gbits/sec  758    865 KBytes
[  4]   2.00-3.00   sec  1.84 GBytes  15.8 Gbits/sec  736    454 KBytes
[  4]   3.00-4.00   sec  1.82 GBytes  15.7 Gbits/sec  782    507 KBytes
[  4]   4.00-5.00   sec  1.82 GBytes  15.6 Gbits/sec  582   1.19 MBytes
[  4]   5.00-6.00   sec  1.79 GBytes  15.4 Gbits/sec  773    708 KBytes
[  4]   6.00-7.00   sec  1.84 GBytes  15.8 Gbits/sec  667   1.23 MBytes
[  4]   7.00-8.00   sec  1.77 GBytes  15.2 Gbits/sec  563    585 KBytes
[  4]   8.00-9.00   sec  1.75 GBytes  15.0 Gbits/sec  407    839 KBytes
[  4]   9.00-10.00  sec  1.75 GBytes  15.0 Gbits/sec  438    786 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  17.9 GBytes  15.4 Gbits/sec  6246   sender
[  4]   0.00-10.00  sec  17.9 GBytes  15.4 Gbits/sec         receiver

Huh, now I get 15.4 Gbits/sec (sometimes up to 16.0). The resends worry me (I was getting zero when I had the LAGs set up), but now I am getting at least some advantage. Note, if I disable jumbo packets or set the MTU to 1500, I get only about 4 Gbps to 5 Gbps. Does anyone know why setting the Link Aggregation Groups on the smart switch (which I thought should help) instead limits the performance? On the other hand, not setting them (heck, I could have saved my money and bought an unmanaged switch!) lets me send more packets, which are routed correctly. What is the point of the switch's LAG groups? Am I doing something wrong somewhere? I would like to increase bandwidth even more than 16 Gbps if possible.
Both iperf and nc use a TCP connection by default. Copying it to the local machine (via loopback) gets a speed of 1.5 GiB/sec. Looking at port usage on the switch, I see roughly equal usage on the sender Tx side (70% in the case of iperf, ~55% in the case of the nc file copy), and equal usage between the 2 bonded ports on the Rx side. So, in the current setup (balance-rr, MTU 9000, no LAG groups defined on the switch), I can achieve more than 10Gbps, but only barely. Oddly enough, defining LAG groups on the switch now breaks everything (iperf and file transfers now send 0 bytes). Probably just takes time for it to figure out new switching situation, but I re-ran many times and re-booted / reset the switch several times. So, I'm not sure why that is. edit 2 I actually found mention of striping and balance-rr allowing higher than single port bandwidth in the kernel.org docs. https://www.kernel.org/doc/Documentation/networking/bonding.txt Specifically 12.1.1 MT Bonding Mode Selection for Single Switch Topology This configuration is the easiest to set up and to understand, although you will have to decide which bonding mode best suits your needs. The trade offs for each mode are detailed below: balance-rr: This mode is the only mode that will permit a single TCP/IP connection to stripe traffic across multiple interfaces. It is therefore the only mode that will allow a single TCP/IP stream to utilize more than one interface's worth of throughput. This comes at a cost, however: the striping generally results in peer systems receiving packets out of order, causing TCP/IP's congestion control system to kick in, often by retransmitting segments. It is possible to adjust TCP/IP's congestion limits by altering the net.ipv4.tcp_reordering sysctl parameter. The usual default value is 3. But keep in mind TCP stack is able to automatically increase this when it detects reorders. 
Note that the fraction of packets that will be delivered out of order is highly variable, and is unlikely to be zero. The level of reordering depends upon a variety of factors, including the networking interfaces, the switch, and the topology of the configuration. Speaking in general terms, higher speed network cards produce more reordering (due to factors such as packet coalescing), and a "many to many" topology will reorder at a higher rate than a "many slow to one fast" configuration. Many switches do not support any modes that stripe traffic (instead choosing a port based upon IP or MAC level addresses); for those devices, traffic for a particular connection flowing through the switch to a balance-rr bond will not utilize greater than one interface's worth of bandwidth. If you are utilizing protocols other than TCP/IP, UDP for example, and your application can tolerate out of order delivery, then this mode can allow for single stream datagram performance that scales near linearly as interfaces are added to the bond. This mode requires the switch to have the appropriate ports configured for "etherchannel" or "trunking." So, theoretically, balance-rr will allow me to stripe single TCP connection's packets. But, they may arrive out of order, etc. However, it mentions that most switches do not support the striping. Which seems to be the case with my switch. Watching traffic during a real file transfer, Rx packets (i.e. sending_machine->switch) arrive evenly distributed over both bonded ports. However, Tx packets (switch->receiving_machine) only go out over one of the ports (and achieve 90% or more saturation). By not explicitly setting up the Link Aggregation groups in the switch, I'm able to achieve higher throughput, but I'm not sure how the receiving machine is telling the switch to send one down one port, next down another, etc. Conclusion: The Switch Link Aggregation Groups do not support round-robin (i.e. port striping) for sending of packets. 
So, ignoring them allows me to get high throughput, but actual writing to memory (ramdisk) seems to hit a memory, CPU processing, or packet reordering saturation point. I tried increasing/decreasing reordering, as well as the read and write memory buffers for TCP, using sysctl, with no change in performance. E.g.:

sudo sysctl -w net.ipv4.tcp_reordering=50
sudo sysctl -w net.ipv4.tcp_max_reordering=1000
sudo sysctl -w net.core.rmem_default=800000000
sudo sysctl -w net.core.wmem_default=800000000
sudo sysctl -w net.core.rmem_max=800000000
sudo sysctl -w net.core.wmem_max=800000000
sudo sysctl -w net.ipv4.tcp_rmem=800000000
sudo sysctl -w net.ipv4.tcp_wmem=800000000

The only change in performance I notice is between machines with: 1) a stronger processor (slightly higher single-core clock; doesn't care about L3 cache), and 2) faster memory? (or fewer DIMMs for the same amount of memory). This seems to imply that I am hitting a bus, CPU, or memory read/write limit. A simple "copy" locally within a ramdisk (e.g. dd if=file1 of=file2 bs=1M) results in an optimal speed of roughly 2.3 GiB/sec at 2.6 GHz, 2.2 GiB/sec at 2.4 GHz, and 2.0 GiB/sec at 2.2 GHz. The second one furthermore has slower memory, but it doesn't seem to matter. All TCP copies TO the 2.6 GHz ramdisk from the slower machines go at 1.15 GiB/s, from 2.4 GHz go at 1.30 GiB/s, from the fastest machine to the middle machine go at 1.02 GiB/s, to the slower machine (with faster memory) at 1.03 GiB/s, etc. The biggest effect seems to be the single-core CPU and the memory clock on the receiving end. I have not compared BIOS settings, but all are running the same BIOS versions and use the same motherboards, eth cards, etc. Rearranging CAT7 cables or switch ports does not seem to have an effect. I did find http://louwrentius.com/achieving-340-mbs-network-file-transfers-using-linux-bonding.html which does this with four 1GbE connections. I tried setting up a separate VLAN, but it did not work (did not increase speed).
Finally, sending to self using the same method seems to invoke a 0.3 GiB - 0.45 GiB/sec penalty. So, my observed values are not that much lower than the "theoretical" max for this method. edit 3 (adding more info for posterity) Even with balance-rr and LAG set on switch, I just realized that despite seeing 9.9 Gbps, retries in balance-rr are actually higher than in the case without the LAG! 2500 per second average with the groups, 1000 average without! However, with groups set, I get average real file transfer speed memory to memory of 1.15 GiB/s (9.9 Gbps). If I only plug a single port in per machine, I see the same speed (1.15 GiB/s), and very few retries. If I switch the mode to balance-xor, I get 1.15 GiB/s (9.9 Gbps), and no resends. So, balance-rr mode is trying to stripe on the output to switch side of things, and that is causing a lot of out-of-order packets I guess. Since my max (real-world) performance for memory-to-memory transfers is similar or higher using switch LAG and balance-xor, while having less resends (congestion), I am using that. However, since the eventual goal is NFS and MPI send, I will need to somehow find a way to saturate and measure network speed in those situations, which may depend upon how MPI connections are implemented... Final Edit I moved back to using balance-rr (with no LAG set on the switch side), since XOR will always hash to the same port for the same two peers. So it will only ever use one of the ports. Using balance-rr, if I run 2 or more (ram to ram) file transfers simultaneously, I can get net 18-19 Gbps, quite close to theoretic max of 20 Gbps. Final Final Edit (after using for a few months) I had to set the LAG groups in the switch, because I was getting errors where I could no longer SSH into machines, I assume because of packets getting confused where they were supposed to go with some addressing stuff. Now, I get only the maximum per connection of 10GBPS, but it is stable.
As I mentioned in my final edit, the reason that I am not able to get higher bandwidth using round-robin bonding when the switch has Link Aggregation Groups set is that switch Link Aggregation Groups do not do round-robin striping of packets on a single TCP connection, whereas the Linux bonding driver does. This is mentioned in the kernel.org docs: https://www.kernel.org/doc/Documentation/networking/bonding.txt

    12.1.1 MT Bonding Mode Selection for Single Switch Topology

    This configuration is the easiest to set up and to understand, although you
    will have to decide which bonding mode best suits your needs. The trade offs
    for each mode are detailed below:

    balance-rr: This mode is the only mode that will permit a single TCP/IP
    connection to stripe traffic across multiple interfaces. It is therefore the
    only mode that will allow a single TCP/IP stream to utilize more than one
    interface's worth of throughput. This comes at a cost, however: the striping
    generally results in peer systems receiving packets out of order, causing
    TCP/IP's congestion control system to kick in, often by retransmitting
    segments.

    It is possible to adjust TCP/IP's congestion limits by altering the
    net.ipv4.tcp_reordering sysctl parameter. The usual default value is 3. But
    keep in mind TCP stack is able to automatically increase this when it
    detects reorders.

    Note that the fraction of packets that will be delivered out of order is
    highly variable, and is unlikely to be zero. The level of reordering depends
    upon a variety of factors, including the networking interfaces, the switch,
    and the topology of the configuration. Speaking in general terms, higher
    speed network cards produce more reordering (due to factors such as packet
    coalescing), and a "many to many" topology will reorder at a higher rate
    than a "many slow to one fast" configuration.
    Many switches do not support any modes that stripe traffic (instead
    choosing a port based upon IP or MAC level addresses); for those devices,
    traffic for a particular connection flowing through the switch to a
    balance-rr bond will not utilize greater than one interface's worth of
    bandwidth.

    If you are utilizing protocols other than TCP/IP, UDP for example, and
    your application can tolerate out of order delivery, then this mode can
    allow for single stream datagram performance that scales near linearly as
    interfaces are added to the bond.

    This mode requires the switch to have the appropriate ports configured for
    "etherchannel" or "trunking."

The last note about having ports configured for "trunking" is odd, since when I make the ports into a LAG, all outgoing Tx from the switch goes down a single port. Removing the LAG makes it send and receive half and half on each port, but results in many resends, I assume due to out-of-order packets. However, I still get an increase in bandwidth.
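For reference, the balance-rr bond itself can be declared persistently. A hedged example in Debian ifupdown syntax (the interface names and addresses are placeholders, and the ifenslave package must be installed for the bond-* options to work):

```
# /etc/network/interfaces fragment (example values only)
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode balance-rr
    bond-miimon 100
```

With this in place the bond comes up at boot; the switch-side LAG (or lack of one) is configured separately, as discussed above.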
Link Aggregation (Bonding) for bandwidth does not work when Link Aggregation Groups (LAG) set on smart switch
1,306,122,705,000
Consider the following scenario. You have a slow read-only medium (e.g. a write-protected thumb drive, CD/DVD, whatever) that you installed Linux on (not a live CD per se, but a normal build), and use it on a computer with literally no other form of storage. It's slow because it is USB 2. The root filesystem is mounted as overlayfs so that it's "writeable" for logs and a lot of other temporary work you do, but all writes go to RAM (a tmpfs upperdir). A pretty typical scenario for a live-distro situation.

Since there is no other form of storage, swap is mounted on zram. So when Linux decides to swap, it compresses those pages and stores them still in RAM, but at least they're compressed. This is actually decent, since the RAM of most applications is easily compressible (RAM contents are usually very redundant, since RAM is meant to be "fast"). This works well for application memory, but not for tmpfs.

Here's the thing: zram is fast, incredibly so. The thumb drive, on the other hand, is slow. Let's say it's 20 MiB/s, which is really slow in comparison. You can see the problem and why the kernel will not do the right thing here.

Note that this question is not a duplicate of How to make files inside TMPFS more likely to swap. The question is pretty much the same, but I'm not satisfied with the answer in that question whatsoever, sorry. The kernel definitely does not do the "right thing" by itself, regardless of how smart the people designing it are. I dislike it when people don't understand the situation and think they know better. They cater to the average case. That's why Linux is so tweakable: no matter how smart it is, it can't predict what it will be used for.

For example, I can (and did) set vm.swappiness (/proc/sys/vm/swappiness) to 100, which tells it to swap application memory aggressively and keep the file cache. This option is nice, but it's not everything, unfortunately.
I want it to prioritize keeping the file cache over any other RAM use when dealing with swap. That's because dropping the file cache results in having to read back from the slow 20 MiB/s drive, which is much, much slower than swapping to zram. For applications vm.swappiness works, but not for tmpfs.

tmpfs is mounted as page cache, so it has the same priority as the file cache. If you read a file from tmpfs, it will prioritize it over an older file-cache entry (most recently used). But that's bad: the kernel clearly does not do the right thing here. It should consider that swapping tmpfs to zram is much better even if it's "used more recently" than the file cache, because reading from the drive is very slow. So I need to explicitly tell it to swap from tmpfs more often compared to the file cache: that it should preserve the file cache more than tmpfs. There are so many options in /proc/sys/vm but nothing for this that I could find. Disappointing, really.

Failing that, is there a way to tell the kernel that some devices/drives are just that much slower than others, and that it should prefer to keep the cache for them more than for others? tmpfs and zram are fast. The thumb drive is not. Can I tell the kernel this information? It can't do "the right thing" by itself if it treats all drives the same. It's much faster to swap tmpfs to a fast drive like zram than to drop caches from a slow drive, even if the tmpfs is used more recently.

When it runs out of free memory it will start to either swap application memory (good, due to swappiness) or drop old file caches (bad). If I end up re-reading those files, it will be very slow. Much slower than if it had decided to swap some tmpfs, even if recently used, and then read from it again. Because zram is an order of magnitude faster.
Increasing the swappiness value makes the kernel more willing to swap tmpfs pages, and less willing to evict cached pages from the other filesystems which are not backed by swap. Since zram swap is faster than your thumb drive, you ideally want to increase swappiness above 100. This is only possible in kernel version 5.8 or above; Linux 5.8 allows swappiness to be set to a maximum of 200.

    For in-memory swap, like zram or zswap, [...] values beyond 100 can be
    considered. For example, if the random IO against the swap device is on
    average 2x faster than IO from the filesystem, swappiness should be 133
    (x + 2x = 200, 2x = 133.33).
    -- Documentation/admin-guide/sysctl/vm.rst

Further reading

tmpfs is treated the same as any other swappable memory

See the kernel commit "vmscan: split LRU lists into anon & file sets":

    Split the LRU lists in two, one set for pages that are backed by real file
    systems ("file") and one for pages that are backed by memory and swap
    ("anon").

The latter includes tmpfs. See also the code at linux-4.16/mm/vmscan.c:2108:

    /*
     * Determine how aggressively the anon and file LRU lists should be
     * scanned.  The relative value of each set of LRU lists is determined
     * by looking at the fraction of the pages scanned we did rotate back
     * onto the active list instead of evict.
     *
     * nr[0] = anon inactive pages to scan; nr[1] = anon active pages to scan
     * nr[2] = file inactive pages to scan; nr[3] = file active pages to scan
     */
    static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
                   struct scan_control *sc, unsigned long *nr,
                   unsigned long *lru_pages)
    {
        int swappiness = mem_cgroup_swappiness(memcg);
        ...
        /*
         * With swappiness at 100, anonymous and file have the same priority.
         * This scanning priority is essentially the inverse of IO cost.
         */
        anon_prio = swappiness;
        file_prio = 200 - anon_prio;

Linux 5.8 allows swappiness values up to 200

See "mm: allow swappiness that prefers reclaiming anon over the file workingset":

    With the advent of fast random IO devices (SSDs, PMEM) and in-memory swap
    devices such as zswap, it's possible for swap to be much faster than
    filesystems, and for swapping to be preferable over thrashing filesystem
    caches. Allow setting swappiness - which defines the rough relative IO
    cost of cache misses between page cache and swap-backed pages - to reflect
    such situations by making the swap-preferred range configurable.

This was part of a series of patches in Linux 5.8. In previous versions, Linux "mostly goes for page cache and defers swapping until the VM is under significant memory pressure". This is because "the high seek cost of rotational drives under which the algorithm evolved also meant that mistakes could quickly result in lockups from too aggressive swapping (which is predominantly random IO)." This series sets out to address this.

    Since commit a528910e12ec ("mm: thrash detection-based file cache sizing")
    we have exact tracking of refault IO - the ultimate cost of reclaiming the
    wrong pages. This allows us to use an IO cost based balancing model that
    is more aggressive about scanning anonymous memory when the cache is
    thrashing, while being able to avoid unnecessary swap storms. These
    patches base the LRU balance on the rate of refaults on each list, times
    the relative IO cost between swap device and filesystem (swappiness), in
    order to optimize reclaim for least IO cost incurred.
    -- [PATCH 00/14] mm: balance LRU lists based on relative thrashing v2
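The quoted rule of thumb generalizes: if swap I/O is r times faster than filesystem I/O, the two shares should satisfy x + r·x = 200, so swappiness = 200·r/(r + 1). A quick sketch of the arithmetic (the ratio here is an assumed example value, not a measurement):

```shell
# swappiness for a swap device assumed r times faster than the filesystem
ratio=2   # example: zram assumed ~2x faster than the backing drive
awk -v r="$ratio" 'BEGIN { printf "vm.swappiness = %d\n", 200 * r / (r + 1) }'
# prints: vm.swappiness = 133
```

The result would then be applied with `sysctl vm.swappiness=133` (kernel 5.8+ for values above 100).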
Make or force tmpfs to swap before the file cache
1,306,122,705,000
Is there a command to check if the container services are running on a Linux system? Someone suggested unshare but I am not sure if that is the best way to do it.
UPDATE: Upon re-reading your question, I realized that I had answered a slightly different one. You want to know whether a service is running, and I had originally answered how to tell if a package was installed. To answer your actual question, it depends upon your init system.

systemd - the basic command is systemctl, which will list all services and their states, so you could either browse the list manually or pipe it through a grep command, like so:

    systemctl | grep -e cgmanager -e cgproxy -e cgroupfs-mount

Or, as user muru suggests in the comments, simply systemctl status 'cg*'.

SysV init - the basic command is service --status-all and the grep command would be:

    service --status-all 2>&1 | grep -e cgmanager -e cgproxy -e cgroupfs-mount

Note that in this case, running services are denoted with a [+] prefix symbol. Also note that for the grep to work, the redirect 2>&1 must be applied to the service command.

ORIGINAL ANSWER: Maybe the simplest thing to do is try man cgroups. If that brings up a documentation page, then your host has the package installed. However, some installs are 'stingy' and don't install man pages. You could try cgm and see if that produces output. Most installs of cgroups will include that command, but not necessarily. You could look up the package list of your host distribution. On Debian derivatives, that would be dpkg -l | grep cgroup, but occasionally a system will restrict access to root or sudo for dpkg. There will be a lot of other ways.
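Independent of any installed packages or init system, the kernel itself reports its cgroup support in /proc/cgroups. A small sketch (assumes a Linux host; the message strings are just for illustration):

```shell
# List the cgroup controllers this kernel knows about, if any.
if [ -r /proc/cgroups ]; then
    echo "cgroup controllers known to this kernel:"
    awk 'NR > 1 { print $1 }' /proc/cgroups   # skip the header line
else
    echo "no cgroup support visible (no /proc/cgroups)"
fi
```

You can also check whether a cgroup filesystem is actually mounted with `mount | grep cgroup` or by looking for `/sys/fs/cgroup`.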
How can I check if cgroups are available on my Linux host?
1,306,122,705,000
I don't know if "single user mode" is the correct term for that but here I go: On the GRUB menu, I pressed E to edit the run configurations. To the line that starts with linux, I have appended the following:

    rw init=/bin/bash

and pressed F10. The computer boots up to a root shell without asking any password. The problem is, signals are not working. For example, when I run a command, I cannot exit from that command by pressing Ctrl + C. Is this expected? If yes, what is the reason for this and how can I fix it? Is it related to the terminal emulator in single user mode?
Single user mode has not been the right term for quite some time. In the 1990s, what used to be single user mode split into emergency mode and rescue mode. You are not in fact using either one.

What you are actually doing is a poor idea, because it involves running a program as process #1 that is not designed to do the jobs that process #1 actually needs to do. You would be better off using emergency or rescue mode, the latter of which is what the accepted answer pointed to by the answer you reference itself invokes.

Yes, signals will act oddly. Process #1 has special semantics for signals, for starters, which is one of the several reasons that init=/bin/bash is a poor idea. Furthermore, job-control shells cannot do job control when /dev/console is their standard I/O but nothing has set up a proper session with a controlling terminal, as the Bourne Again shell actually told you as soon as it started up.

It is possible with some simple chain-loading tools to set up a proper session with a controlling terminal, and thence enable job control and signal delivery to a foreground process group, but that does not fix all of the other things that will go wrong with /bin/bash as the process #1 program, because you have to carefully do them explicitly, by hand. Just use rescue mode or emergency mode.

Further reading

- Jonathan de Boyne Pollard (2016). The gen on emergency and rescue mode bootstrap. Frequently Given Answers.
- https://unix.stackexchange.com/a/251228/5132
- https://unix.stackexchange.com/a/197472/5132
- https://unix.stackexchange.com/a/392612/5132
- Jonathan de Boyne Pollard. open-controlling-tty. nosh toolset. Softwares.
- Jonathan de Boyne Pollard. setsid. nosh toolset. Softwares.
- Jonathan de Boyne Pollard. vc-get-tty. nosh toolset. Softwares.
Ctrl + C not working on single user mode on Linux
1,306,122,705,000
I need a command that deletes all files, folders and sub-folders that were not updated for longer than 31 days. I tried this one:

    find . -mindepth 1 -mtime +31 -exec rm -rf "{}" \;

But if I have a hierarchy like this:

    .
    ├── old_sub_folder1
    └── old_sub_folder2
        ├── old_file
        └── old_sub_folder3
            └── new_file

where old_* are old folders/files and new_file is a new file, this command will delete all contents, because old_sub_folder2's date was not updated after new_file was created. I need a command that would not delete old_sub_folder2/old_sub_folder3/new_file.
The problem is that you added the -r option to your rm command. This will delete the folders even if they are not empty. You need to do this in two steps.

Delete only the old files:

    find . -type f -mtime +31 -delete

To delete any old folders, if they are empty, we can take a peek here, and tweak it a bit:

    find . -type d -empty -mtime +31 -delete
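The two-step approach can be rehearsed safely in a scratch directory before pointing it at real data. A sketch assuming GNU find and touch (the tree mirrors the one in the question; `touch -d` backdates the "old" entries):

```shell
# Build the example tree: one recent file deep inside otherwise old entries.
tmp=$(mktemp -d)
mkdir -p "$tmp/old_sub_folder1" "$tmp/old_sub_folder2/old_sub_folder3"
touch "$tmp/old_sub_folder2/old_sub_folder3/new_file"        # recent file
touch -d '40 days ago' "$tmp/old_sub_folder2/old_file"       # old file
touch -d '40 days ago' "$tmp/old_sub_folder1" \
    "$tmp/old_sub_folder2" "$tmp/old_sub_folder2/old_sub_folder3"

find "$tmp" -type f -mtime +31 -delete                       # step 1: old files
find "$tmp" -mindepth 1 -type d -empty -mtime +31 -delete    # step 2: empty old dirs

find "$tmp" -mindepth 1    # only the path down to new_file remains
rm -rf "$tmp"
```

old_file and the empty old_sub_folder1 are removed, while old_sub_folder2/old_sub_folder3/new_file and its (old but non-empty) parent directories survive.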
Command that deletes all old files, folders and sub-folders
1,306,122,705,000
I prefer a Dvorak layout, so I have a nice USB Das Keyboard and I have assigned it a layout that works for me on the virtual console and in X11. I used loadkeys and install-keymap to arrange for it to take effect from boot onwards, and I'm very happy with that.

However, most of my colleagues prefer a Qwerty layout, and this is an impediment to pair programming. I do have a selection of usable USB keyboards that I could attach for this task, but they all pick up my Dvorak layout when they are plugged in.

Is there a way that I can tell udev (or even just X11) to use a Qwerty layout for my additional keyboards? They have distinct USB vendor and device identifiers that I can use to distinguish them.

My system is Debian Testing, with udev version 232. It got infected with systemd when I reinstalled after a disk failure, so the standard (SysV-style) approaches I'm used to won't work. The similar question Different keyboard layout for each keyboard didn't provide any help to me.
General background: keys get assigned three different sets of "codes": first the scancode (an arbitrary hardware-dependent number that represents the key on the keyboard), then the keycode (a more abstract number that represents a particular key, e.g. Shift or 1/!), and finally the keysym (key symbol, the actual symbol like á produced by a key or combination of keys).

I recently learned that each /dev/input/event* device carries its own scancode-to-keycode mapping. These mappings can be read and altered by ioctls (EVIOCGKEYCODE_V2, EVIOCSKEYCODE_V2), but funnily enough, there don't seem to be general tools available to access these mappings (I quickly wrote a simple C program to dump it, as I was curious).

Both the Linux kernel and X then map keycodes to keysyms. For the kernel, there's just one global mapping, the kbd handler (or at least one global mapping for every virtual console; I'm not sure if different virtual consoles can have different mappings). X maintains a mapping for each device.

So if you want differences between keyboards on the virtual console, the only choice left is to use the scancode-to-keycode mapping. For Dvorak vs. Qwerty this might actually work, as long as you just remap letter keys and don't want to remap symbols in shifted and non-shifted state differently. More recent versions of udev use a hardware database (/etc/udev/hwdb.d) to initialize special scancode-to-keycode mappings, and you can add your own custom versions.

The alternative is to live with either Dvorak or Qwerty on the virtual console, but set up X to use different keycode-to-keysym mappings for each, as described in the answer you linked that didn't help you (probably because you don't want this variant). The advantage of this method is that you can also map symbols, dead keys, compositions etc. differently.
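For the hwdb route, a per-device remapping is just a small text fragment keyed on the keyboard's USB IDs. A hedged sketch (the vendor/product IDs and the two remapped keys are made-up placeholders; `lsusb` shows the real IDs to match on):

```
# /etc/udev/hwdb.d/90-guest-keyboard.hwdb
# Match evdev devices on the USB bus (b0003) with this vendor/product ID.
evdev:input:b0003v04D9p0169*
 KEYBOARD_KEY_70004=b
 KEYBOARD_KEY_70005=x
```

Each `KEYBOARD_KEY_<scancode>=<keycode>` line remaps one scancode (for USB keyboards, the HID usage code in hex) to a kernel keycode name. After editing, recompile the database and re-trigger the device with `systemd-hwdb update` followed by `udevadm trigger` (or simply replug the keyboard).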
Can I give two keyboards different layouts?
1,306,122,705,000
I need to monitor a shared folder, in this specific case the host is windows and the guest is Ubuntu linux, for new files or a file that has changed. Ideally the solution should work independent of the host machine or the machine that puts a file into the shared directory. The new file will be the input for a different process. The inotifywait set of tools don't detect new files if the files are created by the host and put into the shared folder. What are my options?
You may be able to use one of the polling tools that pre-date dnotify and inotify: gamin or fam, along with something like fileschanged, which is an inotifywait-like CLI tool. The gamin and fam projects are related, and both quite old (though gamin slightly less so).

For simple and portable tasks I have used something like this via cron:

    if mkdir /var/lock/mylock; then
      ( cd /mnt/mypath; find . -type f -mmin +2 ) | myprocess
      rmdir /var/lock/mylock
    else
      logger -p local0.notice "mylock found, skipping run"
    fi

This uses primitive locking, and a GNU find conditional to only find files older than two minutes, so I could be sure that files were completely written. In my case myprocess was an rsync --remove-source-files --files-from=- so that files were removed once they were processed. This approach also lets you use find -print0 / xargs -0 / rsync -0 to handle troublesome filenames.

If you must keep all (old and new) files in the same directory hierarchy, then building directory-listing snapshots and diff-ing them might also work for you:

    if mkdir /var/lock/mylock; then
      (
        export LC_COLLATE=C  # for sort
        cd /mnt/mypath
        find . -type f -a \! -name ".dirlist.*" -printf '%p\0' |
          while read -d '' file; do
            printf "%q\n" "${file}"
          done > .dirlist.new
        [[ -f .dirlist.old ]] && {
          comm -13 <(sort .dirlist.old) <(sort .dirlist.new) |
            while read -r file; do
              myprocess "${file}"
            done
        }
        mv .dirlist.new .dirlist.old
      )
      rmdir /var/lock/mylock
    else
      logger -p local0.notice "mylock found, skipping run"
    fi

This bash script:

- uses find -printf to print a \0 (nul) delimited list of files
- uses read -d '' to process that list, and printf %q to escape filenames where necessary
- compares the new and previous .dirlist files
- invokes myprocess with each new file (safely quoted)

(Also handling modified files would require slightly more effort; a double-line format with find ... -printf '%p\0%s %Ts\0' could be used, with associated changes to the while loops.)
script to monitor for new files in a shared folder (windows host, linux guest)
1,471,019,294,000
I am running Wine on a Linux server so as to run some old Windows applications. I now need to write a script to make sure they are running. Is it possible to create an SSH connection to the server and start the application?

E.g. if I am on the desktop, open a terminal window and run

    wine "Z:\home\user\Desktop\application"

the application opens. But if I connect by SSH and run

    wine "Z:\home\user\Desktop\application"

I get:

    Application tried to create a window, but no driver could be loaded.
    Make sure that your X server is running and that $DISPLAY is set correctly.
    err:systray:initialize_systray Could not create tray window
    Application tried to create a window, but no driver could be loaded.
    Make sure that your X server is running and that $DISPLAY is set correctly.

I'm assuming I need to tell it where to start the application rather than just starting it, but can't see how to do this?

ADDITIONAL INFO: I am currently working on a Windows PC, and connecting with PuTTY to the Linux/Wine server. (I also have an RDP connection so I can see the desktop.) Long term, I will be running the script on another Linux server (MgmtSrv) that will make an SSH connection to the Linux/Wine server to manage it. The MgmtSrv does not have Wine installed, and does not have an X display set up.
As you surmise, you need to tell Wine where to display its applications. Since your Wine server has an X display, it's probably :0:

    DISPLAY=:0 wine ...

should do the trick (assuming your X authentication cookies are OK; if they're not you'll get an Invalid MIT-MAGIC-COOKIE error).
How to start Application in Wine From a terminal window
1,471,019,294,000
Given the partition device file /dev/sdh1, I need to find out the label of this device. dmesg doesn't mention its label, while GParted reveals that it is called H2N_SD.

I need to build a way to be able to run something similar to:

    $ partlabel /dev/sdh1
    H2N_SD

This question is nearly the opposite of getting device name & mount point from label.
Use blkid:

    $ blkid -s LABEL -o value /dev/sdh1
    H2N_SD
Get label of Linux storage partition device file
1,471,019,294,000
I made the same mistake as in this question: Debian chroot blocking PTTYs on host I mounted a "devpts" filesystem inside a chroot, and now urxvt can't create ptys. Oddly enough xterm still can. Remounting /dev/pts doesn't fix the issue. What can I do to get my system working as normal again without rebooting?
Thanks to the comment by @mikeserv I've found out how to revive it. I have only tested this on Linux 4.0.7, so for much earlier or much later versions it may not work.

    mount /dev/pts -o remount,gid=5,mode=620

Mounting a devpts filesystem in a chroot without using the newinstance option caused it to mount the same "instance" of /dev/pts, containing the same ptys. Passing no gid argument, according to the man page, causes new ptys to be created with the same gid as the process that spawned it. Apparently this (lack of) mount option affects the entire devpts instance, so the original /dev/pts was no longer assigning ptys to the tty group. I still don't know why urxvt needs its ptys to be in that group while xterm doesn't.

Some more notes on this:

- It seems normal that /dev/pts/ptmx has mode 000 (root:root) while /dev/ptmx has mode 666 (root:tty). They do however point to the same character device, so setting ptmxmode seems unnecessary but harmless.
- The default mode (600) seems to work, but the tty gets created with mode 620 anyway. Something might be changing its mode. When my system boots it passes mode=620, overriding the default mode, so I've put that in the command line above in the interest of better restoring the default functionality of /dev/pts.
- Don't set uid. It will lead you either to security problems or to the same problem of terminals not spawning.
- Adding newinstance is optional, but can improve security. With this option, containers can't mount the "real" /dev/pts because the host system isn't using it. If this is used, you should ensure ptmxmode=666 and that /dev/ptmx is a symlink to pts/ptmx.
- Mounting a new devpts instance over /dev/pts may cause strange behaviours in existing terminals (e.g. gpg not working), so you should restart those if you use this option.
How can I fix /dev/pts after mounting it?
1,471,019,294,000
I love file. I use it multiple times a day. I love it so much that I install Cygwin on my Windows machines just so I can use it. Anyway, in going through older files on my system, I find there are many files that just report "data" from the file command. Understandably. Some of these files, however, do have an indicator in their header of what kind of file they are, but are not found in the magic file database yet.

My questions are three-fold:

1. Is there an online repository of magic file definitions that I can use to supplement or update the default ones that came with my OS? (My folder /usr/share/file/magic shows the most recent entry as almost one year ago, and I know people are continually updating these definitions.)
2. How can I submit a new definition that I've developed so that the rest of the *nix community can benefit? The online repo?
3. Is it as simple as dropping the magic definition file in the folder, and my OS will magically find it, or do I have to somehow rebuild the definition library? Do I have to do anything with the magic.mgc file, or just the folder of individual definitions?

Thank you ahead of time for your help.
In the past I've had changes included in the magic file by submitting a Debian bug report, but it's probably faster to submit them upstream directly. In answer to your questions:

1. The latest released source can be found here - there's a link to a mirror of the source repo there.
2. Yes, I believe either submitting a bug report or emailing the mailing list should be all that's needed to add a file definition.
3. You can create your own magic file and point file to it by using the -m option.
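To expand on the third point: a custom definition is a plain-text file in magic(5) format, which file can read directly via -m with no rebuild needed. A hedged sketch (the "MYFM" format is invented purely for illustration):

```
# mymagic - detect a made-up format whose files begin with the bytes
# "MYFM" followed by a one-byte version number.
0       string  MYFM    My example data format
>4      byte    x       \b, version %d
```

You would test it with `file -m ./mymagic somefile`; if you want a compiled database to sit alongside the system one, `file -C -m mymagic` produces mymagic.mgc.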
Update magic file list and/or submit my own
1,471,019,294,000
I want to run a task with limits on the kernel objects that it will indirectly trigger. Note that this is not about the memory, threads, etc. used by the application, but about memory used by the kernel. Specifically, I want to limit the amount of inode cache that the task can use.

My motivating example is updatedb. It can use a considerable amount of inode cache, for things that mostly won't be needed afterwards. Specifically, I want to limit the value that is indicated by the ext4_inode_cache line in /proc/slabinfo. (Note that this is not included in the "buffers" or "cache" lines shown by free: that's only file content cache; the slab content is kernel memory and recorded in the "used" column.)

    echo 2 >/proc/sys/vm/drop_caches

afterwards frees the cache, but that doesn't do me any good: the useless stuff has displaced things that I wanted to keep in memory, such as running applications and their frequently-used files.

The system is Linux with a recent (≥ 3.8) kernel. I can use root access to set things up. How can I run a command in a limited environment (a container?) such that the contribution of that environment to the (ext4) inode cache is limited to a value that I set?
Following my own question on LKML, this can be achieved using Control Group v2:

Prerequisites

Make sure your Linux kernel has MEMCG_KMEM enabled, e.g.:

    grep CONFIG_MEMCG_KMEM "/boot/config-$(uname -r)"

Depending on the OS (and systemd version), enable the use of cgroup v2 by specifying systemd.unified_cgroup_hierarchy=1 on the Linux kernel command line, e.g. via /boot/grub/grub.cfg.

Make sure the cgroup2 file system is mounted on /sys/fs/cgroup/, e.g.:

    mount -t cgroup2 none /sys/fs/cgroup

or the equivalent in /etc/fstab. (systemd will do this for you automatically by default.)

Invocation

Create a new group my-find (once per boot) for your process:

    mkdir /sys/fs/cgroup/my-find

Attach the (current) process (and all its future child processes) to that group:

    echo $$ >/sys/fs/cgroup/my-find/cgroup.procs

Configure a soft limit, e.g. 2 MiB:

    echo 2M >/sys/fs/cgroup/my-find/memory.high

Finding the right value requires tuning and experimenting. You can get the current values from memory.current and/or memory.stat. Over time you should see the high counter incrementing in memory.events, as the Linux kernel is now repeatedly forced to shrink the caches.

Appendix

Notice that the limit applies both to user-space memory and kernel memory. It also applies to all processes of the group, which includes child processes started by updatedb, which basically does find | sort | frcode, where:

- find is the program trashing the dentry and inode caches, which we want to constrain. Otherwise its user-space memory requirement is (theoretically) constant.
- sort wants lots of memory, otherwise it will fall back to using temporary files, which will result in additional IO.
- frcode will write the result to disk - e.g. a single file - which requires constant memory.

So basically you should put only find into a separate cgroup to limit its cache trashing, but not sort and frcode.
Post scriptum

It does not work with cgroup v1, as setting memory.kmem.limit_in_bytes is both deprecated and results in an "out-of-memory" event as soon as the processes go over the configured limit, which gets your processes killed immediately instead of forcing the Linux kernel to shrink the memory usage by dropping old data. Quoting from section CONFIG_MEMCG_KMEM:

    Currently no soft limit is implemented for kernel memory. It is future
    work to trigger slab reclaim when those limits are reached.
Limit the inode cache used by a command
1,471,019,294,000
I have a file server with three disks that are ext2 filesystems. Is it possible to change/convert these to ext4, which has much improved characteristics, while data is on the disks and without data loss? If so, how is that accomplished?

My system is Debian Wheezy, and I use LVM. I've found this, but I don't know if it is relevant for ext2 to ext4. Does this work for me?
The process of going from ext2 to ext4 is similar to your linked article for 3->4. You need to enable the features using tune2fs. The difference between going from 3->4 and 2->4 is that you also need to enable the journal feature. The complete command is this:

    tune2fs -O extents,uninit_bg,dir_index,has_journal /dev/sdxx

You should fsck the filesystem after making the changes. As with any filesystem or disk changes, you should ensure you have a reliable backup before making the change.
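Since the conversion is one-way, it can be worth rehearsing the exact tune2fs invocation on a throwaway filesystem image first. A sketch assuming e2fsprogs is installed (everything here operates on a scratch file, never on a real disk):

```shell
img=$(mktemp)                               # scratch image, not a real device
dd if=/dev/zero of="$img" bs=1M count=8 status=none
mkfs.ext2 -F -q "$img"                      # start from a plain ext2 filesystem
tune2fs -O extents,uninit_bg,dir_index,has_journal "$img"
e2fsck -fy "$img" >/dev/null 2>&1 || true   # fsck afterwards (exit 1 just means "fixed")
tune2fs -l "$img" | grep 'Filesystem features'   # should now list extent, has_journal, ...
rm -f "$img"
```

The same tune2fs and fsck sequence is then run against the real (unmounted) partition once you are happy with the result.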
Converting ext2 to ext4
1,471,019,294,000
I have a problem where my server running NginX with php-fpm loads blank PHP pages (strangely, except for my phpinfo.php file, which loads normally). If I put an index.html page in the same directory and browse to it, it loads. The fact that phpinfo.php (which calls the phpinfo(); function) loads confirms that php-fpm works. I am hoping somebody might have some advice. I apologise in advance for the bulk of info I am about to give, but I would rather give too much information than too little. Here are my configuration files.

/etc/nginx/conf.d/default.conf:

server {
    listen 80;
    server_name 45.55.182.120;

    #charset koi8-r;
    #access_log /var/log/nginx/log/host.access.log main;

    root /usr/share/nginx/html;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }

    error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

server {
    listen 80;
    server_name 45.55.182.120;

    #charset koi8-r;
    #access_log /var/log/nginx/log/host.access.log main;

    root /usr/share/nginx/html;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }

    error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

/etc/php-fpm.d/www.conf:

; Start a new pool named 'www'.
[www]

; The address on which to accept FastCGI requests.
; Valid syntaxes are:
;   'ip.add.re.ss:port'    - to listen on a TCP socket to a specific address on
;                            a specific port;
;   'port'                 - to listen on a TCP socket to all addresses on a
;                            specific port;
;   '/path/to/unix/socket' - to listen on a unix socket.
; Note: This value is mandatory.
listen = /var/run/php-fpm/php-fpm.sock

; Set listen(2) backlog. A value of '-1' means unlimited.
; Default Value: -1
;listen.backlog = -1

; List of ipv4 addresses of FastCGI clients which are allowed to connect.
; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original
; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address
; must be separated by a comma. If this value is left blank, connections will be
; accepted from any ip address.
; Default Value: any
listen.allowed_clients = 127.0.0.1

; Set permissions for unix socket, if one is used. In Linux, read/write
; permissions must be set in order to allow connections from a web server. Many
; BSD-derived systems allow connections regardless of permissions.
; Default Values: user and group are set as the running user
;                 mode is set to 0666
;listen.owner = nobody
;listen.group = nobody
;listen.mode = 0666

; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group
;       will be used.
; RPM: apache Choosed to be able to access some dir as httpd
user = nginx
; RPM: Keep a group allowed to write in log dir.
group = nginx

; Choose how the process manager will control the number of child processes.
; Possible Values:
;   static  - a fixed number (pm.max_children) of child processes;
;   dynamic - the number of child processes are set dynamically based on the
;             following directives:
;             pm.max_children      - the maximum number of children that can
;                                    be alive at the same time.
;             pm.start_servers     - the number of children created on startup.
;             pm.min_spare_servers - the minimum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is less than this
;                                    number then some children will be created.
;             pm.max_spare_servers - the maximum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is greater than this
;                                    number then some children will be killed.
; Note: This value is mandatory.
pm = dynamic

; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes to be created when pm is set to 'dynamic'.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI.
; Note: Used when pm is set to either 'static' or 'dynamic'
; Note: This value is mandatory.
pm.max_children = 50

; The number of child processes created on startup.
; Note: Used only when pm is set to 'dynamic'
; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.start_servers = 5

; The desired minimum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.min_spare_servers = 5

; The desired maximum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.max_spare_servers = 35

; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries. For
; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
; Default Value: 0
;pm.max_requests = 500

; The URI to view the FPM status page. If this value is not set, no URI will be
; recognized as a status page. By default, the status page shows the following
; information:
;   accepted conn    - the number of request accepted by the pool;
;   pool             - the name of the pool;
;   process manager  - static or dynamic;
;   idle processes   - the number of idle processes;
;   active processes - the number of active processes;
;   total processes  - the number of idle + active processes.
; The values of 'idle processes', 'active processes' and 'total processes' are
; updated each second. The value of 'accepted conn' is updated in real time.
; Example output:
;   accepted conn:   12073
;   pool:            www
;   process manager: static
;   idle processes:  35
;   active processes: 65
;   total processes: 100
; By default the status page output is formatted as text/plain. Passing either
; 'html' or 'json' as a query string will return the corresponding output
; syntax. Example:
;   http://www.foo.bar/status
;   http://www.foo.bar/status?json
;   http://www.foo.bar/status?html
; Note: The value must start with a leading slash (/). The value can be
;       anything, but it may not be a good idea to use the .php extension or it
;       may conflict with a real PHP file.
; Default Value: not set
;pm.status_path = /status

; The ping URI to call the monitoring page of FPM. If this value is not set, no
; URI will be recognized as a ping page. This could be used to test from outside
; that FPM is alive and responding, or to
; - create a graph of FPM availability (rrd or such);
; - remove a server from a group if it is not responding (load balancing);
; - trigger alerts for the operating team (24/7).
; Note: The value must start with a leading slash (/). The value can be
;       anything, but it may not be a good idea to use the .php extension or it
;       may conflict with a real PHP file.
; Default Value: not set
;ping.path = /ping

; This directive may be used to customize the response of a ping request. The
; response is formatted as text/plain with a 200 response code.
; Default Value: pong
;ping.response = pong

; The timeout for serving a single request after which the worker process will
; be killed. This option should be used when the 'max_execution_time' ini option
; does not stop script execution for some reason. A value of '0' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_terminate_timeout = 0

; The timeout for serving a single request after which a PHP backtrace will be
; dumped to the 'slowlog' file. A value of '0s' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_slowlog_timeout = 0

; The log file for slow requests
; Default Value: not set
; Note: slowlog is mandatory if request_slowlog_timeout is set
slowlog = /var/log/php-fpm/www-slow.log

; Set open file descriptor rlimit.
; Default Value: system defined value
;rlimit_files = 1024

; Set max core size rlimit.
; Possible Values: 'unlimited' or an integer greater or equal to 0
; Default Value: system defined value
;rlimit_core = 0

; Chroot to this directory at the start. This value must be defined as an
; absolute path. When this value is not set, chroot is not used.
; Note: chrooting is a great security feature and should be used whenever
;       possible. However, all PHP paths will be relative to the chroot
;       (error_log, sessions.save_path, ...).
; Default Value: not set
;chroot =

; Chdir to this directory at the start. This value must be an absolute path.
; Default Value: current directory or / when chroot
;chdir = /var/www

; Redirect worker stdout and stderr into main error log. If not set, stdout and
; stderr will be redirected to /dev/null according to FastCGI specs.
; Default Value: no
;catch_workers_output = yes

; Limits the extensions of the main script FPM will allow to parse. This can
; prevent configuration mistakes on the web server side. You should only limit
; FPM to .php extensions to prevent malicious users to use other extensions to
; exectute php code.
; Note: set an empty value to allow all extensions.
; Default Value: .php
;security.limit_extensions = .php .php3 .php4 .php5

; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from
; the current environment.
; Default Value: clean env
;env[HOSTNAME] = $HOSTNAME
;env[PATH] = /usr/local/bin:/usr/bin:/bin
;env[TMP] = /tmp
;env[TMPDIR] = /tmp
;env[TEMP] = /tmp

; Additional php.ini defines, specific to this pool of workers. These settings
; overwrite the values previously defined in the php.ini. The directives are the
; same as the PHP SAPI:
;   php_value/php_flag             - you can set classic ini defines which can
;                                    be overwritten from PHP call 'ini_set'.
;   php_admin_value/php_admin_flag - these directives won't be overwritten by
;                                    PHP call 'ini_set'
; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no.
; Defining 'extension' will load the corresponding shared extension from
; extension_dir. Defining 'disable_functions' or 'disable_classes' will not
; overwrite previously defined php.ini values, but will append the new value
; instead.
; Default Value: nothing is defined by default except the values in php.ini and
;                specified at startup with the -d argument
;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected]
;php_flag[display_errors] = off
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
;php_admin_value[memory_limit] = 128M

; Set session path to a directory owned by process user
php_value[session.save_handler] = files
php_value[session.save_path] = /var/lib/php/session

What do you think could be the issue?
According to your configuration, you have two server{...} blocks which are exactly the same. So before I start explaining what's wrong with your configuration, you need to provide more details. See down below for some troubleshooting hints. For now, I'll post mine here and highlight a few directives that do matter. My /etc/nginx/conf.d/default.conf looks as follows:

server {
    # Replace this port with the right one for your requirements
    listen 80;

    # Multiple hostnames separated by spaces. Replace these as well.
    server_name mydomain.nl;

    root /var/www/mydomain.nl/public_html/;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    index index.php index.html;

    location / {
        # This is cool because no php is touched for static content.
        try_files $uri $uri/ /index.php;
    }

    location ~* \.(jpg|jpeg|gif|css|png|js|ico|html)$ {
        expires max;
    }

    location ~* \.php$ {
        try_files $uri =404;
        fastcgi_intercept_errors on;
        fastcgi_index index.php;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
    }

    location ~ /\.(ht|ssh) {
        deny all;
    }

    location /status {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}

The following directives are important:

server_name mydomain.nl;  <-- This is unique for every server block.
root /var/www/mydomain.nl/public_html/;  <-- This is the root that holds your website / data.

The rest is trivial. So let's take the /etc/php-fpm.d/www.conf file and examine it. You chose to use a file socket:

listen = /var/run/php-fpm/php-fpm.sock  <-- php-fpm will communicate with nginx through this file.

So this is my www.conf file, unless you missed something. I've filtered out all commented lines. So these are the lines that are uncommented.
[www]
listen = /var/run/php5-fpm.sock
listen.allowed_clients = 127.0.0.1
listen.owner = nginx
listen.group = nginx
listen.mode = 0666
user = apache
group = apache
pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35
slowlog = /var/log/php-fpm/www-slow.log
security.limit_extensions = .php
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
php_value[session.save_handler] = files
php_value[session.save_path] = /var/lib/php/session

Troubleshooting

1) See directory permissions. In this case /usr/share/nginx/html
2) See php-fpm error logging. See if the configuration file is loading OK, by running php-fpm -y /etc/php-fpm.conf
3) Change log_level = debug in /etc/php-fpm.conf
4) Come back with more details!
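To complement point 1 of the troubleshooting list above, it can help to rule out the socket handoff itself before comparing configs. This is a generic sketch (the function name is mine, the socket path is taken from the configs above), not part of nginx or php-fpm:

```shell
# Verify that the unix socket nginx proxies to exists, and show its
# owner/group/mode so you can confirm the nginx worker user may connect.
check_fpm_socket() {
    sock=$1
    if [ ! -S "$sock" ]; then
        echo "missing socket: $sock (is php-fpm running?)"
        return 1
    fi
    # The nginx user needs read/write access on this socket
    ls -l "$sock"
}

# Example call for the configuration above:
#   check_fpm_socket /var/run/php-fpm/php-fpm.sock
```

If the socket is missing or owned so that nginx cannot connect, nginx typically logs a "connect() to unix:... failed" line in its error log rather than serving blank pages, so check both.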
NginX + PHP-FPM displays blank php pages
1,471,019,294,000
Is there a way to invoke unzip (from Info-ZIP) on a Linux system without having it restore the permissions stored in the zip file? The zip files I'm restoring are enormous, so going back over the contents with something like "chmod -R" will take a while. I do not control the source of the archives, so my only choice is to handle the permissions on extraction.
Restoring permissions is a feature of unzip (from the man page, version 6.00): Dates, times and permissions of stored directories are not restored except under Unix. (On Windows NT and successors, timestamps are now restored.) and there is no option to switch it off. It might be that an older version of unzip did not support restoring permissions, but investigating that route is probably more cumbersome than trying to change the latest unzip source to do what you want. If running chmod -R is unacceptable, you can take a look at using Python's zipfile library; it is easy to use and gives you full control over the way you write the files that you extract from the zip file.
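Building on the zipfile suggestion, one possible sketch follows. It deliberately copies each entry's bytes itself rather than calling ZipFile.extract(), because extract() will chmod the output to the Unix mode stored in the entry's external_attr, whereas a plain open() just honours the process umask. The function name is my own, and ZipInfo.is_dir() needs Python 3.6+:

```python
import os
import shutil
import zipfile

def extract_ignore_modes(zip_path, dest):
    """Extract zip_path under dest, writing every file with the process
    umask instead of the permissions stored in the archive."""
    with zipfile.ZipFile(zip_path) as zf:
        for info in zf.infolist():
            target = os.path.join(dest, info.filename)
            if info.is_dir():
                os.makedirs(target, exist_ok=True)
                continue
            os.makedirs(os.path.dirname(target) or dest, exist_ok=True)
            # ZipFile.extract() would chmod target afterwards; streaming
            # the bytes ourselves skips that step entirely.
            with zf.open(info) as src, open(target, "wb") as out:
                shutil.copyfileobj(src, out)
```

Note this sketch does not sanitize entry names (a hostile archive could contain "../" paths), which extract() does handle, so only use it on archives you trust.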
Unzip (Info-ZIP) Permissions
1,471,019,294,000
I'm trying to run dbus-send in a remote system but somehow I'm not able to run it. But the same dbus-send, if I run it in the local system, is working fine. COMMAND: ssh [email protected] "dbus-send --print-reply --dest=service.name /object/path object.path.Service.method string:"XYZ"" How can I run dbus-send command from the remote system? SYSTEM INFO Linux 3.13.0-29-generic Ubuntu i686 GNU/Linux
dbus-send needs some environment variables to connect to the dbus session, so you first need to determine them. Start by ssh-ing to your machine.

The $DISPLAY variable:

DISPLAY=$(strings /proc/$(pgrep -n Xorg)/environ | awk -F= '$1 == "DISPLAY" {print $2}')

The dbus session variables:

source ~/.dbus/session-bus/$(cat /var/lib/dbus/machine-id)-0

Now you can place your dbus-send command.
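Putting those steps together, one way to script it is below. The helper name is mine, and it assumes the remote user owns the Xorg process and uses the classic ~/.dbus/session-bus file layout:

```shell
# Pull one variable out of a NUL-separated environ file such as
# /proc/<pid>/environ. Usage: env_var NAME FILE
env_var() { tr '\0' '\n' < "$2" | grep "^$1=" | cut -d= -f2-; }

# On the remote host (e.g. inside the quoted ssh command) you would run:
#   export DISPLAY=$(env_var DISPLAY "/proc/$(pgrep -n Xorg)/environ")
#   . ~/.dbus/session-bus/$(cat /var/lib/dbus/machine-id)-0
#   export DBUS_SESSION_BUS_ADDRESS   # harmless if the file already exports it
#   dbus-send --print-reply --dest=service.name /object/path \
#       object.path.Service.method string:XYZ
```

Reading /proc/<pid>/environ directly (instead of strings) avoids depending on binutils and keeps values intact even if they are shorter than four characters.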
Run `dbus-send` in a remote system
1,471,019,294,000
I would like to delete an alias I created using:

ip addr add 192.168.1.1 dev eth0 label eth0:100

without having to know the IP address. Basically, I would like to do

ip addr del dev eth0 label eth0:100

which, according to the documentation, should be valid, but instead gives me:

ip: RTNETLINK answers: Operation not supported

In the meantime, I worked around it using

ip addr del $(ip addr list label eth0:100 | awk '{ print $2 }') dev eth0 label eth0:100
What you have is the best route (though I would use grep over awk, but that's personal preference). The reason is that you can have multiple addresses per 'label', so you have to specify which address you want to delete.

# ip addr help
Usage: ip addr {add|change|replace} IFADDR dev STRING [ LIFETIME ] [ CONFFLAG-LIST ]
       ip addr del IFADDR dev STRING
       ip addr {show|save|flush} [ dev STRING ] [ scope SCOPE-ID ]
                                 [ to PREFIX ] [ FLAG-LIST ] [ label PATTERN ]
       ip addr {showdump|restore}
IFADDR := PREFIX | ADDR peer PREFIX
          [ broadcast ADDR ] [ anycast ADDR ]
          [ label STRING ] [ scope SCOPE-ID ]

Note the ip addr del syntax, which says the parameters are IFADDR and STRING. IFADDR is defined below that, and says PREFIX is a required parameter (things in [] are optional). PREFIX is your IP/subnet combination. Thus it is not optional.

As for what I meant about using grep, it is this:

ip addr del $(ip addr show label eth0:100 | grep -oP 'inet \K\S+') dev eth0 label eth0:100

The reason for this is in case the position of the parameter changes. The field positions in the ip addr output can change based on optional fields. I don't think the inet field changes, but it's just my preference.
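As a side note grounded in the usage text above, `ip addr flush` does accept a `label PATTERN`, so for the common case of one address per label you can avoid looking the address up at all. The helper below just wraps the same grep for reuse (function names are mine):

```shell
# Print the inet address(es) on stdin from `ip addr show` output.
parse_inet() { grep -oP 'inet \K\S+'; }

# Look up the address carried by a given label.
addr_for_label() { ip addr show label "$1" | parse_inet; }

# Delete by looked-up address:
#   ip addr del "$(addr_for_label eth0:100)" dev eth0
# Or let the kernel match the label itself (flush takes a label PATTERN):
#   ip -4 addr flush label "eth0:100"
```

Be aware that flush removes every address matching the pattern, which is exactly the multiple-addresses-per-label caveat mentioned above.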
Delete IP address alias by label name
1,471,019,294,000
I need a solution that allows me to control the time that each user spends on the computer, excluding idle time (some kind of parental control). I've tried timekpr, but it doesn't work on Ubuntu 13.10, and pam_time is not what I'm looking for. Here is how my situation looks in practice: There are several user accounts in the system; let's call them U1, U2 and U3. I would like to know how long and at what times each of them was active, BUT if, for example, U2 doesn't perform any operations (just stays logged in but idle) or locks his account without logging off, it shouldn't be counted against the total time he spent on the computer. If a user exceeds the time limit assigned to his account, I would like to be able to run a bash script (and the same when he tries to use the computer during some specified hours). My computer works as a server, so more than one user can use it at the same time. Users work in graphical mode (Unity, X Window System) without using the terminal. Is there any third-party software, a system command, or suggestions for what I could use to develop my own solution, et cetera?
`w U1` shows login information for user U1; in its output you will see columns like

USER     TTY      LOGIN@   IDLE   JCPU   PCPU  WHAT

You can also write a bash script and put it in crontab, for example to be run a few times per day with w U1, w U2 and w U3, appending the output to a file.

Also useful:

/var/run/utmp - list of current login sessions
/var/log/wtmp - list of previous login sessions
/var/log/btmp - list of all bad login attempts
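If you do go the script-plus-cron route suggested above, here is one possible sketch. Everything in it is an assumption to adapt, not a ready-made tool: the log path, the IDLE field number (4th when w hides the FROM column as in the header above, 5th when it shows it), and the heuristic that w prints sub-minute idle times with a trailing "s" (e.g. "13.00s"):

```shell
LOG=${LOG:-/var/log/active-minutes}

# Sub-minute idle looks like "13.00s"; "3:15", "2days" etc. mean idle.
is_active() { case "$1" in *.[0-9][0-9]s) return 0 ;; *) return 1 ;; esac; }

# Run from cron once per minute: each user whose least-idle session is
# under a minute gets one minute credited in $LOG.
tick() {
    for u in "$@"; do
        # $4 = IDLE column here; rough numeric sort picks the least idle
        idle=$(w -h "$u" 2>/dev/null | awk '{print $4}' | sort -n | head -n 1)
        [ -n "$idle" ] && is_active "$idle" && echo "$u $(date +%F)" >> "$LOG"
    done
    return 0
}

# crontab entry (illustrative):
#   * * * * * /usr/local/sbin/active-tick U1 U2 U3
# Minutes U1 has used today:
#   grep -c "^U1 $(date +%F)" "$LOG"
```

Your limit-enforcement script could then compare that grep count against each user's quota and act (warn, lock the session, etc.) when it is exceeded.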
How to control the time that users spend on the computer
1,471,019,294,000
I'm looking for ways to make use of an SSD to speed up my system. In “Linux equivalent to ReadyBoost?” (and the research that triggered for me) I've learned about bcache, dm-cache and EnhanceIO. All three of these seem capable of caching read data on SSD. However, unless I'm missing something, all three seem to store a file/block/extent/whatever in cache the first time it is read. Large sequential reads might be an exception, but otherwise it seems as if every read cache miss would cause something to get cached. I'd like the cache to cache those reads I use often. I'm worried that a search over the bodies of all my maildir files or a recursive grep in some large directory might evict large portions of stuff I read far more often. Is there any technology to cache frequently read files, instead of recently read ones? Something which builds up some form of active set or some such? I guess adaptive replacement might be a term describing what I'm after. Lacking that, I wonder whether it might make sense to use LVM as a bottom layer, and build up several bcache-enabled devices on top of that. The idea is that e.g. mail reads would not evict caches for /usr and the likes. Each mounted file system would get its own cache of fixed size, or none at all. Does anyone have experience with bcache on top of lvm? Is there a reason against this approach? Any alternative suggestions are welcome as well. Note however that I'm looking for something ready for production use on Linux. I feel ZFS with its L2ARC feature doesn't fall in that category (yet), although you are welcome to argue that point if you are convinced of the opposite. The reason for LVM is that I want to be able to resize space allocated for those various file systems as needed, which is a pain using static partitioning. So proposed solutions should also provide that kind of flexibility. Edit 1: Some clarifications. My main concern is bootup time. 
I'd like to see all the files which are used for every boot readily accessible on that SSD. And I'd rather not have to worry about keeping the SSD in sync e.g. after package upgrades (which occur rather often on my Gentoo testing). If often-used data which I don't use during boot ends up in the cache as well, that's an added bonus. My current work project e.g. would be a nice candidate. But I'd guess 90% of the files I use every day will be used within the first 5 minutes after pressing the power button. One consequence of this aim is that approaches which wipe the cache after boot, like ZFS L2ARC apparently does, are not a feasible solution. The answer by goldilocks moved the focus from cache insertion to cache eviction. But that doesn't change the fundamental nature of the problem. Unless the cache keeps track of how often or frequently an item is used, things might still drop out of the cache too soon. Particularly since I expect those files I use all the time to reside in the RAM cache from boot till shutdown, so they will be read from disk only once for every boot. The cache eviction policies I found for bcache and dm-cache, namely LRU and FIFO, both would evict those boot-time files in preference to other files read on that same working day. Thus my concern.
To my best understanding, dm-cache does what you are asking for. I could not find a definite source for this, but here the author explains that he should have called it dm-hotspot, because it tries to find "hot spots", i.e. areas of high activity and only caches those. In the output of dmsetup status you will find two variables, namely read_promote_adjustment and write_promote_adjustment. The cache-policies file explains that Internally the mq policy determines a promotion threshold. If the hit count of a block not in the cache goes above this threshold it gets promoted to the cache. So by adjusting read_promote_adjustment and write_promote_adjustment you can determine what exactly you mean by frequently read/written data and once the number of reads/writes exceed this threshold, the block will be "promoted" to, that is, stored in, the cache. Remember that this (pre-cache) metadata is usually kept in memory and only written to disk/SSD when the cache device is suspended.
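For concreteness, tuning those thresholds looks roughly like the sketch below. The device name my_cache and the numeric values are purely illustrative, and whether a given tunable is set via dmsetup message or in the table line varies by kernel version, so check your kernel's cache-policies documentation first:

```shell
# Show the cache's current counters and policy tunables
dmsetup status my_cache

# Require more hits before a block is promoted to the SSD, so one-off
# scans (a recursive grep, a mail search) are less likely to evict the
# blocks you actually reuse every day
dmsetup message my_cache 0 read_promote_adjustment 8
dmsetup message my_cache 0 write_promote_adjustment 12
```

Higher adjustment values raise the effective promotion threshold, which is exactly the behaviour asked for: recently read data is not cached until it has proven to be frequently read.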
SSD as a read cache for FREQUENTLY read data
1,471,019,294,000
It seems to be extremely difficult to install xdotool on CentOS because of its requirements, such as

yum groupinstall 'Development Tools' -y
yum install libXi-devel libXtst-devel libXinerama-devel -y

The top one especially is difficult to move to a folder and install locally (without internet, for extra speed). Currently I have to run those two commands and then run these in order to install xdotool on CentOS Linux:

cat > /etc/ld.so.conf << "EOF"
/usr/local/lib
EOF
# rm -rf xdotool-2.20110530.1
# tar -xvf xdot*
cd xdot*
make install

I tried adding the epel and rpmforge repos to my yum and then I searched for xdotool; nothing was found. I was wondering if there is a known rpm version so that installing it would be simple on CentOS Linux.
Looking closer into it, it looks like xdotool is provided by the epel repository (the previous source, Nux dextop, is now defunct):

[root@nctirlwb07 ~]# yum info xdotool
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
Available Packages
Name        : xdotool
Arch        : i686
Epoch       : 1
Version     : 2.20110530.1
Release     : 7.el6
Size        : 43 k
Repo        : epel
Summary     : Fake keyboard/mouse input
URL         : http://www.semicomplete.com/projects/xdotool/
License     : BSD
Description : This tool lets you programmatically (or manually) simulate
            : keyboard input and mouse activity, move and re-size windows, etc.
is there a "xdotool" rpm available for Centos Linux?
1,471,019,294,000
What elements of /proc/meminfo sum up to MemTotal? Example of tee /tmp/proc/meminfo < /proc/meminfo:

MemTotal:        1279296 kB
MemFree:          164092 kB
Buffers:           62392 kB
Cached:           378116 kB
SwapCached:            0 kB
Active:           715176 kB
Inactive:         307800 kB
Active(anon):     583268 kB
Inactive(anon):     3384 kB
Active(file):     131908 kB
Inactive(file):   304416 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                44 kB
Writeback:             0 kB
AnonPages:        582480 kB
Mapped:           112904 kB
Shmem:              4192 kB
Slab:              47524 kB
SReclaimable:      33588 kB
SUnreclaim:        13936 kB
KernelStack:        1568 kB
PageTables:        12092 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:      639648 kB
Committed_AS:    1298132 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       24012 kB
VmallocChunk:   34359696868 kB
HardwareCorrupted:     0 kB
AnonHugePages:     77824 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        8832 kB
DirectMap2M:     1300480 kB

Here are snippets that help me in checking different configurations:

# Let's load them into CLI variables
all=$(sed 's!:[^0-9]\+!=!;s! kB!!;s![()]!_!g' /tmp/proc/meminfo) ; eval $all

# Let's make an overview sorted by values (helps in tracking a missing one)
echo $all | sed 's! !\n!g' | sort -n -k 2 -t '='

# Let's try MemTotal = MemFree + Active + Cached + Buffers ? (should be zero)
echo $[ $MemTotal - $MemFree - $Active - $Cached - $Buffers ]
# But gives -40480

What am I missing? Which elements of /proc/meminfo should I sum to get MemTotal?
I'm not sure everything you need is exposed in /proc/meminfo's output so that you can calculate MemTotal yourself. From the Linux kernel's documentation proc.txt file:

excerpt
MemTotal: Total usable ram (i.e. physical ram minus a few reserved bits and the kernel binary code)

dmesg

If you look through either the output of dmesg or the log file /var/log/dmesg you can find the following information:

$ grep -E "total|Memory:.*available" /var/log/dmesg
[    0.000000] total RAM covered: 8064M
[    0.000000] On node 0 totalpages: 2044843
[    0.000000] Memory: 7970012k/9371648k available (4557k kernel code, 1192276k absent, 209360k reserved, 7251k data, 948k init)

I believe this information can be used to determine MemTotal. This blog post covers it in more detail; it's titled: Understanding "vmalloc region overlap". Also this post, which provides some additional info, titled: Anatomy of a Program in Memory.

References

- How do I account for all of the memory in meminfo?
- How to Calculate MemTotal in /proc/meminfo
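Complementing the dmesg route, you can also measure how much of MemTotal the itemized meminfo fields themselves account for. The list of fields below is my own approximation (note that on recent kernels Cached already includes Shmem, and Slab covers both SReclaimable and SUnreclaim), and a residual of a few percent, covering vmalloc, per-CPU areas and reserved bits, is expected:

```shell
# Sum the major consumers from a meminfo file and report the residual.
# Defaults to the live /proc/meminfo; pass a saved copy as $1 to replay.
meminfo_accounted() {
    awk '/^(MemFree|Buffers|Cached|AnonPages|Slab|KernelStack|PageTables):/ { sum += $2 }
         /^MemTotal:/ { total = $2 }
         END { printf "accounted %d kB of %d kB (%.1f%%)\n", sum, total, 100 * sum / total }' \
        "${1:-/proc/meminfo}"
}
```

Run against the numbers in the question, this accounts for roughly 97.6% of MemTotal, which is about as close as userspace arithmetic on meminfo can get.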
/proc/meminfo MemTotal =?
1,471,019,294,000
I'm having trouble with the thinkpad_acpi module on my Thinkpad T400. While the processor temperature can vary, depending on CPU activity, from 40 to 85 deg. Celsius, my fan speed remains almost constant, in the range of 2600-3000 rpm. I was expecting the fan to speed up as the temperature rises, but it's not happening. Before digging deeper into why it doesn't work as one would expect, I tried to check whether manually increasing the fan speed works. According to this README, I should be able to control the fan speed by writing level [1-7] to /proc/acpi/ibm/fan, but I get an "Invalid argument" error whatever the value. I realize this is an obscure problem, but maybe someone has an idea what the valid arguments might be. Here are some of my attempts:

    # cat /proc/acpi/ibm/fan
    status:         enabled
    speed:          2966
    level:          auto
    # echo 5 >/proc/acpi/ibm/fan
    bash: echo: write error: Invalid argument
    # echo 'level 5' >/proc/acpi/ibm/fan
    bash: echo: write error: Invalid argument
    # echo 'enable' >/proc/acpi/ibm/fan
    bash: echo: write error: Invalid argument
    # echo 'level auto' >/proc/acpi/ibm/fan
    bash: echo: write error: Invalid argument
I think you're running into this, an excerpt from thinkwiki - How to control fan speed:

    Fan control operations are disabled by default for safety reasons. To enable fan control, the module parameter fan_control=1 must be given to thinkpad-acpi.

You should be able to create the file /etc/modprobe.d/thinkpad_acpi.conf with:

    options thinkpad_acpi fan_control=1

inside. Once you've enabled the fan_control option you should be able to run the following types of commands:

    $ echo level 0 > /proc/acpi/ibm/fan       # (fan off)
    $ echo level 7 > /proc/acpi/ibm/fan       # (maximum speed)
    $ echo level auto > /proc/acpi/ibm/fan    # (automatic - default)

If you receive a PERMISSION DENIED error you can use the following command syntax instead as a work-around:

    $ echo level 0 | sudo tee /proc/acpi/ibm/fan       # (fan off)
    $ echo level 7 | sudo tee /proc/acpi/ibm/fan       # (maximum speed)
    $ echo level auto | sudo tee /proc/acpi/ibm/fan    # (automatic - default)
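To address the original symptom (fan not reacting to temperature), a crude user-space policy loop can be sketched as follows. This is only an illustration: the temperature thresholds, the chosen levels, and the assumption that the first value in /proc/acpi/ibm/thermal is the CPU sensor are all made-up examples, and the loop requires root plus fan_control=1.

```shell
FAN=/proc/acpi/ibm/fan
THERMAL=/proc/acpi/ibm/thermal

# pick_level TEMP -> prints the fan command to use for a given temperature
pick_level() {
    if [ "$1" -ge 75 ]; then echo "level 7"
    elif [ "$1" -ge 60 ]; then echo "level 4"
    else echo "level auto"
    fi
}

# Only loop if the thinkpad-acpi interface is actually present and writable.
if [ -w "$FAN" ] && [ -r "$THERMAL" ]; then
    while true; do
        # first value after "temperatures:" assumed to be the CPU sensor
        temp=$(awk 'NR==1 {print $2}' "$THERMAL")
        pick_level "$temp" > "$FAN"
        sleep 10
    done
fi
```

On a non-ThinkPad machine the guard simply skips the loop, so the script is safe to run anywhere.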
How to control thinkpad_acpi via procfs, RHEL 6.4
1,471,019,294,000
I need to externally limit a process/session to a certain number of cores. Are there any other possibilities than CPU affinity (I don't like the need to specify the actual cores) and cgroups (hard to integrate into our project)?
We went with cgroups in the end, since there really doesn't seem to be any other approach that would accomplish this. Cgroups allow CPU utilization limiting through the kernel scheduler, using cpu.cfs_period_us and cpu.cfs_quota_us. This avoids the explicit specification of CPU cores.
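For illustration, a minimal cgroup-v1 session might look like the sketch below; the group name "capped" and the quota values are arbitrary examples, and the commands must run as root:

```shell
mkdir /sys/fs/cgroup/cpu/capped
echo 100000 > /sys/fs/cgroup/cpu/capped/cpu.cfs_period_us   # 100 ms period
echo 200000 > /sys/fs/cgroup/cpu/capped/cpu.cfs_quota_us    # 200 ms quota: at most 2 CPUs' worth
echo "$PID" > /sys/fs/cgroup/cpu/capped/tasks               # move the target process in
```

On current systems using cgroup v2 the equivalent knob is the single cpu.max file (echo "200000 100000" > cpu.max).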
Externally limiting the number of CPU cores used
1,471,019,294,000
vmstat 1

The above will print virtual memory statistics every second; it also shows the CPU utilization over the last second. I have a web server at hand which runs httpd and MySQL. I need to find how much CPU httpd consumed in the last second, like vmstat but specifically for httpd. I tried this:

    ps -e -o %mem,%cpu,cmd | grep mysql | awk '{memory+=$1;cpu+=$2} END {print memory,cpu}'

But it shows me the ratio of CPU used since the start of the process. So, with the above, if my process caused a spike and then went to sleep for a long time, I won't know it. I want something like the Windows process manager, which shows which process is using how much CPU. I hope I am making my question understandable; I will clarify if anything is missing.
You could use top -b -d 1 to achieve that for CPU usage. In batch mode with a one-second delay, top displays each process's CPU usage relative to the interval since the previous refresh, not since the process started.
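As a rough unprivileged alternative, one can sample a process's CPU time twice and take the difference. The sketch below assumes a Linux /proc (utime and stime are fields 14 and 15 of /proc/<pid>/stat, in clock ticks); cpu_seconds and cpu_percent are hypothetical helper names, demoed on our own PID so it is self-contained.

```python
import os
import time

def cpu_seconds(pid):
    """Total user+system CPU time of a process, in seconds."""
    with open("/proc/%d/stat" % pid) as f:
        # comm (field 2) may contain spaces, so split after the closing paren
        fields = f.read().rpartition(")")[2].split()
    utime, stime = int(fields[11]), int(fields[12])  # fields 14 and 15 overall
    return (utime + stime) / os.sysconf("SC_CLK_TCK")

def cpu_percent(pid, interval=1.0):
    """Percent CPU used by pid over the given sampling window."""
    before = cpu_seconds(pid)
    time.sleep(interval)
    return 100.0 * (cpu_seconds(pid) - before) / interval

if os.path.exists("/proc/%d/stat" % os.getpid()):
    print(cpu_percent(os.getpid(), interval=0.2))
```

This is essentially what top does between refreshes; for httpd one would iterate over the PIDs of all its workers and sum the deltas.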
Linux : See CPU usage by a process for the last second
1,471,019,294,000
Directly related: Prevent claiming of novelty usb device by usbhid so I can control it with libusb? I want to access an RFID reader (it works as a HID device) from a program that uses libusb-0.1. In the code, the kernel driver is correctly detached with usb_detach_kernel_driver_np (no errors), but it seems that whenever my program tries to access the USB device, the usbhid module reclaims it. The following error always appears in dmesg:

    usb 1-1.3: usbfs: interface 0 claimed by usbhid while 'MyProgram' sets config #1

I've added the following udev rule, restarted udevd and replugged the device, but without effect. It is supposed to blacklist the device from being used by usbhid.

    # I anonymized the vendor/product IDs here
    ATTRS{idVendor}=="dead", ATTRS{idProduct}=="beef", OPTIONS=="ignore_device"

Apart from the dmesg output, I can see in /sys/bus/usb/drivers/usbhid/ that the device 1-1.3:1.0 is recreated every time, so the blacklisting doesn't seem to work. Anything else I could try? The operating system is Raspbian (on a Raspberry Pi) with kernel 3.2.27.
I've solved this part of the problem:

- OPTIONS=="ignore_device" was removed from the kernel (commit)
- blacklist usbhid didn't do anything, not even block my keyboard
- A configuration file in /etc/modprobe.d with options usbhid quirks=0xdead:0xbeef:0x0004 did not work because usbhid was not compiled as a module

So, I added usbhid.quirks=0xdead:0xbeef:0x4 to the kernel boot command line (on Raspbian, that's in /boot/cmdline.txt) and usbhid no longer binds the device. My original problem, however, still remains: I always get a read/timeout error when accessing the RFID reader the first time.
Prevent usbhid from claiming USB device
1,471,019,294,000
I have started tuning Linux VM performance a bit on my system (yes, I know that vm.swappiness=0 will kill kittens, but I found 30-40 much better for me, as it improved my latency, probably at the cost of throughput). I would like to ask how tmpfs is counted (as cache or as program memory) for the purposes of swapping and vm.swappiness. To give the higher-level picture, I need a folder which:

- Is usually empty, but whose usage might increase to up to 8x my main memory size
- Does not need to be preserved across reboots
- Is low priority as far as I/O is concerned (i.e. programs using it may wait), but which would ideally still be fast

Currently I'm using a normal FS. I have heard (not tested) about problems with large tmpfs pushing data out to swap. Since I assume those tests were done with the default vm.swappiness=60 and that tmpfs occupies only cache, a decreased vm.swappiness would make it more easily swappable during memory pressure. Am I correct?
tmpfs is implemented on top of the page cache, but its pages are swap-backed: the only way to reclaim a tmpfs page is to write it to swap, not simply drop it. Note that since the split-LRU work (kernel 2.6.28 and later), shmem/tmpfs pages sit on the anonymous (swap-backed) LRU rather than the file LRU, so vm.swappiness treats them much like application memory: a low value makes the kernel prefer reclaiming ordinary file cache and tends to keep tmpfs contents in RAM longer.
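For the use case in the question, a tmpfs mount might look like the sketch below; the mount point and the size= cap are examples, and a cap near 8x RAM only helps if that much swap actually exists, since tmpfs can only spill to swap:

```shell
# Illustrative only: size and path are assumptions, run as root.
mount -t tmpfs -o size=64G,mode=1777 tmpfs /mnt/scratch
```

Contents vanish on unmount or reboot, which matches the "does not need to be preserved" requirement.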
tmpfs and vm.swappiness
1,471,019,294,000
For example, one of the output fields of this BSD style command, ps aux, is %CPU. The alternative command, ps -efl outputs the C (or CP) field. As per the ps man page: %CPU is the cpu utilization of the process in "##.#" format. Currently, it is the CPU time used divided by the time the process has been running (cputime/realtime ratio), expressed as a percentage. C is essentially %CPU expressed as an integer That is how the ps man page details %CPU or C. But most books and websites on the internet simply say, %CPU or C is CPU usage of a process. One would think that it means % of the CPU's processing power used by a process out of the total available processing power from the CPU. Or is it only me?
The ratio of CPU time to real time (computed in one of the many sensible ways) is the measure of the percent of CPU processing power used by a process out of the total processing power available from the CPU. Each process in the system can be in one of two kinds of state: it is either running on a processor or it is waiting (reality is a bit more complex than that and there are more process states, but for the sake of simplicity this answer doesn't differentiate between non-running states, like runnable, interruptible wait, non-interruptible wait etc). An ordinary process usually spends some time running on a processor and then ends up waiting for an event to happen (e.g. data arriving on a network connection, disk I/O completing, a lock becoming available, the CPU becoming available again for a runnable process after it has used up its time quantum). The ratio of the time that a process spends running on a processor in a certain time interval to the length of this interval is a very interesting characteristic. Processes may differ in this characteristic significantly: e.g. a process running a scientific computing program will very likely end up using a lot of CPU and little I/O, while your shell mostly waits for I/O and does a bit of processing sporadically. In an idealized situation (no overhead from the scheduler, no interrupts etc) and with perfect measurement, the sum of CPU time used by each process on a system within one second would be less than one second, the remaining time being the idle CPU time. As you add more processes, especially CPU-bound ones, the idle CPU time fraction shrinks and the amount of total CPU time used by all processes within each second approaches one second. At that point addition of extra processes may result in runnable processes waiting for CPU and thus increasing run queue lengths (and hence load averages) and eventually slowing the system down.
Note that taking a simple ratio of process's entire CPU time to the time elapsed since it started ends up representing process's average CPU usage. Since some processes change behavior during runtime (e.g. database server waiting for queries vs the same database server executing a number of complex queries) it is often more interesting to know the most recent CPU usage. For this reason some systems (e.g. FreeBSD, Mac OS X) employ a decaying average as per this manpage: The CPU utilization of the process; this is a decaying average over up to a minute of previous (real) time. Since the time base over which this is computed varies (since processes may be very young) it is possible for the sum of all %cpu fields to exceed 100%. Linux has a simplified accounting which gives you CPU usage as per this manpage: CPU usage is currently expressed as the percentage of time spent running during the entire lifetime of a process. This is not ideal, and it does not conform to the standards that ps otherwise conforms to. CPU usage is unlikely to add up to exactly 100%.
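The decaying-average idea can be illustrated with a toy model: each new one-second sample of "fraction of CPU used" is blended into the running value, so old behavior fades out. The decay factor 0.95 below is arbitrary, not the value any real kernel uses.

```python
def decayed(samples, decay=0.95):
    """Exponentially decaying average of a sequence of CPU-usage samples."""
    avg = 0.0
    history = []
    for s in samples:
        avg = decay * avg + (1 - decay) * s
        history.append(avg)
    return history

# A process that burns 100% CPU for a minute, then goes idle for a minute:
trace = decayed([1.0] * 60 + [0.0] * 60)
print(round(trace[59], 2), round(trace[-1], 2))  # → 0.95 0.04
```

After a minute of idleness the displayed value has decayed to near zero, whereas a lifetime average (the Linux ps behavior) would still report 50%.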
Among "ps" command output fields, %CPU isn't the actual CPU usage of the process?
1,471,019,294,000
I'm using Ubuntu as my primary OS, with Windows 7 as the alternative for gaming and other things. I want to have a menu entry to boot live CD ISOs. Is there any way to make a menu entry in Grub2/Burg that boots an ISO file the same way a CD would? I see there are some ways to make it possible, but almost every method needs specific boot arguments (kernel parameters), and I have a mixed set of live OSes I want to boot using their included boot loaders: Linux, Unix, DOS (for recovery purposes)... I'm looking for a more generic way that is easy to discover and add to the menu config file.
I have got a perfect chain loader with SysLinux, Grub4Dos and Grub2, and here are my configs:

Syslinux

    LABEL DSL
      KERNEL memdisk
      INITRD /iso/dsl.iso
      APPEND iso raw

    LABEL GRUB4DOS
      KERNEL /boot/grub.exe

Grub4Dos

    title Paragon Partition Manager
      map (hd0,0)/iso/paragon-bootable-media.iso (hd32)
      map --hook
      chainloader (hd32)
      boot

    title Syslinux
      chainloader /boot/syslinux/syslinux.bin

    title GRUB2 Chainload
      root (hd0,0)
      kernel /boot/grub/core.img
      boot

Grub2

    menuentry "Ubuntu 13.10 Desktop ISO" {
      loopback loop /iso/ubuntu-desktop-amd64-13.10.iso
      linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/iso/ubuntu-desktop-amd64-13.10.iso noeject noprompt splash --
      initrd (loop)/casper/initrd.lz
    }

    menuentry "Tinycore ISO" {
      loopback loop /iso/tinycore.iso
      linux (loop)/boot/bzImage --
      initrd (loop)/boot/tinycore.gz
    }

    menuentry "GRUB4DOS" {
      linux16 /boot/grub.exe
    }

    menuentry "SYSLINUX" {
      chainloader=/boot/syslinux/syslinux.bin
    }
How to boot from iso with Grub2/Burg boot loader
1,471,019,294,000
How can I create a menu item that points to a URL? I've tried creating a mylink.desktop entry like this:

    [Desktop Entry]
    Encoding=UTF-8
    Name=My Link Name
    Icon=my-icon
    Type=Link
    Categories=Office;
    URL=http://www.example.com/

then using xdg-desktop-menu install mylink.desktop should put this entry in the current user's menu. This does not work however. The file is copied into ~/.local/share/applications/ but the entry doesn't show up in the menu. If I change Type to Application and define Exec instead of URL then it works. But I don't want to have a menu entry for a local application; I want the default browser to launch on a specified address when the menu entry is selected. How can I do that? Also, by using this command:

    xdg-desktop-icon install mylink.desktop

the result is as expected: a new link is created on the desktop. So why doesn't it work in the menu? I tested this on Red Hat Enterprise Linux 6 with KDE, but I would like to know how to do it in GNOME as well.
While reading up on stuff I stumbled upon this question. That gave me an idea for a workaround:

    [Desktop Entry]
    Encoding=UTF-8
    Name=My Link Name
    Icon=my-icon
    Type=Application
    Categories=Office;
    Exec=xdg-open http://www.example.com/

This does exactly what I need and is a local application, so I can use xdg-desktop-menu to install this entry without problems.
Create url link in menu
1,471,019,294,000
On Debian Stable, I would like to be able to create a new instance of the OS, use apt-get to install some Unstable packages with dependencies, then cleanly delete the whole thing when I'm done. VirtualBox or QEMU would work, but Xen/KVM/LXC seem to be lighter and faster. How do they compare for this use? Edit: To clarify, in this case, I want to set up to be able to install-use-remove dangerous things without messing up the base system. Looking for what would be most lightweight/fast.
For this kind of use, I'd go with a specialized Linux-on-Linux virtual machine technology (as opposed to a more general technology such as Xen, KVM, VirtualBox or Qemu): LXC, OpenVZ, User-Mode Linux, VServer… You could even use a chrooted installation; the schroot package is convenient for this.
Xen/KVM/LXC for testing packages
1,471,019,294,000
My system has to auto-mount USB devices; how can I be notified when a USB device is plugged in? Where can I read more about this subject? I would like to handle this problem via C or a shell script.
Udev supports running external programs when a device event matches a rule:

    KERNEL=="sdb", RUN+="/usr/bin/my_program"
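A slightly fuller sketch of the rule file; the file name, match keys, and the script path are illustrative examples, not fixed names:

```shell
# /etc/udev/rules.d/99-usb-notify.rules
# Run a script (with the kernel device name as argument) whenever a USB
# device is added.
ACTION=="add", SUBSYSTEM=="usb", RUN+="/usr/local/bin/usb-added.sh %k"
```

For interactive debugging, udevadm monitor --udev --subsystem-match=usb shows the events as they arrive; from C, the libudev udev_monitor API delivers the same events without polling.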
How to be notified when a USB device was plugged in?
1,471,019,294,000
For a user process, I want to mount a directory in another location, in user space, without root privileges. Something like mount --bind /origin /dest, but implemented as a VFS wrapper, like a user-mode, fine-tuned chroot. The program would wrap the file syscalls to "substitute" the needed paths. It could be called with a command line like:

    bindvfs /fake-home:/home ls /home

I am sure this already exists! :)
You can use PRoot almost the same way as in your example: proot -b /fake-home:/home ls /home Unlike BindFS/FUSE, PRoot is able to bind over files and directories you don't own.
Is there a linux vfs tool that allows bind a directory in different location (like mount --bind) in user space?
1,471,019,294,000
I have compiled and installed the 2.6 kernel on an ARM board. I am using the ARM mini2440 board. I would like to know if there is already a way to access the General Purpose I/O port pins? Or will I have to do ioctl() and access them directly from the memory?
Use the sysfs control files in /sys/class/gpio. The following links will hopefully help you get started:

- http://www.avrfreaks.net/wiki/index.php/Documentation:Linux/GPIO
- http://blog.makezine.com/archive/2009/02/blinking_leds_with_the_beagle_board.html (there have been reports of this BeagleBoard article also working with the mini2440)

In your Linux kernel documentation, look at Documentation/gpio.txt too.
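A minimal sketch of the sysfs interface (as root; GPIO number 64 is only an example, the valid numbers depend on your board's pin mapping). This interface is what 2.6-era kernels offer; recent kernels deprecate it in favor of the character-device API (libgpiod):

```shell
echo 64  > /sys/class/gpio/export            # make the pin visible to userspace
echo out > /sys/class/gpio/gpio64/direction  # configure as output
echo 1   > /sys/class/gpio/gpio64/value      # drive high
echo 0   > /sys/class/gpio/gpio64/value      # drive low
echo 64  > /sys/class/gpio/unexport          # release the pin
```

Reading an input pin is the mirror image: set direction to "in" and cat the value file.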
Linux kernel 2.6 on ARM
1,471,019,294,000
I run a software RAID array for my backups, but my data has outgrown its capacity. I have a full 2.4TB array with 5*600GB drives, and also have 5*2TB drives I would like to swap in. What would be the nicest way to upgrade the array? I thought of faulting one drive at a time, swapping in a new drive and rebuilding, but I am not sure whether at the end of the process I will be able to resize the array. Thoughts?
Assuming this is Linux, this is doable and pretty easy actually. It is covered on the software RAID wiki, but the basic steps are:

1. Fail and remove a drive.
2. Replace it with a larger drive.
3. Partition the drive so the partitions are the same size or larger than the ones in the existing software RAID partition.
4. Add the partitions to the software RAID and wait for it to sync.
5. Repeat the above steps until all drives have been replaced.
6. mdadm --grow /dev/mdX --size=max to resize the mdadm device.
7. resize2fs /dev/mdX to resize the file system, assuming you have ext3.

You can grow the mdadm device and the file system while the server is live too. If your drives are hot-swappable you can do everything without downtime.
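One iteration of steps 1-4 might look like the sketch below; /dev/md0, /dev/sdb1 and /dev/sdf1 are example device names, and the commands must run as root:

```shell
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# ...physically swap the drive, partition the new one at least as large...
mdadm /dev/md0 --add /dev/sdf1
cat /proc/mdstat        # wait for the rebuild to finish before touching the next drive
```

Only after all five drives have been replaced and resynced do the final --grow and resize2fs steps apply.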
In place upgrade of a software raid 5 array
1,471,019,294,000
There are exactly 169 empty .bash_history-*.tmp files in my home folder that were created on the same day (April 16 2021) without my knowledge. The files have only read and write permission for the owner. I am not sure what made this happen; it has never happened in 5 years of my Linux journey (both desktop and servers). An even stranger thing is that my default shell is not bash but zsh. It would be great if someone could help me figure out what actually happened (if possible), or has it happened to someone else before? Thank you in advance. Here it is:

.-(~)(user@host) `-->> find . -name '.*.tmp' -rw------- 1 user user 0 Apr 16 17:40 .bash_history-01407.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-01810.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-02487.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-03675.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08255.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08260.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08283.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08326.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08434.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08450.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08550.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08581.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08649.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08676.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08683.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08697.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08698.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08712.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08717.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08742.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08743.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08819.tmp -rw------- 1 user user 0 Apr 16 17:40 
.bash_history-08841.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08878.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08884.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08904.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08914.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-08962.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09060.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09116.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09157.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09201.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09212.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09228.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09247.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09248.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09265.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09274.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09283.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09331.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09366.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09397.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09445.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09501.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09507.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09548.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09597.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09632.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09701.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09760.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09904.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-09992.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-10059.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-10158.tmp -rw------- 1 user user 0 Apr 16 17:40 
.bash_history-10166.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-10170.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-10320.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-10536.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-10594.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-10631.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-10714.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-10753.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-11127.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-11189.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-11494.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-11514.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-11697.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-11774.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-11827.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-11973.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12002.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12266.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12316.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12331.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12357.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12377.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12393.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12399.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12400.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12405.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12413.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12417.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12435.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12475.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12513.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12563.tmp -rw------- 1 user user 0 Apr 16 17:40 
.bash_history-12644.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12648.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12656.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12743.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12779.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12801.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12803.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12817.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12868.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-12971.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13005.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13013.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13020.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13033.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13042.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13047.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13065.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13074.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13089.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13090.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13092.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13094.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13097.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13099.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13145.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13162.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13184.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13202.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13203.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13206.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13208.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13218.tmp -rw------- 1 user user 0 Apr 16 17:40 
.bash_history-13219.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13220.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13250.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13313.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13316.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13320.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13322.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13323.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13341.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13360.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13388.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13489.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13530.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13566.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13575.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13576.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13630.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13640.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13675.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-13717.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14153.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14156.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14167.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14204.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14254.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14256.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14265.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14267.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14331.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14332.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14359.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14368.tmp -rw------- 1 user user 0 Apr 16 17:40 
.bash_history-14693.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14792.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14922.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14923.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14928.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14931.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14933.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14943.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14947.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14951.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14955.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-14968.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-30961.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-31005.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-31110.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-31142.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-32057.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-32358.tmp -rw------- 1 user user 0 Apr 16 17:40 .bash_history-32434.tmp FYI: someone helped me find a similar post on archlinuxforum but it doesn't answer my question.
The bash source (available on Debian with apt-get source bash) writes its history file using the function history_do_write in the file bash-5.0/lib/readline/histfile.c. It creates a temporary file, writes the history lines to it, and then uses it to replace the actual history file:

      tempname = (overwrite && exists && S_ISREG (finfo.st_mode))
                    ? history_tempfile (histname) : 0;
      output = tempname ? tempname : histname;
      ...
      if (rv == 0 && histname && tempname)
        rv = histfile_restore (tempname, histname);

There are a number of places where the write could fail, and in these instances the temporary file is unlinked (deleted) and the original left alone. However, you mentioned in a passing comment that you ran a forkbomb. This could well have been the underlying cause of these temporary files: it's possible that with the extreme memory and process pressure triggered by the uncontrolled forkbomb, bash wasn't able to get enough temporary memory to complete even this recovery process and simply crashed out during the attempted update. (Please note that this isn't hard fact, just a hypothesis.) If you are in an environment where it's likely users will run forkbombs, it's worth enabling resource control.
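The usual resource control here is a per-user process cap. A sketch of inspecting and setting it follows; the value 2048 and the user name "youruser" are arbitrary examples, not recommendations:

```shell
# Show the current per-user process limit for this shell (may print a
# number or "unlimited").
current=$(ulimit -u)
echo "current max user processes: $current"

# Persistent limits belong in /etc/security/limits.conf, e.g.:
#   youruser  hard  nproc  2048
# To lower the soft limit for the current shell session only:
#   ulimit -S -u 2048
```

With a sane nproc limit in place, a forkbomb exhausts its own process quota instead of starving bash of the resources it needs to rewrite the history file.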
Strange empty bash_history-*.tmp files in my $HOME folder
1,471,019,294,000
On any PC where USB host controller is connected to the PCI/PCIE bus I see the following: $ cat /sys/bus/usb/devices/usb1/{idVendor,idProduct,manufacturer,product,serial} 1d6b 0002 Linux 4.14.157-amd64-x32 ehci_hcd EHCI Host Controller 0000:00:1a.0 I.e. the EHCI host controller, which in this example has the PCI device position 0000:00:1a.0, is represented with a bogus set of string descriptors and vendor/product identifiers. Looking up the vendor id 1d6b in usb.ids I find that it corresponds to Linux Foundation. (lsusb lists it as "Linux Foundation 2.0 root hub".) But the PCI device referenced by the serial is real and has the following properties: $ cat /sys/bus/pci/devices/0000:00:1a.0/{vendor,device} 0x8086 0x8c2d Looking these ids up in pci.ids we can find that it's Intel 8 Series/C220 Series Chipset Family USB EHCI (the same as lspci would say). A real piece of hardware from a real HW manufacturer. So why does Linux represent this Intel hardware with some strange set of ids? I do realize that PCI and USB vendor/product ids may collide, so it wouldn't be possible to start the USB device tree with ids from PCI namespace. But why the string descriptors? My guess is that this is because the whole USB entity named "*HCI Host Controller" is a fictitious one. But on the other hand, it appears to have an address (always =1), which is never assigned to a newly-connected device on this bus. So it looks like there might be something real about this USB entity. But this reserved address might also be just a way of bookkeeping. Is my guess correct? Is the host controller—as a USB entity—entirely fictitious? Does it never appear as an actual addressable device on the wire? Or is there something real to it, something we could actually send standard USB requests to, and not have their processing simply emulated by the kernel?
Linux has an abstraction that lets Host Controller Drivers share code. As a comment in drivers/usb/core/hcd.c says:

     * USB Host Controller Driver framework
     *
     * Plugs into usbcore (usb_bus) and lets HCDs share code, minimizing
     * HCD-specific behaviors/bugs.
     *
     * This does error checks, tracks devices and urbs, and delegates to a
     * "hc_driver" only for code (and data) that really needs to know about
     * hardware differences. That includes root hub registers, i/o queues,
     * and so on ... but as little else as possible.
     *
     * Shared code includes most of the "root hub" code (these are emulated,
     * though each HC's hardware works differently) and PCI glue, plus request
     * tracking overhead. The HCD code should only block on spinlocks or on
     * hardware handshaking; blocking on software events (such as other kernel
     * threads releasing resources, or completing actions) is all generic.
     *
     * Happens the USB 2.0 spec says this would be invisible inside the "USBD",
     * and includes mostly a "HCDI" (HCD Interface) along with some APIs used
     * only by the hub driver ... and that neither should be seen or used by
     * usb client device drivers.

USB device address 1 is assigned to the root hub in register_root_hub(), as commented right above this function:

     * register_root_hub - called by usb_add_hcd() to register a root hub
     * @hcd: host controller for this root hub
     *
     * This function registers the root hub with the USB subsystem. It sets up
     * the device properly in the device tree and then calls usb_new_device()
     * to register the usb device. It also assigns the root hub's USB address
     * (always 1).

This is corroborated by the usb.ids database, which says that for vendor ID 1d6b the product IDs 1, 2 and 3 correspond to a 1.1, 2.0 and 3.0 root hub, respectively. What we have in the Linux device tree at this device is a kind of mixture of the USB host controller (a real device) and the USB root hub (also a real device), abstracted by the USB HCD framework discussed above.
Now, some modern systems with xHCI also have EHCI controllers that always have an Intel Corp. Integrated Rate Matching Hub attached. These are not root hubs; they have address 2, not 1. From an Intel manual, chapter 5.19.1: The Hubs convert low and full-speed traffic into high-speed traffic.
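On a running system you can see these synthetic root-hub identities directly in sysfs. A minimal sketch (it simply prints nothing on a machine without USB buses, e.g. inside a container):

```shell
# List every root hub known to the kernel; vendor 1d6b is the
# "Linux Foundation" ID the HCD framework assigns to emulated root hubs.
for d in /sys/bus/usb/devices/usb*; do
    [ -e "$d/idVendor" ] || continue   # no USB buses present
    printf '%s: %s:%s %s\n' "$d" \
        "$(cat "$d/idVendor")" "$(cat "$d/idProduct")" "$(cat "$d/product")"
done
```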
Why does Linux list the USB Host Controller's vendor as "Linux Foundation"?
1,471,019,294,000
musl libc allows you to change uid to root even after supposedly dropping permissions with setuid(1000). I am not able to reproduce the problem with glibc. Code:

#define _GNU_SOURCE
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    uid_t r, e, s;

    getresuid(&r, &e, &s);
    printf("%d %d %d\n", r, e, s);

    if (setuid(1000) != 0)
        puts("setuid(1000) failed");
    else
        puts("setuid(1000) succeded");

    getresuid(&r, &e, &s);
    printf("%d %d %d\n", r, e, s);

    if (setuid(0) != 0)
        puts("setuid(0) failed");
    else
        puts("setuid(0) succeded");

    getresuid(&r, &e, &s);
    printf("%d %d %d\n", r, e, s);

    return 0;
}

which, after compiling with gcc -o setuidtest setuidtest.c, produces the following output when running as root:

0 0 0
setuid(1000) succeded
1000 1000 1000
setuid(0) succeded
0 0 0

I am running Void Linux with kernel version 4.18_1 and musl version 1.1.20_2, glibc version 2.28_3.

Where does this problem come from? Is it my kernel, musl libc or is my testing code just incorrect? Is this problem reproducible by other people?
strace output (musl unprivileged)

$ strace ./setuidtest (unprivileged)
execve("./setuidtest", ["./setuidtest"], 0x7ffdd15f69e0 /* 24 vars */) = 0
arch_prctl(ARCH_SET_FS, 0x7f538925cb28) = 0
set_tid_address(0x7f538925cb68) = 6299
mprotect(0x7f5389259000, 4096, PROT_READ) = 0
mprotect(0x55e9c5e75000, 4096, PROT_READ) = 0
getresuid([1000], [1000], [1000]) = 0
ioctl(1, TIOCGWINSZ, 0x7ffd2faf3160) = -1 ENOTTY (Not a tty)
writev(1, [{iov_base="1000 1000 1000", iov_len=14}, {iov_base="\n", iov_len=1}], 21000 1000 1000
) = 15
rt_sigprocmask(SIG_BLOCK, ~[RTMIN RT_1 RT_2], [], 8) = 0
rt_sigprocmask(SIG_BLOCK, ~[], NULL, 8) = 0
setuid(1000) = 0
futex(0x7f538925cfd8, FUTEX_WAKE_PRIVATE, 2147483647) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
getresuid([1000], [1000], [1000]) = 0
rt_sigprocmask(SIG_BLOCK, ~[RTMIN RT_1 RT_2], [], 8) = 0
rt_sigprocmask(SIG_BLOCK, ~[], NULL, 8) = 0
setuid(0) = -1 EPERM (Operation not permitted)
futex(0x7f538925cfd8, FUTEX_WAKE_PRIVATE, 2147483647) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
getresuid([1000], [1000], [1000]) = 0
writev(1, [{iov_base="setuid(1000) succeded\n1000 1000 "..., iov_len=69}, {iov_base=NULL, iov_len=0}], 2setuid(1000) succeded
1000 1000 1000
setuid(0) failed
1000 1000 1000
) = 69
exit_group(0) = ?
+++ exited with 0 +++

strace output (musl as root)

# strace ./setuidtest (as root)
execve("./setuidtest", ["./setuidtest"], 0x7ffe19286eb0 /* 18 vars */) = 0
arch_prctl(ARCH_SET_FS, 0x7f08b2619b28) = 0
set_tid_address(0x7f08b2619b68) = 6409
mprotect(0x7f08b2616000, 4096, PROT_READ) = 0
mprotect(0x561507eb4000, 4096, PROT_READ) = 0
getresuid([0], [0], [0]) = 0
ioctl(1, TIOCGWINSZ, 0x7fffb222bff0) = -1 ENOTTY (Not a tty)
writev(1, [{iov_base="0 0 0", iov_len=5}, {iov_base="\n", iov_len=1}], 20 0 0
) = 6
rt_sigprocmask(SIG_BLOCK, ~[RTMIN RT_1 RT_2], [], 8) = 0
rt_sigprocmask(SIG_BLOCK, ~[], NULL, 8) = 0
setuid(1000) = 0
futex(0x7f08b2619fd8, FUTEX_WAKE_PRIVATE, 2147483647) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
getresuid([1000], [1000], [1000]) = 0
rt_sigprocmask(SIG_BLOCK, ~[RTMIN RT_1 RT_2], [], 8) = 0
rt_sigprocmask(SIG_BLOCK, ~[], NULL, 8) = 0
setuid(0) = 0
futex(0x7f08b2619fd8, FUTEX_WAKE_PRIVATE, 2147483647) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
getresuid([0], [0], [0]) = 0
writev(1, [{iov_base="setuid(1000) succeded\n1000 1000 "..., iov_len=62}, {iov_base=NULL, iov_len=0}], 2setuid(1000) succeded
1000 1000 1000
setuid(0) succeded
0 0 0
) = 62
exit_group(0) = ?
+++ exited with 0 +++

strace output (glibc unprivileged)

$ strace ./setuidtest
execve("./setuidtest", ["./setuidtest"], 0x7fff673b2c20 /* 14 vars */) = 0
brk(NULL) = 0x558b5fbb1000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/tls/x86_64/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib/tls/x86_64/x86_64", 0x7ffc25da1650) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/tls/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib/tls/x86_64", 0x7ffc25da1650) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/tls/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib/tls/x86_64", 0x7ffc25da1650) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/tls/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib/tls", 0x7ffc25da1650) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/x86_64/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib/x86_64/x86_64", 0x7ffc25da1650) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib/x86_64", 0x7ffc25da1650) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib/x86_64", 0x7ffc25da1650) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\3604\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=18248368, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fbcd7093000
mmap(NULL, 3921920, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fbcd6ab1000
mprotect(0x7fbcd6c65000, 2097152, PROT_NONE) = 0
mmap(0x7fbcd6e65000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b4000) = 0x7fbcd6e65000
mmap(0x7fbcd6e6b000, 14336, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fbcd6e6b000
close(3) = 0
arch_prctl(ARCH_SET_FS, 0x7fbcd7094500) = 0
mprotect(0x7fbcd6e65000, 16384, PROT_READ) = 0
mprotect(0x558b5f5c0000, 4096, PROT_READ) = 0
mprotect(0x7fbcd7095000, 4096, PROT_READ) = 0
getresuid([1000], [1000], [1000]) = 0
fstat(1, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
brk(NULL) = 0x558b5fbb1000
brk(0x558b5fbd2000) = 0x558b5fbd2000
setuid(1000) = 0
getresuid([1000], [1000], [1000]) = 0
setuid(0) = -1 EPERM (Operation not permitted)
getresuid([1000], [1000], [1000]) = 0
write(1, "1000 1000 1000\nsetuid(1000) succ"..., 841000 1000 1000
setuid(1000) succeded
1000 1000 1000
setuid(0) failed
1000 1000 1000
) = 84
exit_group(0) = ?
+++ exited with 0 +++

strace output (glibc as root)

# strace ./setuidtest
execve("./test", ["./test"], 0x7ffc9176d3f0 /* 15 vars */) = 0
brk(NULL) = 0x5653f4910000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/tls/x86_64/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib/tls/x86_64/x86_64", 0x7ffe7d0c9550) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/tls/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib/tls/x86_64", 0x7ffe7d0c9550) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/tls/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib/tls/x86_64", 0x7ffe7d0c9550) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/tls/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib/tls", 0x7ffe7d0c9550) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/x86_64/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib/x86_64/x86_64", 0x7ffe7d0c9550) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib/x86_64", 0x7ffe7d0c9550) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib/x86_64", 0x7ffe7d0c9550) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\3604\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=18248368, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb18f5af000
mmap(NULL, 3921920, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb18efcd000
mprotect(0x7fb18f181000, 2097152, PROT_NONE) = 0
mmap(0x7fb18f381000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b4000) = 0x7fb18f381000
mmap(0x7fb18f387000, 14336, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fb18f387000
close(3) = 0
arch_prctl(ARCH_SET_FS, 0x7fb18f5b0500) = 0
mprotect(0x7fb18f381000, 16384, PROT_READ) = 0
mprotect(0x5653f40df000, 4096, PROT_READ) = 0
mprotect(0x7fb18f5b1000, 4096, PROT_READ) = 0
getresuid([0], [0], [0]) = 0
fstat(1, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
brk(NULL) = 0x5653f4910000
brk(0x5653f4931000) = 0x5653f4931000
setuid(1000) = 0
getresuid([1000], [1000], [1000]) = 0
setuid(0) = -1 EPERM (Operation not permitted)
getresuid([1000], [1000], [1000]) = 0
write(1, "0 0 0\nsetuid(1000) succeded\n1000"..., 750 0 0
setuid(1000) succeded
1000 1000 1000
setuid(0) failed
1000 1000 1000
) = 75
exit_group(0) = ?
+++ exited with 0 +++
The issue came from neither the kernel nor musl, but from capabilities. There turned out to be a difference between my two testing systems - glibc and musl - which made it seem as if the libc was responsible. The problem was actually caused by the pam_rundir.so module setting the SECBIT_NO_SETUID_FIXUP secure bit, which makes the kernel retain the process's capabilities across setuid (and therefore allowed me to setuid back to root).
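To check for this situation on your own system, the securebits of the current process can be inspected with capsh from libcap (a sketch; capsh may not be installed everywhere, hence the guard):

```shell
# Print the securebits of the current shell; in a session affected by
# the behaviour described above you would see the no-suid-fixup bit set.
if command -v capsh >/dev/null 2>&1; then
    capsh --print | grep -i secure
else
    echo "capsh not installed; cannot inspect securebits"
fi
```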
Root privileges can be restored after setuid(1000) in musl libc
1,471,019,294,000
I want to remove all "blank" characters from the very beginning and the very end of a text file, including a \n if it exists (basically mimicking the behaviour of the trim() function of most programming languages, as if the "file" were one big string).
Use sed:

sed -z 's/^\s*//; s/\s*$//' infile

s/^\s*// deletes the whitespace/empty lines at the very beginning of infile (the input file).
s/\s*$// deletes the whitespace/empty lines at the very end of infile, including the \n at the very end.

Example, cat -e infile:

$
$
$
Three blank lines above$
$
$
Two blank lines in the middle$
a blank lines below$
$
a line with trailing whitespaces $
  a line with leading whitespaces$
below are two empty lines + one whitespaces then an empty line again$
$
$
 $
$

The output:

Three blank lines above
$
$
Two blank lines in the middle
a blank lines below
$
a line with trailing whitespaces 
  a line with leading whitespaces
below are two empty lines + one whitespaces then an empty line again

Or you can use printf to print the result of a sed that removes only the leading whitespace/empty lines, inside a command substitution, which itself strips the trailing empty lines and the final \n:

printf '%s' "$(sed -z 's/^\s*//' infile)"
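The whole-file behaviour of -z is easy to verify on a small sample; a sketch using printf to build the input so the exact bytes are visible:

```shell
# With -z, GNU sed sees the whole file as a single record, so ^ and $
# anchor at the very beginning and very end of the file, and \s* can
# swallow the surrounding blank lines in one go.
printf '\n\n  a \nb \n\n' > infile
sed -z 's/^\s*//; s/\s*$//' infile
# output is "a \nb": internal whitespace survives, no trailing newline
```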
How can I trim the contents of a text file?
1,471,019,294,000
We have a Redhat 7 machine, and the filesystem on device /dev/sdc is ext4. When we perform:

mount -o rw,remount /grop/sdc

we get a write-protected error like:

/dev/sdc read-write, is write-protected

in spite of /etc/fstab allowing read and write, and all subfolders under /grop/sdc having full read/write permissions:

/dev/sdc /grop/sdc ext4 defaults,noatime 0 0

Then we do umount -l /grop/sdc, and from df -h we see that the disk is currently not mounted. Then we perform mount /grop/sdc but we get "busy". :-(

So we do not have a choice and we perform a reboot. And from history we do not see that someone limited the disk to read-only with mount. This is very strange - how did the disk device become write-protected?

In order to solve this we perform a full reboot, and now the disk is read/write as it should be. What happened here? After the reboot we check dmesg and we see the following:

EXT4-fs warning (device sdc): ext4_clear_journal_err:4698: Marking fs in need of filesystem check.
EXT4-fs (sdc): warning: mounting fs with errors, running e2fsck is recommended
EXT4-fs (sdc): recovery complete

Can we say that during boot e2fsck was performed?

dmesg | grep sdc

[sdc] Disabling DIF Type 2 protection
[sdc] 15628053168 512-byte logical blocks: (8.00 TB/7.27 TiB)
[sdc] 4096-byte physical blocks
[sdc] Write Protect is off
[sdc] Mode Sense: d7 00 10 08
[sdc] Write cache: disabled, read cache: enabled, supports DPO and FUA
sdc: unknown partition table
[sdc] Attached SCSI disk
EXT4-fs warning (device sdc): ext4_clear_journal_err:4697: Filesystem error recorded from previous mount: IO failure
EXT4-fs warning (device sdc): ext4_clear_journal_err:4698: Marking fs in need of filesystem check.
EXT4-fs (sdc): warning: mounting fs with errors, running e2fsck is recommended
EXT4-fs (sdc): recovery complete
EXT4-fs (sdc): mounted filesystem with ordered data mode. Opts: (null)
EXT4-fs (sdc): error count since last fsck: 5
EXT4-fs (sdc): initial error at time 1510277668: ext4_journal_check_start:56
EXT4-fs (sdc): last error at time 1510496990: ext4_put_super:791
It appears your filesystem has become corrupt somehow. Most filesystems switch to read-only mode once they encounter an error. Please perform the following commands in a terminal:

umount /dev/sdc
e2fsck /dev/sdc
mount /dev/sdc

If /dev/sdc is the hard disk that has your operating system on it, use a startup DVD or USB stick to boot from.
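If you want to rehearse this kind of check safely first, you can do it on a throwaway filesystem image instead of the real disk; a sketch (assumes e2fsprogs is installed, and uses -n so nothing is modified):

```shell
# Create a small ext4 image and fsck it read-only; on the real device
# you would run e2fsck *without* -n, after unmounting it.
dd if=/dev/zero of=test.img bs=1M count=8 status=none
mkfs.ext4 -q test.img
e2fsck -n test.img && echo "filesystem is clean"
```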
How did an ext4 disk suddenly become write-protected even though the configuration is read/write?
1,471,019,294,000
Is there a way to make PhantomJS (or any headless browser) use an alternate font cache besides /usr/share/fonts/? One way to use more fonts (e.g. CJK fonts) with PhantomJS is to install them to this directory. However, this is a shared server, so that cannot be done. I cannot seem to find a CLI parameter for this. Please forgive me if this is a silly question. This is a RedHat build, and yum and rpm are disabled.

Screenshot with PhantomJS - fonts not loading:

Desired result (http://v1.jontangerine.com/silo/typography/web-fonts/):

SOLVED: @grochmal showed me that fonts can be installed in the home folder. I ran fc-cache -vf and the system fonts and the ~/.fonts/TTF fonts get cached. For example, running fc-list "impact" finds the Impact font (for personal use only):

> fc-list impact
Impact:style=Regular,Normal,obyčejné,Standard,Κανονικά,Normaali,Normál,Normale,Standaard,Normalny,Обычный,Normálne,Navadno,Arrunta

I confirmed this with the strace trick cleverly suggested by @grochmal:

strace ./phantomjs ../examples/rasterize.js http://example.com img.jpg 2>&1 | grep font

and found out that PhantomJS is indeed looking in my user fonts directory:

open("/home/user1/.fonts/TTF/verdana.ttf", O_RDONLY) = 11
open("/home/user1/.fonts/TTF/AndaleMo.TTF", O_RDONLY) = 11
open("/home/user1/.fonts/TTF/arial.ttf", O_RDONLY) = 11
open("/home/user1/.fonts/TTF/cour.ttf", O_RDONLY) = 11
open("/home/user1/.fonts/TTF/georgia.ttf", O_RDONLY) = 11
open("/home/user1/.fonts/TTF/impact.ttf", O_RDONLY) = 11
...
PhantomJS respects fontconfig directories and even the old font.dir/font.scale postscript font configuration. For example, I have an old Type 1 font:

$ find ~/.fonts/Type1/
/home/grochmal/.fonts/Type1/
/home/grochmal/.fonts/Type1/augie___.pfb
/home/grochmal/.fonts/Type1/fonts.scale
/home/grochmal/.fonts/Type1/fonts.dir

(That was created with the ol' X11 mkfontdir.)

And, for a better example, I'll copy a fontconfig font into my home directory:

$ mkdir -p ~/.local/share/fonts/TTF
$ cp /usr/share/fonts/TTF/HomemadeApple.ttf ~/.local/share/fonts/TTF
$ fc-cache  # just in case

Now let's see how PhantomJS uses them (using a classic example from the PhantomJS github):

$ wget https://raw.githubusercontent.com/ariya/phantomjs/master/examples/rasterize.js

strace prints all system calls (including filesystem access):

$ strace phantomjs rasterize.js 2>&1 | grep font | grep grochmal | grep -v cache
stat("/home/grochmal/.config/fontconfig/conf.d", 0x7ffff95fbbc0) = -1 ENOENT (No such file or directory)
stat("/home/grochmal/.config/fontconfig/conf.d", 0x7ffff95fbbc0) = -1 ENOENT (No such file or directory)
access("/home/grochmal/.config/fontconfig/conf.d", R_OK) = -1 ENOENT (No such file or directory)
access("/home/grochmal/.config/fontconfig/conf.d", R_OK) = -1 ENOENT (No such file or directory)
stat("/home/grochmal/.config/fontconfig/fonts.conf", 0x7ffff95fbbc0) = -1 ENOENT (No such file or directory)
stat("/home/grochmal/.config/fontconfig/fonts.conf", 0x7ffff95fbbc0) = -1 ENOENT (No such file or directory)
access("/home/grochmal/.config/fontconfig/fonts.conf", R_OK) = -1 ENOENT (No such file or directory)
access("/home/grochmal/.config/fontconfig/fonts.conf", R_OK) = -1 ENOENT (No such file or directory)
access("/home/grochmal/.fonts.conf.d", R_OK) = -1 ENOENT (No such file or directory)
access("/home/grochmal/.fonts.conf.d", R_OK) = -1 ENOENT (No such file or directory)
access("/home/grochmal/.fonts.conf", R_OK) = -1 ENOENT (No such file or directory)
access("/home/grochmal/.fonts.conf", R_OK) = -1 ENOENT (No such file or directory)
stat("/home/grochmal/.local/share/fonts", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
open("/home/grochmal/.local/share/fonts", O_RDONLY|O_CLOEXEC) = 4
stat("/home/grochmal/.local/share/fonts", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
open("/home/grochmal/.local/share/fonts", O_RDONLY|O_CLOEXEC) = 4
open("/home/grochmal/.local/share/fonts", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 5
stat("/home/grochmal/.local/share/fonts/HomemadeApple.ttf", {st_mode=S_IFREG|0644, st_size=110080, ...}) = 0
open("/home/grochmal/.local/share/fonts/HomemadeApple.ttf", O_RDONLY) = 6
stat("/home/grochmal/.local/share/fonts/TTF", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
stat("/home/grochmal/.fonts", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
open("/home/grochmal/.fonts", O_RDONLY|O_CLOEXEC) = 4
stat("/home/grochmal/.local/share/fonts/TTF", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
open("/home/grochmal/.local/share/fonts/TTF", O_RDONLY|O_CLOEXEC) = 4
stat("/home/grochmal/.local/share/fonts/TTF", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
open("/home/grochmal/.local/share/fonts/TTF", O_RDONLY|O_CLOEXEC) = 4
open("/home/grochmal/.local/share/fonts/TTF", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 4
stat("/home/grochmal/.local/share/fonts/TTF/HomemadeApple.ttf", {st_mode=S_IFREG|0644, st_size=110080, ...}) = 0
open("/home/grochmal/.local/share/fonts/TTF/HomemadeApple.ttf", O_RDONLY) = 5
stat("/home/grochmal/.fonts/Type1", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
open("/home/grochmal/.fonts/Type1", O_RDONLY|O_CLOEXEC) = 4

And PhantomJS went to the font directories and loaded them! I do not have a ~/.config/fontconfig/fonts.conf which may be needed for CJK fonts (because those may need some actual configuration), but you can copy a file from /etc/fonts/conf.d/* (notably some nonlatin font, to get a sample configuration).
Yet, you can probably get away with most fonts by simply dropping them into ~/.local/share/fonts/TTF and then running fc-cache.

Disclaimer: an old RedHat (5 for sure, not sure about 6) may not be using fontconfig; that's why I included the PFB font in the example. In that case you need to use ttmkfdir and mkfontdir to generate the font.scale and font.dir files.

References: Arch Linux's extensive article on fontconfig
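The per-user installation described above can be scripted; a sketch (the source font path is a hypothetical placeholder, and fc-cache is only run when fontconfig is present):

```shell
# Per-user font install - no root access needed.
FONT=/path/to/SomeFont.ttf          # hypothetical source file
DEST="$HOME/.local/share/fonts/TTF" # fontconfig user directory
mkdir -p "$DEST"
[ -f "$FONT" ] && cp "$FONT" "$DEST/"
if command -v fc-cache >/dev/null 2>&1; then
    fc-cache -f "$DEST"             # rebuild the per-user cache
fi
```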
Is there a way to make PhantomJS (or any headless browser) use an alternate font cache?
1,459,596,229,000
The question: Using Linux and mdadm, how can I read/copy data as files from disk images made from hard disks used in an Intel Rapid Storage Technology RAID-0 array (formatted as NTFS, Windows 7 installed)? The problem: One of the drives in the array is going bad, so I'd like to copy as much data as possible before replacing the drive (and thus destroying the array). I am open to alternative solutions to this question if they solve my problem. Background I have a laptop with an Intel Rapid Storage Technology controller (referred to in various contexts as RST, RSTe, or IMSM) that has two (2) hard disks configured in RAID-0 (FakeRAID-0). RAID-0 was not my choice as the laptop was delivered to me in this configuration. One of the disks seems to have accumulated a lot of bad sectors, while the other disk is perfectly healthy. Together, the disks are still healthy enough to boot into the OS (Windows 7 64-bit), but the OS will sometimes hang when accessing damaged disk areas, and it seems like a bad idea to continue trying to use damaged disks. I'd like to copy as much data as possible off of the disks and then replace the damaged drive. Since operating live on the damaged disk is considered bad, I decided to image both disks so I could later mount the images using mdadm or something equivalent. I've spent a lot of time and done a lot of reading, but I still haven't successfully managed to mount the disk images as a (Fake)RAID-0 array. I'll try to recall the steps I performed here. Grab some snacks and a beverage, because this is lengthy. First, I got a USB external drive to run Ubuntu 15.10 64-bit off of a partition. Using a LiveCD or small USB thumb drive was easier to boot, but slower than an external (and a LiveCD isn't a persistent install). I installed ddrescue and used it to produce an image of each hard disk. There were no notable issues with creating the images. Once I got the images, I installed mdadm using apt. 
However, this installed an older version of mdadm from 2013. The changelogs for more recent versions indicated better support for IMSM, so I compiled and installed mdadm 3.4 using this guide, including upgrading to a kernel at or above 4.4.2. The only notable issue here was that some tests did not succeed, but the guide seemed to indicate that that was acceptable. After that, I read in a few places that I would need to use loopback devices to be able to use the images. I mounted the disk images as /dev/loop0 and /dev/loop1 with no issue. Here is some relevant info at this point of the process... mdadm --detail-platform: $ sudo mdadm --detail-platform Platform : Intel(R) Rapid Storage Technology Version : 10.1.0.1008 RAID Levels : raid0 raid1 raid5 Chunk Sizes : 4k 8k 16k 32k 64k 128k 2TB volumes : supported 2TB disks : not supported Max Disks : 7 Max Volumes : 2 per array, 4 per controller I/O Controller : /sys/devices/pci0000:00/0000:00:1f.2 (SATA) Port0 : /dev/sda (W0Q6DV7Z) Port3 : - non-disk device (HL-DT-ST DVD+-RW GS30N) - Port1 : /dev/sdb (W0Q6CJM1) Port2 : - no device attached - Port4 : - no device attached - Port5 : - no device attached - fdisk -l: $ sudo fdisk -l Disk /dev/loop0: 298.1 GiB, 320072933376 bytes, 625142448 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x2bd2c32a Device Boot Start End Sectors Size Id Type /dev/loop0p1 * 2048 4196351 4194304 2G 7 HPFS/NTFS/exFAT /dev/loop0p2 4196352 1250273279 1246076928 594.2G 7 HPFS/NTFS/exFAT Disk /dev/loop1: 298.1 GiB, 320072933376 bytes, 625142448 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk /dev/sda: 298.1 GiB, 320072933376 bytes, 625142448 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size 
(minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: dos Disk identifier: 0x2bd2c32a Device Boot Start End Sectors Size Id Type /dev/sda1 * 2048 4196351 4194304 2G 7 HPFS/NTFS/exFAT /dev/sda2 4196352 1250273279 1246076928 594.2G 7 HPFS/NTFS/exFAT Disk /dev/sdb: 298.1 GiB, 320072933376 bytes, 625142448 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes mdadm --examine --verbose /dev/sda: $ sudo mdadm --examine --verbose /dev/sda /dev/sda: Magic : Intel Raid ISM Cfg Sig. Version : 1.0.00 Orig Family : 81bdf089 Family : 81bdf089 Generation : 00001796 Attributes : All supported UUID : acf55f6b:49f936c5:787fa66e:620d7df0 Checksum : 6cf37d06 correct MPB Sectors : 1 Disks : 2 RAID Devices : 1 [ARRAY]: UUID : e4d3f954:2f449bfd:43495615:e040960c RAID Level : 0 Members : 2 Slots : [_U] Failed disk : 0 This Slot : ? Array Size : 1250275328 (596.18 GiB 640.14 GB) Per Dev Size : 625137928 (298.09 GiB 320.07 GB) Sector Offset : 0 Num Stripes : 2441944 Chunk Size : 128 KiB Reserved : 0 Migrate State : idle Map State : normal Dirty State : clean Disk00 Serial : W0Q6DV7Z State : active failed Id : 00000000 Usable Size : 625136142 (298.09 GiB 320.07 GB) Disk01 Serial : W0Q6CJM1 State : active Id : 00010000 Usable Size : 625136142 (298.09 GiB 320.07 GB) mdadm --examine --verbose /dev/sdb: $ sudo mdadm --examine --verbose /dev/sdb /dev/sdb: Magic : Intel Raid ISM Cfg Sig. 
Version : 1.0.00 Orig Family : 81bdf089 Family : 81bdf089 Generation : 00001796 Attributes : All supported UUID : acf55f6b:49f936c5:787fa66e:620d7df0 Checksum : 6cf37d06 correct MPB Sectors : 1 Disks : 2 RAID Devices : 1 Disk01 Serial : W0Q6CJM1 State : active Id : 00010000 Usable Size : 625137928 (298.09 GiB 320.07 GB) [ARRAY]: UUID : e4d3f954:2f449bfd:43495615:e040960c RAID Level : 0 Members : 2 Slots : [_U] Failed disk : 0 This Slot : 1 Array Size : 1250275328 (596.18 GiB 640.14 GB) Per Dev Size : 625137928 (298.09 GiB 320.07 GB) Sector Offset : 0 Num Stripes : 2441944 Chunk Size : 128 KiB Reserved : 0 Migrate State : idle Map State : normal Dirty State : clean Disk00 Serial : W0Q6DV7Z State : active failed Id : 00000000 Usable Size : 625137928 (298.09 GiB 320.07 GB) Here is where I ran into difficulty. I tried to assemble the array. $ sudo mdadm --assemble --verbose /dev/md0 /dev/loop0 /dev/loop1 mdadm: looking for devices for /dev/md0 mdadm: Cannot assemble mbr metadata on /dev/loop0 mdadm: /dev/loop0 has no superblock - assembly aborted I get the same result by using --force or by swapping /dev/loop0 and /dev/loop1. Since IMSM is a CONTAINER type FakeRAID, I'd seen some indications that you have to create the container instead of assembling it. I tried... $ sudo mdadm -CR /dev/md/imsm -e imsm -n 2 /dev/loop[01] mdadm: /dev/loop0 is not attached to Intel(R) RAID controller. mdadm: /dev/loop0 is not suitable for this array. mdadm: /dev/loop1 is not attached to Intel(R) RAID controller. mdadm: /dev/loop1 is not suitable for this array. mdadm: create aborted After reading a few more things, it seemed that the culprit here were IMSM_NO_PLATFORM and IMSM_DEVNAME_AS_SERIAL. After futzing around with trying to get environment variables to persist with sudo, I tried... 
$ sudo IMSM_NO_PLATFORM=1 IMSM_DEVNAME_AS_SERIAL=1 mdadm -CR /dev/md/imsm -e imsm -n 2 /dev/loop[01] mdadm: /dev/loop0 appears to be part of a raid array: level=container devices=0 ctime=Wed Dec 31 19:00:00 1969 mdadm: metadata will over-write last partition on /dev/loop0. mdadm: /dev/loop1 appears to be part of a raid array: level=container devices=0 ctime=Wed Dec 31 19:00:00 1969 mdadm: container /dev/md/imsm prepared. That's something. Taking a closer look... $ ls -l /dev/md total 0 lrwxrwxrwx 1 root root 8 Apr 2 05:32 imsm -> ../md126 lrwxrwxrwx 1 root root 8 Apr 2 05:20 imsm0 -> ../md127 /dev/md/imsm0 and /dev/md127 are associated with the physical disk drives (/dev/sda and /dev/sdb). /dev/md/imsm (pointing to /dev/md126) is the newly created container based on the loopback devices. Taking a closer look at that... $ sudo IMSM_NO_PLATFORM=1 IMSM_DEVNAME_AS_SERIAL=1 mdadm -Ev /dev/md/imsm /dev/md/imsm: Magic : Intel Raid ISM Cfg Sig. Version : 1.0.00 Orig Family : 00000000 Family : ff3cb556 Generation : 00000001 Attributes : All supported UUID : 00000000:00000000:00000000:00000000 Checksum : 7edb0f81 correct MPB Sectors : 1 Disks : 1 RAID Devices : 0 Disk00 Serial : /dev/loop0 State : spare Id : 00000000 Usable Size : 625140238 (298.09 GiB 320.07 GB) Disk Serial : /dev/loop1 State : spare Id : 00000000 Usable Size : 625140238 (298.09 GiB 320.07 GB) Disk Serial : /dev/loop0 State : spare Id : 00000000 Usable Size : 625140238 (298.09 GiB 320.07 GB) That looks okay. Let's try to start the array. I found information (here and here) that said to use Incremental Assembly mode to start a container. $ sudo IMSM_NO_PLATFORM=1 IMSM_DEVNAME_AS_SERIAL=1 mdadm -I /dev/md/imsm That gave me nothing. Let's use the verbose flag. $ sudo IMSM_NO_PLATFORM=1 IMSM_DEVNAME_AS_SERIAL=1 mdadm -Iv /dev/md/imsm mdadm: not enough devices to start the container Oh, bother. Let's check /proc/mdstat. 
$ sudo cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md126 : inactive loop1[1](S) loop0[0](S) 2210 blocks super external:imsm md127 : inactive sdb[1](S) sda[0](S) 5413 blocks super external:imsm unused devices: <none> Well, that doesn't look right - the number of blocks don't match. Looking closely at the messages from when I tried to assemble, it seems mdadm said "metadata will over-write last partition on /dev/loop0", so I'm guessing that the image file associated with /dev/loop0 is hosed. Thankfully, I have backup copies of these images, so I can grab those and start over, but it takes a while to re-copy 300-600GB even over USB3. Anyway, at this point, I'm stumped. I hope someone out there has an idea, because at this point I've got no clue what to try next. Is this the right path for addressing this problem, and I just need to get some settings right? Or is the above approach completely wrong for mounting IMSM RAID-0 disk images?
Looking at the partition table for /dev/loop0 and the disk image sizes reported for /dev/loop0 and /dev/loop1, I'm inclined to suggest that the two disks were simply bolted together and then the partition table was built for the resulting virtual disk:

Disk /dev/loop0: 298.1 GiB, 320072933376 bytes, 625142448 sectors
Device Boot Start End Sectors Size Id Type
/dev/loop0p1 * 2048 4196351 4194304 2G 7 HPFS/NTFS/exFAT
/dev/loop0p2 4196352 1250273279 1246076928 594.2G 7 HPFS/NTFS/exFAT

and

Disk /dev/loop1: 298.1 GiB, 320072933376 bytes, 625142448 sectors

If we take the two disks at 298.1 GiB and 298.1 GiB we get 596.2 GiB total. If we then take the sizes of the two partitions 2G + 594.2G we also get 596.2 GiB. (This assumes the "G" indicates GiB.)

You have already warned that you cannot get mdadm to recognise the superblock information, so purely on the basis of the disk partition labels I would attempt to build the array like this:

mdadm --build /dev/md0 --raid-devices=2 --level=0 --chunk=128 /dev/loop0 /dev/loop1
cat /proc/mdstat

I have used a chunk size of 128KiB to match the chunk size described by the metadata still present on the disks. If that works you can then proceed to access the partition in the resulting RAID0.

ld=$(losetup --show --find --offset=$((4196352*512)) /dev/md0)
echo loop device is $ld
mkdir -p /mnt/dsk
mount -t ntfs -o ro $ld /mnt/dsk

We already have a couple of loop devices in use, so I've avoided assuming the name of the next free loop device and instead asked the losetup command to tell me the one it's used; this is put into $ld. The offset of 4196352 sectors (each of 512 bytes) corresponds to the offset into the image of the second partition. We could equally have omitted the offset from the losetup command and added it to the mount options.
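The offset arithmetic in the losetup command is worth spelling out; a sketch, with the numbers taken from the fdisk output quoted in the question:

```shell
# Partition 2 starts at sector 4196352, and sectors are 512 bytes,
# so the byte offset losetup needs is start_sector * sector_size.
start_sector=4196352
sector_size=512
offset=$((start_sector * sector_size))
echo "$offset"   # 2148532224 bytes into the assembled /dev/md0
```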
How do I (re)build/create/assemble an IMSM RAID-0 array from disk images instead of disk drives using mdadm?
1,459,596,229,000
Grub can be installed to a device (grub-install /dev/sda) or to a certain partition (grub-install /dev/sda1) - as I understand it. My question: if we install it to a partition, will grub-install write something to the MBR? If not, how will the BIOS/UEFI find out which partition to load from?
If you install Grub to a partition, nothing is modified outside that partition. In particular, the MBR (if the disk has classical DOS partitions) is not modified. If you do that, Linux can only be booted if the bootloader in the BIOS or UEFI knows where to look for it. The reason to install Grub on a partition is when you already have another bootloader in the MBR that knows where to load it, typically another operating system that has its own boot menu. Many bootloaders from other operating systems offer to boot primary partitions that have the “active” flag set.
Grub-install: device vs partition
1,459,596,229,000
I'm using ACL to control access to individual roots of webs for different instances of Apache and different groups of admins. I've got a Unix group admins-web22 for admins of a particular website and user apache-web22 for a particular instance of apache. These are the permissions set on the root directory of the web: # file: web22 # owner: root # group: root user::rwx user:apache-web22:r-x group::rwx group:webmaster:rwx group:admins-web22:rwx mask::rwx other::--- default:user::rwx default:user:apache-web22:r-x default:group::rwx default:group:admins-web22:rwx default:mask::rwx default:other::r-x There is a user fred which is a member of admins-web22. This user has full read-write access to the directory (as stated above). This works correctly. However, this user is unable to grant write permissions to user apache-web22 for some files and directories, which is important (e.g., the web admin wants to set an upload directory for Drupal). The setfacl command gives "Operation not permitted.". My question is, who can grant privileges using setfacl, and how can I let users of group admins-web22 change permissions (for apache-web22) themselves? I'm running Debian Wheezy and it's an ext4 partition if it's important.
The setfacl manual page explains who can grant privileges: The file owner and processes capable of CAP_FOWNER are granted the right to modify ACLs of a file. This is analogous to the permissions required for accessing the file mode. (On current Linux systems, root is the only user with the CAP_FOWNER capability.) So fred can only use setfacl on files he owns. Depending on your exact security requirements, you may be able to allow members of admins-web22 to run setfacl as apache-web22 (using sudo), which would allow them to change the ACLs on files owned by apache-web22....
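One possible shape for that sudo arrangement is a sudoers fragment like the following sketch. The group, user, and file names are taken from the question, but the policy itself (NOPASSWD, unrestricted setfacl arguments) is an assumption you should tighten to your own requirements:

```text
# Hypothetical /etc/sudoers.d/web22 fragment (edit with visudo -f):
# members of admins-web22 may run setfacl as the apache-web22 user,
# i.e. on files that apache-web22 owns.
%admins-web22 ALL = (apache-web22) NOPASSWD: /usr/bin/setfacl
```

With that in place, fred could run something like `sudo -u apache-web22 setfacl -m g:admins-web22:rwX sites/default/files` on files owned by apache-web22 (the path here is only illustrative).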
Who can change ACL permissions?
1,459,596,229,000
xterm has a modifyOtherKeys option that tells it to construct an escape sequence for various key combinations that are normally not available. That option can be enabled in .Xdefaults, or with a control sequence from within the terminal (echo -n -e '\033[>4;1m' does the trick for me). This option allows for more key combinations to be bound to commands in text mode programs, e.g. in Emacs. For example C-' (Ctrl + ') will normally generate a single ' character, which is useless, but will turn into a complex - but usable - escape sequence when the modifyOtherKeys option is enabled. Does gnome-terminal have a similar ability? I could not find anything in the menus on my system (Mageia Linux 4, gnome-terminal-3.10.2), but perhaps there is some control sequence with the same effect?
No, but there's an open bug asking for this: https://bugzilla.gnome.org/show_bug.cgi?id=730157
Does gnome-terminal have an equivalent for xterm's modifyOtherKeys?
1,459,596,229,000
I'm trying to get used to systemd, because it seems to be the way that Debian is going. I want to run Xorg in a chroot on hardware, rather than using networking (which seems to be the canonical way of doing it in a systemd container), because I don't want to install an X server on my host system. I want the host to be a thin, low-maintenance OS. It is my understanding that systemd-nspawn virtualizes /dev, and therefore does not allow access to hardware. Running a standard chroot seems to work fine in practice, though I am not sure if there will be any subtle problems with this. Aside from the guest having direct access to the hardware, is running a "real" chroot on a systemd machine a bad idea? If so, what problems will it cause? If it is bad practice, is there a way to do this with systemd-nspawn; such as some "unsafe" flag? I'm not finding one on the man page, but according to this page, there is a --share-system flag; which doesn't work for me.
The systemd developers are pretty against allowing nspawn to access real hardware, as this quote from Poettering shows:

Well, the way we see it containers are really about getting access to virtualized environments only, i.e. /dev should be mostly empty (modulo /dev/null, /dev/random and friends), and the container really never should get access to physical hardware. This will then of course not allow you to run an X server inside the container. Other container solutions do support passing through hardware from the host to the container, we just believe it's a bit out of focus for the simple tool nspawn is and should stay.

A "standard" install of Arch Linux is systemd-based, and the wiki says nothing about a traditional chroot being bad. Assuming that a traditional chroot meets your needs on a non-systemd system, it should be fine on a systemd system. There may be situations in which the additional "virtualization" of nspawn is helpful, but there may be cases where it is limiting.
Real chroot on a systemd machine
1,459,596,229,000
I'm running the following command which is supposed to find specific directories according to their Access-Time metadata detail however for some reason the find command changes the access time of these directories. find /my/directory/ -mindepth 3 -maxdepth 3 -atime +2 -type d Every time the above command runs it changes the access-time of the directories to the time in which it was executed. I couldn't find any option for the "find" command that speaks of preserving metadata. Any ideas here would be greatly appreciated. Thanks!
Access times are a feature of the filesystem; individual programs cannot prevent the updates. It has to be disabled at the filesystem level.

You can disable access-time updates for the files:

chattr -R +A /my/directory

or mount the filesystem with the noatime option to disable access-time updates for the entire filesystem.

Note that with both options above, access-time updates are only disabled for reading; writing to a file will still update its access time.
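To see whether reads on your filesystem actually bump the timestamp that find's -atime test compares, you can watch it with stat. The path below is just a scratch file for the demo, and the outcome of the second stat depends on how the filesystem is mounted:

```shell
# Inspect a file's access time -- the timestamp find's -atime compares.
f=/tmp/atime-demo.txt
echo hello > "$f"
stat -c 'atime: %x' "$f"
cat "$f" > /dev/null       # a read; whether atime changes here depends
stat -c 'atime: %x' "$f"   # on mount options (atime/relatime/noatime)
```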
Can "find" command preserve access-time
1,459,596,229,000
Using QNX 6.4.1, there is a command called pidin times that shows information about processes. I think it means PID INformation. Among other things, you can see how much CPU a process has used since it was started. I have a system that is showing almost 2 minutes of processor utilization for /usr/sbin/random after the system has been running for about 10 hours. That seems like a lot, as nothing in my code calls /usr/sbin/random. There is a lot of network activity (UDP and TCP) right now though, so I'm wondering if the network driver is calling random to get dynamic collision backoff times because of packet collisions. Could this theory be correct? (Okay, how plausible is it?) If not, is there something else I should check? There are currently latency problems with this system that did not exist yesterday and I'd like to find out what's going on. This particular clue may help isolate the problem. Update Further investigation using nicinfo revealed no packet collisions at all. So there goes my dynamic collision backoff time theory. Any other ideas? Another Update While this helped be find the answer to my problem (SSHD was using random, of course!!), be careful. If you use SSH, it needs a working random to allow you to log on. For some reason, the call in my script to random.old didn't work, and I just about bricked my embedded system. So be careful.
Crazy troubleshooting idea: make a honeypot / poor-man's process accounting.

1. Make a backup of /usr/bin/random:

cp -p /usr/bin/random /usr/bin/random.bak

2. Create the log file (note you can use a different path than /tmp if you need to, but make sure it's world writable):

touch /tmp/who_is_calling_random.log ; chmod 622 /tmp/who_is_calling_random.log

3. Replace /usr/bin/random with this shell script:

#!/bin/sh
echo "`date` $USER $$ $@" >> /tmp/who_is_calling_random.log
/usr/bin/random.bak "$@"

4. Make it executable:

chmod 755 /usr/bin/random

5. Reboot the system.

6. See what gathers in the honeypot log. This should be a log of who/what is behind the use of the random program:

tail -f /tmp/who_is_calling_random.log

7. Restore random from the backup you made in step 1 and reboot the system.
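Given the asker's warning about nearly bricking the box, the wrapper trick can be rehearsed first in a scratch directory. Everything below lives under /tmp with a fake stand-in for the real random binary, so nothing system-level is touched:

```shell
# Dry run of the honeypot wrapper under /tmp -- nothing in /usr is touched.
d=/tmp/random-honeypot
mkdir -p "$d"
printf '#!/bin/sh\necho real-random-ran\n' > "$d/random.bak"   # stand-in binary
cat > "$d/random" <<'EOF'
#!/bin/sh
echo "`date` $USER $$ $@" >> /tmp/random-honeypot/who_is_calling.log
exec /tmp/random-honeypot/random.bak "$@"
EOF
chmod 755 "$d/random" "$d/random.bak"
"$d/random" -t 10      # simulate a caller
cat "$d/who_is_calling.log"
```

Once you are happy the wrapper logs and still delegates correctly, the same script can be deployed in place of the real binary as described above.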
/usr/bin/random using a lot of CPU
1,459,596,229,000
I want to use the ioctl EVIOCGRAB function in a C based program, and from googling around I have found various bits of example source code that use the function, but I am struggling to find explicit documentation that correctly describes how to correctly use it. I see that from ioctl(2), ioctl function is defined as int ioctl(int d, unsigned long request, …); And that: The third argument is an untyped pointer to memory. It's traditionally char *argp (from the days before void * was valid C), and will be so named for this discussion. And I hoped to find EVIOCGRAB listed in ioctl_list(2), but it wasn't. So I don't know what the third argument should be for the EVIOCGRAB function. After seeing various bits of example code all I can do is assume that a non-zero value grabs the device and that a zero value releases it. Which I got from random code examples like int grab = 1; ioctl(fd, EVIOCGRAB, &grab); .. ioctl(fd, EVIOCGRAB, NULL); or ioctl(fd, EVIOCGRAB, (void*)1); .. ioctl(fd, EVIOCGRAB, (void*)0); or ioctl(fd, EVIOCGRAB, 1); .. ioctl(fd, EVIOCGRAB, 0); (Which seems to smell a bit of cargo cult programming.) So where can I find a definitive explanation of the EVIOCGRAB control parameter?
A definitive explanation can at least be found in the kernel sources, more specifically drivers/input/evdev.c:

static long evdev_do_ioctl(struct file *file, unsigned int cmd, void __user *p, int compat_mode)
{
	[…]
	switch (cmd) {
	[…]
	case EVIOCGRAB:
		if (p)
			return evdev_grab(evdev, client);
		else
			return evdev_ungrab(evdev, client);
	[…]
	}
	[…]
}

As I understand it, everything that evaluates to »false« (0) will lead to evdev_ungrab ((void*)0, 0, …); everything that's »true« (not 0) will cause an evdev_grab ((void*)1, 1, 0xDEADBEEF…).

One thing worth mentioning is that your first example,

int grab = 1;
ioctl(fd, EVIOCGRAB, &grab);
..
ioctl(fd, EVIOCGRAB, NULL);

only works unintentionally. It's not the value inside grab, but the fact that &grab is non-zero (you could have guessed this, since the counter-case isn't grab = 0; ioctl(…, &grab); but ioctl(…, NULL);). Funny. :)
Where do I find ioctl EVIOCGRAB documented?
1,459,596,229,000
I use Ubuntu 12.04.1 Linux. I see a difference between %CPU and C output format of ps command for a process. It is not clearly noted in the ps man page. Man pages says: CODE HEADER DESCRIPTION %cpu %CPU cpu utilization of the process in "##.#" format. Currently, it is the CPU time used divided by the time the process has been running (cputime/realtime ratio), expressed as a percentage. It will not add up to 100% unless you are lucky. (alias pcpu). c C processor utilization. Currently, this is the integer value of the percent usage over the lifetime of the process. (see %cpu). So basically it should be the same, but it is not: $ ps aux | head -1 USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND $ ps aux | grep 32473 user 32473 151 38.4 18338028 6305416 ? Sl Feb21 28289:48 ZServer -server -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./log $ ps -ef | head -1 UID PID PPID C STIME TTY TIME CMD $ ps -ef | grep 32473 user 32473 32472 99 Feb21 ? 19-15:29:50 ZServer -server -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./log The top shows 2% CPU utilization in the same time. I know the 'top' shows the current CPU utilization while ps shows CPU utilization over the lifetime of the process. I guess the lifetime definition is somewhat different for these two format options.
The %cpu and C columns are showing almost, but not quite, the same thing. If you look at the source for ps in ps/output.c, you can see the differences between pr_c and pr_cpu.

C is the integer value for %cpu, as you can guess. The odd difference is that C is clamped to a maximum of 99 while %cpu is not (there's a check for it for %cpu, but it just changes the format from xx.x% to xxx%).

Now, I'm not really sure why C has this clamping; it seems a little arbitrary. It's been there since procps 3.2.7 (2006), so it probably dates from the era of single CPUs.
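You can line the two columns up for a single process to compare them yourself. PID 1 is used below only because it always exists; the trailing = on each format key suppresses the header:

```shell
# Show %CPU (pcpu) and C side by side for one process. On a clamped
# procps build, C never exceeds 99 even when pcpu goes above 100 for
# busy multi-threaded processes.
ps -o pid=,pcpu=,c= -p 1
```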
What is the difference in CPU utilization between 'ps aux' and 'ps -ef'?
1,459,596,229,000
How to access the grub menu using a usb serial converter? I know it's possible to have grub menu in serial console, putting these lines in grub.conf: serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1 terminal serial But with usb serial converter? In linux it is /dev/ttyUSB0 and I can use it to see boot messages.
I haven't tried it myself, but I've found this information on the coreboot wiki (https://www.coreboot.org/GRUB2#On_a_USB_serial_or_USB_debug_adapter).

To enable serial, first find out the name of your USB serial port through:

insmod nativedisk # needed so the disk doesn't disappear when insmoding the *hci
insmod ehci
insmod ohci
insmod uhci
insmod usb
insmod usbserial_pl2303
insmod usbserial_ftdi
insmod usbserial_usbdebug
terminal_output

The terminal_output command should print it:

grub> terminal_output
Active output terminals: serial_usb1 gfxterm
Available output terminals: console vga_text serial

Here we can see "serial_usb1", so we now know that its name is usb1.

Then add the following on top of your grub.cfg:

insmod nativedisk
insmod ehci
insmod ohci
insmod uhci
insmod usb
insmod usbserial_pl2303
insmod usbserial_ftdi
insmod usbserial_usbdebug
serial --speed=115200 --word=8 --parity=no --stop=1 usb1
terminal_output --append serial_usb1
terminal_input --append serial_usb1

The following chips/protocols are supported:

usbdebug
ftdi
pl2303

The wiki is outdated, but the answer seems legit.
Grub and usb serial support
1,459,596,229,000
How can I monitor if there are any errors in RAM that get corrected by ECC? The processor is an Intel Xeon (Ivy Bridge) processor, the operating system is Scientific Linux 6.3. On a previous system I had an AMD CPU, and on that system I could use edac-util to get this info, and it would also issue alerts to the kernel log.
As far as I can find, only E5 Xeons are supported with the sb_edac module: http://www.spinics.net/lists/linux-edac/msg00846.html
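Whichever EDAC driver ends up matching the hardware, once one is loaded the corrected/uncorrected counters appear in sysfs, so a quick check is to read them directly (paths as documented in the kernel's EDAC sysfs ABI):

```shell
# Corrected (ce_count) and uncorrected (ue_count) error totals per
# memory controller, if an EDAC driver is bound; otherwise report
# that none is registered.
grep -H . /sys/devices/system/edac/mc/mc*/[cu]e_count 2>/dev/null \
    || echo "no EDAC memory controller registered (module not loaded?)"
```

If the fallback message prints on this machine, that would be consistent with the memory controller simply not being supported by any loaded edac module.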
How to monitor RAM ECC errors on Ivy Bridge Xeon E3 processor in Linux?
1,459,596,229,000
Edit: I have installed CrunchBang (or #!, a Debian-based distro), and that seems to have solved all of my resolution problems.

I've just installed Debian after using Linux Mint for a few months. The installation went smoothly; however, when it was over I noticed that the resolution was set really low. I went to System > Preferences > Monitors, which told me that the highest available resolution was 1024 * 768. It also did not recognize my monitor properly, as it was listed as "Unknown", the only rotation option was "Normal" and the refresh rate was "0 Hz" (although I'm having no problems with the refresh rate at the moment). How can I get an optimal resolution (the native one)?

xrandr says:

xrandr: Failed to get size of gamma for output default
Screen 0: minimum 800 x 600, current 1024 x 768, maximum 1024 x 768
default connected 1024x768+0+0 0mm x 0mm
   1024x768        0.0*
   800x600        61.0

lspci -v | grep VGA says:

00:02.0 VGA compatible controller: Intel Corporation Sandy Bridge Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller])
01:00.0 VGA compatible controller: ATI Technologies Inc NI Seymour [AMD Radeon HD 6470M] (prog-if 00 [VGA controller])

I'm not sure why there seem to be two graphics cards. Currently I'm trying to install the non-free AMD/ATI r6xx r7xx drivers. Is this a good move? <- this changed nothing

I'm also missing the Xorg.conf file. <- As suggested, I've created one, but that only made my computer unable to display anything after Grub. I'll have to delete it from Ubuntu.

If there's any more potentially useful info, please tell me so I can share it.
Edit: I've found an actual solution that doesn't require you to install CrunchBang instead of Debian!

I was using Debian Squeeze, which used a kernel version that, apparently, did not support my graphics card. The solution is simply upgrading to Debian testing (Wheezy). Change your /etc/apt/sources.list to:

deb http://ftp.hr.debian.org/debian testing main contrib non-free
deb-src http://ftp.hr.debian.org/debian testing main contrib non-free
deb http://ftp.debian.org/debian/ wheezy-updates main contrib non-free
deb-src http://ftp.debian.org/debian/ wheezy-updates main contrib non-free
deb http://security.debian.org/ wheezy/updates main contrib non-free
deb-src http://security.debian.org/ wheezy/updates main contrib non-free

And then execute sudo apt-get update && sudo apt-get dist-upgrade. Bam! Now you have a newer (and almost completely stable, as in, you probably won't experience any errors) version of Debian, which probably supports your card better than the older one.

Solution 2: I have installed #! (CrunchBang) and all the problems were gone. #! is pretty much Debian with some default configurations and Openbox by default.
Debian: very low resolution and an "unknown monitor" problem
1,459,596,229,000
I am trying to run a distro in a virtual disk image with a custom kernel, so that I can experiment with and debug the kernel. I followed this to make a disk image and then install Debian to it. Now I tried running the distro with the following command:

qemu-system-i386 -hda debian.img -kernel ../linux-3.6.11/arch/i386/boot/bzImage -append "root=/dev/sda1"

To my disappointment, it simply gives a "Kernel panic - not syncing: VFS: unable to mount root fs on unknown-block(8,1)". How can I fix the problem? Am I on the right path as far as kernel debugging is concerned?
I don't think you have to start debugging the kernel right away. This error message means that the kernel is unable to mount the partition you requested as /. This would happen, for example, if you gave it an empty disk image (my hunch is this is your case): the kernel in the VM sees an unpartitioned drive, so there is no /dev/sda1, just /dev/sda. To overcome this, follow the instructions in the guide you have used: download a bootable ISO image and use it to install the system into the VM image. When a raw disk image is used, it can be directly partitioned with utilities like gdisk, fdisk or parted.

Another possibility is that you are trying to mount a filesystem for which the kernel doesn't have a driver. This usually happens when one uses a kernel that has most drivers as loadable modules on the initrd, and the initrd isn't loaded (hence the kernel lacks the ability to understand the particular filesystem).
Kernel and QEMU : Unable to mount root fs error
1,459,596,229,000
I see the cores on an Intel i5 machine I'm looking at can only be run at the same clock speed: /sys/devices/system/cpu/cpu1/cpufreq/related_cpus lists all of the CPUs. Setting cpu1's clock speed changes cpu0's, as expected.

Supposedly the AMD A6-4400M machine I'm running should be able to run each core at a different clock speed: /sys/devices/system/cpu/cpu1/cpufreq/related_cpus only lists cpu1. When I set cpu1's clock speed by using the performance governor and echoing 1400000 to scaling_max_freq, cpu0's clock speed remains at 2700000 as expected. Cpu1's scaling_cur_freq reads 1400000 as expected. However, cpu1's cpuinfo_cur_freq reads 2700000. From benchmarking, it appears cpu1 is indeed still running at 2.7 GHz.

Am I missing something, or is something broken? I'm running Linux 2.6.35 and passing idle=mwait on the kernel command line.
This is not yet close to being a definite answer. Instead, it's a set of suggestions too long to fit in comments.

I'm afraid you might slightly misinterpret the meanings of the sysfs cpufreq parameters. For instance, on my Core Duo laptop, the related_cpus parameters for both cores read 0 1 - which, according to your interpretation, would mean that the cores cannot switch frequencies independently. But that is not the case - I can set each frequency at will. By contrast, the affected_cpus parameter for each core lists only the respective CPU number.

You might want to take a look at the kernel documentation for cpu-freq to get a better understanding of parameters such as affected_cpus, related_cpus, scaling_* and cpuinfo_*. The documentation is normally distributed with kernel source packages. Specifically, I recommend reading <kernel-sources-dir>/Documentation/cpu-freq/user-guide.txt, where <kernel-sources-dir> would typically stand for /usr/src/linux or /usr/src/linux-<kernel-version>. (However, when I skim through the documentation myself now, I confess I don't catch some of the frequency-scaling-related nuances. To fully understand these, one probably needs to gain a solid understanding of CPU architectures first.)

Back to your question. And one more test case on my part: when I change the value of scaling_max_freq (with either the userspace or performance governor being used), the core's clock automatically switches to that new maximum. The different behaviour you're observing might be any of:

- specific to the hardware implementation of frequency scaling mechanisms on your CPU,
- due to differences between the standard cpufreq module and phc-intel which I'm using,
- normal behaviour (call it a bug or a feature if you will) of cpufreq, which has changed at some point since 2.6.35 (my current kernel version is 3.6.2),
- the result of a bug in the cpufreq implementation for your CPU (or its entire family),
- specific to the implementation of the performance CPU governor as of 2.6.35.
Some of the things you might do to push your investigation further:

- read the user-guide.txt and fiddle more with other cpufreq parameters,
- repeat the tests while running a newer kernel - the easiest way is to boot a liveCD/DVD/USB.

If you continue to experience unexpected behaviour and gain more reasons to believe it is due to a bug (definitely must check with the latest minor kernel version), go ahead and report this on kernel.org bugzilla.
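While fiddling with the cpufreq parameters, a quick way to eyeball every core's reported frequency at once is to walk the sysfs tree the user-guide describes:

```shell
# Print each core's current frequency as cpufreq reports it; cores
# without cpufreq support (or a VM without the driver) show "n/a".
for c in /sys/devices/system/cpu/cpu[0-9]*; do
  f=$(cat "$c/cpufreq/scaling_cur_freq" 2>/dev/null || echo n/a)
  echo "${c##*/}: $f"
done
```

Comparing this against cpuinfo_cur_freq for the same cores should make the discrepancy you benchmarked easy to reproduce on demand.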
Can I run multiple cores at different clock speeds?
1,459,596,229,000
I'm trying to set up two network profiles in CentOS: one for home, one for work. The home profile has a fixed IP address, fixed gateway and DNS server addresses. The work profile depends on DHCP.

I've created a 'home' and a 'work' directory in /etc/sysconfig/networking/profiles. Each has the following files containing the proper configuration:

> -rw-r--r-- 2 root root 422 Apr 17 20:17 hosts
> -rw-r--r-- 5 root root 223 Apr 17 20:18 ifcfg-eth0
> -rw-r--r-- 1 root root 101 Apr 17 20:17 network
> -rw-r--r-- 2 root root 73 Apr 17 20:18 resolv.conf

There was already a 'default' profile, which contains the same files. Then I issued these commands:

system-config-network-cmd --profile work --activate
service network restart

I was expecting these files to get copied from the profiles/work directory to /etc/sysconfig/ and /etc/sysconfig/network-scripts. And most files do get copied, except for ifcfg-eth0. Strangely enough, that file seems to be overwritten with the current settings when I issue system-config-network-cmd. The other files are also touched, but their contents stay intact.

The system is CentOS 5.7 running in a virtual PC within a Windows 7 machine. Here is the output of ifconfig:

# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:03:FF:6F:2E:AB
          inet addr:192.168.1.200  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::203:ffff:fe6f:2eab/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4199761 errors:7 dropped:0 overruns:0 frame:0
          TX packets:1733750 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2316624688 (2.1 GiB)  TX bytes:415533386 (396.2 MiB)
          Interrupt:9

Can someone tell me what I'm missing here?
As follows from Red Hat's documentation on networking profiles, you should not use the base interface name (eth0) for profile interfaces, but have one called eth0_work and so on. BTW, you don't need to restart the network configuration, since profile switching handles it on its own. An example:

# system-config-network-cmd --profile foobar --activate
Network device deactivating...
Deactivating network device eth0, please wait...
Network device activating...
Activating network device eth0_foobar, please wait...
How to configure network profiles in Centos?
1,459,596,229,000
I inserted a new pendrive. The following is the dmesg output:

[127321.248105] usb 2-2: new high speed USB device using ehci_hcd and address 9
[127321.380898] scsi11 : usb-storage 2-2:1.0
[127322.381159] scsi 11:0:0:0: Direct-Access XXXXXXXX U1170CONTROLLER 0.00 PQ: 0 ANSI: 2
[127322.384481] sd 11:0:0:0: Attached scsi generic sg2 type 0
[127322.387127] sd 11:0:0:0: [sdb] Attached SCSI removable disk

But after executing fdisk -l there is no /dev/sdb device shown. The following is the output of the fdisk command:

Device Boot Start End Blocks Id System
/dev/sda1 * 1 19103 153443296 7 HPFS/NTFS
/dev/sda2 19103 34764 125794300 7 HPFS/NTFS
/dev/sda3 34764 38914 33333249 5 Extended
/dev/sda5 34764 34776 97280 83 Linux
/dev/sda6 34776 35025 1998848 82 Linux swap / Solaris
/dev/sda7 35025 38914 31235072 83 Linux

Can somebody please tell me how to debug this problem?

Edit: There is one sdb created in the /dev directory after inserting the USB drive. On executing the following command I get:

root@pradeep-laptop:~# mount /dev/sdb /mnt
mount: /dev/sdb: unknown device

Here is the output of the lsusb command:

Bus 005 Device 002: ID 1c4f:0002 SiGma Micro
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 002: ID 0a5c:2101 Broadcom Corp. Bluetooth Controller
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 007: ID 048d:1170 Integrated Technology Express, Inc.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

The line "Bus 001 Device 007: ID 048d:1170 Integrated Technology Express, Inc." was added after inserting the pen drive.
You have to make at least one filesystem on the pendrive (and a partition table, certainly). The first filesystem you make should be on /dev/sdb1, which is then mountable. For example:

root# mkfs.xfs /dev/sdb1 && mount /dev/sdb1 /mnt -t auto

will create an XFS filesystem on the first partition and mount it. Of course, you could add more than one filesystem to the pendrive; their names will be /dev/sdb{1,2..n}, respectively. Editing storage devices with gparted would make the process easier through its visual interface.
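The "no filesystem means unmountable" point can be rehearsed without any real pendrive by pointing mkfs at an image file. mkfs.ext2 is used in this sketch because it accepts regular files with -F (the path is a throwaway demo value; on the real stick you would use mkfs.xfs or whatever you prefer against /dev/sdb1):

```shell
# Rehearsal on an image file: a blank "device" carries no filesystem
# and cannot be mounted; after mkfs it has a readable superblock.
img=/tmp/pendrive-demo.img
truncate -s 16M "$img"
mkfs.ext2 -F -q "$img"        # -F: operate on a regular file
tune2fs -l "$img" | head -n 3 # superblock is now readable
```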
Pen Drive Not detected in Linux
1,459,596,229,000
Google could not help me with this problem. I hope you guys can. When I boot my computer, the first few screens presented to me by BIOS and boot menu are stretched to fit the LCD screen. Once Linux boots, however, the screen shrinks so one pixel of the console font uses only one pixel of the screen, causing the usable area of screen to shrink to only the upper left part of the screen, since the console uses only 640x480 of the 1280x1024 size monitor. I know I can use the VGA= boot flag to set modes that increases the number of rows and columns of text, so that the whole screen is used. However, what I want to do is keep the number of rows and columns as they are but scale the whole screen to fit the monitor, just like the BIOS boot messages. I need to do this in a way that will work on any monitor automatically. EDIT: I've not given info on hardware on purpose, because I want the solution to be hardware-agnostic. The distribution I'm using is Ubuntu 10.10.
Using only the nomodeset kernel option got me the results I wanted, the console now fills the entire screen.
In linux console (no X), how to stretch console screen to fit monitor
1,459,596,229,000
As far as I understand Wine runtime does better if some libraries are copied from MS Windows, but some Windows system libraries really are not to be used with Wine (some can even make it stop working, and many are simply useless). So what files make sense and are safe to copy from MS Windows into Wine system? I own a legal Windows XP copy (but prefer to use GNU/Linux) and run Wine 1.3.8 on Ubuntu 10.10.
You should use winetricks to install the files instead; some files need specific changes to the Wine registry, which winetricks handles.
What files should I copy from Windows into Wine?
1,459,596,229,000
I am setting up a redhat ec2 instance and by default the software I am using (called qradar) created the following volumes on the two 500g ebs storage devices attached to the instance: $ lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert storetmp rootrhel -wi-ao---- 20.00g varlog rootrhel -wi-ao---- <20.00g store storerhel -wi-ao---- <348.80g transient storerhel -wi-ao---- <87.20g $ df -h Filesystem Size Used Avail Use% Mounted on /dev/xvda2 500G 1.4G 499G 1% / devtmpfs 16G 0 16G 0% /dev tmpfs 16G 0 16G 0% /dev/shm tmpfs 16G 17M 16G 1% /run tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/mapper/storerhel-store 349G 33M 349G 1% /store /dev/mapper/storerhel-transient 88G 33M 88G 1% /transient /dev/mapper/rootrhel-storetmp 20G 33M 20G 1% /storetmp /dev/mapper/rootrhel-varlog 20G 35M 20G 1% /var/log tmpfs 3.2G 0 3.2G 0% /run/user/1000 I need my storetmp to be 100g. How can I move 80g of storage from store to storetmp? It also seems that I may need to shift some space from xvdb3 to xvdb2: # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT xvda 202:0 0 500G 0 disk ├─xvda1 202:1 0 1M 0 part └─xvda2 202:2 0 500G 0 part / xvdb 202:16 0 500G 0 disk ├─xvdb1 202:17 0 24G 0 part [SWAP] ├─xvdb2 202:18 0 40G 0 part │ ├─rootrhel-varlog 253:2 0 20G 0 lvm /var/log │ └─rootrhel-storetmp 253:3 0 20G 0 lvm /storetmp └─xvdb3 202:19 0 436G 0 part ├─storerhel-store 253:0 0 348.8G 0 lvm /store └─storerhel-transient 253:1 0 87.2G 0 lvm /transient Note that the directories are currently being used by the software running on the box and are not empty, so deleting them is out of the question, I need this to be done on-the-fly: $ ls -l /dev/mapper/storerhel-transient lrwxrwxrwx 1 root root 7 Aug 10 16:00 /dev/mapper/storerhel-transient -> ../dm-3 $ ls -l /dev/mapper/rootrhel-varlog lrwxrwxrwx 1 root root 7 Aug 10 16:00 /dev/mapper/rootrhel-varlog -> ../dm-0 $ ls -l /dev/mapper/storerhel-store lrwxrwxrwx 1 root root 7 Aug 17 04:10 /dev/mapper/storerhel-store -> ../dm-2
An extra 80 GB in EC2 EBS costs something under $12 per month. On-line manipulations are likely to take more than one hour of your work, and a risk of downtime if something goes wrong - how much is that worth for you? Pay for some extra capacity, add it to your instance as a third disk xvdc, initialize it as a LVM PV (you don't even have to put a partition table on it: just pvcreate /dev/xvdc will be sufficient). Then add the new PV to your rootrhel VG (vgextend rootrhel /dev/xvdc) and now you can extend your /storetmp with the added capacity. lvextend -L +80G /dev/mapper/rootrhel-storetmp xfs_growfs /storetmp #or the appropriate tool for your filesystem type With your immediate problem solved, you can now schedule for some downtime at suitable time. If you are using XFS filesystem (as RHEL/CentOS 7 does by default), then during the next scheduled downtime, you'll create tarballs of the current contents of /store and /transient, unmount and remove the entire storerhel VG, add its PV xvdb3 to the rootrhel VG and then recreate the LVs for /store and /transient filesystems using more realistic estimates for their capacity needs, and restore the contents of the tarballs. End of downtime. Now your rootrhel VG has three PVs: xvdb2, xvdb3 and xvdc, and plenty of space for your needs. If you want to stop paying for xvdc, you can use pvmove /dev/xvdc to automatically migrate the data within the VG off the xvdc and onto unallocated space within xvdb2 and/or xvdb3. You can do this on-line; just don't do it at the time of your peak I/O workload to avoid taking a performance hit. Then vgreduce rootrhel /dev/xvdc, echo 1 > /sys/block/xvdc/device/delete to tell the kernel that the xvdc device is going away, and then tell Amazon that you don't need your xvdc disk any more. I have nearly 20 years of experience working with LVM disk storage (first with HP-UX LVM, and later with Linux LVM once it matured enough to be usable in enterprise environment). 
These are the rules of thumb I've come to use with LVM:

- You should never create two VGs when one is enough. In particular, having two VGs on a single disk is most likely a mistake that will cause you headache. Reallocating disk capacity within a VG is as flexible as your filesystem type allows; moving capacity between VGs in chunks smaller than one already-existing PV is usually not worth the hassle.
- If there is uncertainty in your disk space requirements (and there always is), keep your LVs on the small side and some unallocated space in reserve. As long as your VG has unallocated capacity available, you can extend LVs and the filesystems in them on-line as needed with one or two quick commands. It's a one-banana job for a trained monkey junior sysadmin. If there is no unallocated capacity in the VG, get a new disk, initialize it as a new PV, add it to the VG that needs capacity, and then go on with the extension as usual.
- Shrinking filesystems is more error-prone, may require downtime or may even be impossible without backing up & recreating the filesystem in a smaller size, depending on filesystem type. So you'll want to avoid situations that require on-line shrinking of filesystems as much as possible.
- Micro-management of disk space can be risky, and is a lot of work. Work is expensive.

Okay. Technically you could create a 80 GB file on /store, losetup it into a loop device, then make that into a PV you could add into your rootrhel VG... but doing that would result in a system that would most likely drop into a single user recovery mode at boot unless you set up a customized start-up script for these filesystems and VGs and got it right at the first time.
Get it wrong, and the next time your system is rebooted for any reason you'll have to take some unplanned downtime for troubleshooting and fixing, or more realistically recreating the filesystems from scratch and restoring the contents from backups, because that's simpler than trying to troubleshoot this jury-rigged mess.

Or, if you are using an ext4 filesystem that can be on-line reduced, you could shrink the /store filesystem, shrink the LV, use pvmove --alloc anywhere to consolidate the free space at the tail end of the xvdb3 PV, shrink the PV, shrink the partition, run partprobe to make the changes effective without a reboot, then create a new partition xvdb4, initialize it as a new PV and add it to the rootrhel VG... BUT make one mistake in this sequence so that your filesystem/PV extends beyond its LV/partition container, and your filesystem gets switched into read-only mode with an error flag that can only be reset by running a filesystem check, resulting in mandatory unplanned downtime.
Volume Management: How to move space from one partition to another?
1,459,596,229,000
In How do I extract the filesystem image from vmlinux.bin? and https://wiki.gentoo.org/wiki/Custom_Initramfs#Salvaging methods are presented for getting and unpacking an embedded initramfs/initrd included in the kernel image. Now I would like to insert the modified file system (cpio + possibly packed using e.g. lzma) into the kernel executable without having to recompile it. Would it be possible to modify the ELF image of the kernel in this way? If so then how? Would I need to keep something in respect if I were to simply replace the bytes in-place (maybe some hash?)? objdump-h Output: vmlinux.64.orig: file format elf64-big Sections: Idx Name Size VMA LMA File off Algn 0 .text 004162b8 ffffffff80100000 ffffffff80100000 00010000 2**7 CONTENTS, ALLOC, LOAD, READONLY, CODE 1 __ex_table 000063a0 ffffffff805162c0 ffffffff805162c0 004262c0 2**3 CONTENTS, ALLOC, LOAD, READONLY, DATA 2 .notes 00000024 ffffffff8051c660 ffffffff8051c660 0042c660 2**2 CONTENTS, ALLOC, LOAD, READONLY, DATA 3 .rodata 0041f700 ffffffff8051d000 ffffffff8051d000 0042d000 2**8 CONTENTS, ALLOC, LOAD, READONLY, DATA 4 .pci_fixup 00000d40 ffffffff8093c700 ffffffff8093c700 0084c700 2**3 CONTENTS, ALLOC, LOAD, READONLY, DATA 5 __ksymtab 0000a430 ffffffff8093d440 ffffffff8093d440 0084d440 2**3 CONTENTS, ALLOC, LOAD, READONLY, DATA 6 __ksymtab_gpl 00004ff0 ffffffff80947870 ffffffff80947870 00857870 2**3 CONTENTS, ALLOC, LOAD, READONLY, DATA 7 __ksymtab_strings 00010f14 ffffffff8094c860 ffffffff8094c860 0085c860 2**0 CONTENTS, ALLOC, LOAD, READONLY, DATA 8 __init_rodata 00000500 ffffffff8095d778 ffffffff8095d778 0086d778 2**3 CONTENTS, ALLOC, LOAD, READONLY, DATA 9 __param 00001388 ffffffff8095dc78 ffffffff8095dc78 0086dc78 2**3 CONTENTS, ALLOC, LOAD, READONLY, DATA 10 .data 000508c0 ffffffff80960000 ffffffff80960000 00870000 2**14 CONTENTS, ALLOC, LOAD, DATA 11 .init.text 0002b084 ffffffff809b1000 ffffffff809b1000 008c1000 2**5 CONTENTS, ALLOC, LOAD, READONLY, CODE 12 .init.data 00bc6d78 ffffffff809dc088 
ffffffff809dc088 008ec088 2**3 CONTENTS, ALLOC, LOAD, DATA 13 .exit.text 000019e0 ffffffff815a2e00 ffffffff815a2e00 014b2e00 2**2 CONTENTS, ALLOC, LOAD, READONLY, CODE 14 .data.percpu 00003680 ffffffff815a5000 ffffffff815a5000 014b5000 2**7 CONTENTS, ALLOC, LOAD, DATA 15 .bss 00068fb0 ffffffff815b0000 ffffffff815b0000 014b8680 2**16 ALLOC 16 .mdebug.abi64 00000000 ffffffff81618fb0 ffffffff81618fb0 014b8680 2**0 CONTENTS, READONLY 17 .comment 0000cd74 0000000000000000 0000000000000000 014b8680 2**0 CONTENTS, READONLY 18 .gnu.attributes 00000010 0000000000000000 0000000000000000 014c53f4 2**0
As mentioned in the answer to a similar question about replacing ELF sections discussed at reverseengineering.se, simply overwriting the bytes in place with dd can be enough under some circumstances: the new archive must not be larger than the original, and it also depends on e.g. whether there are relocations referencing that region.
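A sketch of that in-place approach, demonstrated on a scratch file rather than a real kernel image (for the real thing, take the offset and size from the "File off" and "Size" columns of objdump -h; the file names and values below are made up for illustration):

```shell
# Stand-in for vmlinux; the region "OLDPAYLOAD" plays the embedded archive.
printf 'HEADER--OLDPAYLOAD--TRAILER' > image.bin
OFFSET=8    # file offset of the embedded region (from objdump -h in real life)
SIZE=10     # bytes reserved for it

printf 'NEWDATA' > new.cpio                 # stand-in for the new archive
actual=$(stat -c %s new.cpio)
[ "$actual" -le "$SIZE" ] || { echo "new archive too big" >&2; exit 1; }

# Overwrite in place; conv=notrunc keeps the rest of the file untouched.
dd if=new.cpio of=image.bin bs=1 seek="$OFFSET" conv=notrunc 2>/dev/null

# Zero-pad the remainder of the reserved region so no stale bytes survive.
dd if=/dev/zero of=image.bin bs=1 seek=$((OFFSET + actual)) \
   count=$((SIZE - actual)) conv=notrunc 2>/dev/null
```

After this, image.bin is still its original size, with NEWDATA followed by zero bytes where OLDPAYLOAD used to be.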
Repack the filesystem image from vmlinux.bin (embedded initramfs) without rebuilding?
1,459,596,229,000
Scanning the network for SNMP-enabled devices with Nmap:

sudo nmap -sU -p 161 --script default,snmp-sysdescr 26.14.32.120/24

I'm trying to figure out how to make nmap return only devices that have specific entries in the snmp-sysdescr object:

snmp-sysdescr: "Target device name"

Is that possible?
Nmap doesn't contain much in the way of output filtering options: --open will limit output to hosts containing open ports (any open ports), and -v0 will prevent any output to the screen.

Instead, the best way to accomplish this is to save the XML output of the scan (using the -oX or -oA output options), which will contain all the information gathered by the scan in an easy-to-parse XML format. Then you can filter that with XML parsing tools to include the information you want.

One command-line XML parser is xmlstarlet. You can use this command to filter out only IP addresses for targets that have a sysdescr containing the string "example":

xmlstarlet sel -t -m "//port/script[@id='snmp-sysdescr' and contains(@output,'example')]/../../../address[@addrtype='ipv4']" -v @addr -n output.xml

You can also do this with Ndiff, which is a tool and Python 2 library distributed with Nmap:

#!/usr/bin/env python

import ndiff

def sysdescr_contains (value, host):
    for port in host.ports:
        for script in filter(lambda x: x.id == u"snmp-sysdescr",
                             port.script_results):
            if value in script.output:
                return True
    return False

def usage ():
    print """Look for <substring> in snmp-sysdescr output and print matching hosts.
Usage: {0} <filename.xml> <substring>""".format(sys.argv[0])

if __name__ == "__main__":
    import sys
    if len(sys.argv) < 3:
        usage()
        exit(1)
    scan = ndiff.Scan()
    scan.load_from_file(sys.argv[1])
    for host in filter(lambda x: sysdescr_contains(sys.argv[2], x),
                       scan.hosts):
        print host.format_name()

Other Nmap-output parsing libraries are available in most common programming languages.
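If you would rather avoid the Python 2 ndiff dependency, the same filtering can be done against the -oX output with nothing but the standard library. A minimal sketch; the XML snippet below is a hand-made stand-in mimicking the structure of real nmap output, not actual scan data:

```python
import xml.etree.ElementTree as ET

def hosts_with_sysdescr(xml_text, substring):
    """Return IPv4 addresses of hosts whose snmp-sysdescr output contains substring."""
    root = ET.fromstring(xml_text)
    matches = []
    for host in root.iter("host"):
        for script in host.iter("script"):
            if (script.get("id") == "snmp-sysdescr"
                    and substring in (script.get("output") or "")):
                for addr in host.iter("address"):
                    if addr.get("addrtype") == "ipv4":
                        matches.append(addr.get("addr"))
    return matches

# Hand-made sample following the nmap -oX layout (host/address/ports/port/script):
sample = """<nmaprun>
  <host>
    <address addr="192.0.2.10" addrtype="ipv4"/>
    <ports><port portid="161"><script id="snmp-sysdescr" output="Target device name"/></port></ports>
  </host>
  <host>
    <address addr="192.0.2.11" addrtype="ipv4"/>
    <ports><port portid="161"><script id="snmp-sysdescr" output="Something else"/></port></ports>
  </host>
</nmaprun>"""

print(hosts_with_sysdescr(sample, "Target device"))  # → ['192.0.2.10']
```

For a real scan you would read the file with ET.parse("output.xml") instead of fromstring.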
Nmap scan for SNMP enabled devices
1,459,596,229,000
I have recently decided to install FreeBSD on my desktop but I still have several computers running GNU/Linux and I would like to share disk partitions between the two OSs, in particular: The computer using FreeBSD will also have a GNU/Linux distribution installed and I would like to have a shared partition that can be read / written by both FreeBSD and GNU/Linux. I would like to use external hard-disk drives and USB-sticks from both operating systems. By reading various documentation and online forums, I understood that ext2 is the only solution right now: ufs write-support in Linux is still experimental, FreeBSD has limited support for ext3, and supports ext4 and ReiserFS read-only. Did I miss something, i.e. are there other viable filesystems?
You can use ext2. Support for ext2 has existed in FreeBSD for a while and can probably be considered stable. Of course it is native in GNU/Linux, as you know.

You could also use ext3, but without the journal and extended attributes (use mount options in the Linux /etc/fstab), which would increase some limits.

This is probably much better than using a filesystem which is not native on either of the two systems, like NTFS and the like.

Source: https://www.freebsd.org/doc/handbook/filesystems-linux.html
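For concreteness, hypothetical fstab entries for a shared ext2 partition might look like this (the device names and mount point are placeholders for your own; note that FreeBSD's ext2 driver registers the filesystem type as ext2fs):

```
# Linux /etc/fstab
/dev/sdb1   /mnt/shared  ext2    defaults,noatime  0  2

# FreeBSD /etc/fstab (fields: device, mountpoint, fstype, options, dump, pass)
/dev/da0s1  /mnt/shared  ext2fs  rw,noatime        0  0
```

If you create the filesystem from the Linux side, a plain mkfs.ext2 on the shared partition gives you something both systems can then mount read/write.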
Filesystem to share disks between Linux and FreeBSD
1,459,596,229,000
I'm trying to download a file from a linux server I'm already connected to. I know you can use scp to connect to and pull down a file from a host, but that requires still being on local. I could scp the file back to my local machine, but the local machine is not accessible from the host. Is there a way to just pull down the file you are already looking at? Something like, from the host:

$ download <THEFILE>

This would really just be more convenient than having to go back out to my local terminal to scp the file. Instead you could just say "grab this one" and be done. I suppose the client would have to know what to do with the file, and I'm pretty sure "terminal.app" does not have a default downloads folder, so perhaps this is not possible. BTW I'm connecting from a mac to Debian.
Your machine hostname is not resolvable from the remote host. You should do this the other way round, from your local host:

scp xyz@remote:/home/user/test /home/user

The other way is to set up remote port forwarding, so you will be able to connect from your remote machine to your local host. Your commands can look like this:

[local]  $ ssh -R 2222:localhost:22 remote
[remote] $ scp -P 2222 /home/user/test xyz@localhost:/home/user

Inspired by my answer on SO
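If you want the reverse tunnel on every session instead of typing the -R option each time, a hypothetical ~/.ssh/config entry on the Mac could look like this (host name, user and address are placeholders):

```
# ~/.ssh/config on the local (Mac) machine
Host remote
    HostName remote.example.com
    User xyz
    # Expose the local sshd on port 2222 of the remote host for the
    # duration of the session:
    RemoteForward 2222 localhost:22
```

With that in place, a plain ssh remote sets up the forwarding, and the scp -P 2222 ... command shown above works from the remote side.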
How do I download a file from a host i'm already connected to over ssh [duplicate]
1,459,596,229,000
I am trying to format an sdcard following this guide. I am able to successfully create the partition table, but attempting to format the Linux partition with mkfs yields the following output:

mke2fs 1.42.9 (4-Feb-2014)
Discarding device blocks: 4096/1900544

where it appears to hang indefinitely. I have left the process running for a while but nothing changes. If I eject the sdcard then mkfs writes the expected output to the terminal:

mke2fs 1.42.9 (4-Feb-2014)
Discarding device blocks: failed - Input/output error
Warning: could not erase sector 2: Attempt to write block to filesystem resulted in short write
warning: 512 blocks unused.

Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
476064 inodes, 1900544 blocks
95026 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1946157056
58 block groups
32768 blocks per group, 32768 fragments per group
8208 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Warning: could not read block 0: Attempt to read block from filesystem resulted in short read
Warning: could not erase sector 0: Attempt to write block to filesystem resulted in short write
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: 0/58
Warning, had trouble writing out superblocks.

Why is mkfs reporting that we are "discarding" blocks and what might be causing the hangup?

EDIT

I am able to successfully create two partitions -- one at 100MB and the other 7.3GB. I then can format, and mount, the 100MB partition as FAT32 -- it's the ext4 7.3GB partition that is having this trouble. dmesg is flooded with:

[ 9350.097112] mmc0: Got data interrupt 0x02000000 even though no data operation was in progress.
[ 9360.122946] mmc0: Timeout waiting for hardware interrupt.
[ 9360.125083] mmc_erase: erase error -110, status 0x0
[ 9360.125086] end_request: I/O error, dev mmcblk0, sector 3096576

EDIT 2

It appears the problem manifests when I am attempting to format as ext4. If I format the 7.3GB partition as FAT32, as an example, the operation succeeds.

EDIT 3

To interestingly conclude the above, I inserted the sdcard into a BeagleBone and formatted it in the exact same way I was on Mint and everything worked flawlessly. I removed the sdcard, reinserted it into my main machine and finished copying over the data to the newly created and formatted partitions.
I actually suspect you are being bitten by a much-discussed ext4 corruption bug in kernels 3 and 4. Have a look at this thread: http://bugzilla.kernel.org/show_bug.cgi?id=89621

There have been constant reports of corruption bugs with ext4 filesystems, with varying setups, and lots of people complaining in forums. The bug seems to affect more people with RAID configurations. However, they are supposedly fixed in 4.0.3: "4.0.3 includes a fix for a critical ext4 bug that can result in major data loss." https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=785672

There are other ext4 bugs, including bugs fixed as of the 30th of November [of 2015]. https://lists.ubuntu.com/archives/foundations-bugs/2015-November/259035.html

There is also a very interesting article here talking about configuration options in ext4, and possible corruption with it on power failures: http://www.pointsoftware.ch/en/4-ext4-vs-ext3-filesystem-and-why-delayed-allocation-is-bad/

I would test the card with a filesystem other than ext4, maybe ext3. Those systematic bugs with ext4 are one of the reasons I am using linux-image-4.3.0-0.bpo.1-amd64 from the debian backports repository in Jessie in my server farm at work. Your version in particular, kernel 3.13, seems to be more affected by the bug. https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1298972 https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1389787

I would not put aside the possibility that some combination of configuration and hardware on your side is triggering the bug more than usual. SD cards also go bad with wear and tear, and due to the journaling, an ext4 filesystem is not ideal for an SD card.

As a curiosity, I am using a Lamobo R1 with the SD card just for booting the kernel, and an SSD disk for everything else. http://linux-sunxi.org/Lamobo_R1
Formatting an sdcard with mkfs hangs indefinitely
1,459,596,229,000
Is there any user space tool that can retrieve and dump the list of bad blocks in a NAND flash device? I've checked the mtdinfo command line utility, and also searched /proc and /sys, but couldn't find anything. I am looking for something suitable for use from a shell script. I could parse dmesg as the kernel prints bad block information on init, but I am hoping there will be a better way.
I have not been able to find any user space utility doing what I need. The closest I have found is the nanddump utility from mtd-utils, which can dump NAND contents, including bad blocks.
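Lacking a dedicated tool, one workable fallback is the dmesg parsing mentioned in the question. A hedged sketch: the "Bad eraseblock N at 0x..." message is what the MTD NAND bad-block-table code typically prints at scan time, but the exact wording varies across kernel versions and drivers, so adjust the pattern to whatever your kernel logs:

```shell
#!/bin/sh
# Extract bad-block numbers and offsets from kernel log lines such as:
#   [    1.240] Bad eraseblock 42 at 0x000000540000
list_bad_blocks() {
    grep -o 'Bad eraseblock [0-9]* at 0x[0-9a-f]*' |
        awk '{print $3, $5}'    # emit "<block-number> <offset>"
}

# On a real system you would run:  dmesg | list_bad_blocks
# Demonstration with sample log lines:
printf '%s\n' \
  '[    1.234] nand: device found, Manufacturer ID: 0x2c' \
  '[    1.240] Bad eraseblock 42 at 0x000000540000' \
  '[    1.241] Bad eraseblock 101 at 0x000000ca0000' |
  list_bad_blocks
```

On the sample input this prints "42 0x000000540000" and "101 0x000000ca0000", one bad block per line, which is easy to consume from a further script.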
Print list of bad blocks in NAND flash from user space
1,459,596,229,000
How do I set a tiled background image in urxvt? I have tried the following alternatives in my .Xresources file: URxvt*backgroundPixmap: /home/jgg/Pictures/tiles/escheresque_ste.png;style=tiled URxvt.backgroundPixmap: /home/jgg/Pictures/tiles/escheresque_ste.xpm URxvt.backgroundPixmap: /home/jgg/Pictures/tiles/escheresque_ste.png;+0+0:tile -tr I also ran xrdb -merge ~/.Xresources to ensure that the configuration is updated every time. What am I missing? Is there a library I need to install? I'm running rxvt-unicode (urxvt) v9.15 on a Debian 7.4 (Wheezy) system.
Use the -pixmap option to set the background image for RXVT. For instance, I have a set of small .png tiles that I pick at random using the following one-liner: urxvt -pixmap "`find /path/to/tiles/ -name '*.png' | sort -R | head -n 1`;style=tiled" You could easily turn the above into an alias or script.
How to set a background image for urxvt?
1,459,596,229,000
Is it possible to configure UFW to allow UPnP between computers in the home network? Everything works if I turn off the firewall, and I can see in syslog that the firewall is blocking me. I've tried all sorts of tips out there, like opening 1900, 1901, 5353; these all seemed like random attempts. I know the issue is that UPnP requests a random port and UFW is simply blocking it.
You seem to be close to the answer. The easiest thing to do is to temporarily turn off the firewall, let your media boxes run for a couple of minutes and then check the output from lsof:

lsof -i :1025-9999 +c 15

The -i lists "files" corresponding to an open port; use -i4 to restrict to IPv4 only. The number list restricts this to a range of port numbers - miss it off if you want everything. The +c bit just gives you more meaningful command names associated with the ports.

netstat -lptu --numeric-ports

This lists all of the active ports along with their protocol and source/target address. With this information, you can build a script to set ufw correctly. Here is my script by way of example:

#!/bin/sh
# Set up local firewall using ufw (default install on Ubuntu)
# @see /etc/services for port names

# obtain server's IP address
SERVERIP=192.168.1.181

# Local Network
LAN="192.168.0.0/255.255.0.0"

# disable firewall
ufw disable

# reset all firewall rules
ufw reset

# set default rules: deny all incoming traffic, allow all outgoing traffic
#ufw default allow incoming
ufw default deny incoming
ufw default allow outgoing

# open port for SSH
ufw allow OpenSSH

# open port for Webmin
ufw allow webmin

# open ports for Samba file sharing
ufw allow from $LAN to $SERVERIP app Samba
ufw allow to $LAN from $SERVERIP app Samba
#ufw allow from $LAN to $SERVERIP 137/udp # NetBIOS Name Service
#ufw allow from $LAN to $SERVERIP 138/udp # NetBIOS Datagram Service
#ufw allow from $LAN to $SERVERIP 139/tcp # NetBIOS Session Service
#ufw allow from $LAN to $SERVERIP 445/tcp # Microsoft Directory Service

# open ports for Transmission-Daemon
ufw allow 9091
ufw allow 20500:20599/tcp
ufw allow 20500:20599/udp

# Mediatomb
## upnp service discovery
ufw allow 1900/udp
## Mediatomb management web i/f
ufw allow 49152

# Plex Media Server
## Manage
ufw allow 32400

# open port for MySQL
ufw allow proto tcp from $LAN to any port 3306

# open ports for web services
ufw allow 80
ufw allow 443
ufw allow 8000:9999/tcp
ufw allow 8000:9999/udp

# Deny FTP
ufw deny 21/tcp

# Webmin/usermin allow
ufw allow webmin
ufw allow 20000

# open port for network time protocol (ntpd)
ufw allow ntp

# Allow Firefly (DAAP)
ufw allow 3689

# enable firewall
ufw enable

# list all firewall rules
ufw status verbose

You should be able to see from the Mediatomb section that UPnP is working on the standard port 1900 over UDP (not TCP) and is open in both directions; this is the main port for you. But you can also see that there are numerous other ports required for specific services.
Uncomplicated Firewall (UFW) and UPNP
1,459,596,229,000
This is a follow up to my crazy mdadm problem. I'm trying to figure out what might have caused sda to get out of sync in the first place. The only thing I can think of is that I had just run a bunch of updates and was rebooting to load the kernel upgrade. Is it possible that both drives hadn't synced? Would the system prevent a reboot if there was mdadm syncing going on? Could it be made to? Any other suggestions as to what might have happened, and how it could be prevented in the future? Nothing seems to be wrong with the drive.
It certainly does on a clean shutdown: The Debian mdadm FAQ implies the kernel does the right thing: ​8. (One of) my RAID arrays is busy and cannot be stopped. What gives? It is perfectly normal for mdadm to report the array with the root filesystem to be busy on shutdown. The reason for this is that the root filesystem must be mounted to be able to stop the array (or otherwise /sbin/mdadm does not exist), but to stop the array, the root filesystem cannot be mounted. Catch 22. The kernel actually stops the array just before halting, so it's all well. The md driver sets all devices as read-only on shutdown (and even gives the physical devices about one second to settle). Even if your system crashes in the middle of a write, the driver does take care to mark blocks as dirty while they're being written to, and to resync dirty blocks if it starts from an unclean array. See the comments regarding array states. The kernel documentation warns that arrays that are both dirty (not cleanly shut down) and degraded (having missing pieces) are not assembled automatically as this wouldn't be safe. When you assemble a dirty array, you'll (possibly very briefly) see it resync in /sys/block/md99/md/rd0/state. All in all, the md driver takes care of protecting your data against a total failures of a hardware component (CPU or disk), which is what's expected of it. What md won't protect you against is data corruption due to a Byzantine failure (i.e. silent flipping of one or more bits) in RAM, CPU, motherboard, or disk. The disk hardware has checksums, but they're not perfect (see e.g. Zfs promotional literature). Zfs and Btrfs can protect against storage device corruption. Btrfs's checksum tree ensures that you will be notified if your hard disk flips a bit. Zfs offers a choice of checksum (according to Jeff Bonwick's Blog), up to SHA-256 which protects not only against random corruption but even against deliberate attack, at the cost of CPU cycles.
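If you want to verify the state yourself before issuing a reboot, the md sysfs attributes mentioned above can be polled from a script. A minimal sketch, assuming the standard /sys/block/mdX/md/sync_action layout (its documented values are idle, resync, recover, check and repair); the SYSROOT override is an invention of this sketch so the logic can be exercised against a fake tree:

```shell
#!/bin/sh
# Report whether any md array is still resyncing before a reboot.
# SYSROOT defaults to the real sysfs; override it only for testing.
SYSROOT=${SYSROOT:-/sys}

arrays_idle() {
    idle=0
    for action in "$SYSROOT"/block/md*/md/sync_action; do
        [ -e "$action" ] || continue        # glob didn't match: no md arrays
        state=$(cat "$action")
        if [ "$state" != "idle" ]; then
            echo "$(dirname "$(dirname "$action")"): $state"
            idle=1
        fi
    done
    return $idle
}

if arrays_idle; then
    echo "all arrays idle - safe to reboot"
else
    echo "resync in progress - consider waiting"
fi
```

You could drop this into a pre-reboot check or a cron job; it prints each busy array together with its current sync action.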
Will the system make sure that mdadm is sync-ed before completing a reboot?
1,459,596,229,000
There are several scheduling options in Linux, which can be set on a process with the chrt command-line tool, and I can't seem to grasp one of them: SCHED_BATCH. In fact, its description contradicts itself in several sources. But first, I'll summarize the facts I managed to gather about all the scheduling options, as the different descriptions of SCHED_BATCH reference them.

SCHED_FIFO and SCHED_RR are basically real-time scheduling policies, with slight differences; no questions here mostly, they are always run first before any other processes.

SCHED_OTHER is the default policy, relying on per-process nice values to assign priorities.

SCHED_DEADLINE - I haven't completely understood this one; it seems to actually be closer to SCHED_FIFO but with set execution timers, and may actually preempt SCHED_FIFO/SCHED_RR tasks if those timers are reached.

SCHED_IDLE - basically "schedule only if otherwise the CPU would be idling"; no questions here.

And now there is SCHED_BATCH, which I found out is quite controversial. On one hand, in the original implementation notes back from 2002 it is said it's basically the same as SCHED_IDLE for user code and SCHED_OTHER for kernel code (to avoid kernel resource lock-up due to a process never being scheduled). In the man pages, though, it is said it's basically the same as SCHED_OTHER but with a small penalty because it is considered CPU-bound. These two SCHED_BATCH explanations are not consistent with each other. Which one is the truth? Did SCHED_BATCH get reimplemented somewhere along the way? Or did someone misunderstand SCHED_BATCH when writing the man page? Which of the explanations is the true one?

UPDATE: additional research. I did some more digging and found this kernel scheduler documentation; also, by digging in the sched sources a bit, it seems SCHED_BATCH is indeed handled by the same code as SCHED_OTHER. There's still a test I did.
I have a process which was actually throttling other processes because of its high load, even with nice 19 but the default SCHED_OTHER setting. I set it up as SCHED_IDLE, and the problem of the throttling of the other processes was gone... And then I decided to try out SCHED_BATCH for the same process (with the same nice 19 setting). Surprisingly enough, SCHED_BATCH also didn't throttle my other processes, just like SCHED_IDLE! But it seems from the docs I linked above that SCHED_IDLE may also be handled by the same code as SCHED_OTHER and SCHED_BATCH, but with super-low priority. Although I'm still kind of lost on in which cases it uses CFS (which is SCHED_OTHER's main implementation), and in which cases it uses the kernel/sched/idle.c implementation. I definitely still don't have any conclusive answer; the tests and documentation are still not making too much sense to me. Tests showed that SCHED_BATCH actually worked like SCHED_IDLE would, but all documentation tells me that it should work like SCHED_OTHER instead!
SCHED_BATCH

From man sched(7), about SCHED_BATCH:

…this policy will cause the scheduler to always assume that the thread is CPU-intensive. Consequently, the scheduler will apply a small scheduling penalty with respect to wakeup behavior, so that this thread is mildly disfavored in scheduling decisions.

Apparently, the CFS scheduler applies a "small penalty" to threads that are assumed to be CPU intensive. A look at the source code of Linux reveals that SCHED_BATCH affects the scheduler's behavior in exactly one location, namely kernel/sched/fair.c:yield_task_fair():

static void yield_task_fair(struct rq *rq)
{
	…
	if (curr->policy != SCHED_BATCH) {
		update_rq_clock(rq);
		/*
		 * Update run-time statistics of the 'current'.
		 */
		update_curr(cfs_rq);
		/*
		 * Tell update_rq_clock() that we've just updated,
		 * so we don't do microscopic update in schedule()
		 * and double the fastpath cost.
		 */
		rq_clock_skip_update(rq);
	}

	set_skip_buddy(se);
}

Judging from the function's name, it appears to be the function that yields the current thread (and potentially selects the next thread). The thread currently running on the CPU (curr) is removed from the CPU ("yielded") to make the CPU available to another thread. If curr is running under the SCHED_BATCH scheduling policy, then some run-time statistics of curr are not updated. I suspect not updating the stats leaves the thread CPU intensive from CFS' perspective, which, in turn, makes CFS select the thread less favorably.

SCHED_BATCH vs SCHED_IDLE

As for the difference between SCHED_BATCH and SCHED_IDLE: the two are actually very different. First, SCHED_IDLE threads are only run if there are idle CPUs; hence there is no progress guarantee for SCHED_IDLE threads. If a system is 100% utilized with other threads, then SCHED_IDLE threads may never run. That is not the case for SCHED_BATCH threads, which participate in normal CPU multiplexing, and hence are guaranteed to progress.
Furthermore, with Linux >= 5.4, CPUs running SCHED_IDLE threads are considered idle from the CFS scheduler's perspective. That has an important implication: as soon as a non-SCHED_IDLE thread becomes runnable, the scheduler may immediately yield the SCHED_IDLE thread and place the non-SCHED_IDLE thread on the CPU. See also "Fixing SCHED_IDLE" in: LWN.net (2019-11-26). That SCHED_BATCH was also able to "unthrottle" your other tasks is probably just because the penalty that SCHED_BATCH threads receive was enough to do so. Furthermore, kernel/sched/idle.c is not directly related to SCHED_IDLE; this file just contains the scheduler-related code to put a CPU into idle mode. SCHED_OTHER, SCHED_BATCH and SCHED_IDLE are all scheduling policies of Linux's CFS scheduler (kernel/sched/fair.c).
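If you want to experiment with these policies from a script rather than via chrt -b 0 / chrt -i 0, Python's os module exposes the same sched_setscheduler(2) syscall. A small sketch; note that switching to SCHED_BATCH or SCHED_IDLE, unlike the realtime policies, requires no privileges:

```python
import os

pid = 0  # 0 means "the calling thread/process"

# Switch ourselves to SCHED_BATCH; the static priority must be 0 for all
# non-realtime policies.  No root required for this direction.
os.sched_setscheduler(pid, os.SCHED_BATCH, os.sched_param(0))
print(os.sched_getscheduler(pid) == os.SCHED_BATCH)  # True

# And down to SCHED_IDLE, the "only run when a CPU would otherwise idle" policy:
os.sched_setscheduler(pid, os.SCHED_IDLE, os.sched_param(0))
print(os.sched_getscheduler(pid) == os.SCHED_IDLE)   # True
```

This only works on Linux (the os.SCHED_* constants are absent elsewhere), and switching back *up* from SCHED_IDLE may need an adequate RLIMIT_NICE or CAP_SYS_NICE.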
SCHED_BATCH description confusing - what does it actually do?
1,459,596,229,000
art_file (cat -A output): .::""-, .::""-.$ /:: \ /:: \$ |:: | _..--""""--.._ |:: |$ '\:.__ / .' '. \:.__ /$ ||____|.' _..---"````'---. '.||____|$ ||:. |_.' `'.||:. |$ ||:.-'` .-----. ';:. |$ ||/ .' '. \. |$ || / '-. '. \\ |. |$ ||:. _| ' \_\_\\/( \ |$ ||:.\_.-' ) || m `\.--._.-""-;$ ||:.(_ . '\ __'// m ^_/ / '. _.`.$ ||:. \__^/` _)```'-...' _ .-'.' '-.$ ||:..-'__ .' '. . ' '. `'.$ ||:(_.' .`' _. ' '-. '. . ''-._$ ||:. : '. .' '. . ' ' '.` '._$ ||:. : '. .' .::""-: .''. ' . . ' ':::''-.$ ||:. .' ..' . /:: \ '. . '. /:: \$ ||:.' .' '. |:: | _.:---""---.._' |:: |$ ||. : '\:.__ / .' -. .- '. \:.__ /$ ||: : '. . ||____|_.' .--. .--. '._||____|$ ||:'.___: '. .' ||:. | ( \/ ) ||:. |$ ||:___| \ '. : ||:. | '-. .-' ||:. |$ [[____] '. '.-._||:. | __ '..' __ ||:. |$ '. : ||:. | (__\ (\/) /__) ||:. |$ '. : ||:. | ` \/ ` ||:. |$ '-: ||:. | () ||:. |$ '._||:. |________________________||:. |$ ||:___|'-.-'-.-'-.-'-.-'-.-'-.-||:___|$ [[____] [[____]$ caption_file (cat -A output): $ $ _________ .__ $ / _____/____ _____ ______ | | ____ $ \_____ \\__ \ / \\____ \| | _/ __ \ $ / \/ __ \| Y Y \ |_> > |_\ ___/ $ /_______ (____ /__|_| / __/|____/\___ >$ \/ \/ \/|__| \/ $ ___________ __ $ \__ ___/___ ___ ____/ |_ $ | |_/ __ \\ \/ /\ __\ $ | |\ ___/ > < | | $ |____| \___ >__/\_ \ |__| $ \/ \/ $ $ $ I am trying to merge art_file with caption_file side by side. So far I have tried two methods: using pr -Jmt art_file caption_file .::""-, .::""-. /:: \ /:: \ |:: | _..--""""--.._ |:: | _________ .__ '\:.__ / .' '. \:.__ / / _____/____ _____ ______ | | ____ ||____|.' _..---"````'---. '.||____| \_____ \\__ \ / \\____ \| | _/ __ \ ||:. |_.' `'.||:. | / \/ __ \| Y Y \ |_> > |_\ ___/ ||:.-'` .-----. ';:. | /_______ (____ /__|_| / __/|____/\___ > ||/ .' '. \. | \/ \/ \/|__| \/ || / '-. '. \\ |. | ___________ __ ||:. _| ' \_\_\\/( \ | \__ ___/___ ___ ____/ |_ ||:.\_.-' ) || m `\.--._.-""-; | |_/ __ \\ \/ /\ __\ ||:.(_ . '\ __'// m ^_/ / '. _.`. | |\ ___/ > < | | ||:. \__^/` _)```'-...' _ .-'.' '-. 
|____| \___ >__/\_ \ |__| ||:..-'__ .' '. . ' '. `'. \/ \/ ||:(_.' .`' _. ' '-. '. . ''-._ ||:. : '. .' '. . ' ' '.` '._ ||:. : '. .' .::""-: .''. ' . . ' ':::''-. ||:. .' ..' . /:: \ '. . '. /:: \ ||:.' .' '. |:: | _.:---""---.._' |:: | ||. : '\:.__ / .' -. .- '. \:.__ / ||: : '. . ||____|_.' .--. .--. '._||____| ||:'.___: '. .' ||:. | ( \/ ) ||:. | ||:___| \ '. : ||:. | '-. .-' ||:. | [[____] '. '.-._||:. | __ '..' __ ||:. | '. : ||:. | (__\ (\/) /__) ||:. | '. : ||:. | ` \/ ` ||:. | '-: ||:. | () ||:. | '._||:. |________________________||:. | ||:___|'-.-'-.-'-.-'-.-'-.-'-.-||:___| [[____] [[____] paste art_file caption_file .::""-, .::""-. /:: \ /:: \ |:: | _..--""""--.._ |:: | _________ .__ '\:.__ / .' '. \:.__ / / _____/____ _____ ______ | | ____ ||____|.' _..---"````'---. '.||____| \_____ \\__ \ / \\____ \| | _/ __ \ ||:. |_.' `'.||:. | / \/ __ \| Y Y \ |_> > |_\ ___/ ||:.-'` .-----. ';:. | /_______ (____ /__|_| / __/|____/\___ > ||/ .' '. \. | \/ \/ \/|__| \/ || / '-. '. \\ |. | ___________ __ ||:. _| ' \_\_\\/( \ | \__ ___/___ ___ ____/ |_ ||:.\_.-' ) || m `\.--._.-""-; | |_/ __ \\ \/ /\ __\ ||:.(_ . '\ __'// m ^_/ / '. _.`. | |\ ___/ > < | | ||:. \__^/` _)```'-...' _ .-'.' '-. |____| \___ >__/\_ \ |__| ||:..-'__ .' '. . ' '. `'. \/ \/ ||:(_.' .`' _. ' '-. '. . ''-._ ||:. : '. .' '. . ' ' '.` '._ ||:. : '. .' .::""-: .''. ' . . ' ':::''-. ||:. .' ..' . /:: \ '. . '. /:: \ ||:.' .' '. |:: | _.:---""---.._' |:: | ||. : '\:.__ / .' -. .- '. \:.__ / ||: : '. . ||____|_.' .--. .--. '._||____| ||:'.___: '. .' ||:. | ( \/ ) ||:. | ||:___| \ '. : ||:. | '-. .-' ||:. | [[____] '. '.-._||:. | __ '..' __ ||:. | '. : ||:. | (__\ (\/) /__) ||:. | '. : ||:. | ` \/ ` ||:. | '-: ||:. | () ||:. | '._||:. |________________________||:. | ||:___|'-.-'-.-'-.-'-.-'-.-'-.-||:___| [[____] [[____] Both of them mess up the alignment of the second file, with paste generating a somewhat better output. So my questions are: Using either paste or pr can I generate desired output? 
Is there some option I am overlooking, perhaps? If neither of them is the correct tool for the job, then, other than writing a new program, what pre-existing solution can I use?
The trouble is each line has a different length. The easiest solution is to give a large enough width to pr: pr -mtw 150 art_file caption_file If you want the caption text to get closer, I suggest awk ' l<length && NR<=n{l=length} NR!=FNR{ printf "%-"l"s", $0 getline line < "caption" print line } ' n="$(wc -l < caption)" art art n is the number of lines of the caption file. l is the length of the longest line between the first n lines of the art file. printf right-pads the art file with spaces so that it all its lines have l length. getline then gets a line from the caption file and prints it next to the just printed art line. Note that you can add or subtract to the value of l in printf to ad-hoc adjust the spacing. .::""-, .::""-. /:: \ /:: \ |:: | _..--""""--.._ |:: | _________ .__ '\:.__ / .' '. \:.__ / / _____/____ _____ ______ | | ____ ||____|.' _..---"````'---. '.||____| \_____ \\__ \ / \\____ \| | _/ __ \ ||:. |_.' `'.||:. | / \/ __ \| Y Y \ |_> > |_\ ___/ ||:.-'` .-----. ';:. | /_______ (____ /__|_| / __/|____/\___ > ||/ .' '. \. | \/ \/ \/|__| \/ || / '-. '. \\ |. | ___________ __ ||:. _| ' \_\_\\/( \ | \__ ___/___ ___ ____/ |_ ||:.\_.-' ) || m `\.--._.-""-; | |_/ __ \\ \/ /\ __\ ||:.(_ . '\ __'// m ^_/ / '. _.`. | |\ ___/ > < | | ||:. \__^/` _)```'-...' _ .-'.' '-. |____| \___ >__/\_ \ |__| ||:..-'__ .' '. . ' '. `'. \/ \/ ||:(_.' .`' _. ' '-. '. . ''-._ ||:. : '. .' '. . ' ' '.` '._ ||:. : '. .' .::""-: .''. ' . . ' ':::''-. ||:. .' ..' . /:: \ '. . '. /:: \ ||:.' .' '. |:: | _.:---""---.._' |:: | ||. : '\:.__ / .' -. .- '. \:.__ / ||: : '. . ||____|_.' .--. .--. '._||____| ||:'.___: '. .' ||:. | ( \/ ) ||:. | ||:___| \ '. : ||:. | '-. .-' ||:. | [[____] '. '.-._||:. | __ '..' __ ||:. | '. : ||:. | (__\ (\/) /__) ||:. | '. : ||:. | ` \/ ` ||:. | '-: ||:. | () ||:. | '._||:. |________________________||:. | ||:___|'-.-'-.-'-.-'-.-'-.-'-.-||:___| [[____] [[____]
What is the correct way to merge two ASCII art files side by side while preserving alignment?
1,459,596,229,000
Making a Bluetooth GATT server on a Linux machine is done using BlueZ. The preferred way to use modern (5.50) BlueZ is through the D-Bus API. The documentation on this topic states:

GATT local and remote services share the same high-level D-Bus API. Local refers to GATT based service exported by a BlueZ plugin or an external application. Remote refers to GATT services exported by the peer.

I am interpreting this as: both local services (the Linux machine is the server, other devices connect to it via Bluetooth) and remote services (the Linux machine is the client, it connects to other devices via Bluetooth) are represented on D-Bus. That sets the base assumption for the question.

The BlueZ source code provides an example-gatt-server: an example you can execute and it will just work, turning your Linux machine into a GATT server. That example references an arbitrarily named D-Bus object. Its name is /org/bluez/example/service. From the documentation I would then expect that once ./example-gatt-server runs successfully, there should be a /org/bluez/example/service somewhere. That is not the case:

~$ busctl tree org.bluez
└─/org
  └─/org/bluez
    └─/org/bluez/hci0

I am confirming with an external device that the Linux machine is acting as the GATT server, but /org/bluez/example/service isn't listed. Why isn't /org/bluez/example/service found as an object under org.bluez?
I was also getting puzzled by this, and I found that we are not able to see the D-Bus object because this example does not define a well-known/requested name for the service on the bus. As per the busctl documentation, to query a service's object tree it needs a name associated with it, which this example GATT server does not have:

Shows an object tree of one or more services. If SERVICE is specified, show object tree of the specified services only. Otherwise, show all object trees of all services on the bus that acquired at least one well-known name.

You can either use sudo dbus-monitor --system to watch the object being registered, or request a name from D-Bus by calling request_name on the bus before creating the Application object in the GATT server example code. You can check an example service with a requested name here.

bus.request_name(BUS_NAME)
named_bus = dbus.service.BusName(BUS_NAME, bus=bus)

You also need to give permission on the system bus for publishing, by editing /etc/dbus-1/system.conf and adding your service name as follows:

<policy user="root">
    <allow own="com.example.gattServer"/>
</policy>
Locating the object path for a GATT server in BlueZ
1,489,492,456,000
I was going to post this on ServerFault originally, but I thought this might be a better place. Let me know if you think there is a better place to post this question.

I have a user-space application which performs networking through Java NIO's API (aka epoll on Linux). For demonstration and diagnostic purposes, I have a line testing utility. It's basically the same thing as iperf.

Some information about the environment and how the test is run:

Ubuntu 16.04 Desktop updated today (4.4.0-34-generic)
irqbalance is off
Intel X540-T1 10GbE (ixgbe) receiver <-> Solarflare 10GbE (sfc) sender
Uses 10,000 TCP sockets
Sockets use the OS default configurations
The user-space read buffer is 32KB
reading occurs no more than 40hz

The line test consists of a single client that transmits as much information as possible over the TCP sockets. Each read() per socket is allowed to be called more than once, to obtain up to 98KB per tick (the 32KB buffer would have to be read 3 times to hit the ceiling). This means that at 40hz and the 98KB ceiling, read() can be called up to 120 times per second per connection, reading a total of 3,840KB. The line tester shows that read() is called a total of about 110,000 times a second.

The line test will totally saturate the 10GbE adapter easily, using about 8% softirq:

top - 22:04:29 up 51 min, 1 user, load average: 1.31, 1.02, 0.66
Tasks: 258 total, 1 running, 257 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.2 us, 3.6 sy, 0.0 ni, 85.6 id, 1.1 wa, 0.0 hi, 7.4 si, 0.0 st
KiB Mem : 16378912 total, 12909832 free, 2383088 used, 1085992 buff/cache
KiB Swap: 16721916 total, 16721916 free, 0 used.
13746736 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 4922 jon 20 0 1553556 492552 127160 S 125.0 3.0 0:54.61 firefox 5099 jon 20 0 7212040 218396 16872 S 75.0 1.3 2:59.88 java 3194 root 20 0 722144 163812 134052 S 18.8 1.0 1:25.63 Xorg 4149 jon 20 0 1588648 147848 75344 S 6.2 0.9 0:28.63 compiz 4197 jon 20 0 544660 40600 26804 S 6.2 0.2 0:01.20 indicator-+ 5186 jon 20 0 41948 3696 3084 R 6.2 0.0 0:00.01 top 1 root 20 0 119744 5884 3964 S 0.0 0.0 0:00.84 systemd 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd 3 root 20 0 0 0 0 S 0.0 0.0 5:01.01 ksoftirqd/0 5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:+ 7 root 20 0 0 0 0 S 0.0 0.0 0:01.06 rcu_sched 8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh 9 root rt 0 0 0 0 S 0.0 0.0 0:00.00 migration/0 10 root rt 0 0 0 0 S 0.0 0.0 0:00.04 watchdog/0 11 root rt 0 0 0 0 S 0.0 0.0 0:00.01 watchdog/1 12 root rt 0 0 0 0 S 0.0 0.0 0:00.00 migration/1 13 root 20 0 0 0 0 S 0.0 0.0 0:08.16 ksoftirqd/1 cat /proc/interrupts CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 0: 17 0 0 0 0 0 0 0 IR-IO-APIC 2-edge timer 1: 0 1 0 0 1 0 0 0 IR-IO-APIC 1-edge i8042 5: 0 0 0 0 0 0 0 0 IR-IO-APIC 5-edge parport0 8: 0 0 0 0 0 1 0 0 IR-IO-APIC 8-edge rtc0 9: 0 0 0 0 0 0 0 0 IR-IO-APIC 9-fasteoi acpi 12: 2 0 1 0 1 0 0 0 IR-IO-APIC 12-edge i8042 16: 50 6 2 6 10 0 0 3 IR-IO-APIC 16-fasteoi ehci_hcd:usb1 17: 1138 35 14 24 227 25 35 24 IR-IO-APIC 17-fasteoi snd_hda_intel 19: 0 1 0 0 0 1 0 0 IR-IO-APIC 19-fasteoi firewire_ohci 23: 11 4 10 1 7 0 0 0 IR-IO-APIC 23-fasteoi ehci_hcd:usb2 24: 0 0 0 0 0 0 0 0 DMAR-MSI 0-edge dmar0 27: 4571 1431 1142 812 1286 1442 985 730 IR-PCI-MSI 327680-edge xhci_hcd 28: 26230 3078 1744 1325 6297 2715 1703 1258 IR-PCI-MSI 512000-edge 0000:00:1f.2 29: 754 43 28 30 215 176 129 76 IR-PCI-MSI 2097152-edge eth0-rx-0 30: 0 0 0 0 0 0 0 0 IR-PCI-MSI 2097153-edge eth0-tx-0 31: 0 0 0 0 1 0 0 0 IR-PCI-MSI 2097154-edge eth0 32: 757 64 28 33 205 169 129 66 IR-PCI-MSI 2621440-edge eth1-rx-0 33: 0 0 0 0 0 0 0 0 IR-PCI-MSI 2621441-edge 
eth1-tx-0 34: 1 0 0 0 0 0 0 0 IR-PCI-MSI 2621442-edge eth1 35: 1042128 233608 58916 16705 1612687 1484813 1121118 630363 IR-PCI-MSI 1048576-edge enp2s0-TxRx-0 36: 858271 736510 372134 165262 1704892 1127381 1265752 767377 IR-PCI-MSI 1048577-edge enp2s0-TxRx-1 37: 816359 711664 426719 192686 1475309 1307882 807216 712562 IR-PCI-MSI 1048578-edge enp2s0-TxRx-2 38: 934786 714007 432100 217627 1905295 1622682 1150693 517990 IR-PCI-MSI 1048579-edge enp2s0-TxRx-3 39: 0 0 0 0 14185366 0 0 0 IR-PCI-MSI 1048580-edge enp2s0-TxRx-4 40: 0 0 0 0 0 14332864 0 0 IR-PCI-MSI 1048581-edge enp2s0-TxRx-5 41: 0 0 0 0 0 0 14617282 0 IR-PCI-MSI 1048582-edge enp2s0-TxRx-6 42: 0 0 0 0 0 0 0 14840029 IR-PCI-MSI 1048583-edge enp2s0-TxRx-7 43: 57 88 47 34 77 64 75 58 IR-PCI-MSI 1048584-edge enp2s0 44: 0 0 0 0 0 13 1 1 IR-PCI-MSI 360448-edge mei_me 45: 246 20 30 4 345 132 128 142 IR-PCI-MSI 442368-edge snd_hda_intel 46: 63933 9794 7233 4753 28843 19323 17678 11191 IR-PCI-MSI 524288-edge nvidia NMI: 57 43 35 42 103 98 83 76 Non-maskable interrupts LOC: 300755 258293 257168 289802 373725 262211 218677 196510 Local timer interrupts SPU: 0 0 0 0 0 0 0 0 Spurious interrupts PMI: 57 43 35 42 103 98 83 76 Performance monitoring interrupts IWI: 0 0 0 0 1 0 0 0 IRQ work interrupts RTR: 0 0 0 0 0 0 0 0 APIC ICR read retries RES: 7721466 2192716 1958606 3095012 1106115 1189666 309133 169884 Rescheduling interrupts CAL: 2598 2206 2194 1751 1976 2255 2130 2211 Function call interrupts TLB: 5450 6659 6103 5640 4352 5128 4535 4470 TLB shootdowns TRM: 0 0 0 0 0 0 0 0 Thermal event interrupts THR: 0 0 0 0 0 0 0 0 Threshold APIC interrupts DFR: 0 0 0 0 0 0 0 0 Deferred Error APIC interrupts MCE: 0 0 0 0 0 0 0 0 Machine check exceptions MCP: 11 11 11 11 11 11 11 11 Machine check polls ERR: 0 MIS: 0 PIN: 0 0 0 0 0 0 0 0 Posted-interrupt notification event PIW: 0 0 0 0 0 0 0 0 Posted-interrupt wakeup event Now, lets apply rate control to the socket reader. 
Inbound rate control is set to 50KB/s per connection
Which is about 500MB/s, since we have 10,000 connections
rate control sets the reading frequency to 5hz, down from 40hz in the previous example.
rate control's frequency is not aligned, meaning that not all connections tick using the same starting reference; however, they are all governed by a single clock.
the clock is 40hz, meaning there are 40 opportunities per second for scheduled rate control reads to occur.
during each of those 5hz rate control reads, the socket is only allowed to read up to 10KB. So, 5 times a second it reads 10KB out of the socket buffer.

The line tester shows that read() is called a total of about 47,000 times a second. The amount of softirq jumps from 8% to 50-65%; the number of interrupts almost triples, and there are 26-58 million RES interrupts (per core) compared to 1-7 million before.

top - 22:31:50 up 1:19, 1 user, load average: 2.30, 2.30, 1.96
Tasks: 259 total, 2 running, 257 sleeping, 0 stopped, 0 zombie
%Cpu(s): 3.3 us, 5.5 sy, 0.0 ni, 41.2 id, 0.0 wa, 0.0 hi, 50.0 si, 0.0 st
KiB Mem : 16378912 total, 11752520 free, 2189080 used, 2437312 buff/cache
KiB Swap: 16721916 total, 16721916 free, 0 used.
12590400 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3 root 20 0 0 0 0 S 82.1 0.0 26:57.43 ksoftirqd/0 5194 jon 20 0 7212040 233488 16720 S 46.2 1.4 12:08.73 java 28 root 20 0 0 0 0 S 40.2 0.0 9:04.84 ksoftirqd/4 33 root 20 0 0 0 0 S 30.9 0.0 7:26.84 ksoftirqd/5 43 root 20 0 0 0 0 R 21.6 0.0 4:26.41 ksoftirqd/7 38 root 20 0 0 0 0 S 21.3 0.0 5:37.16 ksoftirqd/6 4922 jon 20 0 1533388 475124 127784 S 5.6 2.9 2:41.82 firefox 3194 root 20 0 722448 163872 134052 S 5.3 1.0 2:50.84 Xorg 5154 jon 20 0 589896 83876 53964 S 1.7 0.5 0:26.08 plugin-con+ 13 root 20 0 0 0 0 S 1.3 0.0 0:42.60 ksoftirqd/1 4548 jon 20 0 5492168 634252 43104 S 1.3 3.9 2:18.86 java 4149 jon 20 0 1604016 169732 75348 S 1.0 1.0 0:52.62 compiz 18 root 20 0 0 0 0 S 0.7 0.0 0:35.31 ksoftirqd/2 23 root 20 0 0 0 0 S 0.3 0.0 0:22.65 ksoftirqd/3 cat /proc/interrupts CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 0: 17 0 0 0 0 0 0 0 IR-IO-APIC 2-edge timer 1: 0 1 0 0 1 0 0 0 IR-IO-APIC 1-edge i8042 5: 0 0 0 0 0 0 0 0 IR-IO-APIC 5-edge parport0 8: 0 0 0 0 0 1 0 0 IR-IO-APIC 8-edge rtc0 9: 0 0 0 0 0 0 0 0 IR-IO-APIC 9-fasteoi acpi 12: 2 0 1 0 1 0 0 0 IR-IO-APIC 12-edge i8042 16: 50 6 2 6 10 0 0 3 IR-IO-APIC 16-fasteoi ehci_hcd:usb1 17: 1138 35 14 24 227 25 35 24 IR-IO-APIC 17-fasteoi snd_hda_intel 19: 0 1 0 0 0 1 0 0 IR-IO-APIC 19-fasteoi firewire_ohci 23: 11 4 10 1 7 0 0 0 IR-IO-APIC 23-fasteoi ehci_hcd:usb2 24: 0 0 0 0 0 0 0 0 DMAR-MSI 0-edge dmar0 27: 6518 1966 1471 1031 4361 3847 2501 1673 IR-PCI-MSI 327680-edge xhci_hcd 28: 26732 3381 1957 1447 6687 3367 2112 1502 IR-PCI-MSI 512000-edge 0000:00:1f.2 29: 930 184 150 114 283 344 232 142 IR-PCI-MSI 2097152-edge eth0-rx-0 30: 0 0 0 0 0 0 0 0 IR-PCI-MSI 2097153-edge eth0-tx-0 31: 0 0 0 0 1 0 0 0 IR-PCI-MSI 2097154-edge eth0 32: 899 234 138 104 277 348 236 143 IR-PCI-MSI 2621440-edge eth1-rx-0 33: 0 0 0 0 0 0 0 0 IR-PCI-MSI 2621441-edge eth1-tx-0 34: 1 0 0 0 0 0 0 0 IR-PCI-MSI 2621442-edge eth1 35: 1339704 330929 97391 31445 2023348 1859243 1369358 
782238 IR-PCI-MSI 1048576-edge enp2s0-TxRx-0 36: 1863223 3328011 1764431 788048 2411300 2677922 2540016 1742062 IR-PCI-MSI 1048577-edge enp2s0-TxRx-1 37: 1911973 3426913 2084294 955668 2216702 2894499 2008907 1723010 IR-PCI-MSI 1048578-edge enp2s0-TxRx-2 38: 2064515 3379490 2155421 1093171 2652077 3162801 2369659 1442568 IR-PCI-MSI 1048579-edge enp2s0-TxRx-3 39: 0 0 0 0 23079493 0 0 0 IR-PCI-MSI 1048580-edge enp2s0-TxRx-4 40: 0 0 0 0 0 23379687 0 0 IR-PCI-MSI 1048581-edge enp2s0-TxRx-5 41: 0 0 0 0 0 0 24721093 0 IR-PCI-MSI 1048582-edge enp2s0-TxRx-6 42: 0 0 0 0 0 0 0 25752073 IR-PCI-MSI 1048583-edge enp2s0-TxRx-7 43: 211 430 277 179 142 219 240 197 IR-PCI-MSI 1048584-edge enp2s0 44: 0 0 0 0 0 13 1 1 IR-PCI-MSI 360448-edge mei_me 45: 246 20 30 4 345 132 128 142 IR-PCI-MSI 442368-edge snd_hda_intel 46: 87961 29805 21965 14718 43334 42053 34617 23830 IR-PCI-MSI 524288-edge nvidia NMI: 218 130 107 105 252 247 225 214 Non-maskable interrupts LOC: 716630 636798 640606 679852 641275 555921 488433 446196 Local timer interrupts SPU: 0 0 0 0 0 0 0 0 Spurious interrupts PMI: 218 130 107 105 252 247 225 214 Performance monitoring interrupts IWI: 0 0 0 0 3 0 0 0 IRQ work interrupts RTR: 0 0 0 0 0 0 0 0 APIC ICR read retries RES: 38554509 4165414 4123561 5839087 2680226 2883656 1297965 812274 Rescheduling interrupts CAL: 3292 2356 2373 2014 2215 2496 2375 2474 Function call interrupts TLB: 10997 21211 21364 22716 11757 23899 28023 27646 TLB shootdowns TRM: 0 0 0 0 0 0 0 0 Thermal event interrupts THR: 0 0 0 0 0 0 0 0 Threshold APIC interrupts DFR: 0 0 0 0 0 0 0 0 Deferred Error APIC interrupts MCE: 0 0 0 0 0 0 0 0 Machine check exceptions MCP: 17 17 17 17 17 17 17 17 Machine check polls ERR: 0 MIS: 0 PIN: 0 0 0 0 0 0 0 0 Posted-interrupt notification event PIW: 0 0 0 0 0 0 0 0 Posted-interrupt wakeup event Can anyone explain why this is happening and possibly how to avoid it? 
For reference, here is top when using outbound rate control @ 500MB/s:

top - 01:26:15 up 4:13, 1 user, load average: 0.38, 0.31, 1.00
Tasks: 254 total, 1 running, 253 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.7 us, 3.7 sy, 0.0 ni, 93.3 id, 0.1 wa, 0.0 hi, 1.2 si, 0.0 st
KiB Mem : 16378912 total, 12912528 free, 2209912 used, 1256472 buff/cache
KiB Swap: 16721916 total, 16721916 free, 0 used. 13873312 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6560 jon 20 0 7212040 204656 16836 S 38.9 1.2 0:21.37 java
3194 root 20 0 871176 206844 175404 S 1.0 1.3 12:11.62 Xorg
4149 jon 20 0 1909092 221972 99348 S 0.7 1.4 3:21.75 compiz
4548 jon 20 0 5879804 662312 45948 S 0.7 4.0 6:48.86 java
3940 jon 20 0 350840 13196 5468 S 0.3 0.1 0:20.41 ibus-daemon
4922 jon 20 0 1779380 686992 145824 S 0.3 4.2 20:38.42 firefox
5827 root 20 0 0 0 0 S 0.3 0.0 0:00.64 kworker/4:1
6341 root 20 0 0 0 0 S 0.3 0.0 0:00.93 kworker/1:2
6539 root 20 0 0 0 0 S 0.3 0.0 0:00.31 kworker/0:2
1 root 20 0 185280 5896 3964 S 0.0 0.0 0:01.01 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.02 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 107:56.20 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:+

Attaching 2,500 TCP connections and using rate control sees an internal-tcp outbound packet rate of 20K pps; jumping to 5,000 TCP connections sees that number jump to 105K pps; jumping to 7,500 TCP connections makes outbound jump to 190K pps (these are just the packets acknowledging reads -- or so I assume).

2: Putting the Solarflare card on the server and the Intel X540T1 on the client, I see IRQ pinning to ksoftirqd/0 using 100% and the total si at 12.5%, which is about one core. With Solarflare the RES interrupts don't exceed 10,000 per core.

The following is the server when using the Solarflare card..
but only about 360-400MB/s is being received instead of the target 500MB/s top - 11:07:55 up 16 min, 1 user, load average: 1.49, 1.09, 0.62 Tasks: 259 total, 3 running, 256 sleeping, 0 stopped, 0 zombie %Cpu(s): 1.5 us, 2.5 sy, 0.0 ni, 83.5 id, 0.0 wa, 0.0 hi, 12.5 si, 0.0 st KiB Mem : 16378912 total, 12294300 free, 2356136 used, 1728476 buff/cache KiB Swap: 16721916 total, 16721916 free, 0 used. 13067464 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3 root 20 0 0 0 0 R 99.7 0.0 5:20.82 ksoftirqd/0 4620 jon 20 0 7212040 246176 16712 S 25.6 1.5 1:24.67 java 3241 root 20 0 716936 161772 133628 R 3.3 1.0 0:15.42 Xorg 4659 jon 20 0 654928 36356 27820 S 1.0 0.2 0:00.63 gnome-term+ 4103 jon 20 0 1567768 141048 75340 S 0.7 0.9 0:06.44 compiz 4542 jon 20 0 5688204 601804 43040 S 0.7 3.7 1:03.91 java 7 root 20 0 0 0 0 S 0.3 0.0 0:00.93 rcu_sched 4538 root 20 0 0 0 0 S 0.3 0.0 0:00.68 kworker/4:2 1 root 20 0 119844 5980 4028 S 0.0 0.0 0:00.84 systemd 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd 5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:+ 8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh 9 root rt 0 0 0 0 S 0.0 0.0 0:00.00 migration/0 10 root rt 0 0 0 0 S 0.0 0.0 0:00.02 watchdog/0 11 root rt 0 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 12 root rt 0 0 0 0 S 0.0 0.0 0:00.00 migration/1 13 root 20 0 0 0 0 S 0.0 0.0 0:00.02 ksoftirqd/1 cat /proc/interrupts CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 0: 17 0 0 0 0 0 0 0 IR-IO-APIC 2-edge timer 1: 1 0 0 1 0 0 0 0 IR-IO-APIC 1-edge i8042 5: 0 0 0 0 0 0 0 0 IR-IO-APIC 5-edge parport0 8: 0 0 0 0 0 1 0 0 IR-IO-APIC 8-edge rtc0 9: 0 0 0 0 0 0 0 0 IR-IO-APIC 9-fasteoi acpi 12: 1 0 1 0 1 0 1 0 IR-IO-APIC 12-edge i8042 16: 61 2 1 3 7 2 1 0 IR-IO-APIC 16-fasteoi ehci_hcd:usb1 17: 1166 55 10 19 245 45 13 19 IR-IO-APIC 17-fasteoi snd_hda_intel 19: 0 0 0 0 2 0 0 0 IR-IO-APIC 19-fasteoi firewire_ohci 23: 26 1 2 0 1 2 0 1 IR-IO-APIC 23-fasteoi ehci_hcd:usb2 24: 0 0 0 0 0 0 0 0 DMAR-MSI 0-edge dmar0 27: 1723 170 168 126 1603 166 135 47 
IR-PCI-MSI 327680-edge xhci_hcd 28: 24980 1714 933 754 7492 1546 1202 936 IR-PCI-MSI 512000-edge 0000:00:1f.2 29: 298 2 1 7 159 4 6 1 IR-PCI-MSI 2097152-edge eth0-rx-0 30: 0 0 0 0 0 0 0 0 IR-PCI-MSI 2097153-edge eth0-tx-0 31: 1 0 0 0 0 0 0 0 IR-PCI-MSI 2097154-edge eth0 32: 16878 5179 2952 3044 18575 7842 3822 3939 IR-PCI-MSI 1048576-edge enp2s0f0-0 33: 16174 4967 2787 2583 19305 7883 3507 3862 IR-PCI-MSI 1048577-edge enp2s0f0-1 34: 16707 5192 2952 2659 18031 8588 3496 4393 IR-PCI-MSI 1048578-edge enp2s0f0-2 35: 17726 5431 2951 2746 17183 8105 3529 4238 IR-PCI-MSI 1048579-edge enp2s0f0-3 36: 6 1 0 3 6 3 0 1 IR-PCI-MSI 1050624-edge enp2s0f1-0 37: 1 1 0 0 0 0 0 0 IR-PCI-MSI 1050625-edge enp2s0f1-1 38: 1 1 0 0 0 0 0 0 IR-PCI-MSI 1050626-edge enp2s0f1-2 39: 1 1 0 0 0 0 0 0 IR-PCI-MSI 1050627-edge enp2s0f1-3 40: 414 12 9 3 0 14 18 8 IR-PCI-MSI 2621440-edge eth1-rx-0 41: 0 0 0 0 0 0 0 0 IR-PCI-MSI 2621441-edge eth1-tx-0 42: 1 0 0 0 0 0 0 0 IR-PCI-MSI 2621442-edge eth1 43: 0 0 0 0 10 0 5 0 IR-PCI-MSI 360448-edge mei_me 44: 95 26 8 33 398 384 51 16 IR-PCI-MSI 442368-edge snd_hda_intel 45: 17400 1413 1135 806 17781 1714 1401 988 IR-PCI-MSI 524288-edge nvidia NMI: 37 3 5 3 2 1 1 1 Non-maskable interrupts LOC: 112894 53399 87350 46718 43552 19663 25436 19705 Local timer interrupts SPU: 0 0 0 0 0 0 0 0 Spurious interrupts PMI: 37 3 5 3 2 1 1 1 Performance monitoring interrupts IWI: 0 0 0 0 0 0 0 0 IRQ work interrupts RTR: 0 0 0 0 0 0 0 0 APIC ICR read retries RES: 1808 7668 9364 1244 4161 2554 9171 954 Rescheduling interrupts CAL: 1900 2028 1497 1984 1862 1931 2118 2004 Function call interrupts TLB: 1991 2539 3176 2985 3176 2458 1612 2087 TLB shootdowns TRM: 0 0 0 0 0 0 0 0 Thermal event interrupts THR: 0 0 0 0 0 0 0 0 Threshold APIC interrupts DFR: 0 0 0 0 0 0 0 0 Deferred Error APIC interrupts MCE: 0 0 0 0 0 0 0 0 Machine check exceptions MCP: 5 5 5 5 5 5 5 5 Machine check polls ERR: 0 MIS: 0 PIN: 0 0 0 0 0 0 0 0 Posted-interrupt notification event PIW: 0 0 0 0 0 0 0 0 
Posted-interrupt wakeup event
The problem ended up being that using rate control with the default-configured sockets created a situation where the kernel's TCP receive buffer size was auto-tuning to a larger and larger size due to the slow read-out times (the default maximum size is around 6MB). While the buffer kept growing, the TCP buffer compaction work would churn like crazy and eat into all the softirq time. The way to fix this is to set an explicit TCP buffer size when using rate control, which disables the auto-tuning and prevents this aberrant behavior.
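In socket terms the fix is just an explicit SO_RCVBUF before the connection is established, which pins the buffer and turns off the kernel's per-socket auto-tuning (in Java NIO that would be SocketChannel.setOption(StandardSocketOptions.SO_RCVBUF, ...)). A Python sketch of the same idea; the 64KB figure is only an illustration, not a recommendation:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Pin the receive buffer. For TCP this must be done before connect(),
# and it disables the kernel's receive-buffer auto-tuning for this socket.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)

# Linux doubles the requested value to account for bookkeeping overhead,
# so reading the option back typically returns about twice what was asked for.
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
sock.close()
```

Note that the value you can set is capped by net.core.rmem_max, so a very large explicit buffer may require raising that sysctl as well.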
High softirq when using rate control networking
1,489,492,456,000
When I open any archive (for example zip) in Midnight Commander, it opens it and, as I understand it, caches its contents. When you open the archive a second time, MC uses its cache. How do I reset this cache? I ask because my archives change but I see the old content. Ctrl+R doesn't help.
I got the answer from the developers on the mailing list. This is a known issue:

https://www.midnight-commander.org/ticket/62
https://www.midnight-commander.org/ticket/2454

As a workaround: menu Command -> "Active VFS list", select the wanted zip VFS and press "Free VFSs now".
Midnight Commander - rescan compressed archive
1,489,492,456,000
When I scan a barcode in a text console (Ctrl+Alt+F1, or not running X) I get the correct input, but when I try with an application running on X, I don't get the correct barcode. The scanner is configured to return the barcode followed by an 'n'. Under X I only get the 'n', not the preceding barcode.

I ran xev to see what is going on. Here is an excerpt of the output:

KeyPress event, serial 35, synthetic NO, window 0x6800001,
    state 0x10, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,
KeyRelease event, serial 35, synthetic NO, window 0x6800001,
    state 0x18, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,
KeyPress event, serial 35, synthetic NO, window 0x6800001,
    state 0x10, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,
KeyRelease event, serial 35, synthetic NO, window 0x6800001,
    state 0x18, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,

There is one KeyPress/KeyRelease event pair for every digit in the barcode, but no events happening between the KeyPress and KeyRelease events for Alt_L. Should I look at compose keys for a solution, or how can I match the behavior of a normal console?

P.S. I am using a Welch Allyn ST3400 barcode scanner.

EDIT: I ran showkey on the command line and scanned in a barcode. This is the output I got:

keycode 28 release
keycode 56 press
keycode 82 press
keycode 76 press
keycode 80 press
keycode 56 release
keycode 56 press
keycode 82 press
keycode 75 press
keycode 73 press
keycode 56 release

It seems that the barcode scanner is using Left-Alt+#+#+# to get the correct characters. It also seems like the barcode scanner never sends a release event for the numpad keys that it used together with Left-Alt. I read something similar on a different forum - without a solution, though. If I manually use Left-Alt+#+#+# with showkey, I get press and release for each key.

The question now becomes: why is there no KeyPress event when I am running Xorg?
Often you can reconfigure your barcode reader to output better usable output. The configuration is often done by scanning special barcodes - look at the documentation.
barcode scanner input when running xorg and evdev
1,489,492,456,000
While studying for the RHCE, I came across a situation where output redirection does not work in bash:

# file /tmp/users.txt
/tmp/users.txt: cannot open `/tmp/users.txt' (No such file or directory)
# semanage login -l > /tmp/users.txt
# file /tmp/users.txt
/tmp/users.txt: empty

However, this works:

# file /tmp/users.txt
/tmp/users.txt: cannot open `/tmp/users.txt' (No such file or directory)
# semanage login -l >> /tmp/users.txt
# file /tmp/users.txt
/tmp/users.txt: ASCII text

Why is this the case?

1st Update:

Permissions:

# ls -ld /tmp
drwxrwxrwt. 8 root root 4096 Jul 17 15:27 /tmp

ACLs (not an ACL mount, but just in case):

# getfacl /tmp
getfacl: Removing leading '/' from absolute path names
# file: tmp
# owner: root
# group: root
# flags: --t
user::rwx
group::rwx
other::rwx

And I'm performing all commands as root (hence the hash prompt).

2nd Update:

Per Caleb, full permissions listing of /tmp:

# ls -al /tmp
total 40
drwxrwxrwt. 8 root root 4096 Jul 17 15:37 .
dr-xr-xr-x. 26 root root 4096 Jul 17 15:07 ..
drwx------. 2 melmel melmel 4096 Jul 16 21:08 .esd-500
drwxrwxrwt. 2 root root 4096 Jul 17 15:07 .ICE-unix
drwx------. 2 gdm gdm 4096 Jul 17 15:08 orbit-gdm
drwx------. 2 gdm gdm 4096 Jul 17 15:07 pulse-5E9i88IGxaNh
drwx------. 2 melmel melmel 4096 Jul 16 21:08 pulse-329qCo13Xk
-rw-------. 1 root root 0 Jul 16 14:32 tmpXd9THg
-rw-------. 1 root root 0 Jul 16 12:55 tmpie0O98
-rw-------. 1 root root 0 Jul 16 20:23 tmpr10LrK
-r--r--r--. 1 root root 11 Jul 17 15:07 .X0-lock
drwxrwxrwt. 2 root root 4096 Jul 17 15:07 .X11-unix
-rw-r--r--. 1 root root 865 Jul 16 20:20 yum.conf.security
-rw-------. 1 root root 0 Jul 10 14:57 yum.log

3rd Update:

Per Hello71:

# mount | grep /tmp
# mount | grep -w '/'
/dev/mapper/vg_svr-tap-lv_root on / type ext4 (rw)

Answers to Gilles' questions:

Is this something you read about in a book, or did you reach this situation on a real machine?

Noticed this while performing a lab in a book on a real machine.

Is SELinux in use?

# sestatus
SELinux status: enabled
SELinuxfs mount: /selinux
Current mode: enforcing
Mode from config file: enforcing
Policy version: 24
Policy from config file: targeted

Some Linux-on-Linux virtualisation?

Yes. KVM/QEMU guest.

I second Hello71's request, except please grep /tmp /proc/mounts

Nothing matches.

Also env | grep '^LD_' please.

Nothing matches.

Oh, and can we rule out active attacks

Yes we can. I'm the only one that has access to this guest.
It is probably a bug in the SELinux policy with regard to the semanage binary (which has its own context, semanage_t) and the /tmp directory, which has its own context too - tmp_t. I was able to reproduce almost the same results on my CentOS 5.6:

# file /tmp/users.txt
/tmp/users.txt: ERROR: cannot open `/tmp/users.txt' (No such file or directory)
# semanage login -l > /tmp/users.txt
# file /tmp/users.txt
/tmp/users.txt: empty
# semanage login -l >> /tmp/users.txt
# file /tmp/users.txt
/tmp/users.txt: empty

When I tried to use a file in a different directory, I got normal results:

# file /root/users.txt
/root/users.txt: ERROR: cannot open `/root/users.txt' (No such file or directory)
# semanage login -l > /root/users.txt
# file /root/users.txt
/root/users.txt: ASCII text

The difference between /tmp and /root is their contexts:

# ls -Zd /root/
drwxr-x--- root root root:object_r:user_home_dir_t /root/
# ls -Zd /tmp/
drwxrwxrwt root root system_u:object_r:tmp_t /tmp/

And finally, after trying to redirect into a file in /tmp, I got the following errors in /var/log/audit/audit.log:

type=AVC msg=audit(1310971817.808:163242): avc: denied { write } for pid=10782 comm="semanage" path="/tmp/users.txt" dev=dm-0 ino=37093377 scontext=user_u:system_r:semanage_t:s0 tcontext=user_u:object_r:tmp_t:s0 tclass=file
type=AVC msg=audit(1310971838.888:163255): avc: denied { append } for pid=11372 comm="semanage" path="/tmp/users.txt" dev=dm-0 ino=37093377 scontext=user_u:system_r:semanage_t:s0 tcontext=user_u:object_r:tmp_t:s0 tclass=file

Interesting note: redirecting semanage output to a pipe works OK:

# semanage login -l | tee /tmp/users.txt > /tmp/users1.txt
# file /tmp/users.txt
/tmp/users.txt: ASCII text
# file /tmp/users1.txt
/tmp/users1.txt: ASCII text
Why does redirection (>) not work sometimes but appending (>>) does?
1,489,492,456,000
In learning some assembly programming I have found the documents of the Linux Standard Base very useful. It seems they tell me how things are supposed to be (on standard based systems), not just how they are in the implementation I have in front of me. On the wikipedia article there are two 2005 articles linked that suggest there is contention around this standard. 2005 is a long time ago, what is the current view? (Note: just this year the linux foundation certified many distributions for LSB 4.0, so they are still in the game working with some distributions. Their press release of course does not mention any possible contention around it.)
The Linux Standard Base is a set of APIs that are guaranteed to be available on an LSB-compliant installation. This mostly requires installing some free software libraries that most distributions already have available - such as a POSIX-compliant libc, C++ compiler support, Python, Perl, GTK+ and Qt. All major Linux vendors today ship LSB-compliant operating systems; that includes Red Hat, Debian, Ubuntu and Novell - so I don't believe there is much contention about it. Back when LSB first started, people were a bit "meh - who cares". Later there was contention about which APIs to include: if Perl is to be required, then what about Python? If GTK+ is required, then what about Qt developers? This would have made for some pretty fancy flame wars if not for the "meh" attitude of many operating system vendors towards LSB. Eventually all this was settled by the Linux Foundation being all-inclusive and supporting multiple APIs that do the same thing, and now it looks like everyone is content.
What is the state of the Linux Standard Base?
1,489,492,456,000
I need to use a custom kernel option at compile time (ACPI_REV_OVERRIDE_POSSIBLE) in order for my graphics card to work correctly with bumblebeed and the nvidia drivers on my Dell XPS 15 9560. I'm using Arch Linux. Every few days there is a new kernel release (4.11.5, 4.11.6, ...). How should I handle those kernel updates? Do I need to recompile the kernel manually each time? (I made a small script to accelerate the process, but some steps still need to be done manually, and it takes a REALLY LONG TIME to compile.) Is it possible to automate the process so that each time a kernel update shows up, the package manager compiles the kernel itself with the option I specified? Or with a script?
That config line should exist in the /proc/config.gz file of any kernel you previously configured it in. You could do what I do, in a two-liner, on my Gentoo systems:

su -
cd /usr/src &&
cp -a linux-<new version> /dev/shm/ &&
ln -s /dev/shm/linux-<new version> linux &&
cd linux &&
zcat /proc/config.gz > .config &&
make olddefconfig &&
make -j<numcpus+1> bzImage modules &&
mount /boot &&
make modules_install install &&
grub-mkconfig > /boot/grub/grub.cfg &&
sync &&
reboot

I'm typing this from memory on my mobile right now; I always goof on the order of the 'ln' arguments, and it might be "defoldconfig". But, basically, that's what I do every time. Works for me. :) YMMV. I'll edit later with corrections once I get a good terminal and shell. :)

I always compile on tmpfs, because nothing on a system is faster and more resilient to write-rot than RAM. Check out the 'make help' output when run in the kernel source directory for references, and the yummy Gentoo Wiki for even more good info:

https://wiki.gentoo.org/wiki/Kernel/Upgrade/
https://wiki.gentoo.org/wiki/GRUB2
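Since the question is specifically about Arch, the recompile can also be kicked off automatically by the package manager: pacman's alpm hooks run a command whenever a matching package is installed or upgraded. A sketch only - the hook filename, the Target and the Exec script are assumptions; point Target at whichever kernel package you actually track, and Exec at your existing rebuild script:

```ini
; /etc/pacman.d/hooks/90-custom-kernel.hook  (hypothetical path and name)
[Trigger]
Operation = Install
Operation = Upgrade
Type = Package
Target = linux

[Action]
Description = Rebuilding kernel with ACPI_REV_OVERRIDE_POSSIBLE...
When = PostTransaction
Exec = /usr/local/bin/rebuild-custom-kernel.sh
```

This doesn't make the compile any faster - it only removes the manual step, running your script after every matching update.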
How to handle linux kernel updates when using a custom kernel?
1,489,492,456,000
On a modern 64-bit x86 Linux, how is the mapping between virtual and physical pages set up, kernel side? On the user side, you can mmap in pages from the page cache, and this will map 4K pages in directly into user space - but I am interesting in how the pages are mapped in the kernel side. Does it make use of the "whole ram identity mapping" or something else? Is that whole ram identity mapping generally using 1GB pages?
On a modern 64-bit x86 Linux? Yes. It calls kmap() or kmap_atomic(), but on x86-64 these will always use the identity mapping. x86-32 has a specific definition of it, but I think x86-64 uses a generic definition in include/linux/highmem.h. And yes, the identity mapping uses 1GB hugepages. LWN article which mentions kmap_atomic. I found kmap_atomic() by looking at the PIO code.[*] Finally, when read() / write() copy data from/to the page cache: generic_file_buffered_read -> copy_page_to_iter -> kmap_atomic() again. [*] I looked at PIO, because I realized that when performing DMA to/from the page cache, the kernel could avoid using any mapping. The kernel could just resolve the physical address and pass it to the hardware :-). (Subject to IOMMU). Although, the kernel will need a mapping if it wants to checksum or encrypt the data first.
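Whether the identity (direct) mapping actually uses 1GB pages on a given machine shows up in the DirectMap counters of /proc/meminfo. A small Python sketch that tallies them - the embedded meminfo text is a made-up sample; on a real system you would read /proc/meminfo itself, and DirectMap1G only appears when the CPU and kernel support 1GB pages:

```python
def direct_map_kb(meminfo_text):
    """Collect the DirectMap* lines of /proc/meminfo as {label: kB}."""
    sizes = {}
    for line in meminfo_text.splitlines():
        if line.startswith("DirectMap"):
            label, _, rest = line.partition(":")
            sizes[label] = int(rest.split()[0])  # values are reported in kB
    return sizes

# Made-up sample; replace with open("/proc/meminfo").read() on a real box.
sample = """MemTotal:       16378912 kB
DirectMap4k:      116660 kB
DirectMap2M:     5016576 kB
DirectMap1G:    12582912 kB"""

print(direct_map_kb(sample))
# → {'DirectMap4k': 116660, 'DirectMap2M': 5016576, 'DirectMap1G': 12582912}
```

A large DirectMap1G number means most of RAM's identity mapping is backed by 1GB pages, as described above; fragmentation or boot options (e.g. nogbpages) shift it toward 2M and 4k entries.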
How is the page cache mapped in the kernel on 64-bit x86 architectures?