By using ls -lh we can get the file size. Is there any way I can check if the file size is greater than 1MB and then print a message like the one below? I may have files with different sizes like 100MB, 1GB, 10GB, 100KB.

    if [ $FileSize > 1MB ]; then
        echo "File size is greater than 1MB"
    fi

Is there a way I can check the file size using an if statement?
Using find on a specific file at $filepath:

    if [ -n "$(find "$filepath" -prune -size +1000000c)" ]; then
        printf '%s is strictly larger than 1 MB\n' "$filepath"
    fi

This uses find to query the specific file at $filepath for its size. If the size is greater than 1000000 bytes, find will print the pathname of the file; otherwise it will generate nothing. The -n test is true if the string has non-zero length, which in this case means that find output something, which in turn means that the file is larger than 1 MB.

You didn't ask about this: finding all regular files that are larger than 1 MB under some $dirpath and printing a short message for each:

    find "$dirpath" -type f -size +1000000c \
        -exec printf '%s is larger than 1 MB\n' {} +

These pieces of code ought to be portable to any Unix.

Note also that using < or > in a test will test whether the two involved strings sort in a particular way lexicographically. These operators do not do numeric comparisons. For that, use -lt ("less than"), -le ("less than or equal to"), -gt ("greater than"), -ge ("greater than or equal to"), -eq ("equal to"), or -ne ("not equal to"). These operators do integer comparisons.
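Outside the shell, the same size test is easy to express in a general-purpose language; here is a minimal sketch in Python (the helper name is my own, not from the answer):

```python
import os
import tempfile

def is_larger_than(path, limit=1_000_000):
    """Return True if the file at `path` is strictly larger than `limit` bytes."""
    return os.path.getsize(path) > limit

# Demo with a throwaway file of 1,000,001 bytes.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 1_000_001)

print(is_larger_than(tmp.name))  # True: 1,000,001 bytes exceeds the 1 MB limit
os.unlink(tmp.name)
```

Like the find version, this compares against a decimal 1 MB (1000000 bytes), not a binary MiB.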
Check if the file size is greater than 1MB using an if condition
Does Linux provide a system call which can create a "view" of a limited byte range of a backing file? I'm envisioning something that for example would act on an open file descriptor and either modify it or generate a new file descriptor where file offsets are relative to the beginning of the range and end at the end of the range. The use-case would be to limit a non-cooperating subprocess to accessing only a particular portion of an input file.
One way of doing this is to use a loop device. This approach does have two requirements which may make it less useful: you need to be root to set it up, and the non-cooperating subprocess must be able to write to a block device. Oh, and it doesn’t deal with conflicting changes.

To set the loop device up, run

    losetup -o 1024 --sizelimit 2048 --show -f yourfile

replacing 1024, 2048 and yourfile with appropriate values — -o specifies the start offset, --sizelimit the size (counting from the offset). Note that sizelimit has to be a multiple of 512. This will output the name of the loop device which has been set up; adjust the permissions as necessary, and give it to your non-cooperating sub-process. When you no longer need the device, delete it with

    losetup -d /dev/loopN

replacing N as appropriate.
Is there a Linux system call to create a “view” of a range of a file?
I have a script that uses GNU parallel. I want to pass two parameters for each "iteration". In a serial run I have something like:

    for (( i=0; i<=10; i++ ))
    do
        a=${tmp1[i]}
        b=${tmp2[i]}
    done

And I want to make this parallel as:

    pf() {
        a=$1
        b=$2
    }
    export -f pf
    parallel --jobs 5 --linebuffer pf ::: <what to write here?>
Omitting your other parallel flags just to stay focused...

    parallel --link pf ::: A B ::: C D

This will run your function first with a=A, b=C followed by a=B, b=D, i.e.:

    a=A b=C
    a=B b=D

Without --link you get the full combination, like this:

    a=A b=C
    a=A b=D
    a=B b=C
    a=B b=D

Update: As Ole Tange mentioned in a comment [since deleted - Ed.], there is another way to do this: use the :::+ operator. However, there is an important difference between the two alternatives if the number of arguments is not the same in each parameter position. An example will illustrate.

    parallel --link pf ::: A B ::: C D E

output:

    a=A b=C
    a=B b=D
    a=A b=E

    parallel pf ::: A B :::+ C D E

output:

    a=A b=C
    a=B b=D

So --link will "wrap" such that all arguments are consumed, while :::+ will ignore the extra argument. (In the general case I prefer --link, since the alternative is in some sense silently ignoring input. YMMV.)
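The three pairing behaviours can be modeled with ordinary sequence operations; a sketch in Python (my own illustration of the semantics, not part of GNU parallel):

```python
from itertools import cycle, product

args1, args2 = ["A", "B"], ["C", "D", "E"]

# --link: pair positionally, wrapping the shorter list until the longer is consumed
link = list(zip(cycle(args1), args2))

# :::+ : pair positionally, silently dropping the extra argument
plus = list(zip(args1, args2))

# default (no --link): full combination of both argument lists
full = list(product(args1, args2))

print(link)       # [('A', 'C'), ('B', 'D'), ('A', 'E')]
print(plus)       # [('A', 'C'), ('B', 'D')]
print(len(full))  # 6
```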
GNU parallel - two parameters from array as parameter
For some reason my Fedora 25 FRESH install is not using Wayland by default. I know this because of:

    $ loginctl show-session 3 -p Type
    Type=x11

If I was using Wayland by default that should say wayland or weston. I'm very confused why this fresh install of Fedora 25 is not sporting Wayland by default. I looked over the Arch wiki briefly, and tried to test run Wayland by issuing:

    $ weston

Also, I have rebooted Fedora to multi-user.target, to get just a command line to manually launch a dbus-run-session for Wayland, and this is the output:

    $ dbus-run-session -- gnome-shell --display-server --wayland
    (gnome-shell:1372): mutter-WARNING **: Can't initialize KMS backend: could not find drm kms device

Then I tried:

    $ startx

And my standard GNOME desktop popped up no problem. I'm seriously wondering if the Fedora 25 live installer ever installed Wayland to begin with? After looking for the Wayland config file weston.ini, I cannot find it in ~/.config/ where it's supposed to be.

System info:

    $ uname -a
    Linux sark 4.8.10-300.fc25.x86_64 #1 SMP Mon Nov 21 18:49:16 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

I have done a full system update on first login with:

    $ sudo dnf update

Also went through the process of using the Nvidia drivers for my graphics card (GTX 950), not using the default pre-my-move-to-nvidia-driver driver :P

EDIT: After investigating on my laptop, it reports that it is using Wayland:

    $ loginctl show-session 2 -p Type
    Type=wayland

This laptop was a Fedora 24 upgrade to Fedora 25, not a fresh install of Fedora 25.

Laptop info:

    $ uname -a
    Linux mcp 4.8.10-300.fc25.x86_64 #1 SMP Mon Nov 21 18:59:16 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Nvidia does not yet support Wayland, so Fedora 25 falls back to X11. From the Nvidia forum I see someone has used packages from the in-development Fedora 26 plus some patches to get it working, but notes "I have tested it with local builds and it runs like crap, personally I wouldn't bother trying it in F25." Hopefully this will be resolved for F26. In the meantime, I'm at least glad that the X11 fallback worked nicely and transparently.
Fedora 25 is NOT using wayland by default!
If I want to check whether I got to the max of the nproc value, should I do:

    ps -ef | wc -l

or

    ps -efL | wc -l

Does nproc in limits.conf refer to the number of processes or the number of threads?
On Linux it refers to the number of threads. From setrlimit(2) (which is the system call used to set the limits):

    RLIMIT_NPROC
        The maximum number of processes (or, more precisely on Linux,
        threads) that can be created for the real user ID of the calling
        process. Upon encountering this limit, fork(2) fails with the
        error EAGAIN. This limit is not enforced for processes that have
        either the CAP_SYS_ADMIN or the CAP_SYS_RESOURCE capability.

So ps -efL | wc -l would be more appropriate. However, the limits in limits.conf apply per login session (see limits.conf(5) for details).
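The same limit can also be inspected programmatically; for instance, a sketch in Python on a Linux system (where resource.RLIMIT_NPROC wraps the setrlimit(2) limit quoted above):

```python
import resource

# On Linux, RLIMIT_NPROC counts threads per real user ID, not just processes
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)

print("soft:", soft)  # resource.RLIM_INFINITY (-1) means unlimited
print("hard:", hard)
```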
Does nproc in limits.conf refer to number of processes or number of threads?
I have multiple systemd services that require a generated EnvironmentFile. I have a shell script which generates this environment file, but since I need that environment file before any Exec... commands execute, I cannot use ExecStartPre=generate_env_file.sh. Therefore, I have another service (generate_env_file.service) set to run that script as a oneshot:

    [Service]
    Type=oneshot
    ExecStartPre=/usr/bin/touch /path/to/config.ini
    ExecStart=/path/to/generate_env_file.sh

and I have multiple other service files which have:

    [Unit]
    Requires=generate_env_file.service
    After=generate_env_file.service

How can I guarantee that two or more dependent services (which require generate_env_file.service) will not run in parallel and spawn two parallel executions of generate_env_file.service? I've looked at using RemainAfterExit=true, or possibly StartLimitIntervalSec= and StartLimitBurst=, to ensure that only one copy will execute at a time during some period, but I'm not sure of the best way to go about doing this.
RemainAfterExit=true is the way to go. With it, systemd starts the service once and afterwards considers it started and live, so dependents will not re-trigger it. However, this doesn't cover the case of executing systemctl restart generate_env_file.service: systemd will then re-execute your service. To solve this, you could create a marker file on the run filesystem in ExecStartPost= and add a ConditionPathExists= directive to check for the existence of that file.
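Putting those pieces together, a sketch of what generate_env_file.service could look like (the /run marker path and description are my assumptions, not from the original setup):

```ini
[Unit]
Description=Generate shared environment file
# Skip the whole unit if the marker from a previous run still exists
ConditionPathExists=!/run/generate_env_file.done

[Service]
Type=oneshot
# Stay "active" after exiting so dependent units don't restart this one
RemainAfterExit=true
ExecStartPre=/usr/bin/touch /path/to/config.ini
ExecStart=/path/to/generate_env_file.sh
# Marker on the run filesystem: cleared automatically on reboot
ExecStartPost=/usr/bin/touch /run/generate_env_file.done
```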
systemd oneshot Requirement to execute only once
Is there a CPU/RAM overhead associated with using loop-mounted images versus using a physical partition under Linux?
On Linux < 4.4, there is significant overhead when using loop devices: data accessed through the loop device has to go through two filesystem layers, each doing its own caching, so data ends up cached twice, wasting much memory (the infamous "double cache" issue).

Aside from casual use, other alternatives would be to use a dedicated partition or a chroot so that data can be accessed directly.

Release notes for the first version with improved performance:

    Faster and leaner loop device with Direct I/O and Asynchronous I/O support

    This release introduces support of Direct I/O and asynchronous I/O for the loop block device. There are several advantages to use direct I/O and AIO on read/write loop's backing file:

    - double cache is avoided due to Direct I/O, which reduces memory usage a lot;
    - unlike user space direct I/O, there isn't the cost of pinning pages;
    - context switches are avoided in some cases because concurrent submissions can be avoided.

    See commits for benchmarks.
Overhead of using loop-mounted images under Linux
A mount point /mnt/sub is shadowed by another mount point /mnt. Is it always possible to access the mounted filesystem? Root access is a given. The system is a reasonably recent Linux. Example scenario: accessing the branches of an overlay root.

The basic sequence of operations is:

    mount device1 /mnt/sub
    mount device2 /mnt

After this, /mnt/sub is a file on device2 (if it exists). The question is how to access files on device1. Some devices can be mounted twice, so mount device1 /elsewhere would work. But this doesn't work for all devices, in particular not for FUSE filesystems.

This differs from the already covered case where a subdirectory is shadowed by a mount point, but the mount point of the subdirectory is itself visible, and a bind mount can create an unobscured view. In the example above, mount --bind / /elsewhere lets us see the /mnt/sub directory from the root filesystem on /elsewhere/mnt/sub, but this question is about accessing the filesystem on device1.
    # unshare --mount     (this opens a sub-shell)
    # cd /
    # umount /mnt
    (do what thou wilt)
    # exit                (close the sub-shell)
Accessing a shadowed mount point
On a Debian Jessie system:

    $ ls -al ~/.gnupg/
    total 58684
    drwx------  2 username username     4096 Nov 28 20:52 .
    drwxr-xr-x 50 username username     4096 Nov 28 19:33 ..
    -rw-------  1 username username     9602 Jun 24 22:47 gpg.conf
    -rw-r--r--  1 username username       18 Jun 25 21:07 .#lk0xb7f2fa50.hostname.5551
    -rw-r--r--  1 username username       18 Aug 19 19:15 .#lk0xb8e9bf48.hostname.32133
    -rw-r--r--  1 username username       18 Aug 19 19:15 .#lk0xb8e9dc48.hostname.32133
    -rw-r--r--  1 username username       18 Nov 28 20:52 .#lk0xb9387478.hostname.24497
    -rw-------  1 username username 30018875 Nov 18 21:49 pubring.gpg
    -rw-------  1 username username 30018875 Nov 18 20:54 pubring.gpg~
    -rw-------  1 username username      600 Jun 21 21:34 random_seed
    -rw-------  1 username username     4890 May  7  2015 secring.gpg
    -rw-------  1 username username     1440 Nov 18 18:50 trustdb.gpg

I have replaced the actual username with username and the actual hostname with hostname. What is the origin/purpose of the files whose names begin with .#lk0xb?
They are (as the "lk" suggests) lock files. A comment in the GnuPG sources says:

    This function creates a lock file in the same directory as FILE_TO_LOCK using that name and a suffix of ".lock". Note that on POSIX systems a temporary file ".#lk..pid[.threadid] is used.

and also states that there is a cleanup function (to remove obsolete locks). You're seeing leftover lock files where the cleanup function failed.

The pid and threadid do not match an earlier comment in the code (it seems that the comments are not updated). The actual code which makes the filename looks different from the comments (quoting from gnupg-1.4.19):

    snprintf (h->tname, tnamelen, "%.*s/.#lk%p.", dirpartlen, dirpart, h );
    h->nodename_off = strlen (h->tname);
    snprintf (h->tname+h->nodename_off, tnamelen - h->nodename_off,
              "%s.%d", nodename, (int)getpid ());

but of course, the code is more pertinent than the comments.
Files starting with .#lk0xb in ~/.gnupg directory - what are they?
I read this post. Are file descriptors the same as file handles? While trying to configure the Linux kernel, it asks: open by fhandle syscalls (FHANDLE) [Y/n/?]. Why is this option provided? Does it affect performance of the kernel or compile time, or is it just to have a uniform method of accessing files?
A FILE structure in C is typically called the file handle and is a bit of abstraction around a file descriptor:

    The data type FILE is a structure that contains information about a file or specified data stream. It includes such information as a file descriptor, current position, status flags, and more. It is most often used as a pointer to a file type, as file I/O functions predominantly take pointers as parameters, not the structures themselves.

I don't have a kernel build environment at hand, but there should be a help text that explains the option, and according to a quick search it should say something like:

    CONFIG_FHANDLE - open by fhandle syscalls
    If you say Y here, a user level program will be able to map file names to handle and then later use the handle for different file system operations. This is useful in implementing userspace file servers, which now track files using handles instead of names. The handle would remain the same even if file names get renamed. Enables open_by_handle_at(2) and name_to_handle_at(2) syscalls.

Basically it adds support for new/additional system calls.
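The handle-versus-descriptor distinction is visible from high-level languages as well; in Python, a file object plays roughly the role a FILE structure plays in C, wrapping a raw descriptor (an illustrative sketch of my own, not from the kernel help text):

```python
import os
import tempfile

# Create a small throwaway file to inspect
with tempfile.NamedTemporaryFile(mode="w", delete=False) as tmp:
    tmp.write("hello")
    path = tmp.name

f = open(path)         # the "handle": a buffered object with position, mode, ...
fd = f.fileno()        # the descriptor underneath is just a small integer
data = os.read(fd, 5)  # bypass the handle and read via the raw descriptor

print(fd >= 0, data)   # True b'hello'

f.close()
os.unlink(path)
```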
File handles and filenames
I was going through an article on GNU which goes something like below:

    There really is a Linux, and these people are using it, but it is just a part of the system they use. Linux is the kernel: the program in the system that allocates the machine's resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. Linux is normally used in combination with the GNU operating system: the whole system is basically GNU with Linux added, or GNU/Linux. All the so-called “Linux” distributions are really distributions of GNU/Linux.

I always thought of Linux as a kernel and an operating system, but it looks like Linux = Linux kernel and GNU OS. Could someone point out the exact functionality of each in the "Linux" terminology we use in our day-to-day life?

Also, according to the wiki, GNU's design is Unix-like but differs from Unix by being free software and containing no Unix code. I thought Unix is open source. Isn't it?
I believe the bit you're referring to is covered here on the Free Software Foundation (FSF) website: http://www.gnu.org/gnu/linux-and-gnu.html

According to the FSF, their contention is that Linux is just a kernel. A usable system is comprised of a kernel + the tools such as ls, find, shells, etc. Therefore, when referring to the entire system, it should be referred to as GNU/Linux, since the other tools together with the Linux kernel make up a complete usable system. They even go on to talk about the FSF Unix kernel, Hurd, making arguments that Hurd and Linux are essentially interchangeable kernels to the GNU/X system.

I find the entire argument tiring and think there are better things to do with our time. A name is just a name, and whether people consider a system that includes GNU software + the Linux kernel + other non-GNU software to be Linux or GNU/Linux is a matter of taste and really doesn't matter in the grand scheme of things. In fact, I think the argument does more to hurt Linux and GNU/Linux by fracturing the community and confusing the general public as to what each thing actually is. For more than you ever wanted to know on this topic, take a look at the Wikipedia article titled GNU/Linux naming controversy.

Are all Unixes open source? To my knowledge, not all Unixes are open source. Most of the functionality within Unix is specified, so that how things work is open, but specific implementations of this functionality are or aren't open depending on which distro they're a part of. For example, until recently Solaris, a Unix, wasn't considered open source. Only when Sun Microsystems released core components into the OpenSolaris project did at least components of Solaris become open source.

Unix history: I'm by no means an expert on this topic, so I would suggest taking a look at the Unix Wikipedia page for more on the topic.

Linux history: take a look at the Unix lineage diagram for more on which Unixes are considered open, mixed, or closed source.
Unix history diagram: http://upload.wikimedia.org/wikipedia/commons/7/77/Unix_history-simple.svg

I also find the GNU/Linux Distribution Timeline project useful when having this conversation: http://futurist.se/gldt/wp-content/uploads/12.10/gldt1210.png
What exactly do we mean when we say we are using Linux?
My server has been running Amazon EC2 Linux. I have a MongoDB server inside. The MongoDB server has been running under heavy load, and, unhappily, I've run into a problem with it :/

As known, MongoDB creates a new thread for every client connection, and this worked fine before. I don't know why, but MongoDB can't create more than 975 connections on the host as a non-privileged user (it runs under a mongod user). But when I'm running it as the root user, it can handle up to 20000 connections (MongoDB's internal limit). But further investigation shows that the problem isn't the MongoDB server, but Linux itself.

I've found a simple program, which checks the max connections number:

    /* compile with:   gcc -lpthread -o thread-limit thread-limit.c */
    /* originally from: http://www.volano.com/linuxnotes.html */
    #include <stdlib.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <pthread.h>
    #include <string.h>

    #define MAX_THREADS 100000
    #define PTHREAD_STACK_MIN 1*1024*1024*1024

    int i;

    void run(void) {
        sleep(60 * 60);
    }

    int main(int argc, char *argv[]) {
        int rc = 0;
        pthread_t thread[MAX_THREADS];
        pthread_attr_t thread_attr;

        pthread_attr_init(&thread_attr);
        pthread_attr_setstacksize(&thread_attr, PTHREAD_STACK_MIN);

        printf("Creating threads ...\n");
        for (i = 0; i < MAX_THREADS && rc == 0; i++) {
            rc = pthread_create(&(thread[i]), &thread_attr, (void *) &run, NULL);
            if (rc == 0) {
                pthread_detach(thread[i]);
                if ((i + 1) % 100 == 0)
                    printf("%i threads so far ...\n", i + 1);
            } else {
                printf("Failed with return code %i creating thread %i (%s).\n",
                       rc, i + 1, strerror(rc));

                // can we allocate memory?
                char *block = NULL;
                block = malloc(65545);
                if (block == NULL)
                    printf("Malloc failed too :( \n");
                else
                    printf("Malloc worked, hmmm\n");
            }
        }
        sleep(60*60); // ctrl+c to exit; makes it easier to see mem use
        exit(0);
    }

And the situation is repeated again: as the root user I can create around 32k threads, as a non-privileged user (mongod or ec2-user) around 1000.
This is ulimit for the root user:

    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 59470
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 60000
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 1024
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited

This is ulimit for the mongod user:

    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 59470
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 60000
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 1024
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 1024
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited

Kernel max threads:

    bash-4.1$ cat /proc/sys/kernel/threads-max
    118940

SELinux is disabled. I don't know how to solve this strange problem... Possibly somebody does?
Your issue is the max user processes limit.

From the getrlimit(2) man page:

    RLIMIT_NPROC
        The maximum number of processes (or, more precisely on Linux,
        threads) that can be created for the real user ID of the calling
        process. Upon encountering this limit, fork(2) fails with the
        error EAGAIN.

Same for pthread_create(3):

    EAGAIN Insufficient resources to create another thread, or a
        system-imposed limit on the number of threads was encountered.
        The latter case may occur in two ways: the RLIMIT_NPROC soft
        resource limit (set via setrlimit(2)), which limits the number
        of processes for a real user ID, was reached; or the kernel's
        system-wide limit on the number of threads,
        /proc/sys/kernel/threads-max, was reached.

Increase that limit for your user, and it should be able to create more threads, until it reaches other resource limits. Or plain resource exhaustion - for a 1 MB stack and 20k threads, you'll need a lot of RAM. See also NPTL caps maximum threads at 65528?: /proc/sys/vm/max_map_count could become an issue at some point.

Side point: you should use -pthread instead of -lpthread. See gcc - significance of -pthread flag when compiling.
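A process can also raise its own soft limit from within, up to the hard limit, without root; a sketch in Python on Linux (raising the hard limit itself still requires privileges or limits.conf):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("before:", soft, hard)

# An unprivileged process may raise its soft limit as far as the hard limit
resource.setrlimit(resource.RLIMIT_NPROC, (hard, hard))

soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("after:", soft, hard)  # the soft limit now equals the hard limit
```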
Linux max threads count
I want to list USB ports in Linux and then send a message to the printer connected to one of them. That message is sensed by the printer to open the cash drawer. I know I can use echo -e and a port name, but my difficulty is finding the port name. How can I list the available ports or the ports that are currently used?
The lsusb command will yield the list of recognised USB devices. Here is an example:

    $ lsusb
    Bus 002 Device 003: ID 1c7a:0801 LighTuning Technology Inc.
    Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 004: ID 04ca:f01c Lite-On Technology Corp.
    Bus 001 Device 003: ID 064e:a219 Suyin Corp.
    Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

You can note that the information provided includes the bus path as well as the vendorId/deviceId. I'm not sure what "the ports that are currently used" actually means.

Edit: To write a message to the device on bus 1, device 2, you must access the device:

    $ ls -l /dev/bus/usb/001/002
    crw-rw-r-- 1 root root 189, 1 2011-06-04 03:11 /dev/bus/usb/001/002
List USB ports in linux
I had been using Arch Linux 64-bit on a Gateway P6860FX for about two years, and recently switched to Ubuntu (also 64-bit). When I type on the keyboard, my left hand feels a lot more warmth than before, and the air coming out of the exhaust port is definitely hotter. (Odd, right now there's no extra heat at all... but anyway...) Only minutes ago did I discover there are ways to monitor the CPU temperature. I have no idea what it was for Arch, but on Ubuntu it's 60-something, rising to 88 when I run heavy number-crunching software for a few minutes. There are good Q&As on this site and Super User on cleaning out dust, and ways to help the computer stay cool. My question is: why would one Linux distro run hotter than another? Is there some daemon running in one and not the other, or some device driver difference, or perhaps one but not the other sets the "run really hot" bit in the CPU's mode register, or what? Can knowing this answer help me select the next distro to try? Given several candidate distros that are both 64-bit and meet various requirements, can we predict which ones are going to make this machine run hot?
As geekosaur and Tshepang are saying: assuming that both distributions are using the same kernel, remaining differences should boil down to default configuration settings. It could be worth exploring a bit before switching distributions (changing settings is presumably quicker than installing a new OS). I suggest:

- Check System > Preferences > Appearance > Visual Effects - you may prefer "none" to put less load on the CPU and graphics.
- Install and run PowerTOP, a Linux utility to help track down power consumption offenders. (It's available from the Ubuntu Software Center.)

There are a whole bunch of other settings that may affect power consumption, but PowerTOP will probably guide you to the ones that are most relevant.
Why does one linux distro run hotter than another on laptop?
Is there a bash/ksh/any shell script IDE? Don't you get annoyed when you forget the space inside if, or make some minor syntax mistake that takes you a long time to figure out (especially when one is tired)? I knew about some suggestions listed below, but I'm looking for something like Eclipse (i.e. as for Java).
Just about every editor supports syntax highlighting for shell - this can help you spot problems. In addition, you can put set -x and set -e at the top of your scripts. The -x tells the shell to print out every command before it executes it. The -e tells the shell to terminate the script if any errors occur. These should really help cut down on time spent looking for bugs.
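The effect of those two lines can be demonstrated by driving a throwaway script through sh; a sketch (the script body is my own example):

```python
import subprocess

script = """
set -ex
echo before
false       # a failing command: set -e aborts the script here
echo after  # never reached
"""
result = subprocess.run(["sh", "-c", script], capture_output=True, text=True)

print(result.returncode != 0)    # True: set -e turned the failure into an early exit
print("after" in result.stdout)  # False: the line after the failure never ran
# With set -x, each command is also traced to stderr with a '+' prefix.
```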
Bash script IDE
I'm curious about the file or symlink /etc/mtab. I believe this is a legacy mechanism. On every modern Linux I've used, this is a symbolic link to /proc/mounts, and if mtab were a regular file on a "normal" filesystem /etc, there would be challenges in making software work with mount namespaces. For a long time I'd presumed that one of two things was true:

- either we're waiting for software referencing /etc/mtab to age out or be updated,
- or other non-Linux OSes still use the same file name and the link is there for cross-platform compliance.

However, both of these seem like shaky ideas. I can't find a good reference to any modern OS keeping the same file name outside Linux. And it seems to have lived for much too long to be simply a backward-compatibility issue; far more significant changes seem to have come and gone in that same time. So I'm left wondering if /etc/mtab is really just there for historic reasons. Is it in any way officially deprecated? Is there any solid modern reason [as of 2023] to keep it? I don't want to delete it from my system, but as a software developer I'd like to understand its usefulness and whether to avoid it.
Should the use of /etc/mtab now be considered deprecated?

Depends on who you ask. If you ask the authors of mount on Linux, yes; since 2018 it says:

    … is completely disabled in compile time by default, because on current Linux systems it is better …

I think that's a pretty strong statement. Prior to that, /etc/mtab was "also supported", but it was considered better not to use it:

    This real mtab file is still supported, but on current Linux systems it is better to make it a symlink to …

That sentence was there since 2014. Before that, it was only recommended:

    The mtab file is still supported, but it's recommended to use a symlink to …

In other words: yeah. This has been deprecated for nearly a decade. You shouldn't rely on it. Ignore it. The source of truth is /proc/mounts, if anything. (Listing mounts correctly, uniquely and unambiguously becomes a logically non-trivial problem, considering Linux mount namespaces exist.)
Should the use of /etc/mtab now be considered deprecated?
When the slave side of a pty is not opened, strace on the process which does read(master_fd, &byte, 1); shows this:

    read(3,

So, when nobody is connected to the slave side of the pty, read() waits for data - it does not return with an error. But when the slave side is opened by a process and that process exits, the read() dies with this:

    read(3, 0xbf8ba7f3, 1) = -1 EIO (Input/output error)

The pty is created with:

    master_fd = posix_openpt(O_RDWR|O_NOCTTY)

The slave side of the pty is opened with:

    comfd = open(COM_PORT, O_RDWR|O_NOCTTY)

Why does the read() exit when the process which opened the slave side of the pty exits? Where is this described?
On Linux, a read() on the master side of a pseudo-tty will return -1 and set errno to EIO when all the handles to its slave side have been closed, but will either block or return EAGAIN before the slave has been first opened. The same thing will happen when trying to read from a slave with no master.

For the master side, the condition is transient; re-opening the slave will cause a read() on the master side to work again. On *BSD and Solaris the behavior is similar, with the difference that the read() will return 0 instead of -1 + EIO. Also, on OpenBSD a read() will also return 0 before the slave is first opened.

I don't know if there's any standard spec or rationale for this, but it allows one to (crudely) detect when the other side was closed, and it simplifies the logic of programs like script which are just creating a pty and running another program inside it. The solution in a program which manages the master part of a pty, to which other unrelated programs can connect, is to also open and keep open a handle to its slave side. See this related answer: read(2) blocking behaviour changes when pts is closed resulting in read() returning error: -1 (EIO).

    Why the read() exits when process which opened slave side of the pty exits?

When a process exits, all its file descriptors are automatically closed.
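The EIO behaviour described above is easy to reproduce, for instance with Python's os.openpty() on Linux (a sketch of the experiment, not code from the question):

```python
import errno
import os

master, slave = os.openpty()  # opens both sides of a new pty pair

os.close(slave)  # the last handle to the slave side is now closed

try:
    os.read(master, 1)
    err = None                # not reached on Linux
except OSError as e:
    err = e.errno

os.close(master)
print(err == errno.EIO)  # True: the master-side read fails with EIO
```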
Why blocking read() on a pty returns when process on the other end dies?
GTK applications mark files as recently used by adding them to the XML in ~/.local/share/recently-used.xbel, but I am frequently working with files from terminal-driven applications like latex, and these are not marked in the GTK list and hence not available from the "Recent" bookmark in GUI file browsers/pickers, etc. Is there a CLI command I can use to explicitly add files to the Recent list, to smooth operations between the terminal and GUI sides of my Linux usage? Either an official way, or a fast & simple hack with the side effect of writing to the recently-used.xbel file!
The following Python script will add all the files given as arguments to the recently-used list, using GIO:

    #!/usr/bin/python3
    import gi, sys
    gi.require_version('Gtk', '3.0')
    from gi.repository import Gtk, Gio, GLib

    rec_mgr = Gtk.RecentManager.get_default()
    for arg in sys.argv[1:]:
        rec_mgr.add_item(Gio.File.new_for_path(arg).get_uri())

    GLib.idle_add(Gtk.main_quit)
    Gtk.main()

The last two lines are necessary to start the Gtk event loop; if you don’t do that, the changed signal from the manager won’t be handled, and the files won’t be added to the recently-used list.
Can I mark files as recently-used from the command line?
I am having trouble writing an expect script. I want to do something equivalent to the following bash instruction:

    iplist=$(cat iplist.txt)

I've tried using set in all the ways that I know, but it is still not working. Is there another way, or is it just that I'm not using it the right way?

    set iplist=$(cat iplist.txt)
TCL can read(n) a file directly; this is both more efficient and more portable than forking out to some command.

    #!/usr/bin/env expect
    proc slurp {file} {
        set fh [open $file r]
        set ret [read $fh]
        close $fh
        return $ret
    }
    set iplist [slurp iplist.txt]
    puts -nonewline $iplist

This also (if necessary) allows various open(n) or chan configure options to be specified, for example to set the encoding:

    #!/usr/bin/env expect
    package require Tcl 8.5
    proc slurp {file {enc utf-8}} {
        set fh [open $file r]
        chan configure $fh -encoding $enc
        set ret [read $fh]
        close $fh
        return $ret
    }
    set data [slurp [lindex $argv 0] shiftjis]
    chan configure stdout -encoding utf-8
    puts $data

Which, if saved as readfile and given somefile as input:

    % file somefile
    somefile: DBase 3 data file with memo(s) (1317233283 records)
    % xxd somefile
    00000000: 8365 8342 8362 834e 838b                 .e.B.b.N..
    % ./readfile somefile
    ティックル
    %
How do I use 'expect' to read the contents of a file into a variable?
1,461,445,157,000
Unfortunately timedatectl set-timezone doesn't update /etc/timezone. How do I get the current timezone as Region/City? E.g., given:

    % timedatectl | grep zone
            Time zone: Asia/Kuala_Lumpur (+08, +0800)

I can get the last part:

    % date +"%Z %z"
    +08 +0800

How do I get the Asia/Kuala_Lumpur part without getting all awk-ward? I'm on Linux, but is there also a POSIX way?
In this comment by Stéphane Chazelas, he said:

"timedatectl is a systemd thing that queries timedated over dbus and timedated derives the name of the timezone (like Europe/London) by doing a readlink() on /etc/localtime. If /etc/localtime is not a symlink, then that name cannot be derived as those timezone definition files don't contain that information."

Based on this and tonioc's comment, I put together the following:

    #!/bin/bash
    set -euo pipefail

    if filename=$(readlink /etc/localtime); then
        # /etc/localtime is a symlink as expected
        timezone=${filename#*zoneinfo/}
        if [[ $timezone = "$filename" || ! $timezone =~ ^[^/]+/[^/]+$ ]]; then
            # not pointing to expected location or not Region/City
            >&2 echo "$filename points to an unexpected location"
            exit 1
        fi
        echo "$timezone"
    else
        # compare files by contents
        # https://stackoverflow.com/questions/12521114/getting-the-canonical-time-zone-name-in-shell-script#comment88637393_12523283
        find /usr/share/zoneinfo -type f ! -regex ".*/Etc/.*" -exec \
            cmp -s {} /etc/localtime \; -print |
            sed -e 's@.*/zoneinfo/@@' |
            head -n1
    fi
Get current timezone as `Region/City`
1,461,445,157,000
I would like to specify a specific identity file based on the user I am ssh'ing as to a server. For example, when ssh'ing as user1 from host1 to host2 as user1:

    [user1@host1 ~]$ ssh user1@host2

I would like to use a certain identity file. However, when I ssh as user1 from host1 to host2 as user2, I would like to use a different identity file:

    [user1@host1 ~]$ ssh user2@host2

Now, I can do this by specifying the identity file in the command:

    [user1@host1 ~]$ ssh -i ~/.ssh/id_user1 user1@host2
    [user1@host1 ~]$ ssh -i ~/.ssh/id_user2 user2@host2

but I would love to do it in my ~/.ssh/config file. I tried the following, but it does not seem to work:

    Host user2@*
        IdentityFile ~/.ssh/id_user2

    Host user1@*
        IdentityFile ~/.ssh/id_user1

Any and all help is appreciated. If this has to be configured somewhere else, that is fine as well. I would just like to avoid specifying it on the command line. Would really love to figure this out as it would be a cool solution to my problem!
You should be able to do this with the Match directive, e.g.:

    Host host2
        HostName host2.some.dom.ain

    Match user user1
        IdentityFile ~/.ssh/id_user1

    Match user user2
        IdentityFile ~/.ssh/id_user2
Specify Specific Identity file when ssh'ing as certain user in ~/.ssh/config
1,461,445,157,000
I am trying to understand how Xorg works. I have created the following image to show my understanding (this image shows the state of the components after you press Ctrl+Alt+F7): The following is the explanation of the image: /dev/tty7 is the controlling terminal for Xorg. Xorg directly talks to the VGA driver to draw on the screen (it does not send what it wants to draw to the TTY driver). Xorg directly receives input from the keyboard and mouse drivers (it does not receive keyboard and mouse input from the TTY driver). The Virtual terminal also receives input from the keyboard driver (but based on my testing, it receives the scan codes of the keys). The X clients (xterm and Firefox in the image) don't have a controlling terminal. Is my understanding correct?
Your description doesn't quite match your diagram, and is more correct than your diagram.

The X server does not use the tty driver for either input or output. It reads inputs directly from the drivers for the various input devices and sends output directly to the graphics card drivers.

You can list the input devices with xinput and then get further information with xinput list-props. For example:

    $ xinput | tail -n 1
    ↳ USB Keyboard                      id=10   [slave keyboard (3)]
    $ xinput list-props 10 | tail -n 1
            Device Node (263):      "/dev/input/event2"

You can see that my X server obtains input from my USB keyboard by reading from /dev/input/event2.

For output, I don't know if there's a similar user-level tool. xrandr --listproviders lists the graphics drivers that are in use or available, but does not list /dev entries. You can see which graphics device the X server has open with lsof -p$(pgrep Xorg) or less /var/log/Xorg.0.log.

The concept of a controlling terminal was designed for text-mode sessions. An X server may or may not have a controlling terminal depending on how it was launched. An X program that was started from a GUI menu typically doesn't have a controlling terminal, because the window manager doesn't have one. An X program started from a shell running in a terminal does have that terminal as its controlling terminal.
How does Xorg work?
1,461,445,157,000
I have some normal x86_64 desktop Linux installed in a single ext4 root partition† on some 500GB HDD. Now if I want to migrate this installation to a 500GB SSD (rest of the system stays the same), do I just clone the disk and run genfstab (I know that from the Arch installation guide, do I even need that?) and done? Or is there more to it? †That is, everything is in that single partition. I do not have a swap partition, but a swap file, and my system can easily do without that too if it should be an issue.
After some research, I found that ext4 is apparently quite usable on SSDs, so I went with the clone approach. Here is what I did, step by step:

1. Install the SSD
2. Boot from a USB and clone the HDD to SSD with dd
3. Change the UUID of the new filesystem. I missed that one at first, which caused funny results as grub and other software got confused
4. Update the fstab on the new filesystem. I used the genfstab script from the Arch USB for that
5. Re-generate initramfs, reinstall and reconfigure grub
6. Move SSD to the top in boot priority, done

The above worked for me; however, I am very much a novice admin, so I'm not sure if every step is actually necessary and useful.
How can I migrate a Linux installation from HDD to SSD?
1,461,445,157,000
I'm looking into writing my own init.d scripts to control several services running on my Linux server. I came across an example online which contained:

    nohup $EXEC_SCRIPT 0<&- &> $LOG_FILE &
    echo $! > $PID_FILE

From what I understand:

- nohup catches the hangup signal
- $EXEC_SCRIPT is a variable containing the command to be run
- 0<&- &> — not come across this before
- $LOG_FILE — similar to $EXEC_SCRIPT but contains the log file path
- & starts logging to $LOG_FILE in the background?
- $! is the PID of the last background command
- > writes the result of $! to the $PID_FILE

I can work through it with this knowledge, but the 0<&- &> is completely throwing me off. I don't like to include things that I don't at least partially understand first.
These are redirections:

- 0<&- closes the file descriptor 0 (standard input).
- &> redirects both stdout and stderr (in this case to the logfile).

Are you sure there was no echo before $!? $! would be interpreted as a command and most probably result in a

    -bash: 18552: command not found
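Both redirections can be observed from a small Python sketch that drives bash (an illustration only, assuming bash is available; it is not part of the original init script):

```python
import os
import subprocess
import tempfile

# `0<&-` closes file descriptor 0, so the `read` builtin fails
# immediately instead of waiting for input, and reports a non-zero
# exit status.
r = subprocess.run(["bash", "-c", "read line 0<&-; echo status=$?"],
                   capture_output=True, text=True)

# `&> file` sends both stdout and stderr to the same file.
fd, log = tempfile.mkstemp()
os.close(fd)
subprocess.run(["bash", "-c",
                "{ echo to-stdout; echo to-stderr >&2; } &> " + log])
with open(log) as f:
    contents = f.read()
os.unlink(log)

print(r.stdout.strip())   # a non-zero status, since stdin was closed
print(contents)
```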
Linux init script what does 0<&- &> achieve
1,461,445,157,000
One of my folders contains files in the following format:

    3_20150412104422154033.txt
    3_2015041211022775012.txt
    3_20150412160410171639.txt
    3_20150412160815638933.txt
    3_20150413161046573097.txt
    3_20150413161818852312.txt
    3_20150413163054600311.txt
    3_20150413163514489159.txt
    3_2015041321292659391.txt
    3_20150414124528747462.txt
    3_20150414125110440425.txt
    3_20150414134437706174.txt
    3_20150415085045179056.txt
    3_20150415100637970281.txt
    3_20150415101749513872.txt

I want to retrieve those files having a date value less than or equal to my input date value. For example, if I give "3_20150414" (which is 3_YYYYMMDD), I want the output to be the file names

    3_20150412104422154033.txt
    3_2015041211022775012.txt
    3_20150412160410171639.txt
    3_20150412160815638933.txt
    3_20150413161046573097.txt
    3_20150413161818852312.txt
    3_20150413163054600311.txt
    3_20150413163514489159.txt
    3_2015041321292659391.txt
    3_20150414124528747462.txt
    3_20150414125110440425.txt
    3_20150414134437706174.txt

I can list the files by issuing a command like this:

    ls -l | grep '20150413\|20150414' | awk '{print $NF}'

But I am struggling to find a <= match.
You can use awk and its string comparison operator:

    ls | awk '$0 < "3_20150415"'

In a variable:

    max=3_20150414
    export max
    ls | LC_ALL=C awk '$0 <= ENVIRON["max"] "z"'

Concatenating with "z" here makes sure that the comparison is a string comparison, and allows any time on that day, since in the C locale digits sort before z.

In zsh, you can also do:

    print -rC1 -- *.txt(e['[[ $REPLY < ${max}z ]]'])
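For illustration, the lexicographic comparison the awk filter relies on can be reproduced in plain Python with a few of the question's file names:

```python
# Plain string comparison: names that sort at or before the cutoff
# prefix (plus "z") are kept, exactly as in the awk version.
names = [
    "3_20150412104422154033.txt",
    "3_20150413161046573097.txt",
    "3_20150414134437706174.txt",
    "3_20150415085045179056.txt",
]
max_prefix = "3_20150414"

# Appending "z" keeps every timestamp on the cutoff day itself, because
# in ASCII (and in the C locale) digits sort before "z".
kept = [n for n in names if n <= max_prefix + "z"]
print(kept)
```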
Get files with a name containing a date value less than or equal to a given input date
1,461,445,157,000
I'm facing a very annoying problem that I noticed a week ago and for which I can't find an answer: my network suddenly stops responding, usually coming back exactly 25 seconds later. I was using kernel 3.10.4 and have now migrated to 3.11-rc4 to see if anything changed, but no, the behavior is the same. And since it is a hard-to-spot problem, because usual web surfing happens in "bursts" and the outage is completely random, I can't really tell whether this problem was present in a previous kernel as well (I always use custom but unpatched kernels from kernel.org, all compiled by myself).

I can't tell the kernel is the culprit either, but I can say there are no clues in the system logs (I checked both /var/log/syslog and /var/log/messages and there is nothing unusual there) and that hardware doesn't seem at fault, for the problem shows up using either one of my network cards.

lspci output:

    02:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express (rev 01)
    04:00.0 Ethernet controller: 3Com Corporation 3c905B 100BaseTX [Cyclone] (rev 30)

I already tried to exchange the ethernet switch ports, and still no one else where I work has a problem except me (although we use similar machines, I'm the only one using Linux, so I had to take some infamous jokes about it as well... hehe).

I ran wireshark on my machine and left it continuously pinging our gateway and another machine on the same network segment. Then, at the first sign of network malfunction I would check it and verify the gateway stopped responding to pings, but the other machine was still there responding normally. Some other times it is the other machine which stops responding and the gateway is fine, and some other times both stop responding.

I don't know what else to do, so I'd like some help or tips on how to further debug this, since the system logs are completely normal. I have my kernel config file and a capture file from wireshark showing the situation.
I can post here or at some pastebin site in case anyone finds it useful to understand the case, just please let me know the detail level I should use (I guess the packet level without the raw data would be enough).
The symptoms are consistent with an IP address conflict. An IP address conflict arises when your machine and some other machine on the same network are trying to use the same IP address.

On a local link network, addressing is based on MAC addresses. Every Ethernet card has its own MAC address (barring gross misconfiguration or malice). A router deciding where to send an IP packet will send an ARP request for the target IP address on all its ports. That message is sometimes known as "who has": the router is trying to find out which of its peers is responsible for this IP address. Once the router receives a reply containing a MAC address, it can build and send an Ethernet frame (Ethernet packet) containing the IP packet to that MAC address. Since this exchange takes a while, the router keeps a cache of recent ARP information. (There are other types of ARP messages, but what I've explained here is sufficient to understand the present issue.) So in a nutshell, routers need to know which physical device has each IP address that they're sending IP packets to.

So what happens when there are two devices claiming the same IP address? The router receives a reply from one of the devices, and from then on it decides that this IP address belongs to that device, until the corresponding cache entry expires. After the cache entry expires, the router will send a new ARP request, and maybe the other device will reply faster this time. This explains why such situations are unstable: one minute the router is talking to you, the next minute it's talking to the other guy.

If you continuously ping someone, then the router keeps your IP address in its ARP cache pretty much all the time. So while you're pinging, there's only a small window during which the other guy can replace you in the cache (after your cache entry expires, before the next ping comes). That's why observing the problem makes it mostly go away, which can be frustrating until you realize what the problem might be.
In your case, it looks like your local router keeps entries in its cache for 25 seconds. When you're in the cache, you're good for 25 seconds. Then sometimes the other guy comes, at random-looking moments, and you're out of it for 25 seconds. When you try to contact multiple machines on the same local link, each has its own ARP table, so you may observe inconsistent results, with one machine deciding that you own the IP address and another machine deciding that the other guy does. High-end routers log IP address conflicts, so if you think you're encountering one, enlist the help of your system administrator. Make sure first that it isn't your machine that's trying to use an IP address that it shouldn't be using!
Strange temporary network outage in Linux
1,461,445,157,000
I tried out Red Hat, Ubuntu, and Kali Linux. While working on them I searched for the differences between the distributions of Linux. One thing I found was the difference in package management (.rpm and .deb), but I don't think that's the only difference.

Secondly, while trying some commands on Kali Linux (like quotacheck), I get no results. So how can I know which distribution supports which commands, and how can I enable them?

Thirdly, I read that Kali OS is based on Debian. So what does "based on" really mean?
comparing distros

I'd start first with the comparison of Linux distributions on the Wikipedia page titled: Comparison of Linux distributions. Distrowatch is another good resource for comparing Linux distros. The site Digital Inspiration also has a good article titled: Which is the Best Linux Distribution for your Desktop? which has invaluable information in showing what each distro's primary target audiences are.

based on?

The "based on" term is exactly what the name says. Linux distros can be complicated to set up and maintain, so often people want to take the guts of an existing distro and use it as a base for their own distro, changing only the pieces that they really care about. Debian, Ubuntu, and Linux Mint are good examples of this.

The Debian distro is a pretty old and expansive distro, so it has lots of architectures and packages available. The Ubuntu distro takes Debian at its core and expands upon it, changing the desktop among other things. The Linux Mint project takes Ubuntu as its core and further expands upon Ubuntu, again changing the desktop, file explorer and such. The true advantage to this is that each "child" distro is able to leverage its "parent" or "grandparent" distro.

packages?

Looking up packages across the distros is next to impossible in a systematic way, to my knowledge. This site has proven useful in looking to see what packages are available in most of the larger distros. The site is called pkgs.org.
How to get differences between different distributions of linux
1,461,445,157,000
I have a PCI-attached SATA controller connected to a (variable) number of disks on a machine with a Linux 2.6.39 kernel. I am trying to find the physical location of the disk, knowing the PCI address of the controller. In this case, the controller is at address 0000:01:00.0, and there are two disks, with SCSI addresses 6:0.0.0 and 8:0.0.0 (though these last two aren't necessarily fixed, this is just what they are right now).

lshw -c storage shows the controller and the SCSI devices (system disk and controller trimmed):

      *-storage
           description: SATA controller
           product: Marvell Technology Group Ltd.
           vendor: Marvell Technology Group Ltd.
           physical id: 0
           bus info: pci@0000:01:00.0
           version: 10
           width: 32 bits
           clock: 33MHz
           capabilities: storage pm msi pciexpress ahci_1.0 bus_master cap_list rom
           configuration: driver=ahci latency=0
           resources: irq:51 ioport:e050(size=8) ioport:e040(size=4) ioport:e030(size=8) ioport:e020(size=4) ioport:e000(size=32) memory:f7b10000-f7b107ff memory:f7b00000-f7b0ffff
      *-scsi:1
           physical id: 2
           logical name: scsi6
           capabilities: emulated
      *-scsi:2
           physical id: 3
           logical name: scsi8
           capabilities: emulated

lshw -c disk shows the disks:

      *-disk
           description: ATA Disk
           product: TOSHIBA THNSNF25
           vendor: Toshiba
           physical id: 0.0.0
           bus info: scsi@6:0.0.0
           logical name: /dev/sdb
           version: FSXA
           serial: 824S105DT15Y
           size: 238GiB (256GB)
           capabilities: gpt-1.00 partitioned partitioned:gpt
           configuration: ansiversion=5 guid=79a679b1-3c04-4306-a498-9a959e2df371 sectorsize=4096
      *-disk
           description: ATA Disk
           product: TOSHIBA THNSNF25
           vendor: Toshiba
           physical id: 0.0.0
           bus info: scsi@8:0.0.0
           logical name: /dev/sdc
           version: FSXA
           serial: 824S1055T15Y
           size: 238GiB (256GB)
           capabilities: gpt-1.00 partitioned partitioned:gpt
           configuration: ansiversion=5 guid=79a679b1-3c04-4306-a498-9a959e2df371 sectorsize=4096

However, there does not seem to be a way to go from the PCI address to the SCSI address.
I have also looked under the sysfs entries for the PCI and SCSI devices and have not been able to find an entry which makes the connection. When the disks are plugged into different physical ports on the controller, the SCSI address doesn't necessarily change, so this cannot be used with an offset to correctly determine the location of the disk. Listing disks by ID also doesn't work: ls -lah /dev/disk/by-path shows that the entry for pci-0000:01:00.0-scsi-0:0:0:0 points to /dev/sdc (or in general, the last disk connected), and there are no other paths that start in pci-0000:01:00.0 that aren't just links to partitions of that drive. Are there any other ways to map the controller address into something that can be used to locate the disks?
I think you can get what you want by cross-referencing the output from lshw -c disk and this command: udevadm info -q all -n <device>. For example, my /dev/sda device shows the following output for lshw:

    $ sudo lshw -c disk
      *-disk
           description: ATA Disk
           product: ST9500420AS
           vendor: Seagate
           physical id: 0
           bus info: scsi@0:0.0.0
           logical name: /dev/sda
           version: 0003
           serial: 5XA1A2CZ
           size: 465GiB (500GB)
           capabilities: partitioned partitioned:dos
           configuration: ansiversion=5 signature=ebc57757

If I interrogate the same device using udevadm, I can find out what its DEVPATH is:

    $ sudo udevadm info -q all -n /dev/sda | grep DEVPATH
    E: DEVPATH=/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda

This string has all the info you're looking for regarding this device: the PCI address, "0000:00:1f.2", along with the SCSI address, "0:0:0:0". The SCSI address is the data in the 6th position if you break this data up on the forward slashes ("/").
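As a rough sketch of that cross-referencing (the parse_devpath helper is hypothetical, and its patterns are heuristics based on the example output above, not a documented udev format), the DEVPATH string can be split up programmatically:

```python
def parse_devpath(devpath):
    """Pull PCI and SCSI addresses out of a udev DEVPATH string."""
    pci = None
    scsi = None
    for part in devpath.strip("/").split("/"):
        # PCI addresses look like 0000:00:1f.2 (two colons plus a dot)
        if part.count(":") == 2 and "." in part:
            pci = part
        # SCSI addresses look like 0:0:0:0 (host:bus:target:lun)
        elif part.count(":") == 3:
            scsi = part
    return pci, scsi

devpath = "/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda"
print(parse_devpath(devpath))
```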
Match PCI address of SATA controller and SCSI address of attached disks
1,461,445,157,000
In my Red Hat Linux, I am getting the following error when executing the ls command:

    # ls
    ls: sugar.sql: Value too large for defined data type
From http://www.gnu.org/software/coreutils/faq/#Value-too-large-for-defined-data-type:

    It means that your version of the utilities were not compiled with
    large file support enabled. The GNU utilities do support large files
    if they are compiled to do so. You may want to compile them again and
    make sure that large file support is enabled. ...
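To illustrate the limit involved: a file only has to exceed 2^31 - 1 bytes (the 32-bit off_t maximum) to trigger this error. A sparse file is an easy way to produce one for testing, sketched here in Python (assumes a filesystem with sparse-file support, so almost no disk is actually used):

```python
import os
import tempfile

# 2**31 - 1 is the largest size representable in a signed 32-bit off_t.
limit = 2**31 - 1

fd, path = tempfile.mkstemp()
os.close(fd)
os.truncate(path, limit + 1)  # sparse: one byte past the 32-bit limit

size = os.stat(path).st_size
os.unlink(path)
print(size, size > limit)
```

Running an ls built without large-file support against such a file would reproduce the "Value too large for defined data type" error, while a large-file-aware build lists it normally.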
Why isn't the ls command listing huge filesizes?
1,461,445,157,000
I have a few Linux servers that lack necessary sudo or root permissions, and I'm feeling kind of stuck with my options:

- hand-compile packages to a ~/local/ folder or some equivalent
- work with sysadmins to get some old version of whatever tool I'm wanting installed, likely never to be upgraded again
- try to roll my own homebrew, not having a clue how to do it

Is there something for a user in my limited state to be able to locally compile and install various localized applications in the same way I use homebrew on my personal Mac at home?
Do you know pkgsrc? It's a framework (using Makefiles and some pkg_* tools) for compiling packages that also facilitates non-root building of packages (and their dependencies) very much. So, referring to your choices, it's the "homebrew" thing but already built and proven, with lots of packages. There's a guide, too. (While it looks kinda NetBSD-specific, it's not and should work just as well on Linux.)
Is there a Homebrew-equivalent for limited access user accounts in Linux?
1,461,445,157,000
I have 15 identical Linux RH 4.7 64-bit servers. They run a cluster database (the cluster is at the application level). On occasion (every month or so) a random box (never the same one, though) freezes. I can ping the box and ping works. If I try to ssh into the box I get:

    ssh_exchange_identification: Connection closed by remote host

SSH is set up properly. When I go to the server room and try to log in directly at the console, I can switch consoles with Alt+Fn, I can enter a username, and characters do show, but after pressing Enter, nothing happens. I waited 8 hours once and it didn't change. I set up syslog to log everything to a remote host, and there is nothing in those logs. When I reboot the machine, it works without a problem. I have run HW tests - everything is OK, and nothing is in the logs. The machines are also monitored with NAGIOS, and there is no unusual load or activity prior to a freeze. I have run out of ideas; what else can I do or check?
It sounds like your kernel panicked in some way such that sshd couldn't send the server keys. Possibly the kernel was wedged in such a way that the network stack was still up, but the VFS layer was unavailable.

When I experienced similar problems on a RHEL4 system, I set up the netdump and netconsole services, and a dedicated netdump and syslog server to catch the crash dumps and kernel panic information. I also set the kernel.panic sysctl to 10. That way, when a system panics, you get both the kernel trace and a copy of the memory on that system, which you can then analyse with the 'crash' utility.

You would certainly also benefit from setting up a serial console for the hosts, so you could see the console output and potentially hit the magic sysrq keys. Also, if you're willing to set up the networking and you have hardware that supports it, you can use IPMI to remotely power off, power on, restart, and query the hardware.

(For what it's worth, RHEL5 has similar functionality with kexec/kdump, only the crash dump is stored locally.)
Debugging Linux machine freezes
1,461,445,157,000
I have the following problem: on every machine running PostgreSQL there is a special user postgres. This user has administrative access to the database server. Now I want to write a Bash script that executes a database command with psql as user postgres (psql shall execute as user postgres, not the script). So far, that wouldn't be a problem: I could just run the script as user postgres. However, I want to write the output of psql to a file in a directory where postgres has no write access. How can I do that?

I thought about changing EUIDs in the script itself; however:

- I couldn't find a way to change the EUID in a Bash script
- How can I change the EUID when using something like psql -U postgres -c "<command>" > file?
Use a subshell:

    (su -c 'psql -U postgres -c "<command>"' postgres) > file

Inside the subshell you can drop permissions to do your work, but output is redirected to your original shell, which still has your original permissions.
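The mechanism can be illustrated without root: the invoking shell (the parent) opens the output file, and the child process merely inherits the already-open file descriptor, so the child itself never needs write access to the directory. In this sketch, id just stands in for the su/psql invocation (testing the real privilege drop would require root):

```python
import os
import subprocess
import tempfile

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as out:          # the parent opens the file...
    subprocess.run(["id"], stdout=out)   # ...the child only writes to the fd

with open(path) as f:
    text = f.read()
os.unlink(path)
print(text.strip())
```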
How to run part of a script with reduced privileges?
1,461,445,157,000
I want to make my Fedora Linux capable of the following:

- Use Linux as a complete development platform, without requiring any other OS installation, but still be able to build and test programs for different platforms.
- Completely replace a Windows machine for all other work, e.g. Office, Paint, Remote Desktop, etc.

Can you suggest open source projects and tools for achieving the above objectives?
You can easily do cross-platform development whether you are a systems programmer, a web developer or a desktop application developer.

If you are into systems, then any utilities and/or drivers you write for Linux are likely to work well for other *nix with very minimal modifications. Provided that you write standard C code and don't use too many system-specific calls, they may even be easy to port to Windows.

If you are a desktop application dev, you can target GTK, QT or wxWidgets and your app will likely work well across the 3 major platforms today (*nix, Windows, Mac). Again, keep system-specific calls to a minimum or isolate them into a wrapper library that's going to be system-specific. You can also target a virtual machine like the JVM and/or CLR, which will allow applications to work across the board.

If you are a web dev, then you are likely to run into too many different alternatives to choose from. I prefer a little web server called Cherokee, and I develop and run ASP.NET (Mono) and Django apps that run on it and use a PgSQL backend.

So the conclusion is that cross-platform development in Linux can be done, provided that you can compile the code on the target platform and you keep that in mind while writing your code, or if you target a VM. The other point is that you may run into The Paradox of Choice and not know what to use. For that, read my answer to the second question below.

As to the second question, the best resource I have found is called Open Source Alternatives. This web site lists commercial software and their open source alternatives. Almost all the alternatives run on Linux and FreeBSD.
Linux as a complete development platform?
1,461,445,157,000
What I want

I have a systemd service that I would like to have stopped before suspend/shutdown, and started up again after resume.

System details

System details below.

    $ lsb_release -dc
    Description:    Ubuntu 20.04.1 LTS
    Codename:       focal

    $ systemd --version
    systemd 245 (245.4-4ubuntu3.3)
    +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid

What I have so far

I have two services, myservice-resume.service and myservice-suspend.service, respectively starting and stopping a Python process at resume and suspend. The Python script issues commands to an SDK server that controls RGB lighting. When on is passed as an argument (as in ExecStart), the process must be left running in the background to keep issuing commands as part of a loop. When the process catches a SIGINT signal, the lighting is switched off and it gracefully exits. In this setup, myservice-suspend.service is triggered before suspend and causes myservice-resume.service to stop due to the conflict.

myservice-resume.service

    [Unit]
    Description=Start myservice-resume.service after suspend and shutdown

    [Service]
    Type=simple
    ExecStart=/path/to/python3 /path/to/script.py on

myservice-suspend.service

    [Unit]
    Description=Stop myservice-resume.service before suspend and shutdown
    Before=suspend.target shutdown.target
    Conflicts=myservice-resume.service

    [Service]
    Type=oneshot
    ExecStart=/bin/true

    [Install]
    WantedBy=suspend.target shutdown.target

In this setup, I start the service (and lighting) using systemctl start myservice-resume.service and successfully turn off lighting using systemctl start myservice-suspend.service, systemctl stop myservice-resume.service, or by doing a system suspend using systemctl suspend. I'd like to have the first service, myservice-resume.service, automatically start again on system resume.
I'd imagine that this would involve adding some clever After/Before/WantedBy targets in the [Unit] and [Install] sections, but I can't determine an appropriate way to set this up.

Research/What I've tried

A related post (Systemd: stop service before suspend, restart after resume) hinted that I could configure a service to run after resume from suspend by adding After=suspend.target to the [Unit] section of myservice-resume.service. I've tried this, but the systemctl log shows that the unit was not started again on resume. This post (Writing systemd unit file for suspend/resume) points the OP to the systemd man pages to come up with a solution (and clarifies the purpose of After/WantedBy), but I couldn't find a solution here either.
The need for an After= or Before= can finally be seen in examples from archlinux (a remarkable source of help as usual). Based on that link, there are two solutions to running a command on suspend and resume.

One method is to use two units, say mysyssuspend and mysysresume. The following examples just run the date command to syslog so we can see when they get activated:

/etc/systemd/system/mysyssuspend.service

    [Unit]
    Before=suspend.target

    [Service]
    Type=simple
    StandardOutput=syslog
    ExecStart=/bin/date +'mysyssuspend start %%H:%%M:%%S'

    [Install]
    WantedBy=suspend.target

/etc/systemd/system/mysysresume.service

    [Unit]
    After=suspend.target

    [Service]
    Type=simple
    StandardOutput=syslog
    ExecStart=/bin/date +'mysysresume start %%H:%%M:%%S'

    [Install]
    WantedBy=suspend.target

As usual, do a systemctl daemon-reload and systemctl enable mysyssuspend mysysresume after creating the unit files. The first unit has a Before dependency on the suspend target and gets run when the computer enters suspend. The second unit similarly has an After dependency, and gets run on resuming.

The other method puts all the commands in a single unit:

/etc/systemd/system/mysuspendresume.service

    [Unit]
    Before=sleep.target
    StopWhenUnneeded=yes

    [Service]
    Type=oneshot
    StandardOutput=syslog
    RemainAfterExit=yes
    ExecStart=/bin/date +'mysuspendresume start %%H:%%M:%%S'
    ExecStop=/bin/date +'mysuspendresume stop %%H:%%M:%%S'

    [Install]
    WantedBy=sleep.target

This works with StopWhenUnneeded=yes, so the service is stopped when no active service requires it. The sleep target also has StopWhenUnneeded, so when it is finished it will run ExecStop of our unit. The RemainAfterExit is needed so that our unit is still seen as active, even after ExecStart has finished.

I tested both of these methods on Ubuntu 18.04.5 with systemd version 237 and they both seem to work correctly.

Rather than trying to merge your requirement into the above working mechanisms, it is probably more pragmatic to use one of them to stop/start an independent unit. For example, use the second method and add a mylongrun service:

/etc/systemd/system/mysuspendresume.service

    [Unit]
    Before=sleep.target
    StopWhenUnneeded=yes

    [Service]
    Type=oneshot
    StandardOutput=syslog
    RemainAfterExit=yes
    ExecStart=-/bin/date +'my1 %%H:%%M:%%S' ; /bin/systemctl stop mylongrun ; /bin/date +'my2 %%H:%%M:%%S'
    ExecStop=-/bin/date +'my3 %%H:%%M:%%S' ; /bin/systemctl start mylongrun ; /bin/date +'my4 %%H:%%M:%%S'

    [Install]
    WantedBy=sleep.target

/etc/systemd/system/mylongrun.service

    [Unit]
    Description=Long Run

    [Service]
    Type=simple
    StandardOutput=syslog
    ExecStart=/bin/bash -c 'date +"my11 %%H:%%M:%%S"; while sleep 2; do date +"my12 %%H:%%M:%%S"; done'
    ExecStop=/bin/bash -c 'date +"my13 %%H:%%M:%%S"; sleep 10; date +"my14 %%H:%%M:%%S"'

    [Install]
    WantedBy=multi-user.target

Testing this by starting mylongrun then closing the lid gives the following journalctl entries:

    09:29:19 bash[3626]: my12 09:29:19
    09:29:21 bash[3626]: my12 09:29:21
    09:29:22 systemd-logind[803]: Lid closed.
    09:29:22 systemd-logind[803]: Suspending...
    09:29:22 date[3709]: my1 09:29:22
    09:29:22 systemd[1]: Stopping Long Run...
    09:29:22 bash[3715]: my13 09:29:22
    09:29:23 bash[3626]: my12 09:29:23
    09:29:25 bash[3626]: my12 09:29:25
    09:29:27 bash[3626]: my12 09:29:27
    09:29:29 bash[3626]: my12 09:29:29
    09:29:31 bash[3626]: my12 09:29:31
    09:29:32 bash[3715]: my14 09:29:32
    09:29:32 systemd[1]: Stopped Long Run.
    09:29:32 date[3729]: my2 09:29:32
    09:29:32 systemd[1]: Reached target Sleep.
    09:29:33 systemd[1]: Starting Suspend...

We can see the long running stop command (sleep 10) completed correctly. On resume, the long run command is started again:

    09:35:12 systemd[1]: Stopped target Sleep.
    09:35:12 systemd[1]: mysuspendresume.service: Unit not needed anymore. Stopping.
    09:35:12 systemd[1]: Reached target Suspend.
    09:35:12 date[3813]: my3 09:35:12
    09:35:12 systemd[1]: Started Long Run.
    09:35:12 date[3817]: my4 09:35:12
    09:35:12 bash[3816]: my11 09:35:12
    09:35:14 bash[3816]: my12 09:35:14
    09:35:16 bash[3816]: my12 09:35:16
    09:35:18 bash[3816]: my12 09:35:18
Stop systemd service before suspend, start again after resume
1,461,445,157,000
Per the IPv6 standard, Linux assigns IPv6 link local addresses to interfaces. These interfaces are always assigned /64 addresses. Is this correct? I would think they should be /10. Why are they assigned /64 addresses?
The address space allocated to link-local addresses is fe80::/10, but the next 54 bits are defined to be all zeroes, so the effective range is fe80::/64. Which puts it in line with the usual custom for IPv6 addresses.

RFC 4291:

    2.5.6. Link-Local IPv6 Unicast Addresses

    Link-Local addresses are for use on a single link. Link-Local
    addresses have the following format:

    |   10     |
    |  bits    |         54 bits         |          64 bits           |
    +----------+-------------------------+----------------------------+
    |1111111010|           0             |        interface ID        |
    +----------+-------------------------+----------------------------+
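The /10 vs /64 arithmetic can be checked with a little shell math (a sketch; the hex values follow directly from the RFC diagram above): fixing the top 10 bits leaves 6 free bits in the first 16-bit group, so fe80::/10 spans first hextets fe80 through febf, yet with the next 54 bits required to be zero, fe80::/64 is the only prefix actually used.

```shell
# With the top 10 bits fixed to 1111111010, the first hextet of a
# link-local address starts at 0xfe80; the remaining 6 bits of that
# hextet are the free part of the /10.
prefix=$((0xfe80))
low=$(printf '%x' "$prefix")
high=$(printf '%x' $((prefix | 0x003f)))   # set all 6 free bits
echo "fe80::/10 covers first hextets $low through $high"
echo "but RFC 4291 zeroes the next 54 bits, so only fe80::/64 is used"
```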
Linux assigns an fe80::/64 address to an interface. Shouldn't that be fe80::/10?
1,461,445,157,000
Can someone explain the following rules for filtering traffic to the loopback interface?

    # Allow all loopback (lo0) traffic and reject traffic
    # to localhost that does not originate from lo0.
    -A INPUT -i lo -j ACCEPT
    -A INPUT ! -i lo -s 127.0.0.0/8 -j REJECT

The way I interpret it:

1. accept all incoming packets to loopback.
2. reject all incoming packets from 127.x.x.x which are not to loopback.

What are the practical uses for these rules? In the case of 1, does this mean that all packets to loopback do not have to go through additional filtering? Is it possible for an incoming packet to loopback to be from an external source?
What the rules mean is exactly what you are describing: all packets arriving on the loopback interface are accepted, and no packets with a loopback source address are accepted from other interfaces. It does not mean, per se, that data coming from the loopback interface skips additional filtering; what it does mean is that rule 2) tries to prevent fake/spoofed packets carrying the loopback address from coming in via other interfaces.
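The decision the two rules encode can be sketched as a toy shell function (a model of the logic only, not iptables itself; the `verdict` function and its `CONTINUE` outcome are illustrative inventions):

```shell
# Mirror of the two rules:
#   -A INPUT -i lo -j ACCEPT
#   -A INPUT ! -i lo -s 127.0.0.0/8 -j REJECT
verdict() {  # verdict <input-interface> <source-ip>
    iface=$1 src=$2
    if [ "$iface" = lo ]; then
        echo ACCEPT            # rule 1: anything on loopback
    elif [ "${src%%.*}" = 127 ]; then
        echo REJECT            # rule 2: loopback source on a non-lo interface = spoofed
    else
        echo CONTINUE          # neither rule matches; later rules decide
    fi
}
verdict lo   127.0.0.1      # legitimate localhost traffic
verdict eth0 127.0.0.1      # spoofed loopback source from outside
verdict eth0 192.0.2.7      # ordinary traffic, falls through
```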
Iptables rule for loopback
1,461,445,157,000
Only sometimes, I forget to make a backup of a given linux file such as /etc/rc.local, /etc/rsyslog.conf, /etc/dhcpcd.conf, etc, and later wish I did. Distribution agnostic, is there a good approach to later getting a copy of an unf'd up copy?
While the topic of configuration-file backup/versioning might seem simple on the surface, it is one of the hot topics of system/infrastructure administration.

Distribution agnostic, to keep automatic backups of /etc as a simple solution you can install etckeeper. By default it commits /etc to a repository/version control system installed on the same system. The commits/backups happen by default daily and/or each time there are package updates. The etckeeper package is present in pretty much all Linux distributions. See: https://help.ubuntu.com/lts/serverguide/etckeeper.html or https://wiki.archlinux.org/index.php/Etckeeper

It could be argued it is a good industry standard to have this package installed.

If you do not have etckeeper installed and need a particular etc file, there are several options: you might copy it from a similar system of yours, or you can ask your package manager to download the installation package (or download it by hand) and extract the etc file from there; one of the easiest ways is using mc (midnight commander) to navigate inside packages as if they were directories. You can also use the distribution repositories to get packages; in the case of Debian that is http://packages.debian.org

Ultimately, if the etc configurations are mangled beyond recognition, you always have the option to reinstall the particular package: move the etc files to a backup name/directory, and then, for instance in Debian:

    apt-get install --reinstall package_name

You can also configure and install the source repos for your particular distribution/version, install the source package, and get the etc files from there: https://wiki.debian.org/apt-src (again a Debian example)

In some packages, you might also have samples of the configuration files at /usr/share/doc/package_name, which might or might not be fit for use.

As a last resort, you may also find etc files in the repositories/GitHub pages of the corresponding open source projects; just bear in mind that distributions often change default settings and paths around.

Obviously, none of these alternatives exempt you from having a sound backup policy in place, so you can retrieve your lost /etc files from there. Times also move fast, and if following a devops philosophy, you might choose to discard certain systems altogether and redeploy them in case some files get corrupted; you might also use CI and redeploy the files, for instance, from Jenkins.
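At its core, what etckeeper automates is just keeping the config directory under version control so a pre-edit copy can be recovered. A minimal git-only sketch of that idea (the directory and file contents here are made up for illustration; etckeeper itself adds hooks, metadata handling, and scheduling on top):

```shell
# Version a config directory by hand, break a file, then restore it.
cfg=$(mktemp -d)
echo "PermitRootLogin no" > "$cfg/sshd_config"

git -C "$cfg" init -q
git -C "$cfg" add -A
git -C "$cfg" -c user.email=demo@example.invalid -c user.name=demo \
    commit -qm "baseline"

echo "PermitRootLogin yes" > "$cfg/sshd_config"   # the "oops" edit
git -C "$cfg" diff --stat                          # see what changed

git -C "$cfg" checkout -- sshd_config              # recover the saved copy
grep -q "PermitRootLogin no" "$cfg/sshd_config" && echo "restored"
```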
How to get copies of default Linux etc files
1,461,445,157,000
I have read that the ps command can take flags in two format: The Unix format in which you should precede the flags with a dash. The BSD format in which you should not precede the flags with a dash. Now does the same flags can be used with both formats, for example do the following commands mean the same things: ps -x ps x Or does the Unix format has its own set of flags, while the BSD format has an entirely different set of flags?
The manpage answers your question: Options of different types may be freely mixed, but conflicts can appear. There are some synonymous options, which are functionally identical, due to the many standards and ps implementations that this ps is compatible with. Note that ps -aux is distinct from ps aux. The POSIX and UNIX standards require that ps -aux print all processes owned by a user named "x", as well as printing all processes that would be selected by the -a option. If the user named "x" does not exist, this ps may interpret the command as ps aux instead and print a warning. This behavior is intended to aid in transitioning old scripts and habits. It is fragile, subject to change, and thus should not be relied upon. The flags are different, but can be combined. Typically you’d pick one though, e.g. either ps aux or ps -ef to see details of all processes, not a mixture. The only x flag is the BSD one, so ps x and ps -x produce the same result; but that doesn’t work for flags defined in both variants. All this is specific to procps and procps-ng. The equivalence of ps x and ps -x is the result of a “second chance” parsing stage which is invoked if a first pass doesn’t fully parse all the arguments; this isn’t documented in the manpage but is mentioned in the HACKING file in the source code: Unless the personality forces BSD parsing, parser.c tries to parse the command line as a mixed BSD+SysV+Gnu mess. On failure, BSD parsing is attempted. If BSD parsing fails after SysV parsing has been attempted, the error message comes from the original SysV parse.
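The ambiguity the manpage warns about comes purely from the leading dash: the same letters are read as a different option style depending on it. A toy demonstration of that parsing distinction (this mimics the `ps aux` / `ps -aux` warning above; it does not reimplement ps):

```shell
# Classify an argument the way procps distinguishes option styles.
style() {
    case $1 in
        --*) echo "GNU long option" ;;
        -*)  echo "UNIX option (dash)" ;;
        *)   echo "BSD option (no dash)" ;;
    esac
}
style aux        # BSD: all processes with a terminal, plus x/u extras
style -aux       # UNIX: -a -u x, where -u expects a user name
style --forest   # GNU long option
```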
Confused about the meaning of Unix vs. BSD flags format for the "ps" command
1,461,445,157,000
Multiple sessions of the same user. When one of them gets to the point that it can no longer run new programs, none of them can, not even a new login of that user. Other users can still run new programs just fine, including new logins. Normally user limits are in limits.conf, but its documentation says "please note that all limit settings are set per login. They are not global, nor are they permanent; existing only for the duration of the session." I'm nowhere close to running out of ram (44GB available), but I can't figure out what else to look at. What limits exist that would have a global effect on all sessions using the same UID, but not other UIDs? Edited on 6/12/16 at 8:45p to add: While writing the below I realized that the problem could be X11 related. This user account on this box is used nearly exclusively for GUI applications. Is there a good text based program I can try to run from bash that will use lots of resources and give good error messages? The box does not get to the point where it cannot even run ls. Unfortunately, the GUI programs this problem normally affects (Chrome and Firefox) do not do a good job of leaving error messages behind. Chrome tabs will start showing up blank or with the completely useless "Aw, Snap!" error. Firefox simply will refuse to start. 
The only even partially helpful error messages I managed to obtain came from trying to start Firefox from bash:

    [pascal@firefox ~]$ firefox --display=:0 --safe-mode
    Assertion failure: ((bool)(__builtin_expect(!!(!NS_FAILED_impl(rv)), 1))) && thread (Should successfully create image decoding threads), at /builddir/build/BUILD/firefox-45.2.0/firefox-45.2.0esr/image/DecodePool.cpp:359
    #01: ???[/usr/lib64/firefox/libxul.so +0x10f2165]
    #02: ???[/usr/lib64/firefox/libxul.so +0xa2dd2c]
    #03: ???[/usr/lib64/firefox/libxul.so +0xa2ee29]
    #04: ???[/usr/lib64/firefox/libxul.so +0xa2f4c1]
    #05: ???[/usr/lib64/firefox/libxul.so +0xa3095d]
    #06: ???[/usr/lib64/firefox/libxul.so +0xa52d44]
    #07: ???[/usr/lib64/firefox/libxul.so +0xa4c051]
    #08: ???[/usr/lib64/firefox/libxul.so +0x1096257]
    #09: ???[/usr/lib64/firefox/libxul.so +0x1096342]
    #10: ???[/usr/lib64/firefox/libxul.so +0x1dba68f]
    #11: ???[/usr/lib64/firefox/libxul.so +0x1dba805]
    #12: ???[/usr/lib64/firefox/libxul.so +0x1dba8b9]
    #13: ???[/usr/lib64/firefox/libxul.so +0x1e3e6be]
    #14: ???[/usr/lib64/firefox/libxul.so +0x1e48d1f]
    #15: ???[/usr/lib64/firefox/libxul.so +0x1e48ddd]
    #16: ???[/usr/lib64/firefox/libxul.so +0x20bf7bc]
    #17: ???[/usr/lib64/firefox/libxul.so +0x20bfae6]
    #18: ???[/usr/lib64/firefox/libxul.so +0x20bfe5b]
    #19: ???[/usr/lib64/firefox/libxul.so +0x21087cd]
    #20: ???[/usr/lib64/firefox/libxul.so +0x2108cd2]
    #21: ???[/usr/lib64/firefox/libxul.so +0x210aef4]
    #22: ???[/usr/lib64/firefox/libxul.so +0x22578b1]
    #23: ???[/usr/lib64/firefox/libxul.so +0x228ba43]
    #24: ???[/usr/lib64/firefox/libxul.so +0x228be1d]
    #25: XRE_main[/usr/lib64/firefox/libxul.so +0x228c073]
    #26: ???[/usr/lib64/firefox/firefox +0x4c1d]
    #27: ???[/usr/lib64/firefox/firefox +0x436d]
    #28: __libc_start_main[/lib64/libc.so.6 +0x21b15]
    #29: ???[/usr/lib64/firefox/firefox +0x449d]
    #30: ??? (???:???)
    Segmentation fault
    [pascal@firefox ~]$ firefox --display=:0 --safe-mode -g
    1465632860286 DeferredSave.extensions.json WARN Write failed: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9
    1465632860287 addons.xpi-utils WARN Failed to save XPI database: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9
    1465632860288 addons.xpi-utils WARN Failed to save XPI database: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9
    1465632860289 addons.xpi-utils WARN Failed to save XPI database: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9
    1465632860289 addons.xpi-utils WARN Failed to save XPI database: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9
    1465632860290 addons.xpi-utils WARN Failed to save XPI database: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9
    1465632860358 DeferredSave.addons.json WARN Write failed: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9
    1465632860359 addons.repository ERROR SaveDBToDisk failed: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9
    Segmentation fault
    [pascal@firefox ~]$

    [pascal@localhost ~]$ ulimit -aH
    core file size (blocks, -c) unlimited
    data seg size (kbytes, -d) unlimited
    scheduling priority (-e) 0
    file size (blocks, -f) unlimited
    pending signals (-i) 579483
    max locked memory (kbytes, -l) 64
    max memory size (kbytes, -m) unlimited
    open files (-n) 65536
    pipe size (512 bytes, -p) 8
    POSIX message queues (bytes, -q) 819200
    real-time priority (-r) 0
    stack size (kbytes, -s) unlimited
    cpu time (seconds, -t) unlimited
    max user processes (-u) 579483
    virtual memory (kbytes, -v) unlimited
    file locks (-x) unlimited
    [pascal@localhost ~]$ ulimit -a
    core file size (blocks, -c) 0
    data seg size (kbytes, -d) unlimited
    scheduling priority (-e) 0
    file size (blocks, -f) unlimited
    pending signals (-i) 579483
    max locked memory (kbytes, -l) 64
    max memory size (kbytes, -m) unlimited
    open files (-n) 32768
    pipe size (512 bytes, -p) 8
    POSIX message queues (bytes, -q) 819200
    real-time priority (-r) 0
    stack size (kbytes, -s) 8192
    cpu time (seconds, -t) unlimited
    max user processes (-u) 4096
    virtual memory (kbytes, -v) unlimited
    file locks (-x) unlimited
    [pascal@localhost ~]$ set /proc/*/task/*/cwd/.; echo $#
    306
    [pascal@localhost ~]$ prlimit
    RESOURCE   DESCRIPTION                        SOFT      HARD      UNITS
    AS         address space limit                unlimited unlimited bytes
    CORE       max core file size                 0         unlimited blocks
    CPU        CPU time                           unlimited unlimited seconds
    DATA       max data size                      unlimited unlimited bytes
    FSIZE      max file size                      unlimited unlimited blocks
    LOCKS      max number of file locks held      unlimited unlimited
    MEMLOCK    max locked-in-memory address space 65536     65536     bytes
    MSGQUEUE   max bytes in POSIX mqueues         819200    819200    bytes
    NICE       max nice prio allowed to raise     0         0
    NOFILE     max number of open files           32768     65536
    NPROC      max number of processes            4096      579483
    RSS        max resident set size              unlimited unlimited pages
    RTPRIO     max real-time priority             0         0
    RTTIME     timeout for real-time tasks        unlimited unlimited microsecs
    SIGPENDING max number of pending signals      579483    579483
    STACK      max stack size                     8388608   unlimited bytes

Edited on 6/13/16 at 10:24p to add:

Not a GUI problem. When I tried to su to the user today, that doesn't even work. Root is fine. I can ls, vi, create a new user, su to that user; everything works fine for that user. I exit and try to su to the problem user and no go. Bash kinda loaded the first time, but even exit didn't work. I had to reconnect to get back to root.

    [root@firefox ~]# su - pascal
    Last login: Sat Jun 11 03:08:47 CDT 2016 on pts/1
    -bash: fork: retry: No child processes
    -bash: fork: retry: No child processes
    -bash: fork: retry: No child processes
    -bash: fork: retry: No child processes
    -bash: fork: Resource temporarily unavailable
    -bash-4.2$ ls
    -bash: fork: retry: No child processes
    -bash: fork: retry: No child processes
    -bash: fork: retry: No child processes
    -bash: fork: retry: No child processes
    -bash: fork: Resource temporarily unavailable
    -bash: fork: retry: No child processes
    -bash: fork: retry: No child processes
    -bash: fork: retry: No child processes
    -bash: fork: retry: No child processes
    -bash: fork: Resource temporarily unavailable
    -bash-4.2$ exit
    logout
    -bash: fork: retry: No child processes
    -bash: fork: retry: No child processes
    -bash: fork: retry: No child processes
    -bash: fork: retry: No child processes
    -bash: fork: Resource temporarily unavailable
    -bash-4.2$
    [root@firefox ~]# ls -l /
    total 126
    lrwxrwxrwx.  1 root root  7 Jan 28 23:53 bin -> usr/bin
    ---- snip ----
    drwxr-xr-x. 19 root root 23 May 27 18:03 var
    [root@firefox ~]# vi /etc/rc.local
    [root@firefox ~]# useradd test
    [root@firefox ~]# su - test
    [test@firefox ~]$ cd
    [test@firefox ~]$ ls -l
    total 0
    [test@firefox ~]$ ls -l /
    total 126
    lrwxrwxrwx.  1 root root  7 Jan 28 23:53 bin -> usr/bin
    ---- snip ----
    drwxr-xr-x. 19 root root 23 May 27 18:03 var
    [test@firefox ~]$ vi /etc/rc.local
    [test@firefox ~]$ exit
    logout
    [root@firefox ~]# su - pascal
    Last login: Mon Jun 13 22:12:12 CDT 2016 on pts/1
    su: failed to execute /bin/bash: Resource temporarily unavailable
    [root@firefox ~]#
nproc was the problem:

    [root@localhost ~]# ps -eLf | grep pascal | wc -l
    4068
    [root@localhost ~]# cat /etc/security/limits.d/20-nproc.conf
    # Default limit for number of user's processes to prevent
    # accidental fork bombs.
    # See rhbz #432903 for reasoning.
    *          soft    nproc     4096
    root       soft    nproc     unlimited
    [root@localhost ~]#

man limits.conf states:

    Also, please note that all limit settings are set per login. They are
    not global, nor are they permanent; existing only for the duration of
    the session. One exception is the maxlogin option, this one is system
    wide. But there is a race, concurrent logins at the same time will not
    always be detected as such but only counted as one.

It appears to me that nproc is only enforced per login but counts globally. So a login with nproc 8192 and 5000 threads would have no problems, but a simultaneous login of the same UID with nproc 4096 and 50 threads would not be able to create more because the global count (5050) is above its nproc setting.

    [root@localhost ~]# ps -eLf | grep pascal | grep google/chrome | wc -l
    3792
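The `ps -eLf | wc -l` count above can also be derived from /proc directly, since nproc (RLIMIT_NPROC) is checked against the total number of threads owned by the UID. A sketch of that tally (assumes a Linux /proc; the variable names are illustrative):

```shell
# Sum the Threads: field from /proc/<pid>/status for processes owned
# by the current UID -- this is the number compared against nproc.
uid=$(id -u)
total=0
for status in /proc/[0-9]*/status; do
    # processes may exit while we loop; skip unreadable entries
    owner=$(awk '/^Uid:/ {print $2}' "$status" 2>/dev/null) || continue
    [ "$owner" = "$uid" ] || continue
    threads=$(awk '/^Threads:/ {print $2}' "$status" 2>/dev/null)
    total=$((total + ${threads:-0}))
done
echo "threads owned by uid $uid: $total"
```

If that total is at or above the soft nproc limit of any concurrent login of the same UID, new forks in that login fail with EAGAIN, exactly the `-bash: fork: Resource temporarily unavailable` seen in the question.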
How can I tell which user limit I am running into?
1,461,445,157,000
You know when you have a pdf, which is a scan of a document and it's a really huge file, because it just stores the picture of the scanned document? And there are OCR tools which can help you to make a proper document which just stores the text? Well, I need the reverse of that! Let's say I have a perfect pdf document generated with pdflatex and I need to turn it into such a "huge" pdf, which looks exactly the same when printed on paper (with a certain dpi value), but is just a picture of the original. My initial idea is to turn the pdf into a series of JPGs and then back into a PDF, but perhaps there is some canonical way for that? In case you wonder why I would want to do such a thing: I'm currently stuck with a network printer, which is not maintained by me, and which randomly drops characters in printed files! So until someone figures out what's wrong there, I want this as workaround.
You could test out if image based PDF's are polluted as well. First convert PDF to (multipage) TIFF, e.g. with ghostscript: gs -sDEVICE=tiffg4 -o sample.tif sample.pdf Then convert the TIFF to PDF, e.g.: tiff2pdf -z -f -F -pA4 -o sample-img.pdf sample.tif This result in a PDF file where the pages are images instead of text. Alternatively, if your system supports printing of TIFF files try to print it directly. There is also the option of pdf2ps for converting PDF to PS, which if works, would likely be preferable.
How can I rasterize all of the text in a PDF?
1,461,445,157,000
I have been reading up on OnCalendar=. Sadly, I found no info on how to schedule an event other than by defining a day of the week, which would make it run weekly at the rarest. I need it to run every, say, 14 days, and at a specified hour (say, 4am). Is this possible with systemd?
I need it to run every, say, 14 days. And at a specified hour (say, 4am). Is this possible with systemd?

The easiest way to get approximately every 14 days is to make it twice a month.

    [Install]
    WantedBy=default.target

    [Unit]
    Description=Every fortnight.

    [Timer]
    OnCalendar=*-*-1,15 4:00:00
    Unit=whatever.service

That syntax is explained in man systemd.timer; *-*-1,15 is the 1st and the 15th of every month of every year.

If you wanted to try for exactly every fourteen days from when the service started:

    [Timer]
    OnActiveSec=14d

But there's a catch here: I think you'd have to have the system up the whole time. There is a Persistent option to have "the time when the service unit was last triggered...stored on disk" but according to the man page "this setting only has an effect on timers configured with OnCalendar".
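Since Persistent= works with OnCalendar= timers (as the man page quote above notes), the twice-monthly variant can be made resilient to the machine being off at 04:00. A sketch of the complete timer unit, with placeholder names:

```ini
# /etc/systemd/system/whatever.timer  (sketch; "whatever" is a placeholder)
[Unit]
Description=Twice a month at 04:00, catching up after downtime

[Timer]
OnCalendar=*-*-1,15 4:00:00
Persistent=true        ; if the 04:00 slot was missed, fire at next boot
Unit=whatever.service

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now whatever.timer; the matching whatever.service then needs no [Install] section of its own.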
systemd timer every X days at 04:00
1,461,445,157,000
I'm looking to download a Linux kernel to get to know how to modify it and how to compile it. I am using the Debian distribution and I'm interested in the Debian-modified Linux kernel rather than in the vanilla kernel from kernel.org. Doing some research I found out there are mainly two ways of achieving this purpose:

1. Install the source package (i.e. apt-get install linux-source-3.19)
2. Download the source from the binary package (i.e. apt-get source linux-image-3.19.0-trunk-amd64)

The first option will download the source tarball into /usr/src/linux-source-3.19.tar.xz and the latter will download a source tarball (linux_3.19.1.orig.tar.xz), a patch (linux_3.19.1-1~exp1.debian.tar.xz) and a description file (linux_3.19.1-1~exp1.dsc). The latter will also unpack and extract everything into a 'linux-3.19.1' directory. At first I thought both versions would result in the same code, as they have the same kernel version and patch level (based on the report of the apt-cache command). However, the diff command reported differences when comparing the unpacked source from apt-get install with the unpacked source from apt-get source (for both patched and non-patched code).
When comparing apt-get install with apt-get source:

    $ diff -rq apt-get-install/ apt-get-source/ | wc -l
    253
    $ diff -rq apt-get-install/ apt-get-source/ | grep "Only in"
    Only in apt-get-install/arch/arm/boot/dts: sun7i-a20-bananapro.dts
    Only in apt-get-install/arch/s390/include/asm: cmb.h.1
    Only in apt-get-install/drivers/dma-buf: reservation.c.1
    Only in apt-get-install/drivers/dma-buf: seqno-fence.c.1
    Only in apt-get-install/drivers/gpu/drm/i915: i915_irq.c.1
    Only in apt-get-install/drivers/scsi: constants.c.1
    Only in apt-get-install/drivers/usb/gadget/function: f_acm.c.1
    Only in apt-get-install/drivers/usb/gadget/function: f_ecm.c.1
    Only in apt-get-install/drivers/usb/gadget/function: f_obex.c.1
    Only in apt-get-install/drivers/usb/gadget/function: f_serial.c.1
    Only in apt-get-install/drivers/usb/gadget/function: f_subset.c.1
    Only in apt-get-install/include/linux: reservation.h.1
    Only in apt-get-install/kernel: sys.c.1
    Only in apt-get-install/lib: crc32.c.1
    Only in apt-get-install/sound/soc: soc-cache.c.1

And when comparing apt-get install with apt-get source (+ patch):

    $ diff -rq apt-get-install/ apt-get-source+patch/
    Only in apt-get-install/arch/s390/include/asm: cmb.h.1
    Only in apt-get-source+patch/: debian
    Only in apt-get-install/drivers/dma-buf: reservation.c.1
    Only in apt-get-install/drivers/dma-buf: seqno-fence.c.1
    Only in apt-get-install/drivers/gpu/drm/i915: i915_irq.c.1
    Only in apt-get-install/drivers/scsi: constants.c.1
    Only in apt-get-install/drivers/usb/gadget/function: f_acm.c.1
    Only in apt-get-install/drivers/usb/gadget/function: f_ecm.c.1
    Only in apt-get-install/drivers/usb/gadget/function: f_obex.c.1
    Only in apt-get-install/drivers/usb/gadget/function: f_serial.c.1
    Only in apt-get-install/drivers/usb/gadget/function: f_subset.c.1
    Only in apt-get-install/include/linux: reservation.h.1
    Only in apt-get-install/kernel: sys.c.1
    Only in apt-get-install/lib: crc32.c.1
    Only in apt-get-source+patch/: .pc
    Only in apt-get-install/sound/soc: soc-cache.c.1

I've found some links where both methods are mentioned but I couldn't get anything clear from those:

https://kernel-handbook.alioth.debian.org/ch-common-tasks.html#s-common-official
https://help.ubuntu.com/community/Kernel/Compile (Option B vs Alternate option B)

I would really appreciate it if someone could tell me the differences and advise me which is the preferred option. Thank you.
In Debian terminology, when you run apt-get source linux-image-3.19.0-trunk-amd64 (or the equivalent apt-get source linux), you're actually downloading and extracting the source package. This contains the upstream code (the kernel source code downloaded from kernel.org) and all the Debian packaging, including patches added to the kernel by the Debian kernel team.

When you run apt-get install linux-source-3.19 you're actually installing a binary package which happens to contain the source code of the Linux kernel with the Debian patches applied and none of the Debian packaging infrastructure. The source package's name is just linux; apt-get source will convert any binary package name it is given into the corresponding source package name.

By the way, since experimental packages aren't upgraded automatically, you should make sure you've updated your copy of linux-source-3.19 and re-extracted it before comparing; the .dts file you're seeing in your diff was introduced in the latest update. The packages currently in the archive all contain this file.

The remaining differences are pretty much normal: as has been indicated in the comments, debian contains all the packaging and is only in the source package, .pc is used by quilt to keep track of the original files modified by patches, and is also only in the source package, and the .1 files are generated manpages, probably a side-effect of the kernel build, and therefore only appear in the binary package (but they shouldn't really be there).

The reference package is the source package, as obtained by apt-get source. This builds all the kernel binary packages, including linux-source-3.19 which you install with apt-get install. The latter is provided as a convenience for other packages which may need the kernel source; it's guaranteed to be in the same place all the time, unlike the source package which is just downloaded in the current directory at the time apt-get source is run.
As far as documentation goes, I'd follow the Debian documentation in the kernel handbook (section 4.5). Rebuilding the full Debian kernel as documented in section 4.2 which you linked to takes a very long time because it builds a number of variants.
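The "remaining differences are packaging only" point can be checked by excluding the packaging directories from the diff. A toy reproduction with made-up trees (the `demo` paths and file contents are invented for illustration; on real trees you would diff the extracted source package against the unpacked linux-source tarball):

```shell
# Two trees that differ only in Debian packaging files...
demo=$(mktemp -d)
mkdir -p "$demo/src-pkg/debian" "$demo/src-pkg/.pc" "$demo/tarball"
echo 'int main(void){return 0;}' > "$demo/src-pkg/main.c"
cp "$demo/src-pkg/main.c" "$demo/tarball/main.c"
echo 'packaging rules' > "$demo/src-pkg/debian/rules"

# ...compare identical once debian/ and .pc/ are excluded.
same=no
if diff -rq -x debian -x .pc "$demo/src-pkg" "$demo/tarball" >/dev/null; then
    same=yes
fi
echo "identical outside packaging dirs: $same"
rm -r "$demo"
```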
Get kernel source: apt-get install vs apt-get source
1,461,445,157,000
I'm using Debian and I want to remap my keyboard because it has some problem. I googled and found xmodmap. But it doesn't work in graphicless mode, like tty1.
Linux uses two independent keyboard mappings: one for the graphical mode (X) and one for the console. You usually change the first one with setxkbmap (or xmodmap) and the second one with loadkeys. All those tools have fine manpages. For loadkeys you can find the existing keymaps under /usr/share/kbd/keymaps. The description of those files is available in man 5 keymaps.
Remap keyboard on the Linux console
1,461,445,157,000
Debian on an external USB SSD drive. There was some error in the dmesg log:

    ...
    [    3.320718] EXT4-fs (sdb2): INFO: recovery required on readonly filesystem
    [    3.320721] EXT4-fs (sdb2): write access will be enabled during recovery
    [    5.366367] EXT4-fs (sdb2): orphan cleanup on readonly fs
    [    5.366375] EXT4-fs (sdb2): ext4_orphan_cleanup: deleting unreferenced inode 6072
    [    5.366426] EXT4-fs (sdb2): ext4_orphan_cleanup: deleting unreferenced inode 6071
    [    5.366442] EXT4-fs (sdb2): 2 orphan inodes deleted
    [    5.366444] EXT4-fs (sdb2): recovery complete
    ...

The system boots and works normally. Is it possible to repair this fully, and what is the proper way?
You can instruct the filesystem to perform an immediate fsck upon being mounted, like so:

Method #1: Using /forcefsck

You can usually schedule a check at the next reboot like so:

    $ sudo touch /forcefsck
    $ sudo reboot

Method #2: Using shutdown

You can also tell the shutdown command to do so, via the -F switch:

    $ sudo shutdown -rF now

NOTE: The first method is the most universal way to achieve this!

Method #3: Using tune2fs

You can also make use of tune2fs, which can set parameters on the filesystem itself to force a check the next time a mount is attempted.

    $ sudo tune2fs -l /dev/sda1
    Mount count:              3
    Maximum mount count:      25

So you have to place the "Mount count" higher than 25 with the following command:

    $ sudo tune2fs -C 26 /dev/sda1

Check that the value changed with tune2fs -l and then reboot!

NOTE: Of the 3 options I'd use tune2fs, given it can deal with force-checking any filesystem, whether it's the primary (/) or some other.

Additional notes

You'll typically see the "Maximum mount count:" and "Check interval:" parameters associated with a partition that's been formatted as ext2/3/4. Oftentimes they're configured like so:

    $ tune2fs -l /dev/sda5 | grep -E "Mount count|Maximum mount|interval"
    Mount count:              178
    Maximum mount count:      -1
    Check interval:           0 (<none>)

When the parameters are set this way, the device will never perform an fsck during mounting. This is fairly typical with most distros. There are 2 forces that drive a check: either a number of mounts or an elapsed time. The "Check interval" is the time-based one. You can say every 2 weeks with that argument, 2w. See the tune2fs man page for more info.

NOTE: Also make sure to understand that tune2fs is a filesystem command, not a device command. So it doesn't work with just any old device such as /dev/sda; unless there's an ext2/3/4 filesystem there, the command tune2fs is meaningless. It has to be used against a partition that's been formatted with one of those types of filesystems.

References

Linux Force fsck on the Next Reboot or Boot Sequence
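The mount-count trigger behind Method #3 is simple arithmetic, which a pure-shell model can illustrate without touching any device (the `will_check` function is an illustrative sketch of the rule, roughly: a check fires when the mount count has reached a positive maximum, while -1 or 0 disables the count-based check):

```shell
# Model of the ext2/3/4 mount-count check; no real filesystem involved.
will_check() {  # will_check <mount_count> <max_mount_count>
    mounts=$1 max=$2
    if [ "$max" -gt 0 ] && [ "$mounts" -ge "$max" ]; then
        echo "fsck at next mount"
    else
        echo "no check"
    fi
}
will_check 3 25     # the answer's starting state: 3 mounts, max 25
will_check 26 25    # after: tune2fs -C 26 /dev/sda1
will_check 178 -1   # typical distro default: count check disabled
```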
How to repair a file system corruption?
1,406,248,737,000
I have been trying for a while to view files, hidden by a mount on my device sporting Debian 6, to no avail, and being new to Linux, I am compelled to ask the question: How do you view files hidden by a mount on Debian 6?

I have gone over the many duplicates I came across as I was drafting this question the first 1 or 10 times and the following answers did not help in my case:

Answer to "Access to original contents of mount point"
Answer to "Where do the files go if you mount a drive to a folder that already contains files?"
Answer to "What happened to the old files if we mount a drive to the directory? [duplicate]"

I also found this, but it was a little intimidating to try that with my limited knowledge of what I am even doing. I also asked Linux users around me, who all (both) say that it's impossible to see my files without umounting.

So just to make things clear, this is what I am working with:

    /tmp # mkdir FOO
    /tmp # cd FOO/
    /tmp/FOO # touch abc
    /tmp/FOO # cd
    ~ # mount /dev/sda1 /tmp/FOO/
    ~ # ls /tmp/FOO/
    bbb
    ~ # cd /tmp/
    /tmp # mkdir BAR
    /tmp # cd
    ~ # mount --bind / /tmp/BAR
    ~ # cd /tmp/BAR/
    /tmp/BAR # ls
    bin      etc      lib      media    proc     sbin     sys      usr
    dev      home     linuxrc  mnt      root     selinux  tmp      var
    /tmp/BAR # cd tmp/
    /tmp/BAR/tmp # ls
    /tmp/BAR/tmp #

@John1024:

    ~ # mount | grep /tmp/
    /dev/sda1 on /tmp/FOO type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=cp932,iocharset=cp932,errors=remount-ro)
    /dev/root on /tmp/BAR type jffs2 (ro,relatime)

What and to where should I mount --bind to see the files that were in /tmp/somefolder? Could the functionality of the solutions linked above be related to what build of Debian I am using?

Edit: For clarification, these are some of the commands I tried:

    ~ # mount --bind /tmp/somefolder /tmp/anotherfolder
    ~ # mount --bind / /tmp/anotherfolder
    ~ # mount --bind /dev/sda1 /tmp/anotherfolder
As I understand it, you want to see the files, if any, hidden by the mount /dev/sda1 /tmp/somefolder command. Assuming that /tmp is part of the / filesystem, run: mount --bind / /tmp/anotherfolder ls /tmp/anotherfolder/tmp/somefolder If /tmp is not part of / but is a separate filesystem, run: mount --bind /tmp /tmp/anotherfolder ls /tmp/anotherfolder/somefolder
How to view files hidden by a mount on Debian 6
1,406,248,737,000
I read this from here: The most useful combination is the Alt+SysRq/Prnt Scrn + R-E-I-S-U-B. The above basically means that while you press and hold Alt+SysRq/Prnt Scrn and press R, E, I, S, U, B giving sufficient time between each of these keys to ensure they perform the required job. My question is: How long should I wait to ensure "sufficient time" between each of these keys?
Forget about REISUB. I don't know who invented this, but it's overly complicated: half the steps are junk. If you're going to unmount and reboot, you only need two steps: U and B. At most three steps: E, U, B.

Alt+SysRq+R resets the keyboard mode to cooked mode (where typing a character inserts that character). That's useful if a program died and left the console in raw mode. If you're going to reboot immediately, it's pointless.

Alt+SysRq+E and Alt+SysRq+I kill processes. E sends processes the SIGTERM signal, which causes some programs to save their state (but few do this). If you do E, there's no fixed delay: typically, after a few seconds, either the program has done what it was going to do or it won't do it. I sends processes the SIGKILL signal, which leaves the system unusable (only init is still running) and is pointless anyway if you're going to reboot immediately.

Alt+SysRq+S synchronizes the file contents that are not yet written to disk. U does that as its first step, so doing S before U is pointless.

Alt+SysRq+U remounts filesystems read-only. If you can see the console, wait until the message Emergency Remount complete. Otherwise, wait until disk activity seems to have died down.

Finally, Alt+SysRq+B reboots the system without doing anything else, not even flushing disk buffers (so you'd better have done that beforehand, preferably as part of Alt+SysRq+U, which marks disks as cleanly unmounted).
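For completeness, the same requests can also be issued through procfs when a SysRq keyboard combination is unavailable (for example over SSH or a serial console). The sketch below only reads the current SysRq setting; the two writes are shown commented out, since the second one reboots the machine on the spot:

```shell
# Check whether the SysRq interface is enabled at all
# (0 disables it, 1 enables everything, other values are a bitmask):
cat /proc/sys/kernel/sysrq

# As root, each write triggers the corresponding SysRq action immediately:
# echo u > /proc/sysrq-trigger   # sync and remount all filesystems read-only
# echo b > /proc/sysrq-trigger   # reboot without any further cleanup
```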
How long should I wait between keystrokes when doing SysRq + REISUB?
1,406,248,737,000
After I use Jack, the PulseAudio outputs and inputs are replaced by a dummy device. I've tried to kill PulseAudio and reload Alsa, but the only way I can use an Alsa-based application again is to reboot. I know that there must be a way to fix the problem without rebooting. I have had this problem in multiple Linux distros, including Ubuntu and currently Fedora 19. Output of service alsa-utils restart: Redirecting to /bin/systemctl restart alsa-utils.service Failed to issue method call: Unit alsa-utils.service failed to load: No such file or directory. See system logs and 'systemctl status alsa-utils.service' for details. And systemctl status alsa-utils.service: alsa-utils.service Loaded: error (Reason: No such file or directory) Active: inactive (dead) alsactl kill quit and alsactl init proceed with no errors.
The solution turned out to be simpler than it appeared. The output of fuser -v /dev/snd/* revealed jackd was silently hogging the audio card even after QjackCtl supposedly killed it. Running killall jackd fixed the problem. The problem wasn't with PulseAudio, but rather jackd running invisibly in the background.
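A sketch of that diagnosis and fix as a command sequence (restarting PulseAudio at the end is my assumption about a typical per-user setup, not part of the original fix):

```shell
# See which processes still hold the ALSA devices open:
fuser -v /dev/snd/*
# Kill the leftover JACK daemon that QjackCtl failed to stop:
killall jackd
# Optionally restart the per-user PulseAudio daemon so it re-detects the cards:
pulseaudio -k
pulseaudio --start
```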
How to restart Alsa/PulseAudio after using Jack
1,406,248,737,000
I'm trying to decide between "jailing" certain applications and I know the trade-offs of KVM versus LXC and how I can use them both. Lately I came across UML (User-Mode Linux) again and was wondering how it compares with respect to security and resource consumption (or overhead, if you will). Where can I find a comparison like that, or does anyone here know how they compare? Basically: what is the disk I/O and CPU overhead? how strict is the separation and how secure is the host from what's going on in the guest?
Best Disk I/O: LXC > KVM > UML. No overhead to speak of with LXC, KVM adds a layer of indirection so it will be slower (but you could also use it with raw disks), UML will be much slower. Least CPU overhead: LXC > KVM > UML. No overhead to speak of with LXC, small overhead with KVM, bigger overhead with UML. Strict separation and security: UML > KVM > LXC. Contrary to the statements above by krowe, if you want security above all else, UML is the way to go. You can run the UML kernel process as a totally unprivileged user, in a restricted chrooted environment, with any hardening you want on top. Escaping from the VM would require finding a kernel bug first, and even then, at best you end up with the privileges of a normal user process on the host. Now, if you care about performance... KVM is a much better option. LXC will give you the best performance, but is also the least secure of the 3.
Which one is lighter security- and CPU-wise: LXC versus UML
1,406,248,737,000
CentOS / RHEL 6 I recently learned that there's a ifcfg directive called IPV4_FAILURE_FATAL exists for use in the files located here: /etc/sysconfig/networking-scripts/ifcfg-*. But I'm having a difficult time finding information about it. What does it do? Under what circumstances would I ever want it set to "yes"?
From the Fedora Project's wiki page on Anaconda Networking: If both IPv4 and IPv6 configuration is enabled, failing IPv4 configuration of activated device means that activation is considered as failing overall (which corresponds to Require IPv4 addressing for this connection to complete checked in nm-c-e or IPV4_FAILURE_FATAL=yes in ifcfg file). Put another way: if a connection is set up for both IPv4 and IPv6 and this option is set to yes, activation of the connection will be reported as failed when IPv4 setup fails, even if IPv6 setup succeeds.
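As a purely hypothetical illustration (the interface name and addresses below are made up, not taken from the question), a dual-stack ifcfg file using the directive might look like:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- illustrative values only
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.0.2.10
PREFIX=24
IPV6INIT=yes
IPV6ADDR=2001:db8::10/64
# Treat the whole connection as failed if IPv4 setup fails, even if IPv6 succeeds:
IPV4_FAILURE_FATAL=yes
```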
What is the IPV4_FAILURE_FATAL ifcfg directive and under what scenarios would I want to use it?
1,406,248,737,000
I'm working on a curses GUI that is supposed to start up automatically on boot-up in the default linux terminal (I have no X server installed). I have this working great, but I have a problem where shortly after my curses application starts, the OS will dump some information to the terminal, which messes up my GUI. Something about "read-ahead cache" pops up every time. I have also seen messages displayed when I insert a USB flash drive or some other device. Is there a way to prevent these messages from being sent to /dev/tty1?
You can use the command dmesg -n1 to prevent all messages, except panic messages, from appearing on the console. To make this change permanent, modify your /etc/sysctl.conf file to include the following setting (the first value, 3, is the important part):
kernel.printk = 3 4 1 3
See this post for information on the kernel.printk values.
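To check the current setting, and to apply a new one without a reboot (the sysctl -w line needs root, so it is shown commented out; this is just a sketch):

```shell
# Current values of kernel.printk (the first field is the console log level):
cat /proc/sys/kernel/printk

# As root, apply the new console log level immediately, without rebooting:
# sysctl -w kernel.printk="3 4 1 3"
```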
How do I prevent system information from being displayed on a terminal?
1,406,248,737,000
I have a couple of large disks with backup/archive material on them. They're ext4. Regarding the ones of those that will be stored for a couple of years without reading the whole disc again I've been thinking of a way to refresh the disks magnetic state. Shelf life of drives seems to be a matter of debate everywhere I've been looking for an answer, but it seems after a couple of years (say 5 or so) of storage it would be wise to refresh the data in some way (?) I've seen this suggested: dd if=/dev/sda of=/dev/sda Is it safe? Is it useful? What I'm looking to do is another thing than a fsck or a dd if=/dev/sda of=/dev/null, both of which will probably discover existing magnetic drop outs on the disk. What I want to do is to refresh the magnetic data before the magnetic charges on the disk lowers below a readable level. How can I do this?
Generally you can't really refresh the whole disk without reading/writing all of it. fsck is unlikely to provide what you need - it works with the file system, not the underlying device, hence it mostly just scans file system metadata (inodes and other file system structures). badblocks -n might be an alternative to dd if=X of=X. In any case you probably want to use large blocks to speed things up (for dd something like bs=16M; for badblocks this would read -b 16777216, or -b $((1<<24)) in reasonable shells). You'll probably also want to use conv=fsync with dd. As for the safety of dd with the same input and output device - it reads a block from the input and writes it to the output, so it should be safe (I have re-encrypted an encrypted partition like this on several occasions, by creating loop devices with the same underlying device and different passwords and then dd'ing from one to the other) - at least for some types of physical media: for example with shingled drives it is definitely not obvious to me that it is 100% failure-proof.
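Putting those options together, a sketch of the two approaches (/dev/sdX is a placeholder for the actual archive disk; double-check the device name before running either command, and expect both to take many hours on multi-terabyte drives):

```shell
# Non-destructive read-write pass: reads every block and writes the same
# data back, refreshing the whole surface (-s shows progress):
badblocks -n -s -b 16777216 /dev/sdX

# The dd variant, rewriting the device onto itself with large blocks:
dd if=/dev/sdX of=/dev/sdX bs=16M conv=fsync
```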
How do I refresh the magnetic state on disks with backups?
1,406,248,737,000
This is my first question in UNIX: I started with shell scripts 2 days ago. But I have a question: is a shell script a special programming language for a specific shell?
If you know any command, say ls, you type it and hit return, and the shell will know where that program (ls) is, invoke it, and show you the result. This is the "non-script" shell usage. But, if you order a couple of such commands in a sequence, say A, B, C, D, and then put that sequence in an executable file, you've got a program. The program has a name and location, so it can be referenced, and invoked; it has code, so it can be executed, one command at a time, by the CPU. (The program is not compiled - like, for example, C source would have been - and this makes it a script. But it is a program nonetheless.) That is, in some aspect, already at this point you are programming, because you are instructing the computer what to do. Furthermore, you are, again at a very basic level, also using a programming language, because you can't just type anything (and expect it to work); at the same time, for whatever you type, you'll get activity from the computer that corresponds exactly to what you wrote. There are rules for how you should say things, and there are rules for how the computer will react to those things. That said, with "programming", you typically associate somewhat more expressive power than just piling commands on top of each other. At the very least, you'd like branching (if ... then ... else), iteration (loops: while, for, etc.), and probably some other things as well. But that's OK, as scripting languages have those things and more. The shells have different languages, yes, but some may overlap to a great extent because of convention (why change a good way to say something?), or to be compatible with earlier versions (or some standard).
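To make the branching and iteration point concrete, here is a minimal sketch of such a script (the logic is invented purely for illustration):

```shell
#!/bin/sh
# Branching (if/else) and iteration (for) in a plain POSIX shell script
for n in 1 2 3; do
    if [ "$n" -eq 2 ]; then
        echo "found two"
    else
        echo "saw $n"
    fi
done
```

Saved to a file and marked executable with chmod +x, this becomes a program that can be named, referenced, and invoked like any other.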
Is shell script a programming language? [closed]
1,406,248,737,000
I am bind mounting a single file on top of another one and after making changes with an editor, I don't see the modifications in both files. However, if I make the changes with the shell using redirection, >>, e.g., I do see the changes in both files. Below is an example to demonstrate: First case: -bash-3.00# echo foo >| foo -bash-3.00# echo bar >| bar -bash-3.00# diff foo bar 1c1 < foo --- > bar -bash-3.00# mount --bind foo bar -bash-3.00# echo modified >> foo -bash-3.00# diff foo bar -bash-3.00# umount bar Everything in the above case is as I expect; the two files show no differences after appending "modified" to the file "foo". However, if I perform the same test but use vi to edit foo, I get a different result. Second case: -bash-3.00# echo foo >| foo -bash-3.00# echo bar >| bar -bash-3.00# diff foo bar 1c1 < foo --- > bar -bash-3.00# mount --bind foo bar -bash-3.00# diff foo bar -bash-3.00# vi foo # append "modified with vi" and :wq vi "foo" 2L, 21C written -bash-3.00# cat foo foo modified with vi -bash-3.00# cat bar foo -bash-3.00# diff foo bar 2d1 < modified with vi -bash-3.00# Here, the two files are different even though one is bind mounted onto the other. Anyone here know what is going on in this case? Thanks!
What is happening is that vi is creating a new file (inode) and, effectively, undoing the bind, even though the mount is still in place. Appending uses the existing file (inode). Take a look at the inode numbers of the files using ls -li as I step through your test(s). $ echo foo > foo $ echo bar > bar $ ls -li foo bar # 2 inodes so 2 different files 409617 -rw-r--r-- 1 derek derek 4 Jul 31 12:56 bar 409619 -rw-r--r-- 1 derek derek 4 Jul 31 12:56 foo $ sudo mount --bind foo bar $ ls -li foo bar # both inodes are the same so both reference the same file (foo) 409619 -rw-r--r-- 1 derek derek 4 Jul 31 12:56 bar 409619 -rw-r--r-- 1 derek derek 4 Jul 31 12:56 foo $ echo mod >> foo $ ls -li foo bar # appending doesn't change the inode 409619 -rw-r--r-- 1 derek derek 8 Jul 31 12:57 bar 409619 -rw-r--r-- 1 derek derek 8 Jul 31 12:57 foo $ vi foo $ ls -li foo bar # vi has created a new file called foo (new inode) # bar still points to the old foo 409619 -rw-r--r-- 0 derek derek 8 Jul 31 12:57 bar 409620 -rw-r--r-- 1 derek derek 14 Jul 31 12:57 foo $ sudo umount bar $ ls -li foo bar # umount uncovers the original bar. original foo has no references 409617 -rw-r--r-- 1 derek derek 4 Jul 31 12:56 bar 409620 -rw-r--r-- 1 derek derek 14 Jul 31 12:57 foo You need to think in terms of the underlying inodes rather than file names. What are you trying to do which couldn't be done with symlinks? I tried a variation and think you can do what you want. Take a look at the following... 
$ ls -li a/foo /mnt/c/foo 3842157 -rw-r--r-- 1 derek derek 17 Jul 31 19:45 a/foo 840457 -r--r--r-- 1 root root 6 Jul 31 19:41 /mnt/c/foo $ sudo mount --bind a/foo /mnt/c/foo $ ls -li a/foo /mnt/c/foo 3842157 -rw-r--r-- 1 derek derek 17 Jul 31 19:45 a/foo 3842157 -rw-r--r-- 1 derek derek 17 Jul 31 19:45 /mnt/c/foo $ vi /mnt/c/foo $ ls -li a/foo /mnt/c/foo 3842157 -rw-r--r-- 1 derek derek 22 Jul 31 20:02 a/foo 3842157 -rw-r--r-- 1 derek derek 22 Jul 31 20:02 /mnt/c/foo $ sudo umount /mnt/c/foo $ ls -li a/foo /mnt/c/foo 3842157 -rw-r--r-- 1 derek derek 22 Jul 31 20:02 a/foo 840457 -r--r--r-- 1 root root 6 Jul 31 19:41 /mnt/c/foo While a/foo was mounted on the read-only file /mnt/c/foo I could edit /mnt/c/foo and it changed the contents of a/foo without changing the inode.
single bind mounted file gets out of sync in linux
1,406,248,737,000
I created a cronjob; it runs for a very long time, but now I don't know how to stop it.
You should stop the process that the crontab started running:
# kill -HUP PID
(PID is the process ID of the running process.)
To see the PID alongside the running processes (and more info), use the top command; you can change the column order with the keys < and >.
Also try ps -ax | grep [your_process_file], which lists the running processes filtered by the name you choose.
(-HUP = Hang UP)
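A runnable sketch of the same idea, using a background sleep as a stand-in for the long-running job that cron started:

```shell
sleep 300 &                           # stand-in for the cron-started process
pid=$!
ps -ax | grep '[s]leep 300' || true   # the [s] trick stops grep matching itself
kill -HUP "$pid"                      # send SIGHUP, as described above
wait "$pid" 2>/dev/null || true
kill -0 "$pid" 2>/dev/null || echo "process stopped"
```

Note this only stops the running instance; to keep cron from starting the job again, remove or comment out its entry with crontab -e.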
List currently running cron tab and stop it
1,406,248,737,000
So I have this program which I manually run as root: sudo gammu-smsd -c /etc/gammu-smsdrc -d What this does is run Gammu (software to manage GSM modems) and 'daemonize' it. My problem is that I want this program to automatically run on boot up. Is it ok to just edit root's crontab and stick this command there? Or is there some other way? (I'm using Ubuntu 11.04.)
How about /etc/rc.local? This will be executed last in the startup sequence.
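A sketch of what that file could look like, using the command from the question (whether your system already has an /etc/rc.local, and whether init runs it, depends on the distribution):

```shell
#!/bin/sh -e
# /etc/rc.local -- executed at the end of the boot sequence; must exit 0.
# Start the Gammu SMS daemon:
gammu-smsd -c /etc/gammu-smsdrc -d
exit 0
```

Make sure the file is executable (chmod +x /etc/rc.local), otherwise it is silently skipped.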
How to run a program on boot up?
1,406,248,737,000
I have recently upgraded to Fedora 33 (Linux 5.9.16-200) on my machine. I am running vim-enhanced version 8.2. When I type sudo vim (or even sudo vi) in order to edit files with admin privilege, I get the following error. sudo: __vi_internal_vim_alias: command not found I am not sure what is causing this. Vim loads fine without the sudo. Could you please tell me how to troubleshoot this? Thank you. Update: Upon executing which vim, I get the following result. alias vim='__vi_internal_vim_alias' __vi_internal_vim_alias () { ( test -f /usr/bin/vim && exec /usr/bin/vim "$@"; test -f /usr/bin/vi && exec /usr/bin/vi "$@" ) } I am not sure what did this and where. Maybe it's a Fedora 33 thing. Given the above information, what do you suggest is a permanent fix?
As @scy mentioned, unaliasing vi and vim is a workaround for keeping the sudo="sudo " alias so it can be used with other aliases. Expanding his/her answer for the different shells:
ZSH Shell: Add to the .zshrc file (of the user you want to be affected by the changes) located at:
For Fedora 33 Workstation (or Server or another non-atomic OS Distro): /home/$USER/.zshrc
For Fedora CoreOS 33.x (or Silverblue 33 or other similar atomic OS Distro): /var/home/$USER/.zshrc
the following lines of code:
[ "$(type -w vi)" = 'vi: alias' ] && unalias vi
[ "$(type -w vim)" = 'vim: alias' ] && unalias vim
BASH Shell: Add to the .bashrc file (of the user you want to be affected by the changes), located at the same locations, respective to the OS/Distro-specific location for the $USER's home directory (check the directions for Fedora Workstation, etc...), the following code:
[ "$(type -t vi)" = 'alias' ] && unalias vi
[ "$(type -t vim)" = 'alias' ] && unalias vim
P.S. Concerning the ZSH Shell, this solution can resolve similar problems with other CLI applications that are in a similar initialization situation. For example: mc (Midnight Commander). Meanwhile, mc will not have any such problem in the BASH Shell.
How to resolve __vi_internal_vim_alias: command not found?
1,406,248,737,000
I am tuning my linux machine running Elasticsearch. It says that I should give at least half the memory of the machine running elasticsearch to the filesystem cache. But I don't know how much of it is given currently to filesystem cache. How to find it? And how to change it to half of the RAM?
You won't give memory to the file system cache, because it is part of the page cache. You may need to have enough physical RAM to make that possible (so you might need to buy more RAM). See also LinuxAteMyRam (which explains that "free" RAM is used in the page cache for file data), and use the free(1) command (also ps(1) & top(1), or even htop(1)...). See also proc(5) Of course, if you have big processes running (outside of Elasticsearch) you might terminate or stop them. See also setrlimit(2).
How to give RAM to the filesystem cache
1,406,248,737,000
I have installed the mlocate package on Asus RT-N56U running Padavan with Entware-ng, which is based on OpenWrt. This embedded Linux distribution has SSH enabled. My locate results are out of date. When I use the updatedb command this error appears: updatedb: can not find group mlocate How can I fix this, preferably with one liner?
The addgroup command is necessary and is included in the BusyBox build of the Padavan firmware. Do the following steps as root:
grep -s mlocate /etc/group || addgroup mlocate
chgrp mlocate /opt/var/mlocate
chmod g=rx,o= /opt/var/mlocate
chgrp mlocate /opt/bin/locate
chmod g+s,go-w /opt/bin/locate
touch /opt/var/mlocate/mlocate.db
chgrp mlocate /opt/var/mlocate/mlocate.db
This is the one-liner (a single copy-and-paste command) to fix the "updatedb: can not find group mlocate" message:
# grep -s mlocate /etc/group || addgroup mlocate;chgrp mlocate /opt/var/mlocate;chmod g=rx,o= /opt/var/mlocate;chgrp mlocate /opt/bin/locate;chmod g+s,go-w /opt/bin/locate;touch /opt/var/mlocate/mlocate.db;chgrp mlocate /opt/var/mlocate/mlocate.db
How to fix "updatedb: can not find group ` mlocate'" on entware?
1,406,248,737,000
I have a long running (3 hours) shell script running on a CentOS 7 machine. The script runs a loop with an inner loop and calls curl in each iteration. I'm starting the script with PM2 because it's already on the system and it's good for managing processes. However it seems that it might not be good for shell scripts. When I came in this morning I saw that PM2 had restarted my shell script 6 times. The PM2 logs say it received a SIGINT and was restarted. Since this script results in data being pushed to a database that means my data has been pushed 6 times. That's no bueno. I'm the only person that logs into the box so it's not another user. So, next question is whether this is a bug in PM2 or a legit SIGINT. Which begs the question: if it is legit, where is it coming from? I have to determine (if possible) if the OS is somehow killing this process before I submit this as a bug in PM2 (which seems like the most likely thing).
sysdig can monitor for these using the evt.type=kill filter: # terminal uno perl -E 'warn "$$\n"; $SIG{INT}= sub { die "aaaaargh" }; sleep 999' # terminal dos sysdig -p '%proc.pname[%proc.ppid]: %proc.name -> %evt.type(%evt.args)' evt.type=kill # terminal tres kill -INT 11943 # or whatever A more specific filter may be necessary to avoid e.g. systemd spam from cluttering up the sysdig output, or grep for your process names or pids: # sysdig -p '%proc.pname[%proc.ppid]: %proc.name -> %evt.type(%evt.args)' evt.type=kill systemd[1]: systemd-udevd -> kill(pid=11969(systemd-udevd) sig=15(SIGTERM) ) systemd[1]: systemd-udevd -> kill(res=0 ) systemd[1]: systemd-udevd -> kill(pid=11970(systemd-udevd) sig=15(SIGTERM) ) systemd[1]: systemd-udevd -> kill(res=0 ) systemd[1]: systemd-udevd -> kill(pid=11971(systemd-udevd) sig=15(SIGTERM) ) systemd[1]: systemd-udevd -> kill(res=0 ) sshd[11945]: bash -> kill(pid=11943(perl) sig=2(SIGINT) ) sshd[11945]: bash -> kill(res=0 )
Can I track down where a SIGINT came from if it's not from a user?
1,406,248,737,000
when loading a shared library in Linux system, what is the memory layout of the shared library? For instance, the original memory layout is the following: +-----------+ |heap(ori) | +-----------+ |stack(ori) | +-----------+ |.data(ori) | +-----------+ |.text(ori) | +-----------+ When I dlopen foo.so, will the memory layout be A or B? A +-----------+ |heap(ori) | +-----------+ |stack(ori) | +-----------+ |.data(ori) | +-----------+ |.text(ori) | +-----------+ |heap(foo) | +-----------+ |stack(foo) | +-----------+ |.data(foo) | +-----------+ |.text(foo) | +-----------+ Or B +-----------+ |heap(ori) | +-----------+ |heap(foo) | +-----------+ |stack(foo) | +-----------+ |stack(ori) | +-----------+ |.data(foo) | +-----------+ |.data(ori) | +-----------+ |.text(foo) | +-----------+ |.text(ori) | +-----------+ Or anything other than A and B... ?
The answer is "Other". You can get a glimpse of the memory layout with cat /proc/self/maps. On my 64-bit Arch laptop: 00400000-0040c000 r-xp 00000000 08:02 1186758 /usr/bin/cat 0060b000-0060c000 r--p 0000b000 08:02 1186758 /usr/bin/cat 0060c000-0060d000 rw-p 0000c000 08:02 1186758 /usr/bin/cat 02598000-025b9000 rw-p 00000000 00:00 0 [heap] 7fe4b805c000-7fe4b81f5000 r-xp 00000000 08:02 1182914 /usr/lib/libc-2.21.so 7fe4b81f5000-7fe4b83f5000 ---p 00199000 08:02 1182914 /usr/lib/libc-2.21.so 7fe4b83f5000-7fe4b83f9000 r--p 00199000 08:02 1182914 /usr/lib/libc-2.21.so 7fe4b83f9000-7fe4b83fb000 rw-p 0019d000 08:02 1182914 /usr/lib/libc-2.21.so 7fe4b83fb000-7fe4b83ff000 rw-p 00000000 00:00 0 7fe4b83ff000-7fe4b8421000 r-xp 00000000 08:02 1183072 /usr/lib/ld-2.21.so 7fe4b85f9000-7fe4b85fc000 rw-p 00000000 00:00 0 7fe4b85fe000-7fe4b8620000 rw-p 00000000 00:00 0 7fe4b8620000-7fe4b8621000 r--p 00021000 08:02 1183072 /usr/lib/ld-2.21.so 7fe4b8621000-7fe4b8622000 rw-p 00022000 08:02 1183072 /usr/lib/ld-2.21.so 7fe4b8622000-7fe4b8623000 rw-p 00000000 00:00 0 7ffe430c4000-7ffe430e5000 rw-p 00000000 00:00 0 [stack] 7ffe431ed000-7ffe431ef000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] You can see that the executable gets loaded in low memory, apparently the .text segment, read-only data, and .bss. Just above that is "heap". In much higher memory the C library and the "ELF file interpreter", "ld-so", get loaded. Then comes the stack. There's only one stack and one heap for any given address space, no matter how many shared libraries get loaded. cat only seems to get the C library loaded. Doing cat /proc/$$/maps will get you the memory mappings of the shell from which you invoked cat. Any shell is going to have a number of dynamically loaded libraries, but zsh and bash will load in a large number. You'll see that there's just one "[heap]", and one "[stack]".
If you call dlopen(), the shared object file will get mapped in the address space at a higher address than /usr/lib/libc-2.21.so. There's something of an "implementation dependent" memory mapping segment, where all addresses returned by mmap() show up. See Anatomy of a Program in Memory for a nice graphic. The source for /usr/lib/ld-2.21.so is a bit tricky, but it shares a good deal of its internals with dlopen(). dlopen() isn't a second class citizen. "vdso" and "vsyscall" are somewhat mysterious, but this Stackoverflow question has a good explanation, as does Wikipedia.
Memory layout of dynamic loaded/linked library
1,406,248,737,000
I'm running a Oracle Linux VM on a Windows 7 host and I'm trying to ssh into my MacBook. I've already created the private/pub keys in my Mac. I have copied the id_rsa.pub contents into the authorized_keys file in .ssh folder. I have changed the authorized_keys permissions to 600 for the current user. Permissions for ~ and ~/.ssh have been changed to 700. I have also copied the id_rsa.pub contents from the Oracle Linux VM to the authorized_keys file using ssh-copy-id In my Mac I also have an Oracle Linux VM into which I can ssh perfectly fine from the Oracle Linux VM in the Windows machine. However, I cannot ssh into my Mac using just: ssh macdomain I have to use: ssh username@macdomain to ssh successfully. Without the username it will ask me for a password and eventually result in: Permission denied (publickey, keyboard-interactive) This is my sshd_config file: # $OpenBSD: sshd_config,v 1.81 2009/10/08 14:03:41 markus Exp $ # This is the sshd server system-wide configuration file. See # sshd_config(5) for more information. # This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin # The strategy used for options in the default sshd_config shipped with # OpenSSH is to specify options with their default value where # possible, but leave them commented. Uncommented options change a # default value. 
#Port 22 #AddressFamily any #ListenAddress 0.0.0.0 #ListenAddress :: # The default requires explicit activation of protocol 1 #Protocol 2 # HostKey for protocol version 1 #HostKey /etc/ssh/ssh_host_key # HostKeys for protocol version 2 #HostKey /etc/ssh/ssh_host_rsa_key #HostKey /etc/ssh/ssh_host_dsa_key # Lifetime and size of ephemeral version 1 server key #KeyRegenerationInterval 1h #ServerKeyBits 1024 # Logging # obsoletes QuietMode and FascistLogging SyslogFacility AUTHPRIV #LogLevel INFO # Authentication: #LoginGraceTime 2m #PermitRootLogin yes #StrictModes no #MaxAuthTries 6 #MaxSessions 10 #RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile .ssh/authorized_keys AllowUsers username # For this to work you will also need host keys in /etc/ssh/ssh_known_hosts #RhostsRSAAuthentication no # similar for protocol version 2 #HostbasedAuthentication no # Change to yes if you don't trust ~/.ssh/known_hosts for # RhostsRSAAuthentication and HostbasedAuthentication #IgnoreUserKnownHosts no # Don't read the user's ~/.rhosts and ~/.shosts files #IgnoreRhosts yes # To disable tunneled clear text passwords both PasswordAuthentication and # ChallengeResponseAuthentication must be set to "no". #PasswordAuthentication no #PermitEmptyPasswords no # Change to no to disable s/key passwords #ChallengeResponseAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes #GSSAPIStrictAcceptorCheck yes #GSSAPIKeyExchange no # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". 
# If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. # Also, PAM will deny null passwords by default. If you need to allow # null passwords, add the " nullok" option to the end of the # securityserver.so line in /etc/pam.d/sshd. #UsePAM yes #AllowAgentForwarding yes #AllowTcpForwarding yes #GatewayPorts no #X11Forwarding no #X11DisplayOffset 10 #X11UseLocalhost yes #PrintMotd yes #PrintLastLog yes #TCPKeepAlive yes #UseLogin no #UsePrivilegeSeparation yes #PermitUserEnvironment no #Compression delayed #ClientAliveInterval 0 #ClientAliveCountMax 3 #UseDNS yes #PidFile /var/run/sshd.pid #MaxStartups 10 #PermitTunnel no #ChrootDirectory none # pass locale information AcceptEnv LANG LC_* # no default banner path #Banner none # override default of no subsystems Subsystem sftp /usr/libexec/sftp-server # Example of overriding settings on a per-user basis #Match User anoncvs # X11Forwarding no # AllowTcpForwarding no # ForceCommand cvs server # XAuthLocation added by XQuartz (http://xquartz.macosforge.org) XAuthLocation /opt/X11/bin/xauth I've googled and had a look at almost every relevant topic but to no avail.
Your username in the VM is different than your username on the Mac. By default, ssh assumes the usernames are the same if you don't specify it explicitly. It's trying to log in to a user that doesn't exist (or that you haven't set up), which is why it always fails. To avoid that, you can either specify the username each time, or set up your .ssh/config file in the VM like this: Host mac Hostname macdomain User yourmacusername That will override the default username for that host only. You would also be able to just ssh mac if you prefer, rather than using the hostname.
ssh from linux into mac - permission denied
1,406,248,737,000
I want to move all my 14.5 TB of media drives (not OS) to a combined LVM file system due to constant problems arranging things to fit into multiple smaller file systems. My question is: if, after setup, any of the 6 drives moves to a different location (/dev/sd*), is that going to be a problem? I have always mounted them based on UUID, but I don't know LVM well enough to know how it works with multiple drives. I know I can still mount the file system based on UUID, but I want to make sure LVM is not going to be messed up finding the individual parts of the system. I have to ask this since, for some reason, if I reboot with USB drives inserted they get lower sd* letters than some of the media drives, which causes those media drives to be rearranged for that boot only. PS. I maintain off-site backups of my media, so I'm not too worried about one drive failing and breaking stuff. I only mention this since my Google searches on LVM always have someone trying to talk the person out of it because one problem loses everything.
Each LVM object (physical volume, volume group, logical volume) has a UUID. LVM doesn't care where physical volumes are located and will assemble them as long as it can find them. By default, LVM (specifically vgscan, invoked from an init script) scans all likely-looking block devices at boot time. You can define filters in /etc/lvm/lvm.conf. As long as you don't define restrictive filters, it doesn't matter how you connect your drives. You can even move partitions around while the system isn't running and LVM will still know how to assemble them. You hardly ever need to interact with LVM's UUIDs. Usually you would refer to a volume group by name and to a logical volume by its name inside its containing volume group. If you use LVM for all your volumes, the only thing that may be affected by shuffling disks around is your bootloader.
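You can see those UUIDs, and refer to volumes by their stable names, with something like the following (the volume group and LV names are hypothetical, not from the question):

```shell
# Show physical volumes with their UUIDs -- the /dev/sdX names may move
# between boots, but the UUIDs and VG membership will not:
pvs -o +pv_uuid
vgs
lvs

# In /etc/fstab, mount an LV by its stable device-mapper name, e.g. for
# a VG named "media" containing an LV named "storage":
# /dev/mapper/media-storage  /srv/media  ext4  defaults  0  2
```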
How does LVM find drives after setup
1,406,248,737,000
I'm debugging a closed-source software installer that seems to have some pre-conceived notions about my distribution. The installation aborts after not finding apt-get. The command it attempts to run is: apt-get -y -q install linux-headers-3.7.5-1-ARCH I suppose the "package name" comes from /usr/src, where the sole entry is linux-3.7.5-1-ARCH. Does anyone have any educated guess as to which package I should install with pacman? The headers are probably going to be used to compile drivers for custom hardware. Here is some relevant text from the install log: NOTE: Linux drivers must be built against the kernel sources for the kernel that your Linux OS is currently running. This script automates this task for you. NOTE: You must have the Linux OS kernel header source files installed. If you plan on running the Jungo Debug Monitor, then you may also need to install "compat-libstdc++" and "libpng3". Your Linux is currently running the following kernel version: 3.7.5-1-ARCH
You're running Arch linux. According to pacman -Q -i linux-headers, the package "linux-headers" contains "Header files and scripts for building modules for linux kernel". When the linux kernel gets built, various constants, which might be numbers or strings or what have you, get defined. Some loadable modules need to know those numbers or strings. The files in "linux-headers" should contain all the build-specific numbers, strings etc for the kernel, in your case kernel version 3.7.5-1 . You can see what files package "linux-headers" owns: pacman -Q -l linux-headers You can install package "linux-headers" as root: pacman -S linux-headers The "apt-get" part of the script seems to assume you're running Debian or a derivative. Install linux-headers with pacman and see how it goes.
What package could "linux-headers-3.7.5-1-ARCH" mean?
1,406,248,737,000
After being a long-time Debian Linux user, I decided to give SUSE a try. One of the major selling points of SUSE is the YaST configuration system. It provides a set of wizards for common configuration tasks. Almost every tutorial I can find uses YaST at some point. Unfortunately, the text version of the utility seems to lack many of the features present in the GUI version. In fact, all of the tutorials I can find (for setting up LDAP services for example) assume that you are using the GUI. The only documented way I can find to use the YaST GUI remotely is forwarding a connection from a minimal X server over SSH. I was very surprised by this, as such heavy use of GUI tools is much more Windows like than UNIX like. Are SUSE servers simply designed to be used graphically?
Living in the town (Nuremberg, Germany) where SuSE has its current roots, I have a little background info from some people who originally worked for SuSE. The current graphical yast2 (running on X11) has its predecessor in the time when it was usual to just have non-graphical interfaces. That predecessor was yast, which is still there, but does not have as many features now as its graphical follower. When yast came out, it was a revolutionary approach: a Linux setup tool where you could set up almost anything in one place. Later on it was a management decision (I think in the times when ownership of SuSE changed over to Novell) to concentrate development on the GUI version. This is not uncommon: if you compare the curses system-config tools from RedHat with their X11 counterparts, you will also find that the GUI ones have many more settings. But as with every other GUI (even on Ubuntu), you will discover that even yast2 lacks the full abilities of plain, direct modification of the config files. The SuSE firewall is a good example of that. Look at what yast2 firewall offers you, and then have a look at /etc/sysconfig/SuSEFirewall2: you will see many things there that cannot be set using the GUI. So IMHO, no, SuSE is just the same as every Linux; it just has a longer history of a better single-point-of-administration GUI.
Are SUSE servers intended to be used graphically?
1,406,248,737,000
Can the default permissions and ownership of /sys/class/gpio/ files be set, e.g. by configuring udev? The point would be to have a real gid for processes that can access GPIO pins on a board. Most "solutions" include suid wrappers, scripts with chown and trusted middleman binaries. Web searches turn up failed attempts to write udev rules. (related: Q1) (resources: avrfreaks, linux, udev)
The GPIO interface seems to be built for system (uid root) use only and doesn't have the features a /dev interface would for user processes. This includes creation permissions. In order to allow user processes (including daemons and other system services) access, the permissions need to be granted by a root process at some point. It could be an init script, a middleman, or a manual (sudo) command.
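To illustrate the "init script" option above, a minimal boot-time fragment might look like this (it must run as root; the pin number 17 and the existence of a gpio group are assumptions for illustration only):

```
# export pin 17, then hand its control files to the "gpio" group
echo 17 > /sys/class/gpio/export
chown root:gpio /sys/class/gpio/gpio17/direction /sys/class/gpio/gpio17/value
chmod 664 /sys/class/gpio/gpio17/direction /sys/class/gpio/gpio17/value
```

Processes running with the gpio group can then read and write those files without any further root involvement.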
Set GPIO permissions cleanly
1,406,248,737,000
Running Gentoo 3.4.0 Having recently heard about the /etc/motd file, i tried to have it display random cowsay fortunes. I wrote some random bash script to act as a daemon, feeding the /etc/motd as a named pipe, as seen on some forums. I don't think there's any problem with the script because cat'ing the pipe works just fine, but the MOTD won't display on login (using a regular file works) ! fira@nyan ~ % cat /etc/motd _______________________________________ / We didn't put in ^^ because then we'd \ | have to keep telling people what it | | means, and then we'd have to keep | | telling them why it doesn't short | | circuit. :-/ | | | | -- Larry Wall in | \ <[email protected]> / --------------------------------------- \ \ .--. |o_o | |:_/ | // \ \ (| | ) /'\_ _/`\ \___)=(___/ Am i missing something obvious ? Not using anything like a .hushlogin or whatnot, tried using several shells, pipe is readable a+r.
You're not missing anything obvious. I dug into the source of the pam_motd module to figure this one out. The trick is that pam_motd does the following with /etc/motd: Check the size of the file. Allocate a buffer of that size. Read the entire file into the buffer. Output the buffer through whatever output method is in use. (PAM is modular, after all; can't assume it's a terminal.) Since a pipe doesn't have a file size, this fails at step 1. EDIT: Why is PAM concerned about the size in the first place? I imagine it's to prevent denials of service, either intentional or unintentional. When PAM checks the file size, it also refuses to output the motd if the file is larger than 64 kbytes. I imagine whoever tried to log into the system would be very sad if someone managed to pipe a DVD movie file into /etc/motd, for example -- not to mention how much memory that might take.
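You can confirm the key fact behind step 1 with a quick experiment: stat reports a size of 0 for a FIFO, so pam_motd allocates an empty buffer and outputs nothing.

```shell
# a FIFO carries no stored data, so its reported size is 0
mkfifo /tmp/motd-test-fifo
stat -c %s /tmp/motd-test-fifo   # prints 0
rm /tmp/motd-test-fifo
```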
/etc/motd is not displayed when it is a named pipe?
1,406,248,737,000
In the version 2.6.15 kernel, I got that I can rewrite the task_struct in the file (include/linux/sched.h),like: struct task_struct { unsigned did_exec:1; pid_t pid; pid_t tgid; ... char hide; } But, unfortunately, when I upgraded to the version 2.6.30.5, I looked through the same file, I just find a declaration of the task_struct, like: struct task_struct; And I have no idea which file I should refer to for the purpose of specifying my own task_struct? Can someone help me?
Use grep or any other search tool to look for the definition:

grep -r '^struct task_struct ' include

Or search online at LXR: http://lxr.linux.no/linux+v2.6.30.5/+search?search=task_struct

The structure is still defined in include/linux/sched.h. There's a forward declaration which is used in mutually recursive type definitions, and the definition is further down.
Where is the struct task_struct definition in the 2.6.30.5 Linux Kernel?
1,406,248,737,000
If I want to disable beep sounds from stuff like bash, I add this line to "/etc/inputrc": set bell-style none Sadly, this doesn't work for some other events like GDM start-up and shut down. I thought that adding this line to "/etc/modprobe.d/blacklist.conf" would help: blacklist pcspkr That makes me wonder and doubt where the sound actually comes from.
Solution for GNOME 2 (Debian 6): I tried one more thing... System -> Preferences -> Sound. This brings up the Volume Control application. From there I click on Preferences, which brings up another window. I then click on Beep, which mutes it. I then proceed to clicking on the speaker icon in the PCM column, after which I become happy.

Solution for GNOME 3 (Debian 7): Edit /etc/gdm3/greeter.gsettings such that you have this entry:

# Disabling sound in the greeter
[org.gnome.desktop.sound]
event-sounds=false

You'll probably just have to uncomment the 2 lines. Note that I can't find a way to do something like this as a normal user. I guess GNOME 3 killed some configurability.
How to disable the beep sound system-wide
1,406,248,737,000
Is there a way to find out for any given process with which parameters it was started?
To find what arguments were passed to pdnsd, I'd do:

[~]> pgrep -l pdnsd
1373 pdnsd
[~]> cat /proc/1373/cmdline
/usr/sbin/pdnsd--daemon-p/var/run/pdnsd.pid[~]>

(cmdline file entries are separated by null characters; use something like tr '\0' '\n' </proc/<pid>/cmdline to see more legible output.) /proc/<pid>/ contains a lot of information.
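For example, starting a throwaway process with known arguments and reading them back:

```shell
sleep 100 &          # a throwaway process with known arguments
pid=$!
sleep 1              # give the shell a moment to exec sleep
tr '\0' '\n' < /proc/$pid/cmdline   # prints "sleep" and "100" on separate lines
kill $pid
```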
Finding out with which parameters a program was started
1,406,248,737,000
I once had some package that you could run to instantly replace a running Windows instance with a running Linux instance. I'm not talking about virtualization or coLinux. I'm talking about the moral equivalent of hot-swapping out the Windows kernel and replacing it with a Linux kernel. It may have only worked on Win9x for all I can remember. But I haven't been able to think of the name or find it since I happened upon it many years ago.
You're probably remembering loadlin
What's the name of the technology that instantly boots to Linux from within Windows?
1,406,248,737,000
I just tried to open new terminal window and this error message displayed: Failed to open PTY: No space left on device It seems I can't open terminal window anymore unless closing existing one (or reboot). I don't have any other problem in my system. My system: Debian Buster (xfce4) Linux debian 4.19.0-18-amd64 #1 SMP Debian 4.19.208-1 (2021-09-29) x86_64 GNU/Linux Storage usage: Filesystem Size Used Avail Use% Mounted on udev 3.9G 0 3.9G 0% /dev tmpfs 786M 9.5M 776M 2% /run /dev/sda4 320G 244G 62G 80% / tmpfs 3.9G 315M 3.6G 9% /dev/shm tmpfs 5.0M 4.0K 5.0M 1% /run/lock tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup tmpfs 786M 32K 786M 1% /run/user/1000 Inodes usage: Filesystem Inodes IUsed IFree IUse% Mounted on udev 978K 455 978K 1% /dev tmpfs 982K 872 981K 1% /run /dev/sda4 21M 7.2M 14M 36% / tmpfs 982K 394 982K 1% /dev/shm tmpfs 982K 5 982K 1% /run/lock tmpfs 982K 17 982K 1% /sys/fs/cgroup tmpfs 982K 34 982K 1% /run/user/1000 Pretty sure there isn't any problem with storage or inodes count. I have closed all opened programs, after that I can open a few more terminal window, but still getting the error message.
You are looking in completely the wrong place. Storage devices have nothing to do with PTYs. A PTY is a "pseudo-terminal interface": it is responsible for creating connections from terminal emulators and remote sessions. For example, when you use xterm or ssh, a new PTY master channel is created on the actual machine. The maximum number of PTYs (and thus of such connections) is defined in /proc/sys/kernel/pty/max. Its counterpart, /proc/sys/kernel/pty/nr, shows how many PTYs are currently in use. For a more detailed (and more official) explanation, see man 7 pty.
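For example, to check both values on a running system:

```shell
cat /proc/sys/kernel/pty/max   # maximum number of PTYs
cat /proc/sys/kernel/pty/nr    # PTYs currently in use
```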
"No space left on the device", but it's not
1,406,248,737,000
I am trying to build my source using gcc 8.3.0 root@eqx-sjc-engine2-staging:/usr/local/src# gcc --version gcc (Debian 8.3.0-2) 8.3.0 Copyright (C) 2018 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. root@eqx-sjc-engine2-staging:/usr/local/src# I am getting the below error libs/esl/fs_cli.c:1679:43: error: '%s' directive output may be truncated writing up to 1023 bytes into a region of size 1020 [-Werror=format-truncation=] snprintf(cmd_str, sizeof(cmd_str), "api %s\nconsole_execute: true\n\n", argv_command); libs/esl/fs_cli.c:1679:3: note: 'snprintf' output between 29 and 1052 bytes into a destination of size 1024 snprintf(cmd_str, sizeof(cmd_str), "api %s\nconsole_execute: true\n\n", argv_command); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ cc1: all warnings being treated as errors make[2]: *** [Makefile:2693: fs_cli-fs_cli.o] Error 1 make[2]: Leaving directory '/usr/local/src' make[1]: *** [Makefile:3395: all-recursive] Error 1 make[1]: Leaving directory '/usr/local/src' make: *** [Makefile:1576: all] Error 2 I tried running the make like below make -Wno-error=format-truncation Still I see the same issue. my linux version is root@eqx-sjc-engine2-staging:~# cat /etc/os-release PRETTY_NAME="Debian GNU/Linux buster/sid" NAME="Debian GNU/Linux" ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/" How to fix it?
Depending on the makefile, you probably need something like:

make CFLAGS="-Wno-error=format-truncation"

The default Makefile rules, and most well-written Makefiles, pick up CFLAGS for option arguments to the C compiler being used. Similarly, you can use CXXFLAGS for providing options to the C++ compiler, and LDFLAGS for the linker.
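The reason make -Wno-error=... has no effect is that options given directly to make are parsed by make itself, not forwarded to gcc, whereas a variable assignment on the command line overrides the one set inside the Makefile. A toy demonstration (the paths are arbitrary):

```shell
mkdir -p /tmp/cflags-demo && cd /tmp/cflags-demo
printf 'CFLAGS = -O2\nall:\n\t@echo CFLAGS=$(CFLAGS)\n' > Makefile
make                                       # prints CFLAGS=-O2
make CFLAGS=-Wno-error=format-truncation   # prints CFLAGS=-Wno-error=format-truncation
```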
How to suppress all warnings being treated as errors for format-truncation
1,406,248,737,000
I know that you can display interfaces by doing ip a show. That only displays the interfaces that the host can see, but virtual interfaces configured by containers don't appear in this list. I've tried using ip netns as well, and they don't show up either. Should I recompile another version of iproute2? In /proc/net/fb_trie, you can see the local/broadcast addresses for, I assume, as a use for the forwarding database. Where can I find any of this information, or command to list all interfaces including containers? To test this out, start up a container. In my case, it is a lxc container on snap. Do an ip a or ip l. It will show the host machine's view, but not the container configured interface. I'm grepping through procfs, since containers are just cgrouped processes, but I don't get anything other than the fib_trie and the arp entry. I thought it could be due to a netns namespace obfuscation, but ip netns also shows nothing. You can use conntrack -L to display all incoming and outgoing connections that are established, because lxd needs to connection track the forwarding of the packets, but I'd like to list all ip addresses that are configured on the system, like how I'd be able to tell using netstat or lsof.
An interface, at a given time, belongs to one network namespace and only one. The init (initial) network namespace, except for inheriting physical interfaces of destroyed network namespaces, has no special ability over other network namespaces: it can't directly see their interfaces. As long as you are still in init's pid and mount namespaces, you can still find the network namespaces by using different information available from /proc and finally display their interfaces by entering those network namespaces. I'll provide examples in shell.

enumerate the network namespaces

For this you have to know how those namespaces exist: they stay up as long as a resource keeps them up. A resource here can be a process (actually a process' thread), a mount point or an open file descriptor (fd). Those resources are all referenced in /proc/ and point to an abstract pseudo-file in the nsfs pseudo-filesystem enumerating all namespaces. This file's only meaningful information is its inode, representing the network namespace, but the inode can't be manipulated alone; it has to be the file. That's why later we can't just keep only the inode value (given by stat -c %i /proc/some/file): we'll keep the inode to be able to remove duplicates and a filename to still have a usable reference for nsenter later.

process (actually thread)

The most common case: for usual containers. Each thread's network namespace can be known via the reference /proc/pid/ns/net: just stat them and enumerate all unique namespaces. The 2>/dev/null is to hide when stat can't find ephemeral processes anymore.
find /proc/ -mindepth 1 -maxdepth 1 -name '[1-9]*' | while read -r procpid; do
    stat -L -c '%20i %n' $procpid/ns/net
done 2>/dev/null

This can be done faster with the specialized lsns command, which deals with namespaces but appears to handle only processes (not mount points nor open fds as seen later): lsns -n -u -t net -o NS,PATH (which would have to be reformatted for later use as lsns -n -u -t net -o NS,PATH | while read inode path; do printf '%20u %s\n' $inode "$path"; done)

mount point

Those are mostly used by the ip netns add command, which creates permanent network namespaces by mounting them, thus avoiding their disappearing when there is no process nor fd resource keeping them up, and also allowing, for example, to run a router, firewall or bridge in a network namespace without any linked process. Mounted namespaces (handling of mount and perhaps pid namespaces is probably more complex, but we're only interested in network namespaces anyway) appear like any other mount point in /proc/mounts, with the filesystem type nsfs. There's no easy way in shell to distinguish a network namespace from another type of namespace, but since two pseudo-files from the same filesystem (here nsfs) won't share the same inode, just select them all and ignore errors later in the interface step when trying to use a non-network namespace reference as a network namespace. Sorry, below I won't correctly handle mount points with special characters in them, including spaces, because they are already escaped in /proc/mounts's output (it would be easier in any other language), so I won't bother either to use null-terminated lines.

awk '$3 == "nsfs" { print $2 }' /proc/mounts | while read -r mount; do
    stat -c '%20i %n' "$mount"
done

open file descriptor

Those are probably even rarer than mount points except temporarily at namespace creation, but might be held and used by some specialized application handling multiple namespaces, including possibly some containerization technology.
I couldn't devise a better method than searching all fds available in every /proc/pid/fd/, using stat to verify each points to an nsfs namespace, and again not caring for now whether it's really a network namespace. I'm sure there's a more optimized loop, but this one at least won't wander everywhere nor assume any maximum process limit.

find /proc/ -mindepth 1 -maxdepth 1 -name '[1-9]*' | while read -r procpid; do
    find $procpid/fd -mindepth 1 | while read -r procfd; do
        if [ "$(stat -f -c %T $procfd)" = nsfs ]; then
            stat -L -c '%20i %n' $procfd
        fi
    done
done 2>/dev/null

Now remove all duplicate network namespace references from the previous results, e.g. by using this filter on the combined output of the 3 previous steps (especially from the open file descriptor part): sort -k 1n | uniq -w 20

in each namespace enumerate the interfaces

Now that we have references to all the existing network namespaces (and also some non-network namespaces, which we'll just ignore), simply enter each of them using the reference and display the interfaces. Take the previous commands' output as input to this loop to enumerate interfaces (and, as per OP's question, choose to display their addresses), while ignoring errors caused by non-network namespaces as previously explained:

while read -r inode reference; do
    if nsenter --net="$reference" ip -br address show 2>/dev/null; then
        printf 'end of network %d\n\n' $inode
    fi
done

The init network's inode can be printed with pid 1 as reference:

echo -n 'INIT NETWORK: ' ; stat -L -c %i /proc/1/ns/net

Example (real but redacted) output with a running LXC container, an empty "mounted" network namespace created with ip netns add ...
having an unconnected bridge interface, a network namespace with another dummy0 interface, kept alive by a process not in this network namespace but keeping an open fd on it, created with:

unshare --net sh -c 'ip link add dummy0 type dummy; ip address add dev dummy0 10.11.12.13/24; sleep 3' &
sleep 1; sleep 999 < /proc/$!/ns/net &

and a running Firefox which isolates each of its "Web Content" threads in an unconnected network namespace (all those down lo interfaces):

lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             192.0.2.2/24 2001:db8:0:1:bc5c:95c7:4ea6:f94f/64 fe80::b4f0:7aff:fe76:76a8/64
wlan0            DOWN
dummy0           UNKNOWN        198.51.100.2/24 fe80::108a:83ff:fe05:e0da/64
lxcbr0           UP             10.0.3.1/24 2001:db8:0:4::1/64 fe80::216:3eff:fe00:0/64
virbr0           DOWN           192.168.122.1/24
virbr0-nic       DOWN
vethSOEPSH@if9   UP             fe80::fc8e:ff:fe85:476f/64
end of network 4026531992

lo               DOWN
end of network 4026532418

lo               DOWN
end of network 4026532518

lo               DOWN
end of network 4026532618

lo               DOWN
end of network 4026532718

lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0@if10        UP             10.0.3.66/24 fe80::216:3eff:fe6a:c1e9/64
end of network 4026532822

lo               DOWN
bridge0          UNKNOWN        fe80::b884:44ff:feaf:dca3/64
end of network 4026532923

lo               DOWN
dummy0           DOWN           10.11.12.13/24
end of network 4026533021

INIT NETWORK: 4026531992
How do I find all interfaces that have been configured in Linux, including those of containers?
1,406,248,737,000
I am on Arch Linux, Deepin Desktop. I am using Noto Serif as my standard font, but I don't like its Arabic characters. So my goal is to use another font just for Arabic characters. Here is what I have tried. I created a new configuration file in /etc/fonts/conf.d/ with the following contents:

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <match target="pattern">
    <test name="lang" compare="contains">
      <string>ar</string>
    </test>
    <test qual="any" name="family">
      <string>sans-serif</string>
    </test>
    <edit name="family" mode="prepend" binding="strong">
      <string>Noto Naskh Arabic</string>
    </edit>
  </match>
  <match target="pattern">
    <test name="lang" compare="contains">
      <string>ar</string>
    </test>
    <test qual="any" name="family">
      <string>serif</string>
    </test>
    <edit name="family" mode="prepend" binding="strong">
      <string>Noto Naskh Arabic</string>
    </edit>
  </match>
</fontconfig>

I then ran fc-cache -r. But this didn't work: the same font is still in use, and running fc-match returns NotoSerif-Regular.ttf: "Noto Serif" "Regular" just as before.
Many Noto fonts report to the system that they support the Arabic script, which they do, in part. One of these fonts is the Urdu font, and for whatever reason it takes priority over other fonts that support the Arabic script. You can prefer a specific font over the others as follows:

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <alias>
    <family>sans-serif</family>
    <prefer>
      <family>Noto Sans</family>
      <family>Noto Naskh Arabic</family>
    </prefer>
  </alias>
  <alias>
    <family>serif</family>
    <prefer>
      <family>Noto Serif</family>
      <family>Noto Naskh Arabic</family>
    </prefer>
  </alias>
  <alias>
    <family>monospace</family>
    <prefer>
      <family>Noto Sans Mono</family>
      <family>Noto Naskh Arabic</family>
    </prefer>
  </alias>
</fontconfig>

The higher a font's position is, the more preferred it is. In this case, we prefer Noto Naskh Arabic over other Arabic-script fonts. You can of course do this with any language or font of your choice. For some reason, only the user configuration file worked for me, the one located at ~/.config/fontconfig/fonts.conf. Note that the $XDG_CONFIG_HOME environment variable has to be set to the .config directory in your home directory, i.e. $HOME/.config. You then have to rebuild the configuration for it to take effect using fc-cache. Only newly launched applications will use the new configuration. Restart the X server or your desktop for the changes to take effect globally.

Edit: If you match against the ar locale, that simply won't work on all websites, because some websites use an en locale whilst displaying Arabic UTF-8 characters. If you go to /etc/fonts/conf.d, read the README, and then read any configuration file starting with [30-40], you'd know that this is the right answer.
If, say, a website asks for a serif font, fontconfig goes through this list, starting with Noto Serif; when it finds an Arabic character, it falls back to the next font in the list, Noto Naskh Arabic, finds that this font supports the Arabic script, and thus uses it.
Changing font family for characters of a certain language/script using fontconfig?
1,406,248,737,000
I have a service which is sporadically publishing content in a certain server-side directory via rsync. When this happens I would like to trigger the execution of a server-side procedure. Thanks to the inotifywait command it is fairly easy to monitor a file or directory for changes. I would like however to be notified only once for every burst of modifications, since the post-upload procedure is heavy, and don't want to execute it for each modified file. It should not be a huge effort to come up with some hack based on the event timestamp… I believe however this is a quite common problem. I was not able to find anything useful though. Is there some clever command which can figure out a burst? I was thinking of something I can use in this way: inotifywait -m "$dir" $opts | detect_burst --execute "$post_upload"
Drawing on your own answer, if you want to use the shell read you could take advantage of the -t timeout option, which sets the return code to >128 if there is a timeout. E.g. your burst script can become, loosely:

interval=$1; shift
while :
do
    if read -t "$interval"
    then
        echo "$REPLY"               # not a timeout
    else
        [ $? -lt 128 ] && exit      # eof
        "$@"
        read || exit                # blocking read, infinite timeout
        echo "$REPLY"
    fi
done

You may want to start with an initial blocking read to avoid detecting an end of burst at the start.
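The exit-status convention this relies on can be checked directly (assuming bash, whose read returns a small nonzero status on end-of-file and a status above 128 on timeout):

```shell
bash -c 'read -t 1 line < /dev/null; echo "eof status: $?"'       # status 1, i.e. < 128
bash -c 'read -t 1 line < <(sleep 2); echo "timeout status: $?"'  # status > 128
```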
Monitor a burst of events with inotifywait
1,406,248,737,000
Quite interested in the size of the kernel ring buffer, how much information it can hold, and what data types?
Regarding the size, it's recorded in your kernel's config file. For example, on Amazon EC2 here, it's 256 KiB.

# grep CONFIG_LOG_BUF_SHIFT /boot/config-`uname -r`
CONFIG_LOG_BUF_SHIFT=18
# perl -e 'printf "%d KiB\n",(1<<18)/1024'
256 KiB
#

Referenced in /kernel/printk/printk.c:

#define __LOG_BUF_LEN (1 << CONFIG_LOG_BUF_SHIFT)

More information in /kernel/trace/ring_buffer.c. Note that if you've passed a kernel boot param "log_buf_len=N" (check using cat /proc/cmdline) then that overrides the value in the config file.
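The same conversion works in plain shell arithmetic, here with the shift value 18 from the example above:

```shell
echo "$(( (1 << 18) / 1024 )) KiB"   # prints 256 KiB
```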
How to find out a linux kernel ring buffer size?
1,406,248,737,000
How does systemd handle the death of the children of managed processes? Suppose that systemd launches the daemon foo, which then launches three other daemons: bar1, bar2, and bar3. Will systemd do anything to foo if bar2 terminates unexpectedly? From my understanding, under Service Management Facility (SMF) on Solaris foo would be killed or restarted if you didn't tell startd otherwise by changing the property ignore_error. Does systemd behave differently? Edit #1: I've written a test daemon to test systemd's behavior. The daemon is called mother_daemon because it spawns children.

#include <iostream>
#include <unistd.h>
#include <string>
#include <cstring>

using namespace std;

int main(int argc, char* argv[])
{
    cout << "Hi! I'm going to fork and make 5 child processes!" << endl;
    for (int i = 0; i < 5; i++)
    {
        pid_t pid = fork();
        if (pid > 0)
        {
            cout << "I'm the parent process, and i = " << i << endl;
        }
        if (pid == 0)
        {
            // The following four lines rename the process to make it easier to keep track of with ps
            int argv0size = strlen(argv[0]);
            string childThreadName = "mother_daemon child thread PID: ";
            childThreadName.append( to_string(::getpid()) );
            strncpy(argv[0],childThreadName.c_str(),argv0size + 25);
            cout << "I'm a child process, and i = " << i << endl;
            pause();
            // I don't want each child process spawning its own process
            break;
        }
    }
    pause();
    return 0;
}

This is controlled with a systemd unit called mother_daemon.service:

[Unit]
Description=Testing how systemd handles the death of the children of a managed process
StopWhenUnneeded=true

[Service]
ExecStart=/home/my_user/test_program/mother_daemon
Restart=always

The mother_daemon.service unit is controlled by the mother_daemon.target:

[Unit]
Description=A target that wants mother_daemon.service
Wants=mother_daemon.service

When I run sudo systemctl start mother_daemon.target (after sudo systemctl daemon-reload) I can see the parent daemon and the five children daemons.
Killing one of the children has no effect on the parent, but killing the parent (and thus triggering a restart) does restart the children. Stopping mother_daemon.target with sudo systemctl stop mother_daemon.target ends the children as well. I think that this answers my question.
It doesn't. The main process handles the death of its children, in the normal way. This is the POSIX world. If process A has forked B, and process B has forked C, D, and E; then process B is what sees the SIGCHLD and wait() status from the termination of C, D, and E. Process A is unaware of what happens to C, D, and E, and this is irrespective of systemd. For A to be aware of C, D, and E terminating, two things have to happen. A has to register itself as a "subreaper". systemd does this, as do various other service managers including upstart and the nosh service-manager. B has to exit(). Services that foolishly, erroneously, and vainly try to "dæmonize" themselves do this. (One can get clever with kevent() on the BSDs. But this is a Linux question.)
How does systemd handle the death of a child of a managed process?
1,406,248,737,000
I'd like to create a small terminal utility requiring a bit of very simple graphics. Therefore I'd like to use ncurses. Now what I'm wondering is: will a ncurses program or python script that uses ncurses be visible over ssh? I'd also like the colors to be visible as well.
It works (no surprise), but if you are running a command via ssh (rather than the default shell), you will have to use the -t option to allocate a terminal. The ssh manual page says -t Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
Ncurses over ssh - will they be displayed?
1,406,248,737,000
I'm trying to upgrade glibc on a system on which I do not have root access. Therefore, I'm installing to a local prefix. I would like some help understanding best practices for setting this up, as well as help resolving a particular issue. (The quick summary of my issue: when I include the newly-installed glibc lib path in my LD_LIBRARY_PATH, every program I try to run, including ls, vim, pwd, etc., segfaults.)

Background information:

$ uname -a
Linux 3.13.0-68-generic #111-Ubuntu SMP Fri Nov 6 18:17:06 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Compiler/toolchain: I'm running a locally compiled and installed-from-source version of gcc 5.3.0. It seems to work fine. This is installed at ~/toolchains/gcc_5.3.0:

$ ls ~/toolchains/gcc_5.3.0
bin include lib lib32 lib64 libexec share

Attempting to install: glibc-2.23 from source with --prefix=~/local/

I don't have sudo on this machine (it's a shared cluster; the policy is to install your own toolchains if you need customizability, as I do).

$ echo $LD_LIBRARY_PATH
~/toolchains/gcc_5.3.0/lib:~/toolchains/gcc_5.3.0/lib64

The system-installed version of glibc is 2.19:

$ ldd --version
ldd (Ubuntu EGLIBC 2.19-0ubuntu6.7) 2.19

(Above and below, I'm substituting ~ for absolute paths for clarity.)

Problem: I'm able to compile and install glibc-2.23 with gcc 5.3.0 as well as the system-installed gcc 4.8.4. Compilation and installation to ~/local/ works fine when LD_LIBRARY_PATH is set as above. However, in order to leverage the new glibc libraries (installed in ~/local/lib), I added ~/local/lib to the end of my current LD_LIBRARY_PATH:

$ echo $LD_LIBRARY_PATH
~/toolchains/gcc_5.3.0/lib:~/toolchains/gcc_5.3.0/lib64:~/local/lib

As soon as I do this, everything I try to run segfaults. I can't even ls or run vim. I just see bash print "Segmentation fault". I have to change my LD_LIBRARY_PATH, and then everything works fine again. I'm not able to run gdb or strace or anything to try to figure out what's going on (those segfault, too).

Questions:

Any ideas about what is happening here? I have a feeling that my approach for installation and/or setting of LD_LIBRARY_PATH is not right.
What is the best practice for locally-installed gcc and locally-installed glibc? Do I need to more carefully match up versions? I just grabbed the latest stable sources of each.
For my own knowledge for the future, given that gdb doesn't work, are there other ways to debug this sort of thing so I can precisely locate where the segfault is occurring?

Thank you for any thoughts you have.

Edit: I'm generally trying to get an updated set of tools on my system, and get required dev headers and libraries, etc. For example, to use some advanced features of perf_events, I need a bunch of other things, such as libaudit. That, of course, needs ldap, berkeley db, etc., etc. Ultimately I'm left with needing some headers that seem to only be provided by more modern versions of glibc. For example, here's an error that I get when I'm trying to compile berkeley db; this type seems to be defined in dirent.h, which is a header in glibc, although the compiler is not finding it among the packages installed on my system:

-fPIC -DPIC -o .libs/os_dir.o
../src/os/os_dir.c: In function '__os_dirlist':
../src/os/os_dir.c:45:2: error: unknown type name 'DIR'
  DIR *dirp;
  ^

I'd be interested to hear if there are alternative approaches here to getting access to development headers and libs that my system is not finding on its own. The error above with DIR is perhaps a good example.
Because of a mismatch of ld-linux-x86-64.so.2 (man ld.so) and libc.so.

If you want to run gdb under the LD_LIBRARY_PATH setting, run as follows:

export LD_LIBRARY_PATH=~/local/lib
/lib64/ld-linux-x86-64.so.2 --library-path /lib64 /usr/bin/gdb /bin/ls

This runs /usr/bin/gdb in the old library environment and /bin/ls in the new library environment. Similarly, you can run only one command in the new library environment as follows:

export LD_LIBRARY_PATH=~/local/lib
~/local/lib/ld-linux-x86-64.so.2 /bin/echo
Locally-installing glibc-2.23 causes all programs to segfault
1,406,248,737,000
I have a file myfile that must be re-generated periodically. Re-generation takes some seconds. On the other hand, I have to periodically read the last (or next to last) file generated. What is the best way to guarantee that I am reading a completely generated file and that, once I begin reading it, I will be able to read it completely?

One possible solution:

myfile is actually a soft link to the last generated file, say myfile.last.
regeneration is done on a new file, say myfile.new.
after regeneration, myfile.new is moved onto myfile.last.

The problem I see (and I don't know the answer to) is: if another script is copying myfile while the mv takes place, does cp finish correctly?

Another possible solution would be to generate files with a timestamp in the name, say myfile-2014-09-03_12:34, and myfile is again a soft link to the last created file. This link should be changed after creation to point to the new file. Again: what are the odds that something like cp myfile anotherfile copies a corrupted file?
If you're moving within the same filesystem, mv is atomic -- it's just a rename, not copying contents. So if the last step of your generation is:

mv myfile.new myfile.last

The reading processes will always see either the old or new version of the file, never anything incomplete.
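The write-to-temp-then-rename pattern can be sketched as a short script (the file name and content below are illustrative, not taken from the question):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"                      # work in a scratch directory
out=myfile.last
tmp=$(mktemp "${out}.XXXXXX")          # temp file on the SAME filesystem as $out
printf 'generated content\n' > "$tmp"  # the slow generation writes only to the temp file
mv "$tmp" "$out"                       # atomic rename: readers see old or new, never partial
cat "$out"
```

Because rename(2) is atomic only within one filesystem, the temporary file must live on the same filesystem as the final name; using a mktemp template inside the target directory guarantees that.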
What is a good strategy to generate and copy files atomically
1,406,268,808,000
I'm using tmpfs for my /tmp directory. How can I make the computer decide to swap out the files inside the /tmp before swapping out anything that is being used by applications? Basically the files inside /tmp should have a higher swappiness compared to the memory being used by processes. It seems this answer https://unix.stackexchange.com/a/90337/56970 makes a lot of sense, but you can't change swappiness for a single directory. I know about cgroups though, but I don't see any way of making tmp into a cgroup?
If all goes well, your kernel should decide to "do the right thing" all by itself. It uses a lot of fancy heuristics to decide what to swap out and what to keep when there is memory pressure. Those heuristics have been carefully built by really smart people with a lot of experience in memory management and are already good enough that they're pretty hard to improve upon.

The kernel uses a combination of things like this to decide what to swap out:

How recently the memory has been used.
Whether the memory has been modified since it was mapped. So for example a shared library will be pushed out ahead of heap memory, because the heap memory is dirty and needs to be written to swap, whereas the shared library's mapped memory can be loaded again from the original file on disk in case it is needed again, so there is no need to write those pages to swap. Here you should realize that tmpfs memory is always dirty (unless it's a fresh page filled with zeros) because it is not backed by anything.
Hints from mprotect().
Likely many more.

Short answer: no, you can't directly override the kernel's decisions about how to manage memory.
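The one global knob the kernel does expose in this area is vm.swappiness, but it is system-wide, not per-directory, so it cannot single out tmpfs pages (a quick sketch; the value shown will vary per system):

```shell
#!/bin/sh
# system-wide preference for swapping anonymous/tmpfs pages vs. dropping page cache
cat /proc/sys/vm/swappiness
# changing it (requires root) affects ALL memory, not just /tmp:
#   sysctl vm.swappiness=80
```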
How to make files inside TMPFS more likely to swap
1,406,268,808,000
How do I check the health of my hard drives? I know that you can do it with System Rescue, but is there a way to do it from root without booting into System Rescue?
smartmontools is the package you are looking for. Using the smartctl command you could try:

sudo smartctl -a /dev/sda

Of course, as mentioned below, the drive needs to support SMART for information to be available, but whether it is supported/enabled will be in the output of the above command. If you look at the man page for smartctl, there are also various options for running self tests and enabling/disabling SMART, etc.
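If the output needs to be consumed by a script, the SMART attribute table can be filtered with awk. The sample report lines below are made up for illustration only; real attribute names and values depend on the drive:

```shell
#!/bin/sh
# Filter a couple of health-relevant attributes from a captured `smartctl -a` report.
# In real use, replace the here-document with:  sudo smartctl -a /dev/sda > /tmp/smart_sample.txt
cat > /tmp/smart_sample.txt <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
194 Temperature_Celsius     0x0022   036   053   000    Old_age   Always       -       36
EOF
# column 2 is the attribute name, the last column the raw value
awk '/Reallocated_Sector_Ct|Temperature_Celsius/ {print $2, $NF}' /tmp/smart_sample.txt
```

A nonzero Reallocated_Sector_Ct raw value is a common early warning sign of a failing disk.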
how to check health of hard drives?
1,406,268,808,000
Problem

Run iftop for 5 seconds, capture the screenshot and save it to a file.

iftop is a beautiful program for visualizing network traffic, but it doesn't have a batch mode where I can run it for a few seconds and capture the output to a file. So my idea is:

use commands like screen to create a virtual display and run iftop in it.
look for any tools (screendump) to take a screen shot of the screen.

Any idea on how I could go about this?
I don't think you'll be able to do this with screen unless the output is actually rendered in a window, which probably defeats the point of using screen. However, the window does not have to be in the foreground.

The ImageMagick suite contains a utility called import you can use for this. If import --help gives you "command not found", install the imagemagick package; it will be available in any linux distro.

import needs the name of the window. iftop is a terminal interface, so to make sure you use the right name, you'll have to set the title of the GUI terminal it runs in. How you do that depends on which GUI terminal you use. For example, I prefer the XFCE Terminal, which would be:

Terminal -T Iftop -e iftop

Opens a new terminal running iftop with the title "Iftop". A screenshot of that can be taken:

import -window Iftop ss.jpg

If you are going to do this every five seconds, you probably want to instead open the window running a script so you can reuse the same terminal:

count=0
while ((1)); do
    iftop &
    pid=$!
    sleep 1  # make sure iftop is up
    count=$(($count+1))
    import -window Iftop iftop_sshot$count.jpg
    kill $pid
    sleep 5
done

If the script is "iftopSShot.sh" then you'd start it with:

Terminal -T Iftop -e iftopSShot.sh

-- except you're probably not using Terminal. Most of the linux GUI terminals are associated with specific DE's, although they are stand-alone applications which can be used independently. I believe the name of the default terminal on KDE is Konsole and it follows the -T and -e conventions; for GNOME it is probably gnome-terminal (this may have changed) and it appears to use -t and not -T. Beware: import by default rings the bell, which will get irritating, but there is a -silent option.
Capturing Screenshot of terminal application via shell script?
1,406,268,808,000
I'm working with a fresh install of Ubuntu 12.04 and I've just added a new user:

useradd -m testuser

I thought that the -m flag to create a home directory for users was pretty standard, but now that I've taken a closer look I'm a little confused. By default the new directory that was just created shows up as:

drwxr-xr-x 4 testuser testuser 4.0K May 20 20:24 testuser

With the g+r and o+r permissions, that means every other user on the system can not only cd to that user's home directory, but also see what is stored there. When reading over some documentation for suPHP, it recommends setting the permissions as 711 or drwx--x--x, which is how it would make the most sense to me.

I noticed that I can change the permissions on the files inside /etc/skel and they are set correctly when creating new users with useradd -m, but changing the permissions on the /etc/skel directory itself does not seem to have any effect on the new directories that are created for users in /home/.

So - what type of permissions should a user's home directory and files have - and why? If I wanted permissions to be different for useradd -m - like the 711 / drwx--x--x as I saw mentioned - how is one to do that? Must you create the user and then run chmod?
To make the creation of the home directory behave differently, do:

useradd -m -K UMASK=0066 testuser

Giving "other" no access at all should be safe.
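The -K flag overrides a key from /etc/login.defs for that single invocation; to make this the default for every new user, the same key can be set there instead. An illustrative excerpt (the value is a local policy choice, not a recommendation from the answer):

```text
# /etc/login.defs (excerpt)
# useradd creates home directories with mode 0777 & ~UMASK,
# so UMASK 066 yields drwx--x--x (711) homes, matching the suPHP suggestion.
UMASK 066
```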
What type of permissions should a user's home directory and files have?
1,406,268,808,000
I am mainly using Linux for programming. I basically started with Archlinux and Manjaro and I kinda like it. What I really like is the package management. It has a huge collection of new software and the updates are coming out really fast. For example when GCC 4.8 was released I instantly had it 2 days after the release which was pretty neat. Even small libraries such as "OpenAssetImporter" are in the repos. It is so convenient because if you have a huge collection of libraries that are coming out frequently, all you have to do is a system update. What bugs me is that my system breaks really often, and I don't want to spend so much time to fix stuff. Basically all I want is up to date libraries such as gcc etc. I don't really care if I have up to date Gnome etc. Any recommendations that you can give me?
I'd recommend Gentoo for programming. I use it myself and it's very convenient:

latest updates, with a powerful system to prevent you from breaking all the dependencies
rolling release, so there is no jumping from one version to another
it's a compiled distribution, so the maintainers are particularly concerned with the packaging of the toolchains, and the fact that you compile all your packages yourself gives you great control over the compilation options and may optimize your software a little
tools for cross-development are very handy
you can install several versions of the same library at the same time in different "slots", which can be useful sometimes, when there are huge changes between two versions and you want to be able to use both. For example, I've got three versions of python and two versions of gcc.

It's a matter of choice, of course, but I used Fedora before and I can tell you that it's a lot easier to start developing on a Gentoo.
Linux distro for a developer
1,406,268,808,000
The other day, I was using a laptop for general desktop use when its keyboard began to act up. Most of the keys on the keyboard's right side stopped working entirely, and key combinations such as Ctrl+U made characters appear that should not have appeared. The backspace key exhibited the strangest behavior; it was somehow able to cause the deletion of characters in the shell prompt. I was unable to reboot the computer cleanly so I did a hard shutdown. When I turned the computer on again, I received this message from Grub:

GRUB loading.
Welcome to GRUB!

incompatible license
Aborted. Press any key to exit.

I pressed the any key and Grub responded with Operating System Not Found. Pressing another key causes the first message to appear again. Pressing another key after that causes the second message to appear... and so on. If I leave the laptop on for a few minutes, its fan speeds up significantly as if the laptop is running a CPU-intensive program. I took the hard drive out of the laptop, mounted it on a server, and looked around. I saw nothing strange in /boot. The laptop is running Arch Linux. The drive is partitioned with GPT. The laptop works fine with a hard drive from another machine. And other machines do not work with the laptop's hard drive. I am not certain that the keyboard issues are directly related to the Grub issues. What could be causing the problems that I am having? Or, what should I do to find out or narrow down the list of potential causes?
Just in case it's relevant, here (removed) is a tarball with /boot and /etc/grub.d and here is my Grub configuration: # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### insmod part_gpt insmod part_msdos if [ -s $prefix/grubenv ]; then load_env fi set default="0" if [ x"${feature_menuentry_id}" = xy ]; then menuentry_id_option="--id" else menuentry_id_option="" fi export menuentry_id_option if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function load_video { if [ x$feature_all_video_module = xy ]; then insmod all_video else insmod efi_gop insmod efi_uga insmod ieee1275_fb insmod vbe insmod vga insmod video_bochs insmod video_cirrus fi } if [ x$feature_default_font_path = xy ] ; then font=unicode else insmod part_gpt insmod ext2 set root='hd0,gpt1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt1 --hint-efi=hd0,gpt1 --hint-baremetal=ahci0,gpt1 d44f2a2f-c369-456b-81f1-efa13f9caae2 else search --no-floppy --fs-uuid --set=root d44f2a2f-c369-456b-81f1-efa13f9caae2 fi font="/usr/share/grub/unicode.pf2" fi if loadfont $font ; then set gfxmode=auto load_video insmod gfxterm set locale_dir=$prefix/locale set lang=en_US insmod gettext fi terminal_input console terminal_output gfxterm set timeout=5 ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/10_linux ### menuentry 'Arch GNU/Linux, with Linux PARA kernel' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-PARA kernel-true-d44f2a2f-c369-456b-81f1-efa13f9caae2' { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set root='hd1,gpt1' if [ 
x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd1,gpt1 --hint-efi=hd1,gpt1 --hint-baremetal=ahci1,gpt1 b4fbf4f8-303c-49bd-a52f-6049e1623a26 else search --no-floppy --fs-uuid --set=root b4fbf4f8-303c-49bd-a52f-6049e1623a26 fi echo 'Loading Linux PARA kernel ...' linux /boot/vmlinuz-linux-PARA root=UUID=d44f2a2f-c369-456b-81f1-efa13f9caae2 ro quiet echo 'Loading initial ramdisk ...' initrd /boot/initramfs-linux-PARA.img } menuentry 'Arch GNU/Linux, with Linux core repo kernel' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-core repo kernel-true-d44f2a2f-c369-456b-81f1-efa13f9caae2' { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set root='hd1,gpt1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd1,gpt1 --hint-efi=hd1,gpt1 --hint-baremetal=ahci1,gpt1 b4fbf4f8-303c-49bd-a52f-6049e1623a26 else search --no-floppy --fs-uuid --set=root b4fbf4f8-303c-49bd-a52f-6049e1623a26 fi echo 'Loading Linux core repo kernel ...' linux /boot/vmlinuz-linux root=UUID=d44f2a2f-c369-456b-81f1-efa13f9caae2 ro quiet echo 'Loading initial ramdisk ...' initrd /boot/initramfs-linux.img } menuentry 'Arch GNU/Linux, with Linux core repo kernel (Fallback initramfs)' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-core repo kernel-fallback-d44f2a2f-c369-456b-81f1-efa13f9caae2' { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set root='hd1,gpt1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd1,gpt1 --hint-efi=hd1,gpt1 --hint-baremetal=ahci1,gpt1 b4fbf4f8-303c-49bd-a52f-6049e1623a26 else search --no-floppy --fs-uuid --set=root b4fbf4f8-303c-49bd-a52f-6049e1623a26 fi echo 'Loading Linux core repo kernel ...' linux /boot/vmlinuz-linux root=UUID=d44f2a2f-c369-456b-81f1-efa13f9caae2 ro quiet echo 'Loading initial ramdisk ...' 
initrd /boot/initramfs-linux-fallback.img } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" --class memtest86 --class gnu --class tool { insmod part_gpt insmod ext2 set root='hd1,gpt1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd1,gpt1 --hint-efi=hd1,gpt1 --hint-baremetal=ahci1,gpt1 b4fbf4f8-303c-49bd-a52f-6049e1623a26 else search --no-floppy --fs-uuid --set=root b4fbf4f8-303c-49bd-a52f-6049e1623a26 fi linux16 ($root)/boot/memtest86+/memtest.bin } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f ${config_directory}/custom.cfg ]; then source ${config_directory}/custom.cfg elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### Update After installing LILO last night, the computer booted just fine at least once. When I booted the computer this morning, I was faced with a kernel panic: Initramfs unpacking failed: junk in compressed archive Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,1) Pid: 1, comm: swapper/0 Not tainted 3.8.7-1-ARCH #1 Call Trace: ... Here is a picture of the kernel panic. Update 2 I reinstalled LILO and no longer receive the kernel panic on boot.
Update2: Forgot you posted the tar ball. Too bad. Anyhow, did a test on your .mod files by using the below code and: ./grum_lic_test32 evan_teitelman/boot/grub/i386-pc/*.mod which yielded the following error: ... bufio.mod License: LICENSE=GPLv3+ OK cacheinfo.mod License: LICENSE=NONE_FOUND ERR cat.mod License: LICENSE=GPLv3+ OK chain.mod License: LICENSE=GPLv3+ OK ... but that file is identical with the one from Archlinux download, so it should not be an issue. In other words, was not the cause. Also, first now, notice you have installed LILO, – and guess by that the case is closed. If not there is always the question about GPT and BIOS + other issues. Did you install it the first time? Can be that there was some tweak involved on first install that reinstall of GRUB did not fix. Update1: OK. Fixed. Should work for both 32 and 64-bit ELF's. When GRUB get to the phase of loading modules it check for license embedded in ELF file for each module. If non valid is found the module is ignored – and that specific error is printed. Could be one or more modules are corrupted. If it is an essential module everything would go bad. Say e.g part_gpt.mod or part_msdos.mod. Accepted licenses are GPLv2+, GPLv3 and GPLv3+. It could of course be other reasons; but one of many could be corrupted module file(s). It seems like the modules are valid ELF files as they are validated as such before the license test. As in: if ELF test fail license test is not executed. Had another issue with modules where I needed to check for various, have extracted parts of that code and made it into a quick license tester. You could test each *.mod file in /boot/grub/* to see which one(s) are corrupt. This code does not validate ELF or anything else. Only try to locate license string and check that. Further it is only tested under i386/32-bit. The original code where it is extracted from worked for x86-64 as well – but here a lot is stripped and hacked so I'm not sure of the result. 
If it doesn't work under 64-bit it should most likely only print License: LICENSE=NONE_FOUND. (As noted in edit above I have now tested for 32 and 64-bit, Intel.) As a separate test then would be to do something like: xxd file.mod | grep -C1 LIC Not the most beautiful code – but as a quick and dirty check. (As in; you could try.) Compile instructions e.g.: gcc -o grub_lic_test32 source.c # 32-bit variant gcc -o grub_lic_test64 source.c -DELF64 # 64-bit variant Run: ./grub_lic_test32 /path/to/mods/*.mod Prints each file and license, eg: ./grub_lic_test32 tar.mod gettext.mod pxe.mod tar.mod License: LICENSE=GPLv1+ BAD gettext.mod License: LICENSE=GPLv3+ OK pxe.mod License: LICENSE=GPLv3+ OK Code: #include <stdio.h> #include <stdlib.h> #include <string.h> #include <stdint.h> #include <errno.h> #ifdef ELF64 struct ELF_hdr { unsigned char dummy0[16]; uint32_t dummy1[6]; uint64_t sh_off; uint16_t dummy2[5]; uint16_t sh_entsize; uint16_t sh_num; uint16_t sh_strndx; }; struct ELF_sect_hdr { uint32_t sh_name; uint32_t dummy0[5]; uint64_t sh_offset; }; #else struct ELF_hdr { unsigned char dummy0[16]; uint32_t dummy1[4]; uint32_t sh_off; uint16_t dummy2[5]; uint16_t sh_entsize; uint16_t sh_num; uint16_t sh_strndx; }; struct ELF_sect_hdr { uint32_t sh_name; uint32_t dummy[3]; uint32_t sh_offset; }; #endif enum { ERR_FILE_OPEN = 1, ERR_FILE_READ, ERR_MEM, ERR_BAD_LICENSE, ERR_ELF_SECT_CORE_BREACH }; int file_size(FILE *fh, size_t *fs) { size_t cp; cp = ftell(fh); fseek(fh, 0, SEEK_END); *fs = ftell(fh); fseek(fh, cp, SEEK_SET); return 0; } static const char *valid_licenses[] = { "LICENSE=GPLv2+", "LICENSE=GPLv3", "LICENSE=GPLv3+", NULL }; int grub_check_license(struct ELF_hdr *e) { struct ELF_sect_hdr *s; const char *txt; const char *lic; unsigned i, j = 0; s = (struct ELF_sect_hdr *) ((char *) e + e->sh_off + e->sh_strndx * e->sh_entsize); txt = (char *) e + s->sh_offset; s = (struct ELF_sect_hdr *) ((char *) e + e->sh_off); for (i = 0; i < e->sh_num; ++i) { if (strcmp (txt + 
s->sh_name, ".module_license") == 0) { lic = (char*) e + s->sh_offset; if (j) fprintf(stdout, "%25s", ""); fprintf(stdout, "License: %-25s ", lic); for (j = 0; valid_licenses[j]; ++j) { if (!strcmp (lic, valid_licenses[j])) { fprintf(stdout, "OK\n"); return 0; } } fprintf(stdout, "BAD\n"); } s = (struct ELF_sect_hdr *) ((char *) s + e->sh_entsize); } if (!j) fprintf(stdout, "License: %-25s ERR\n", "LICENSE=NONE_FOUND"); return ERR_BAD_LICENSE; } int grub_check_module(void *buf, size_t size, int verbose) { struct ELF_hdr *e = buf; /* Make sure that every section is within the core. */ if (e->sh_off + e->sh_entsize * e->sh_num > size) { fprintf(stderr, "ERR: Sections outside core\n"); if (verbose) fprintf(stderr, " %*s: %u bytes\n" #ifdef ELF64 " %*s %u < %llu\n" " %*s: %llu\n" #else " %*s %u < %u\n" " %*s: %u\n" #endif " %*s: %u\n" " %*s: %u\n" , -25, "file-size", size, -25, "", size, e->sh_off + e->sh_entsize * e->sh_num, -25, "sector header offset", e->sh_off, -25, "sector header entry size", e->sh_entsize, -25, "sector header num", e->sh_num ); return ERR_ELF_SECT_CORE_BREACH; } return grub_check_license(e); } int grub_check_module_file(const char *fn, int verbose) { FILE *fh; void *buf; size_t fs; int eno; char *base_fn; if (!(base_fn = strrchr(fn, '/'))) base_fn = (char*)fn; else ++base_fn; fprintf(stderr, "%-25s ", base_fn); if (!(fh = fopen(fn, "rb"))) { fprintf(stderr, "ERR: Unable to open `%s'\n", fn); perror("fopen"); return ERR_FILE_OPEN; } file_size(fh, &fs); if (!(buf = malloc(fs))) { fprintf(stderr, "ERR: Memory.\n"); fclose(fh); return ERR_MEM; } if (fread(buf, 1, fs, fh) != fs) { fprintf(stderr, "ERR: Reading `%s'\n", fn); perror("fread"); free(buf); fclose(fh); return ERR_FILE_READ; } fclose(fh); eno = grub_check_module(buf, fs, verbose); free(buf); return eno; } int main(int argc, char *argv[]) { int i = 1; int eno = 0; int verbose = 0; if (argc > 1 && argv[1][0] == '-' && argv[1][1] == 'v') { verbose = 1; ++i; } if (argc - i < 1) { fprintf(stderr, 
"Usage: %s [-v] <FILE>[, FILE[, ...]]\n", argv[0]); return 1; } for (; i < argc; ++i) { eno |= grub_check_module_file(argv[i], verbose); if (eno == ERR_MEM) return eno; } return eno; }
Grub 'incompatible license' error
1,406,268,808,000
My previous question produced the commands to add an encrypted swap file:

# One-time setup:
fallocate -l 4G /root/swapfile.crypt
chmod 600 /root/swapfile.crypt

# On every boot:
loop=$(losetup -f)
losetup ${loop} /root/swapfile.crypt
cryptsetup open --type plain --key-file /dev/urandom ${loop} swapfile
mkswap /dev/mapper/swapfile
swapon /dev/mapper/swapfile

But Arch Linux uses systemd, and I'm having trouble figuring out how to best get systemd to activate my swap file automatically. systemd.swap suggests that I should have a dev-mapper-swapfile.swap unit that looks something like:

[Unit]
Description=Encrypted Swap File

[Swap]
What=/dev/mapper/swapfile

That would execute the swapon command. But I'm not sure how to execute the commands to prepare /dev/mapper/swapfile. I gather that dev-mapper-swapfile.swap should declare a dependency on some other unit, but I'm not sure what that unit should look like.
You may want to have a look at:

crypttab(5)
systemd-cryptsetup@.service(8)
systemd-cryptsetup-generator(8)

Those work for encrypted volumes backed by block devices. They should also work for file-backed volumes.

Update: This does work for me:

# Automatically generated by systemd-cryptsetup-generator
[Unit]
Description=Cryptography Setup for %I
Documentation=man:systemd-cryptsetup@.service(8) man:crypttab(5)
SourcePath=/etc/crypttab
Conflicts=umount.target
DefaultDependencies=no
BindsTo=dev-mapper-%i.device
After=systemd-readahead-collect.service systemd-readahead-replay.service
Before=umount.target
Before=cryptsetup.target
After=systemd-random-seed-load.service

[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
ExecStart=/usr/lib/systemd/systemd-cryptsetup attach 'swap2' '/swap.test' '/dev/urandom' 'swap'
ExecStop=/usr/lib/systemd/systemd-cryptsetup detach 'swap2'
ExecStartPost=/sbin/mkswap '/dev/mapper/swap2'

Steps to get this file:

Create an entry in /etc/crypttab:
swap2 /swap.test /dev/urandom swap

Run this command:
/usr/lib/systemd/system-generators/systemd-cryptsetup-generator

This creates unit files in the /tmp/ directory. Search for the generated unit file. Open it and remove the entry swap.test.device from the After= and BindsTo= directives. This is important, as there is by definition no device for the swap file; the device dependency would prevent the start of the unit file.

Copy the unit file to /etc/systemd/system/

Activate it for your favourite target.
How do I configure systemd to activate an encrypted swap file?
1,406,268,808,000
Possible Duplicate: Make all new files in a directory accessible to a group

I have a directory in which collaborative files / directories are stored. Say directory abc is owned by root and the group is project-abc. I'd like for this directory to have the following:

Only members of group project-abc are allowed to change the contents of this directory.
Files added to abc must have read and write permissions set for members of group abc.
Directories added must have read, write and execute permissions for group abc.

This is straightforward for static directories, but the contents of this directory are expected to change quite often. What's my best approach to producing the desired result?
The best thing you can do is to add the setgid bit (chmod g+s) to your directories. See Directory Setuid and Setgid in the coreutils manual. New directories will then preserve group ownership. As for permissions, the best you can do is make sure umask 002 is in use every time someone works inside this directory. (Yes, basic unix-style permissions are too basic sometimes… I don't know if ACLs can make collaborative work inside a directory easier. If they are activated in your system, you might have a look.)
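The setgid inheritance can be observed with a quick sketch; the paths are created on the fly, and group ownership is omitted since the group names depend on the system:

```shell
#!/bin/sh
set -e
demo=$(mktemp -d)          # stand-in for the shared project directory
chmod 2770 "$demo"         # leading 2 = setgid bit: new entries inherit the directory's group
umask 002                  # new files/dirs keep group write permission
mkdir "$demo/sub"
stat -c '%a' "$demo/sub"   # the leading 2 shows the setgid bit was inherited
```

On Linux the subdirectory comes out with mode 2775 here: 0777 masked by umask 002 gives 775, plus the setgid bit inherited from the parent.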
How do I set permissions for a directory so that files and directories created under it maintain group write permissions? [duplicate]
1,406,268,808,000
I want to gather the EDID information of the monitor. I can get it from the Xorg.0.log file when I run X with the -logverbose option. But the problem is that if I switch the monitor (unplug the current monitor and then plug in another monitor), then there is no way to get this information. Is there any way to get the EDID dynamically (at runtime)? Or any utility/tool which will inform me as soon as a monitor is connected or disconnected? I am using LFS-6.4.
There is a tool called read-edid doing exactly what its name suggests.
Edid information
1,406,268,808,000
I'm trying to understand which FAT-based filesystems my real-time 2.6 Linux supports. I have tried 3 things:

/proc/filesystems shows vfat among others non-relevant for the question (like ext2, etc.)

/proc/config.gz shows:

# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
# CONFIG_NTFS_FS is not set

Commands like ls /lib/modules/$(uname -r)/kernel/fs show nothing, as the .../fs folder doesn't exist.

So, looking at this, it is safe to assume that FAT and VFAT are supported, but what about FAT32 or exFAT? It's not explicitly specified. How can I know?
The FAT drivers include support for FAT32; it’s treated as a variant along with FAT12 and FAT16. If you see vfat in /proc/filesystems, then FAT32 is supported. exFAT is supported, in recent kernels, by a specific exFAT driver, with its own configuration option (EXFAT_FS). It’s listed separately in /proc/filesystems. exFAT support is also available as a FUSE exFAT driver.
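A quick runtime check is to grep /proc/filesystems, which lists only what the running kernel supports right now (the exact output varies with the kernel configuration):

```shell
#!/bin/sh
# filesystems built into the kernel or currently loaded as modules
grep -i fat /proc/filesystems || true
# e.g. "vfat" (covers FAT12/FAT16/FAT32), perhaps "msdos", and "exfat" separately
```

Note that a driver shipped as a module may not appear until it is loaded, so absence from this list does not always mean absence from the kernel.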
Understanding Linux FAT fs (FAT, VFAT, FAT32, exFAT) support
1,406,268,808,000
The Linux Filesystem Hierarchy Standard says that /var/lib "holds state information pertaining to an application or the system." FreeBSD doesn't mention /var/lib in hier(7). The closest thing I can find is /var/db ("miscellaneous automatically generated system-specific database files"), which seems like a more descriptive name. Where did /var/lib come from, and how did it get its name? I don't see the connection between /var/lib and libraries. Or does lib stand for something else in this case? Is it Linux-specific, or is this a System V vs BSD difference?
The LHFS also says about /var as a whole:

/var is specified here in order to make it possible to mount /usr read-only. Everything that once went into /usr that is written to during system operation (as opposed to installation and software maintenance) must be in /var.

And this is the link to the past for /var/lib: "once" there were files in /usr/lib that were written to - and those should now be put into the /var structure - into /var/lib (probably so they would be easier to find if you were once used to looking for them in /usr/lib?).

BTW, /var/db from BSD is also mentioned in the LHFS; those files are put into /var/lib/misc in the LHFS.
How did /var/lib get its name?
1,406,268,808,000
I'm trying to write a tun/tap program in Rust. Since I don't want it to run as root, I've added CAP_NET_ADMIN to the binary's capabilities:

$ sudo setcap cap_net_admin=eip target/release/tunnel
$ getcap target/release/tunnel
target/release/tunnel = cap_net_admin+eip

However, this is not working. Everything I've read says that this is the only capability required to create tuns, but the program gets an EPERM on the ioctl. In strace, I see this error:

openat(AT_FDCWD, "/dev/net/tun", O_RDWR|O_CLOEXEC) = 3
fcntl(3, F_GETFD) = 0x1 (flags FD_CLOEXEC)
ioctl(3, TUNSETIFF, 0x7ffcdac7c7c0) = -1 EPERM (Operation not permitted)

I've verified that the binary runs successfully with full root permissions, but I don't want this to require sudo to run. Why is CAP_NET_ADMIN not sufficient here?

For reference, I'm on Linux version 4.15.0-45. There are only a few ways I see that this ioctl can return EPERM in the kernel (https://elixir.bootlin.com/linux/v4.15/source/drivers/net/tun.c#L2194), and at least one of them seems to be satisfied. I'm not sure how to probe the others:

if (!capable(CAP_NET_ADMIN))
        return -EPERM;
...
if (tun_not_capable(tun))
        return -EPERM;
...
if (!ns_capable(net->user_ns, CAP_NET_ADMIN))
        return -EPERM;
I experienced the same issue when writing a Rust program that spawns a tunctl process for creating and managing TUN/TAP interfaces. For instance: let tunctl_status = Command::new("tunctl") .args(&["-u", "user", "-t", "tap0"]) .stdout(Stdio::null()) .status()?; failed with: $ ./target/debug/nio TUNSETIFF: Operation not permitted tunctl failed to create tap network device. even though the NET_ADMIN file capability was set: $ sudo setcap cap_net_admin=+ep ./target/debug/nio $ getcap ./target/debug/nio ./target/debug/nio cap_net_admin=ep The manual states: Because inheritable capabilities are not generally preserved across execve(2) when running as a non-root user, applications that wish to run helper programs with elevated capabilities should consider using ambient capabilities, described below. To cover the case of execve() system calls, I used ambient capabilities. Ambient (since Linux 4.3) This is a set of capabilities that are preserved across an execve(2) of a program that is not privileged. The ambient capability set obeys the invariant that no capability can ever be ambient if it is not both permitted and inheritable. Example solution: For convenience, I use the caps-rs library. // Check if `NET_ADMIN` is in permitted set. let perm_net_admin = caps::has_cap(None, CapSet::Permitted, Capability::CAP_NET_ADMIN); match perm_net_admin { Ok(is_in_perm) => { if !is_in_perm { eprintln!("Error: The capability 'NET_ADMIN' is not in the permitted set!"); std::process::exit(1) } } Err(e) => { eprintln!("Error: {:?}", e); std::process::exit(1) } } // Note: The ambient capability set obeys the invariant that no capability can ever be ambient if it is not both permitted and inheritable. 
caps::raise( None, caps::CapSet::Inheritable, caps::Capability::CAP_NET_ADMIN, ) .unwrap_or_else(fail_due_to_caps_err); caps::raise(None, caps::CapSet::Ambient, caps::Capability::CAP_NET_ADMIN) .unwrap_or_else(fail_due_to_caps_err); Finally, setting the NET_ADMIN file capability suffices: $ sudo setcap cap_net_admin=+ep ./target/debug/nio
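When chasing this kind of EPERM, it helps to check which capabilities the process actually holds at runtime. A minimal sketch in Python that parses the effective-capability bitmask from /proc/self/status; the bit number 12 for CAP_NET_ADMIN comes from linux/capability.h (Linux-only, an assumption that /proc is mounted):

```python
# Read the effective capability mask of the current process from
# /proc/self/status and test whether CAP_NET_ADMIN (bit 12) is set.
CAP_NET_ADMIN = 12  # capability number from linux/capability.h

def effective_caps(status_path="/proc/self/status"):
    with open(status_path) as f:
        for line in f:
            if line.startswith("CapEff:"):
                return int(line.split()[1], 16)  # hex bitmask of effective caps
    raise RuntimeError("CapEff line not found")

caps = effective_caps()
has_net_admin = bool(caps & (1 << CAP_NET_ADMIN))
print(f"CapEff=0x{caps:016x} CAP_NET_ADMIN={'yes' if has_net_admin else 'no'}")
```

Run this from inside the failing process (or a child it spawns) to see whether NET_ADMIN survived the execve.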
Why is CAP_NET_ADMIN insufficient permissions for ioctl(TUNSETIFF)?
As far as I know, in the Linux kernel the structure task_struct represents threads, i.e. lightweight processes, not processes. Processes are not represented by any structure of their own, but by groups of threads sharing the same thread group id. So is the following from Operating System Concepts correct? Linux also provides the ability to create threads using the clone() system call. However, Linux does not distinguish between processes and threads. In fact, Linux uses the term task —rather than process or thread— when referring to a flow of control within a program. What does that mean? Thanks. Related How does Linux tell threads apart from child processes?
Linux also provides the ability to create threads using the clone() system call. However, Linux does not distinguish between processes and threads. In fact, Linux uses the term task —rather than process or thread— when referring to a flow of control within a program. We need to distinguish between the actual implementation and the surface you see. From the user's (system software developer's) point of view there is a big difference: threads share a lot of common resources (e.g. memory mappings, apart from the stack of course, and file descriptors). Internally (warning: imprecise handwaving argument), the Linux kernel1) uses what it has at hand, i.e. the same structure for processes and for threads; for the threads of a single process it doesn't duplicate some things but rather references a single instance thereof (such as the memory map description). Thus on the level of directly representing a thread or a process there is not much difference in the basic structure; the devil lies in how the information is handled. You may also be interested in reading Are threads implemented as processes on Linux? 1) Remember that "Linux" these days mostly stands for the whole OS, while in fact it is only the kernel itself.
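The thread-group picture is observable from user space: every thread of a process reports the same PID (the thread group id) but a distinct kernel task id. A quick sketch in Python (assuming Linux and Python 3.8+, where threading.get_native_id() returns the kernel's TID):

```python
import os
import threading

pid = os.getpid()                      # thread group id (what ps calls the PID)
main_tid = threading.get_native_id()   # kernel task id of the main thread

seen = []
def worker():
    # the second thread is a separate kernel task in the same thread group
    seen.append((os.getpid(), threading.get_native_id()))

t = threading.Thread(target=worker)
t.start()
t.join()

worker_pid, worker_tid = seen[0]
print(f"main: pid={pid} tid={main_tid}; worker: pid={worker_pid} tid={worker_tid}")
```

On Linux the main thread's TID equals the PID, while each additional thread gets its own TID, which is exactly the "tasks grouped by thread group id" model described above.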
Does Linux not distinguish between processes and threads?
How to find files that were created or modified based on a particular timestamp. Let's say the timestamp format is date +%d-%m-%y_%H.%M. Could you suggest a command which fetches files based on a particular timestamp?
You could use the following command: find /path/to/dir -newermt "yyyy-mm-dd HH:mm:ss" -not -newermt "yyyy-mm-dd HH:mm:ss+1" This command will list files in the folder /path/to/dir modified between yyyy-mm-dd HH:mm:ss and yyyy-mm-dd HH:mm:ss + 1 second. This should do the trick, and you can adapt the command to find files modified in a certain minute, hour, day or month; it is very flexible. If you want to find files by access time, you can tune it like this: find /path/to/dir -newerat "yyyy-mm-dd HH:mm:ss" -not -newerat "yyyy-mm-dd HH:mm:ss+1" And if you want the inode change time (ctime; note that most Unix filesystems do not record a true creation time): find /path/to/dir -newerct "yyyy-mm-dd HH:mm:ss" -not -newerct "yyyy-mm-dd HH:mm:ss+1" These commands search between the two dates you mention, the first date being inclusive and the second exclusive; they find files modified at or after date 1 and before date 2. If you want more information, this blog article is nice: Find Files Modified On Specific Date
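The same window-based selection can be done programmatically. A sketch in Python using st_mtime, with the start inclusive and the end exclusive just like the -newermt / -not -newermt pair above (the temporary files and timestamps are made up for the demonstration):

```python
import os
import tempfile

def files_modified_between(directory, start, end):
    """Return names of regular files with start <= mtime < end."""
    matches = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and start <= os.stat(path).st_mtime < end:
            matches.append(name)
    return matches

# demo: create two files with known modification times
d = tempfile.mkdtemp()
for name, ts in [("old.txt", 1000000000), ("new.txt", 1500000000)]:
    p = os.path.join(d, name)
    open(p, "w").close()
    os.utime(p, (ts, ts))  # set atime and mtime explicitly

hits = files_modified_between(d, 1400000000, 1600000000)
print(hits)
```

Only new.txt falls inside the window, mirroring what the find pair would report.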
How to find files based on timestamp
Is there a way to set the pipe capacity of pipes defined in a Bash (or other shell) script? Take e.g. cmd1 | cmd2 In recent Linuxes the pipe capacity is set to 64KB by default. I know I can control the amount of data "buffered" between the two processes in two ways: Using buffer(1): e.g. cmd1 | buffer | cmd2 Using fcntl(2) with the F_SETPIPE_SZ flag from inside cmd1 or cmd2 Each solution has downsides: buffer can only be used to increase the buffer; also writes over the default pipe capacity will still require waking up the downstream command. fcntl, as far as I know, can only be called from inside cmd1 or cmd2. My question is: is there a way, when the shell creates the pipe, to specify in the shell how much capacity the pipe should have?
Based on DepressedDaniel's and Stéphane Chazelas's suggestions I settled on the closest thing to a one-liner I could find: function hugepipe { perl -MFcntl -e 'fcntl(STDOUT, 1031, 1048576) or die $!; exec { $ARGV[0] } @ARGV or die $!' "$@" } This makes it possible to do: hugepipe <command> | <command> and the pipe between the two commands will have the capacity specified via the fcntl in the perl script (1031 is F_SETPIPE_SZ on Linux).
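The same fcntl can be issued from Python's standard library. A sketch (1031 and 1032 are the Linux-specific F_SETPIPE_SZ / F_GETPIPE_SZ request numbers from <fcntl.h>; the kernel rounds the request up to a power of two and returns the actual capacity):

```python
import fcntl
import os

F_SETPIPE_SZ = 1031  # Linux-specific fcntl commands
F_GETPIPE_SZ = 1032

r, w = os.pipe()
default_size = fcntl.fcntl(w, F_GETPIPE_SZ)       # typically 65536
new_size = fcntl.fcntl(w, F_SETPIPE_SZ, 1 << 20)  # request 1 MiB
print(f"default={default_size} new={new_size}")
```

Unprivileged processes can grow a pipe up to /proc/sys/fs/pipe-max-size (1 MiB by default); larger requests need CAP_SYS_RESOURCE.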
Set pipe capacity in Linux
GCC documentation says that the -g option produces debugging information in the operating system's native format (stabs, COFF, XCOFF, or DWARF 2). So, what is Linux native debugging symbols format? What is it called? Update: I've just found a 15-year old gcc mailing list discussion where it was said that the native format at that point was stabs and then they were considering to switch to DWARF2. But it was 15 years ago... Any updates? =)
On Linux the default is now Dwarf 2 and/or 4. To see this, run readelf --debug-dump=info on a binary containing debug symbols (or on debug symbols that were stripped into a separate file); for example, on Fedora, with glibc-debuginfo installed, running readelf --debug-dump=info /usr/lib/debug/bin/gencat.debug will give you something like

 <1><ea>: Abbrev Number: 0
  Compilation Unit @ offset 0xeb:
   Length:        0x5c (32-bit)
   Version:       2
   Abbrev Offset: 0x52
   Pointer Size:  8
 <0><f6>: Abbrev Number: 1 (DW_TAG_compile_unit)
    <f7>   DW_AT_stmt_list : 0x83
    <fb>   DW_AT_ranges    : 0x0
    <ff>   DW_AT_name      : ../sysdeps/x86_64/crti.S
    <118>  DW_AT_comp_dir  : /usr/src/debug////////glibc-2.21/csu
    <13d>  DW_AT_producer  : GNU AS 2.25
    <149>  DW_AT_language  : 32769 (MIPS assembler)

This is a set of Dwarf 2 information (see the Version: header for version information; the same binary includes Dwarf 2 and Dwarf 4 sections).
What is Linux native debugging symbols format?
I'm looking for a reliable way to detect renaming of files and get both old and new file names. This is what I have so far: COUNTER=0; inotifywait -m --format '%f' -e moved_from,moved_to ./ | while read FILE do if [ $COUNTER -eq 0 ]; then FROM=$FILE; COUNTER=1; else TO=$FILE; COUNTER=0; echo "sed -i 's/\/$FROM)/\/$TO)/g' /home/a/b/c/post/*.md" sed -i 's/\/'$FROM')/\/'$TO')/g' /home/a/b/c/post/*.md fi done It works, but it assumes you will never move files into or out of the watched folder. It also assumes that events come in pairs, first moved_from, then moved_to. I don't know if this is always true (works so far). I read inotify uses a cookie to link events. Is the cookie accessible somehow? Lacking the cookie, I thought about using timestamps to link events together. Any tips on getting FROM and TO in a more reliable way? Full script gist.
I think your approach is correct, and tracking the cookie is a robust way of doing this. However, the only place in the source of inotify-tools (3.14) that cookie is referenced is in the header defining the struct to match the kernel API. If you like living on the edge, this patch (issue #72) applies cleanly to 3.14 and adds a %c format specifier for the event cookie in hex: --- libinotifytools/src/inotifytools.c.orig 2014-10-23 18:05:24.000000000 +0100 +++ libinotifytools/src/inotifytools.c 2014-10-23 18:15:47.000000000 +0100 @@ -1881,6 +1881,12 @@ continue; } + if ( ch1 == 'c' ) { + ind += snprintf( &out[ind], size-ind, "%x", event->cookie); + ++i; + continue; + } + if ( ch1 == 'e' ) { eventstr = inotifytools_event_to_str( event->mask ); strncpy( &out[ind], eventstr, size - ind ); This change modifies libinotifytools.so, not the inotifywait binary. To test before installation: LD_PRELOAD=./libinotifytools/src/.libs/libinotifytools.so.0.4.1 \ inotifywait --format="%c %e %f" -m -e move /tmp/test Setting up watches. Watches established. 40ff8 MOVED_FROM b 40ff8 MOVED_TO a Assuming that MOVED_FROM always occurs before MOVED_TO (it does, see fsnotify_move(), and it's an ordered queue, though independent moves might get interleaved) in your script you cache the details when you see a MOVED_FROM line (perhaps in an associative array indexed by ID), and run your processing when you see a MOVED_TO with the matching half of the information. declare -A cache inotifywait --format="%c %e %f" -m -e move /tmp/test | while read id event file; do if [ "$event" = "MOVED_FROM" ]; then cache[$id]=$file fi if [ "$event" = "MOVED_TO" ]; then if [ "${cache[$id]}" ]; then echo "processing ..." unset cache[$id] else echo "mismatch for $id" fi fi done (With three threads running to shuffle a pair of files each 10,000 times, I never saw a single out of order event, or event interleaving. It may depend on filesystem and other conditions of course.)
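The cookie pairing can also be demonstrated without patching anything by reading raw inotify events. A sketch in Python via ctypes (the event layout is struct inotify_event from sys/inotify.h; assuming glibc's libc.so.6 and a Linux kernel):

```python
import ctypes
import os
import struct
import tempfile

libc = ctypes.CDLL("libc.so.6", use_errno=True)
IN_MOVED_FROM, IN_MOVED_TO = 0x40, 0x80  # from sys/inotify.h

fd = libc.inotify_init()
watch_dir = tempfile.mkdtemp()
libc.inotify_add_watch(fd, watch_dir.encode(), IN_MOVED_FROM | IN_MOVED_TO)

# renaming inside the watched directory queues a MOVED_FROM/MOVED_TO pair
open(os.path.join(watch_dir, "a"), "w").close()
os.rename(os.path.join(watch_dir, "a"), os.path.join(watch_dir, "b"))

buf = os.read(fd, 4096)
events, offset = [], 0
while offset < len(buf):
    # struct inotify_event: int wd; uint32_t mask, cookie, len; char name[len]
    wd, mask, cookie, name_len = struct.unpack_from("iIII", buf, offset)
    name = buf[offset + 16 : offset + 16 + name_len].rstrip(b"\0").decode()
    events.append((mask, cookie, name))
    offset += 16 + name_len

print(events)
```

Both events carry the same non-zero cookie, which is exactly the value the %c patch above exposes through inotifywait.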
inotifywait - get old and new file name when renaming
I'm just getting into Upstart, so I wrote a very basic job, just to print to a log file, called vm-service.conf, which I put in /etc/init: description "Virtual Images" author "Me" start on runlevel [2345] stop on runlevel [016] respawn script echo "DEBUG: `set`" >> /tmp/vm-service.log end script pre-stop script echo "DEBUG: `set`" >> /tmp/vm-service.log end script If I run sudo start vm-service, it outputs: vm-service start/running, process 29034 But when I run sudo stop vm-service, it outputs: stop: Unknown instance I've tried running sudo initctl reload-configuration, but I still get the error on stop. I've looked at the cookbook but I'm probably missing something obvious.
Upstart will consider the job stopped if the main process (what is run when the script or exec stanza is specified) exits. Upstart will then run the post-start process. So what is happening is: the first script runs and exits, Upstart considers the job stopped, and then the second script runs and exits. If you run the stop command on an already-stopped job, it prints the message you saw. To handle this, use pre-start and post-start stanzas: pre-start exec foo --bar post-start exec baz --foo If you do this, Upstart will see the job as started once the pre-start stanza finishes, and not as stopped.
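Applied to the job from the question, a sketch of a revised vm-service.conf under this model might look like the following (the sleep main process is only a placeholder assumption here, kept long-running so the job stays in the running state and stop has an instance to act on):

```
description "Virtual Images"
author "Me"

start on runlevel [2345]
stop on runlevel [016]

pre-start script
    echo "DEBUG start: `set`" >> /tmp/vm-service.log
end script

# a long-running main process keeps the job in the "running" state
exec sleep infinity

pre-stop script
    echo "DEBUG stop: `set`" >> /tmp/vm-service.log
end script
```

With this layout, sudo stop vm-service has a running instance to stop, so the pre-stop stanza actually fires.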
Using upstart with stop unknown instance
If I have a tmpfs set to 50%, and later on I add or remove RAM, does tmpfs automatically adjust its partition size? Also what if I have multiple tmpfs each set at 50%. Do multiple tmpfs compete against each other for the same 50%? How is this managed by the OS?
If you mount a tmpfs instance with a percentage, it will take that percentage of the system's physical RAM. For instance, if you have 2gb of physical RAM and you mount a tmpfs with 50%, your tmpfs will have a size of 1gb. In your scenario, you add physical RAM to your system, let's say another 2gb, so that your system has 4gb of physical RAM. When mounting the tmpfs it will now have a size of 2gb. Mounting multiple instances of tmpfs, each with 50% set, will also work. If both tmpfs instances were filled completely, the system would swap out the lesser-used pages. If the swap space is full too, you will get No space left on device errors. Edit: tmpfs only uses the amount of memory that is actually taken, not the full 50%. So, if only 10mb of that 1gb is taken, your tmpfs instance only occupies those 10mb. The space is not reserved; it is allocated dynamically. With multiple instances of 50%, the first one that needs memory gets memory. The system swaps out the lesser-used pages whether 50% is occupied or not. A tmpfs instance is not aware of whether it uses physical RAM or swap space. You can mount a tmpfs of 100gb if you want and it will work. I assume that you shut the system down before adding RAM, so the tmpfs is remounted at startup anyway. If you add RAM while the system runs, you will fry the RAM, the motherboard and most likely your hand. I can't really recommend that :-) Sources: Kernel Documentation
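Since size=50% is computed from the machine's total physical RAM, you can work out what a 50% tmpfs would get on the current system. A small sketch in Python reading MemTotal from /proc/meminfo (Linux-only):

```python
# size=50% means half of MemTotal; compute what that limit would be here
def memtotal_kib(path="/proc/meminfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1])  # value is reported in KiB
    raise RuntimeError("MemTotal not found")

total = memtotal_kib()
half = total // 2
print(f"MemTotal={total} KiB; a size=50% tmpfs could hold up to {half} KiB")
```

Because the limit is recomputed at mount time, remounting after a RAM change automatically picks up the new figure, as described above.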
Does tmpfs automatically resize when the amount RAM changes, and does it compete when there's multiple tmpfs?
In my system I have eth0 (which may or may not be connected) and a modem on ppp0 (which likewise may be up or down). In the case where both interfaces are up and ppp0 is the default route, I'd like to find a way to determine the actual gateway IP address of eth0. I tried "netstat -rn" but in this configuration the output is: Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface xx.xx.xxx.xxx 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 192.168.98.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 127.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 lo 0.0.0.0 0.0.0.0 0.0.0.0 U 0 0 0 ppp0 So how do I determine eth0's gateway address? In the above case the actual gateway address is 192.168.98.1.
Assume eth0 is a DHCP client interface. One option is to check the DHCP client lease files (dhclient.leases). The place and name depend on the system; on some Fedora systems, the files under /var/lib/dhclient/ are lease files, where the interesting string looks like this: option routers 192.168.1.1; Another option, which worked for me on a Funtoo box: dhcpcd -U eth0 prints a nice table, ready to source in scripts broadcast_address=192.168.1.255 dhcp_lease_time=86400 dhcp_message_type=5 dhcp_server_identifier=192.168.1.1 domain_name_servers='192.168.1.1 192.168.1.101' ip_address=192.168.1.101 network_number=192.168.1.0 routers=192.168.1.1 subnet_cidr=24 subnet_mask=255.255.255.0 There are other options like dhcping and dhclient -n, according to Google and the manpages; they fail on my boxes but may work for you.
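Pulling the router address out of a lease file is easy to script. A sketch in Python (the lease path varies by distro, e.g. /var/lib/dhclient/dhclient-eth0.leases, and the sample lease text below is made up to match the "option routers" format shown above):

```python
import re

def routers_from_lease(text):
    """Return the router IP from dhclient lease text, or None if absent."""
    # take the last occurrence, since the newest lease is appended at the end
    matches = re.findall(r"option routers ([0-9.]+);", text)
    return matches[-1] if matches else None

sample = """lease {
  interface "eth0";
  fixed-address 192.168.98.42;
  option routers 192.168.98.1;
}"""
gw = routers_from_lease(sample)
print(gw)
```

In the questioner's setup this would recover 192.168.98.1 even while ppp0 holds the default route.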
How to determine eth0 gateway address when it is not the default gateway?