1,554,251,819,000
I'm trying to determine which process is using a large number of Huge Pages, but I can't find a simple Linux command (like top) to view Huge Page usage. The best I could find was

$ cat /sys/devices/system/node/node*/meminfo | fgrep Huge
Node 0 HugePages_Total: 512
Node 0 HugePages_Free: 159
Node 0 HugePages_Surp: 0
Node 1 HugePages_Total: 512
Node 1 HugePages_Free: 0
Node 1 HugePages_Surp: 0

which tells me, at the granularity of nodes, where the Huge Pages are in use, but I would like to see Huge Page usage per process. I wouldn't mind iterating over all processes and cat-ing some /sys special device to get this information. A similar question here got no responses: https://stackoverflow.com/q/25731343/364818 I am not running Oracle, btw.
I found a discussion on ServerFault that discusses this. Basically,

$ sudo grep huge /proc/*/numa_maps
/proc/4131/numa_maps:80000000 default file=/anon_hugepage\040(deleted) huge anon=4 dirty=4 N0=3 N1=1
/proc/4131/numa_maps:581a00000 default file=/anon_hugepage\040(deleted) huge anon=258 dirty=258 N0=150 N1=108
/proc/4131/numa_maps:7f6c40400000 default file=/anon_hugepage\040(deleted) huge
/proc/4131/numa_maps:7f6ce5000000 default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N0=1
/proc/4153/numa_maps:80000000 default file=/anon_hugepage\040(deleted) huge anon=7 dirty=7 N0=6 N1=1
/proc/4153/numa_maps:581a00000 default file=/anon_hugepage\040(deleted) huge anon=265 dirty=265 N0=162 N1=103
/proc/4153/numa_maps:7f3dc8400000 default file=/anon_hugepage\040(deleted) huge
/proc/4153/numa_maps:7f3e00600000 default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N0=1

and getting the process name

$ ps 4131
 PID TTY STAT TIME COMMAND
 4131 ? Sl 1:08 /var/lib/jenkins/java/bin/java -jar slave.jar
$ ps 4153
 PID TTY STAT TIME COMMAND
 4153 ? Sl 1:09 /var/lib/jenkins/java/bin/java -jar slave.jar

will give you an idea of which processes are using huge memory.

$ grep HugePages /proc/meminfo
AnonHugePages: 1079296 kB
HugePages_Total: 4096
HugePages_Free: 3560
HugePages_Rsvd: 234
HugePages_Surp: 0
$ sudo ~/bin/counthugepages.pl 4153
273 huge pages
$ sudo ~/bin/counthugepages.pl 4131
263 huge pages

The sum of free pages (3560) plus the pages of the two processes (273+263) equals 4096. All accounted for! The perl script that sums the dirty= fields is here: https://serverfault.com/questions/527085/linux-non-transparent-per-process-hugepage-accounting/644471#644471
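The linked perl script totals the dirty= fields for a single PID. A rough shell equivalent (a sketch, not the script from the link; it assumes the huge mappings of interest carry dirty= counts, as in the output above) could be:

```shell
# For every readable numa_maps, sum the dirty= counts on "huge" lines.
# Each count is a number of huge pages (typically 2 MiB each on x86_64).
for f in /proc/[0-9]*/numa_maps; do
    pid=${f#/proc/}; pid=${pid%/numa_maps}
    n=$(awk '/huge/ { for (i = 1; i <= NF; i++)
                          if ($i ~ /^dirty=/) { sub(/dirty=/, "", $i); sum += $i } }
             END { print sum + 0 }' "$f" 2>/dev/null)
    if [ "${n:-0}" -gt 0 ] 2>/dev/null; then
        printf '%s\t%s huge pages\n' "$pid" "$n"
    fi
done
```

Run it with sudo to see other users' processes; without root, most numa_maps files are unreadable and are silently skipped.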
How to monitor use of Huge Pages per process
How does a Linux instance determine its IP address? That is, not 127.0.0.1. Is this stored in a file, or does ifconfig somehow calculate it at every invocation? I managed to solve it, just to show I made an effort, but it is not anything I'd put in a serious application:

sudo ifconfig | head -n 2 | tail -n 1 | tr -s " " | tr " " ":" | cut -d":" -f 4
The information can change at any time, so it needs to be retrieved from the kernel; it can't be stored in a file. There is no really nice way to obtain this information. Your parsing is as good as any, except that hard-coding the second line is wrong: there is no guarantee that the interfaces will be listed in any particular order. It is fairly common for a machine to have more than one interface: you may have multiple network cards, or virtual interfaces. Often, the IP address you're interested in is the one associated with the default route. With most configurations, you can obtain the right interface with the route command, then extract the IP address of that interface with ifconfig.

/sbin/ifconfig $(/sbin/route -n | awk '$1 == "0.0.0.0" {print $8}') |
awk 'match($0, /inet addr:[.0-9]+/) {print substr($0, RSTART+10, RLENGTH-10)}'

Note that there is no need to call sudo. ifconfig and route are often not in the default PATH for non-root users, but you can use them with no special privilege as long as you're only reading information and not changing the settings. On unix variants other than Linux, you may have to tweak the commands above. Most have commands called ifconfig and route, but the output format may be different. Under Linux, instead of ifconfig and route, you can use the ip command from the iproute2 tool suite. While the authors of iproute2 consider ifconfig and route to be deprecated, there is in fact little advantage to using ip: its output is not markedly easier to parse, and ifconfig and route are always available, whereas some stripped-down Linux installations omit ip.
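If ip is available, one approach (a sketch; the destination address is an arbitrary routable example) is to ask the routing engine which source address it would pick for an outside destination, then grab the token after src:

```shell
# Ask the kernel which source IP would be used to reach 1.0.0.1,
# then print the token following "src". No root needed.
ip -4 route get 1.0.0.1 2>/dev/null |
awk '{ for (i = 1; i <= NF; i++) if ($i == "src") { print $(i + 1); exit } }'
```

Scanning for the src keyword instead of using a fixed field number keeps the parse working even when extra fields (uid, cache, ...) appear in the output.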
IP of localhost
I understand that spinlocks waste CPU time by busy-waiting. Why, then, are spin locks a good choice in Linux kernel design, instead of something more common in userland code, such as a semaphore or mutex?
The choice between a spinlock and another construct that causes the caller to block and relinquish control of a CPU is, to a large extent, governed by the time it takes to perform a context switch (saving registers/state of the locking thread and restoring registers/state of another thread). That time, and also the cache cost of doing this, can be significant. If a spinlock is being used to protect access to hardware registers or similar, where any other thread holding the lock is only going to take a matter of microseconds or less before it releases it, then it is a much better use of CPU time to spin waiting than to context-switch away and back again.
Why are spin locks good choices in Linux Kernel Design instead of something more common in userland code, such as semaphore or mutex?
I'm trying to override the malloc/free functions for a program that requires setuid/setgid permissions. I use the LD_PRELOAD variable for this purpose. According to the ld documentation, I need to put my library into one of the standard search directories (I chose /usr/lib) and give it setuid/setgid permissions. I've done that. However, I still can't link to my .so file, getting the error:

object 'liballoc.so' from LD_PRELOAD cannot be preloaded: ignored

What can be the possible reasons for that? I tested this .so file on programs that don't have setuid/setgid permissions, and it all works fine. OS: RedHat 7.0
According to the ld documentation, I need to put my library into one of the standard search directories (I chose /usr/lib)

That was the mistake. You should've put it in /usr/lib64 (assuming that your machine is a x86_64). I've just tried the recipe from the manpage on a CentOS 7 VM (which should be ~identical to RHEL 7) and it works. As root:

cynt# cc -shared -fPIC -xc - <<<'char *setlocale(int c, const char *l){ errx(1, "not today"); }' -o /usr/lib64/liblo.so
cynt# chmod 4755 /usr/lib64/liblo.so

As a regular user with a setuid program:

cynt$ LD_PRELOAD=liblo.so su -
su: not today

Whether it's a good idea to use that feature is a totally different matter (IMHO, it isn't).
LD_PRELOAD for setuid binary
I'm looking for an interface to monitor CPU use and temperature. I have already installed lm-sensors for the temperature and htop for the CPU, but I want something that always shows them in real time in the bar at the top of the screen (the one that shows the time, battery %, etc.; sorry, I don't know what it is called), so that I don't always have to run the mentioned commands from the terminal. I have Ubuntu 16.04.
The software is called psensor. Linux: https://wpitchoune.net/psensor/ Specific to Ubuntu: https://wpitchoune.net/psensor/ubuntu.html There is an option to display the info on the toolbar, as well as in a stand-alone window.
Monitoring CPU and temperature
I need to find the directories that were updated yesterday. I tried using the find command, but it lists all the files that were updated inside the directories; I need only the directory names.
You can add -type d to the find command so that only directories are matched:

find /path/to/target -type d -mtime 1

Note that -mtime 1 matches directories modified between 24 and 48 hours ago; use -mtime -1 for "modified within the last 24 hours".
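If "yesterday" means the previous calendar day rather than a 24-48 hour window, GNU find's -newermt accepts date strings. A self-contained sketch (assuming GNU find and GNU touch; the scratch directory and names are examples):

```shell
# Demo in a scratch directory: one subdirectory stamped yesterday, one older.
d=$(mktemp -d)
mkdir "$d/hit" "$d/miss"
touch -d 'yesterday 12:00' "$d/hit"
touch -d '3 days ago'      "$d/miss"

# Directories modified during the previous calendar day (prints only "hit"):
find "$d" -mindepth 1 -type d -newermt 'yesterday 00:00' ! -newermt 'today 00:00'
```

The two -newermt tests bracket the calendar day: newer than yesterday midnight, but not newer than today's midnight.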
How to find directories that were updated in the last day in Linux?
sudo -EH -u someuser nohup sh check.sh &

The above command runs the process as root instead of as the user specified by the -u flag:

root 4056 2388 0 13:00 pts/4 00:00:00 sudo -EH -u someuser nohup sh /tmp/check.sh &

Below is the sudoers entry:

Cmnd_Alias SUDO_CMNDS = /bin/echo,/bin/ls,/bin/cat,/bin/vim,/bin/mv,/bin/cp,/bin/rm,/bin/mkdir,/bin/diff,/bin/id,/bin/hostname,/bin/grep,/bin/nohup,/bin/locate,/bin/find,/bin/sed,/bin/awk,/usr/bin/whoami
%sudomygroup ALL=(someuser) NOPASSWD:SETENV: SUDO_CMNDS

Extra output, as suggested by @michael homer:

$ ps -ef|grep -i check
root 14260 14090 0 13:20 pts/4 00:00:00 sudo -HE -u someuser nohup sh /tmp/check.sh
someuser 14261 14260 0 13:20 pts/4 00:00:00 sh /tmp/check.sh
This line:

root 4056 2388 0 13:00 pts/4 00:00:00 sudo -EH -u someuser nohup sh /tmp/check.sh

is reporting that sudo ... was run as the root user. That happens because the sudo binary is setuid, and it's expected (regardless of which user asked sudo to run). What you're trying to find out is what user the command that sudo then ran is executing as.

Using ps -ef|grep -i nohup gave you only that single line of output, because when nohup runs it immediately replaces itself with the command it was asked to run, and then there's no nohup left in the ps output to grep for afterwards. If you instead search for check.sh, you'll get (at least) two lines of output: the one you already saw, and another one that's just sh /tmp/check.sh:

root 14260 14090 0 13:20 pts/4 00:00:00 sudo -HE -u someuser nohup sh /tmp/check.sh
someuser 14261 14260 0 13:20 pts/4 00:00:00 sh /tmp/check.sh

That shows that the sh command is running as someuser, while sudo is just sitting there waiting for the inner command to finish, still running as root itself.
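A quick way to check what a process runs as, without grepping all of ps -ef, is to ask ps for just the fields you care about. A sketch (the background sleep job is a hypothetical stand-in for check.sh):

```shell
# Start a child, then print only its user, PID and command line.
sh -c 'sleep 2' &
child=$!
ps -o user=,pid=,args= -p "$child"
wait "$child"
```

The trailing = after each field name suppresses the header line, which makes the output easy to use in scripts.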
Sudo command gets executed as root instead of specified user
There is a command in Microsoft's cmd called color. I know that, in bash, there are special escape characters that allow you, during echos, to change the text colors. I also know that in Ubuntu you can edit the terminal's appearance by setting a "style" in the configuration menus. What I am asking is whether there exists, under Debian, Ubuntu, and CentOS, something very simple like

color 1b

so that the whole console changes its colors.
There are multiple ways you can do this. One way is by using tput:

tput setab 4

sets the background color to blue. To set the foreground color, use tput setaf. Another way is by using raw ANSI escapes; here is good documentation: https://misc.flogisoft.com/bash/tip_colors_and_formatting
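A small sketch of both approaches (the ANSI codes are standard; the tput variant needs a valid $TERM, so it is guarded here):

```shell
# Raw ANSI escapes: \033[44m = blue background, \033[37m = white text,
# \033[0m = reset. This is essentially what tput emits for you.
printf '\033[44;37m%s\033[0m\n' 'white text on a blue background'

# terminfo-aware version, skipped if $TERM is unset or unknown:
if [ -n "${TERM:-}" ] && tput setab 4 2>/dev/null; then
    tput setaf 7
    printf 'white text on a blue background'
    tput sgr0
    printf '\n'
fi
```

Unlike cmd's color, these affect text printed from now on, not the whole screen; clear after setting the colors if you want the full-console effect.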
Does bash have a color command, as seen in MS-Windows CMD? [duplicate]
I'm trying to make a script that will unzip a password-protected file, the password being the name of the file that I will get when unzipping. E.g. file1.zip contains file2.zip and its password is file2; file2.zip contains file3.zip and its password is file3. How do I unzip file1.zip and read the name of file2.zip so it can be entered in the script? Here's a transcript of the command line:

root@kaliVM:~/Desktop# unzip 49805.zip
Archive: 49805.zip
[49805.zip] 13811.zip password:

I just need Bash to read that output in order to know the new password (in this case the password is 13811). Here's what I've done so far:

#!/bin/bash
echo First zip name:
read firstfile
pw=$(zipinfo -1 $firstfile | cut -d. -f1)
nextfile=$(zipinfo -1 $firstfile)
unzip -P $pw $firstfile
rm $firstfile
nextfile=$firstfile

Now how can I make it do the loop?
If you don't have and cannot install zipinfo for any reason, you can imitate it by using unzip with the -Z option. To list the contents of the zip, use unzip -Z1:

pw="$(unzip -Z1 file1.zip | cut -f1 -d'.')"
unzip -P "$pw" file1.zip

Put it in a loop:

zipfile="file1.zip"
while unzip -Z1 "$zipfile" | head -n1 | grep "\.zip$"; do
    next_zipfile="$(unzip -Z1 "$zipfile" | head -n1)"
    unzip -P "${next_zipfile%.*}" "$zipfile"
    zipfile="$next_zipfile"
done

or a recursive function:

unzip_all() {
    zipfile="$1"
    next_zipfile="$(unzip -Z1 "$zipfile" | head -n1)"
    if echo "$next_zipfile" | grep "\.zip$"; then
        unzip -P "${next_zipfile%.*}" "$zipfile"
        unzip_all "$next_zipfile"
    fi
}
unzip_all "file1.zip"

(${next_zipfile%.*} strips only the last extension, which is what we want for the password.)

From the unzip manual:

-Z : zipinfo(1) mode. If the first option on the command line is -Z, the remaining options are taken to be zipinfo(1) options. See the appropriate manual page for a description of these options.
-1 : list filenames only, one per line. This option excludes all others; headers, trailers and zipfile comments are never printed. It is intended for use in Unix shell scripts.
Bash loop unzip passworded file script
From what I read some time back, iwconfig is deprecated, and the current method is

$ sudo ifconfig wlan0 up

and

$ sudo ifconfig wlan0 down

But I couldn't find anything on the CLI that tells the status of the Wi-Fi: which mode it is in, which AP it is attached to, how much data is being transferred, and so on.
The current (in 2017) methods are:

ip for all network interfaces, including setting them up and down:

ip link set wlan0 up
ip link set wlan0 down
ip help
ip link help
ip addr help

iw for wireless extensions (needs to be called as root):

iw dev
iw phy
iw wlan0 scan
iw wlan0 station dump
iw help

ifconfig and iwconfig are still supported with the appropriate packages, but some features are only available with ip and iw.
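For a quick status overview, ip also has a brief mode; a sketch (interface names vary, so substitute your wireless interface for wlan0 in the commented lines):

```shell
# One line per interface: name, state, and addresses.
ip -br link show
ip -br addr show

# Wireless association details (root usually required):
#   iw dev wlan0 link           # AP (BSSID/SSID), signal, tx bitrate
#   iw dev wlan0 station dump   # per-station byte/packet counters
```

The -br output is fixed-width and much easier to eyeball or parse than the full multi-line listing.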
How to find status of wlan0?
I'm really trying to understand why our guest VMs aren't using the kvm-clock driver "like they're supposed to". They're running RHEL 7.2, glibc 2.17, kernel 3.10.0. Programs such as date and perl -e 'print time' get the current time, but do so without making a system call. This is confirmed with strace and ltrace, and further confirmed by using gdb and tracing through the assembly, which bypassed the syscall and instead executed an instruction called rdtscp. Is this an attempt at optimization by the glibc authors? Is there any way to disable this and force glibc to make the system call (short of LD_PRELOAD hacks)?

UPDATE 2016-10-14: After reviewing the latest POSIX draft, part of the answer is clear: there is a way to request the clock from the CPU, but GNU glibc has wrongly forced this implementation on its users. The work-around is to invoke the system call directly. (Booooh)

If _POSIX_CPUTIME is defined, implementations shall support clock ID values obtained by invoking clock_getcpuclockid(), which represent the CPU-time clock of a given process. Implementations shall also support the special clockid_t value CLOCK_PROCESS_CPUTIME_ID, which represents the CPU-time clock of the calling process when invoking one of the clock_*() or timer_*() functions.

Given that the user can request a CPU clock explicitly, is there any real argument against the notion that if clock_id is set to CLOCK_REALTIME, the system call should be used?
I think the reason you don't see a syscall happening is that some Linux system calls (especially those related to time, like gettimeofday(2) and time(2)) have special implementations through the vDSO, which contains somewhat optimized implementations of some syscalls:

The "vDSO" (virtual dynamic shared object) is a small shared library that the kernel automatically maps into the address space of all user-space applications. There are some system calls the kernel provides that user-space code ends up using frequently, to the point that such calls can dominate overall performance. This is due both to the frequency of the call as well as the context-switch overhead that results from exiting user space and entering the kernel.

Now, the manual mentions that the required information is just placed in memory so that a process can access it directly (the current time isn't much of a secret, after all). I don't know the exact implementation, and could only guess about the role of the CPU's time stamp counter in it. So, it's not really glibc doing an optimization, but the kernel. It can be disabled by setting vdso=0 on the kernel command line, and it should be possible to compile it out. I can't find whether it's possible to disable it on the glibc side, however (at least without patching the library). There's a bunch of other information and sources on this question on SE.

You said in the question:

After reviewing the latest POSIX draft, part of the answer is clear: there is a way to request the clock from the CPU, but GNU glibc has wrongly forced this implementation on its users.

which I think is a rather bold statement. I don't see any evidence of "wrongly forcing" anything on users, at least not to their disadvantage. The vDSO implementation is used by almost every Linux process running on current systems, meaning that if it didn't work correctly, some very loud complaints would have been heard already. Also, you said yourself that the time received is correct.
The quote you give from the clock_gettime manual only seems to say that the call must support clock IDs returned by clock_getcpuclockid, not anything about the behaviour of CLOCK_REALTIME or gettimeofday.
Why don't Linux utils use a system call to get the current time?
I am trying to copy files from machineB and machineC into machineA, as I am running the shell script below on machineA. If a file is not on machineB then it should be on machineC for sure, so I first try to copy it from machineB, and if that fails I try the same file from machineC. I am copying the files in parallel using GNU Parallel, and it is working fine. Currently I copy two files in parallel.

At the moment I copy the PRIMARY_PARTITION files into the PRIMARY folder using GNU parallel, and once that is done I copy the SECONDARY_PARTITION files into the SECONDARY folder with the same GNU parallel setup, so it is sequential as of now w.r.t. the PRIMARY and SECONDARY folders. Below is my shell script, and everything works fine:

#!/bin/bash
export PRIMARY=/test01/primary
export SECONDARY=/test02/secondary
readonly FILERS_LOCATION=(machineB machineC)
export FILERS_LOCATION_1=${FILERS_LOCATION[0]}
export FILERS_LOCATION_2=${FILERS_LOCATION[1]}
PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
SECONDARY_PARTITION=(1643 1103 1372 1096 1369) # this will have more file numbers
export dir3=/testing/snapshot/20140103

# delete primary files first and then copy
find "$PRIMARY" -mindepth 1 -delete
do_CopyInPrimary() {
    el=$1
    scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMARY/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMARY/.
}
export -f do_CopyInPrimary
parallel -j 2 do_CopyInPrimary ::: "${PRIMARY_PARTITION[@]}"

# delete secondary files first and then copy
find "$SECONDARY" -mindepth 1 -delete
do_CopyInSecondary() {
    el=$1
    scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $SECONDARY/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $SECONDARY/.
}
export -f do_CopyInSecondary
parallel -j 2 do_CopyInSecondary ::: "${SECONDARY_PARTITION[@]}"

Problem statement: Is there any way I can launch two threads, one to copy files into the PRIMARY folder using the same setup as above (copying two files in parallel), and a second thread to copy files into the SECONDARY folder, also copying two files in parallel? Meaning they should copy files into PRIMARY and SECONDARY simultaneously, not start on SECONDARY only once PRIMARY is done. Currently, only once the PRIMARY folder is done do I try copying the files into the SECONDARY folder. In short, I just need to launch two threads. One thread will run this:

# delete primary files first and then copy
find "$PRIMARY" -mindepth 1 -delete
do_CopyInPrimary() {
    el=$1
    scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMARY/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMARY/.
}
export -f do_CopyInPrimary
parallel -j 2 do_CopyInPrimary ::: "${PRIMARY_PARTITION[@]}"

And the second thread will run this:

# delete secondary files first and then copy
find "$SECONDARY" -mindepth 1 -delete
do_CopyInSecondary() {
    el=$1
    scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $SECONDARY/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $SECONDARY/.
}
export -f do_CopyInSecondary
parallel -j 2 do_CopyInSecondary ::: "${SECONDARY_PARTITION[@]}"

And once all the files are copied successfully, it should echo a message that all the files are copied. In Java I know how to launch two threads, each performing a certain task, but I am not sure how this works in a bash shell script. My main task is to copy two files in parallel using GNU parallel in the PRIMARY folder and the SECONDARY folder at the same time. Is this possible in a bash shell script?
The obvious approach is:

parallel -j 2 do_CopyInPrimary ::: "${PRIMARY_PARTITION[@]}" &
parallel -j 2 do_CopyInSecondary ::: "${SECONDARY_PARTITION[@]}" &
wait

But this way the secondary does not wait for the primary to finish, and it does not check whether the primary was successful. Let us assume that ${PRIMARY_PARTITION[1]} corresponds to ${SECONDARY_PARTITION[1]} (so if you cannot read the file from ${PRIMARY_PARTITION[1]} you will read it from ${SECONDARY_PARTITION[1]}; that also means that PRIMARY_PARTITION and SECONDARY_PARTITION have the same number of elements). Then you can condition the running of ${SECONDARY_PARTITION[1]} on ${PRIMARY_PARTITION[1]}:

do_Copy() {
    PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
    SECONDARY_PARTITION=(1643 1103 1372 1096 1369) # this will have more file numbers
    pel=${PRIMARY_PARTITION[$1]}
    sel=${SECONDARY_PARTITION[$1]}
    do_CopyInPrimary $pel || do_CopyInSecondary $sel ||
        echo Could not copy either $pel or $sel
}
export -f do_Copy
# Number of elements in PRIMARY_PARTITION == SECONDARY_PARTITION
seq ${#PRIMARY_PARTITION[@]} | parallel -j 2 do_Copy

This gets the dependency right, but it will only copy 2 files at a time in total. With -j4 you risk running 4 primaries at the same time, so we need to guard against that, too:

do_Copy() {
    PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
    SECONDARY_PARTITION=(1643 1103 1372 1096 1369) # this will have more file numbers
    pel=${PRIMARY_PARTITION[$1]}
    sel=${SECONDARY_PARTITION[$1]}
    sem -j2 --fg --id primary do_CopyInPrimary $pel ||
        sem -j2 --fg --id secondary do_CopyInSecondary $sel ||
        echo Could not copy either $pel or $sel
}
export -f do_Copy
# Number of elements in PRIMARY_PARTITION == SECONDARY_PARTITION
seq ${#PRIMARY_PARTITION[@]} | parallel -j 4 do_Copy

sem will limit the number of primaries running at once to 2 and the number of secondaries to 2.
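For completeness, the plain-shell analogue of "two threads" is two background jobs plus wait; a sketch where the copy functions are hypothetical stand-ins for the two parallel invocations:

```shell
# Run both groups concurrently; collect each exit status with wait.
copy_primary()   { sleep 1; }   # stand-in for: parallel -j 2 do_CopyInPrimary ...
copy_secondary() { sleep 1; }   # stand-in for: parallel -j 2 do_CopyInSecondary ...

copy_primary   & p1=$!
copy_secondary & p2=$!
wait "$p1"; s1=$?
wait "$p2"; s2=$?
if [ "$s1" -eq 0 ] && [ "$s2" -eq 0 ]; then
    echo 'all the files are copied'
fi
```

wait PID returns the exit status of that particular job, which is what lets the parent report success only when both groups finished cleanly.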
How to launch two threads in bash shell script?
Possible Duplicate: Linux tools to treat files as sets and perform set operations on them

I have two data sets, A and B. The format for each data set is one number per line, for instance:

12345
23456
67891
2345900
12345

Some of the data in A is not included in data set B. How do I list all of the data in A that is not in B, and how do I list the data shared by A and B? How can I do that using Linux/UNIX commands?
Use the comm command. Note that comm requires its input files to be sorted. If your lists are in files listA and listB:

comm listA listB

By default, comm will return 3 columns: items only in listA, items only in listB, and items common to both lists. You can suppress individual columns with the -1, -2, and -3 options.
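A self-contained sketch with the numbers from the question (sorting first, since comm expects sorted input):

```shell
# comm requires sorted input, so sort both lists first (-u drops duplicates).
printf '12345\n23456\n67891\n2345900\n12345\n' | sort -u > listA
printf '23456\n2345900\n' | sort -u > listB

comm -23 listA listB   # lines only in listA
comm -12 listA listB   # lines common to both lists
```

-23 suppresses columns 2 and 3, leaving only the lines unique to A; -12 leaves only the common lines.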
list the difference and overlap between two plain data set [duplicate]
Windows has had a "Resilient File System" (ReFS) since Windows 8. Are there similarly resilient filesystems for Linux? What I expect from such a filesystem is that a bad block won't corrupt either files or the journal. I'm no FS geek, so please explain if such error-resilience is unfit for a desktop, is CPU- or memory-intensive, lowers the HDD's lifespan, is already present in some FS like ext4, etc. Is there something like this available for Linux?
If you're looking for advanced filesystems for general-purpose computers in the Linux world, there are two candidates: ZFS and BTRFS. ZFS is older and more mature, but it's originally from Solaris and the port to Linux isn't seamless. BTRFS is still under heavy development, and not all features are ready for prime time yet. Both filesystems offer per-file checksumming, so you will know if a file is corrupted; this is more of a security protection than a protection against failing hardware, because failing hardware tends to make a file unreadable, the hardware has its own checksums so reading wrong data is extremely unlikely (if a disk read returns wrong data, and you're sure it's not an application error, blame your RAM, not your disk). If you want resilience, by far the best thing to do is RAID-1 (i.e. mirroring) over two disks. When a disk starts failing, it's rare that only a few sectors are affected; usually, more sectors follow quickly, if the disk hasn't stopped working altogether. So replicating data over the same disk doesn't help very often. Replicating data over two disks doesn't need any filesystem support. The only reason you might want to replicate data on the same disk is if you have a laptop which can only accommodate one disk, but even then the benefits are very small. Remember that no matter how much replication you have, you still need to have offline backups, to protect against massive hardware failures (power surge, fire, …) and against software-level problems (e.g. accidental file deletion or overwrite).
Are there error resilient filesystems for Linux?
As far as I understand, the traditional place for home directories is beneath /home. Some Linux variants seem to keep them in /var/home, what's the reason for that?
My guess is that WebOS is designed to be installed on two different filesystems, a root filesystem that is read-only in normal operation and a filesystem mounted on /var that is read-write in normal operation. Since home directories need to be writable, they are placed somewhere under /var. This kind of setup is fairly common on unix systems that run off flash (such as PDAs¹ and embedded unices). While /home is mentioned by the Filesystem Hierarchy Standard on Linux and is generally common amongst unices, it is not universal (the FHS lists it as “optional” and specifies that “no program should rely on this location”). Sites with a large number of users sometimes use /home/GROUP/USER or /home/SERVER/USER or /home/SERVER/GROUP/USER. And I've seen directories rooted in other places: /homes, /export/home, /users, /net, ... In fact, a long long time ago, the standard location for home directories was /usr. ¹ For example Android (not a unix, but running on a Linux kernel) has a read-only root filesystem and a writable filesystem on /data.
Why would I keep home directories in /var/home?
How do I use GNU touch to update a file called -? How do I use GNU cat to display a file called -? I'm running:

% cat --version | head -n1
cat (GNU coreutils) 8.29
% touch --version | head -n1
touch (GNU coreutils) 8.29

Firstly, touch:

% touch -
% ls -l
total 0
% touch -- -
% ls -l -- -
ls: cannot access '-': No such file or directory

Ok, I'll give up on creating a file with touch. Let's create it with date instead:

% date > -
% ls -l -
-rw-r--r-- 1 ravi ravi 29 Sep 8 19:54 -
%

Now, let's try to cat it:

% cat -
% # I pressed ^D
% cat -- -
% # Same again - I pressed ^D

I know I can work around with:

% > -

and

% cat < -

But why don't these GNU utils support the convention that -- means that everything following is treated as a non-option? How do I use these tools in the general case, for example when I have a variable with the contents -?
Use an explicit path to the file:

touch ./-
cat ./-

GNU touch treats a file operand of - specially:

A FILE argument string of - is handled specially and causes touch to change the times of the file associated with standard output.

For cat, the POSIX standard specifies that a file operand - should be interpreted as a request to read from standard input. The double-dash convention is still in effect, but it signals the end of options, not the end of arguments. In neither of these cases would - be taken as an option (a lone - can not be an option) but as an operand ("file name argument").

Regarding your last question: to protect the contents of a variable against being interpreted as a set of options when using it as

utility "$variable"

use

utility -- "$variable"

Note that if the utility is cat, sed, awk, paste, sort and possibly a few others (or GNU touch), and $variable is -, this will still cause the utility to do its special processing since, as said above, - is not an option. Instead, make provisions so that filenames, if they may start with or be equal to -, are preceded by a path, for example ./ for files in the current working directory. A good habit is to use

for filename in ./*; do

rather than

for filename in *; do

for example.
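The ./ prefix can be verified in a scratch directory:

```shell
# Work in a throwaway directory so the odd filename can't clobber anything.
cd "$(mktemp -d)"
touch ./-        # creates a file named "-" instead of touching stdout
date > ./-       # writing works the same way
cat ./-          # reads the file, not standard input
```

Because ./- does not start with a dash and is not the literal string -, neither the option parser nor the special-operand handling ever sees it.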
How to `touch` and `cat` file named `-` [duplicate]
I am trying to run VMware on Kali Linux, but when I try to run it, it shows a message saying that before you can run VMware, several modules must be compiled and loaded into the running kernel. Here is the log:

2018-04-23T20:11:48.254+04:30| vthread-1| I125: Log for VMware Workstation pid=8508 version=14.1.0 build=build-7370693 option=Release
2018-04-23T20:11:48.254+04:30| vthread-1| I125: The process is 64-bit.
2018-04-23T20:11:48.254+04:30| vthread-1| I125: Host codepage=UTF-8 encoding=UTF-8
2018-04-23T20:11:48.254+04:30| vthread-1| I125: Host is Linux 4.15.0-2-amd64 Kali GNU/Linux Rolling
2018-04-23T20:11:48.254+04:30| vthread-1| I125: DictionaryLoad: Cannot open file "/usr/lib/vmware/settings": No such file or directory.
2018-04-23T20:11:48.254+04:30| vthread-1| I125: [msg.dictionary.load.openFailed] Cannot open file "/usr/lib/vmware/settings": No such file or directory.
2018-04-23T20:11:48.254+04:30| vthread-1| I125: PREF Optional preferences file not found at /usr/lib/vmware/settings. Using default values.
2018-04-23T20:11:48.254+04:30| vthread-1| I125: DictionaryLoad: Cannot open file "/home/linux/.vmware/config": No such file or directory.
2018-04-23T20:11:48.254+04:30| vthread-1| I125: [msg.dictionary.load.openFailed] Cannot open file "/home/linux/.vmware/config": No such file or directory.
2018-04-23T20:11:48.254+04:30| vthread-1| I125: PREF Optional preferences file not found at /home/linux/.vmware/config. Using default values.
2018-04-23T20:11:48.254+04:30| vthread-1| I125: DictionaryLoad: Cannot open file "/home/linux/.vmware/preferences": No such file or directory.
2018-04-23T20:11:48.254+04:30| vthread-1| I125: [msg.dictionary.load.openFailed] Cannot open file "/home/linux/.vmware/preferences": No such file or directory.
2018-04-23T20:11:48.254+04:30| vthread-1| I125: PREF Optional preferences file not found at /home/linux/.vmware/preferences. Using default values.
2018-04-23T20:11:48.326+04:30| vthread-1| W115: Logging to /tmp/vmware-root/vmware-8508.log
2018-04-23T20:11:48.340+04:30| vthread-1| I125: Obtaining info using the running kernel.
2018-04-23T20:11:48.340+04:30| vthread-1| I125: Created new pathsHash.
2018-04-23T20:11:48.340+04:30| vthread-1| I125: Setting header path for 4.15.0-2-amd64 to "/lib/modules/4.15.0-2-amd64/build/include".
2018-04-23T20:11:48.340+04:30| vthread-1| I125: Validating path "/lib/modules/4.15.0-2-amd64/build/include" for kernel release "4.15.0-2-amd64".
2018-04-23T20:11:48.340+04:30| vthread-1| I125: Failed to find /lib/modules/4.15.0-2-amd64/build/include/linux/version.h
2018-04-23T20:11:48.340+04:30| vthread-1| I125: /lib/modules/4.15.0-2-amd64/build/include/linux/version.h not found, looking for generated/uapi/linux/version.h instead.
2018-04-23T20:11:48.340+04:30| vthread-1| I125: using /usr/bin/gcc-7 for preprocess check
2018-04-23T20:11:48.348+04:30| vthread-1| I125: Preprocessed UTS_RELEASE, got value "4.15.0-2-amd64".
2018-04-23T20:11:48.348+04:30| vthread-1| I125: The header path "/lib/modules/4.15.0-2-amd64/build/include" for the kernel "4.15.0-2-amd64" is valid. Whoohoo!
2018-04-23T20:11:48.571+04:30| vthread-1| I125: found symbol version file /lib/modules/4.15.0-2-amd64/build/Module.symvers
2018-04-23T20:11:48.571+04:30| vthread-1| I125: Reading symbol versions from /lib/modules/4.15.0-2-amd64/build/Module.symvers.
2018-04-23T20:11:48.597+04:30| vthread-1| I125: Read 20056 symbol versions
2018-04-23T20:11:48.597+04:30| vthread-1| I125: Reading in info for the vmmon module.
2018-04-23T20:11:48.597+04:30| vthread-1| I125: Reading in info for the vmnet module.
2018-04-23T20:11:48.597+04:30| vthread-1| I125: Reading in info for the vmblock module.
2018-04-23T20:11:48.597+04:30| vthread-1| I125: Reading in info for the vmci module.
2018-04-23T20:11:48.597+04:30| vthread-1| I125: Reading in info for the vsock module.
2018-04-23T20:11:48.597+04:30| vthread-1| I125: Setting vsock to depend on vmci.
2018-04-23T20:11:48.597+04:30| vthread-1| I125: Invoking modinfo on "vmmon".
2018-04-23T20:11:48.600+04:30| vthread-1| I125: "/sbin/modinfo" exited with status 256.
2018-04-23T20:11:48.600+04:30| vthread-1| I125: Invoking modinfo on "vmnet".
2018-04-23T20:11:48.602+04:30| vthread-1| I125: "/sbin/modinfo" exited with status 256.
2018-04-23T20:11:48.602+04:30| vthread-1| I125: Invoking modinfo on "vmblock".
2018-04-23T20:11:48.604+04:30| vthread-1| I125: "/sbin/modinfo" exited with status 256.
2018-04-23T20:11:48.604+04:30| vthread-1| I125: Invoking modinfo on "vmci".
2018-04-23T20:11:48.606+04:30| vthread-1| I125: "/sbin/modinfo" exited with status 256.
2018-04-23T20:11:48.606+04:30| vthread-1| I125: Invoking modinfo on "vsock".
2018-04-23T20:11:48.608+04:30| vthread-1| I125: "/sbin/modinfo" exited with status 0.
2018-04-23T20:11:48.623+04:30| vthread-1| I125: to be installed: vmmon status: 0
2018-04-23T20:11:48.623+04:30| vthread-1| I125: to be installed: vmnet status: 0
2018-04-23T20:11:48.639+04:30| vthread-1| I125: Obtaining info using the running kernel.
2018-04-23T20:11:48.639+04:30| vthread-1| I125: Setting header path for 4.15.0-2-amd64 to "/lib/modules/4.15.0-2-amd64/build/include".
2018-04-23T20:11:48.639+04:30| vthread-1| I125: Validating path "/lib/modules/4.15.0-2-amd64/build/include" for kernel release "4.15.0-2-amd64".
2018-04-23T20:11:48.639+04:30| vthread-1| I125: Failed to find /lib/modules/4.15.0-2-amd64/build/include/linux/version.h
2018-04-23T20:11:48.639+04:30| vthread-1| I125: /lib/modules/4.15.0-2-amd64/build/include/linux/version.h not found, looking for generated/uapi/linux/version.h instead.
2018-04-23T20:11:48.639+04:30| vthread-1| I125: using /usr/bin/gcc-7 for preprocess check
2018-04-23T20:11:48.646+04:30| vthread-1| I125: Preprocessed UTS_RELEASE, got value "4.15.0-2-amd64".
2018-04-23T20:11:48.646+04:30| vthread-1| I125: The header path "/lib/modules/4.15.0-2-amd64/build/include" for the kernel "4.15.0-2-amd64" is valid. Whoohoo!
2018-04-23T20:11:48.867+04:30| vthread-1| I125: found symbol version file /lib/modules/4.15.0-2-amd64/build/Module.symvers
2018-04-23T20:11:48.867+04:30| vthread-1| I125: Reading symbol versions from /lib/modules/4.15.0-2-amd64/build/Module.symvers.
2018-04-23T20:11:48.892+04:30| vthread-1| I125: Read 20056 symbol versions
2018-04-23T20:11:48.893+04:30| vthread-1| I125: Kernel header path retrieved from FileEntry: /lib/modules/4.15.0-2-amd64/build/include
2018-04-23T20:11:48.893+04:30| vthread-1| I125: Update kernel header path to /lib/modules/4.15.0-2-amd64/build/include
2018-04-23T20:11:48.893+04:30| vthread-1| I125: Validating path "/lib/modules/4.15.0-2-amd64/build/include" for kernel release "4.15.0-2-amd64".
2018-04-23T20:11:48.893+04:30| vthread-1| I125: Failed to find /lib/modules/4.15.0-2-amd64/build/include/linux/version.h
2018-04-23T20:11:48.893+04:30| vthread-1| I125: /lib/modules/4.15.0-2-amd64/build/include/linux/version.h not found, looking for generated/uapi/linux/version.h instead.
2018-04-23T20:11:48.893+04:30| vthread-1| I125: using /usr/bin/gcc-7 for preprocess check
2018-04-23T20:11:48.900+04:30| vthread-1| I125: Preprocessed UTS_RELEASE, got value "4.15.0-2-amd64".
2018-04-23T20:11:48.900+04:30| vthread-1| I125: The header path "/lib/modules/4.15.0-2-amd64/build/include" for the kernel "4.15.0-2-amd64" is valid. Whoohoo!
2018-04-23T20:11:48.902+04:30| vthread-1| I125: Found compiler at "/usr/bin/gcc"
2018-04-23T20:11:48.906+04:30| vthread-1| I125: Got gcc version "7".
2018-04-23T20:11:48.906+04:30| vthread-1| I125: The GCC version matches the kernel GCC minor version like a glove.
2018-04-23T20:11:48.910+04:30| vthread-1| I125: Got gcc version "7".
2018-04-23T20:11:48.910+04:30| vthread-1| I125: The GCC version matches the kernel GCC minor version like a glove.
2018-04-23T20:11:48.912+04:30| vthread-1| I125: Trying to find a suitable PBM set for kernel "4.15.0-2-amd64". 2018-04-23T20:11:48.912+04:30| vthread-1| I125: No matching PBM set was found for kernel "4.15.0-2-amd64". 2018-04-23T20:11:48.912+04:30| vthread-1| I125: The GCC version matches the kernel GCC minor version like a glove. 2018-04-23T20:11:48.912+04:30| vthread-1| I125: Validating path "/lib/modules/4.15.0-2-amd64/build/include" for kernel release "4.15.0-2-amd64". 2018-04-23T20:11:48.912+04:30| vthread-1| I125: Failed to find /lib/modules/4.15.0-2-amd64/build/include/linux/version.h 2018-04-23T20:11:48.912+04:30| vthread-1| I125: /lib/modules/4.15.0-2-amd64/build/include/linux/version.h not found, looking for generated/uapi/linux/version.h instead. 2018-04-23T20:11:48.912+04:30| vthread-1| I125: using /usr/bin/gcc-7 for preprocess check 2018-04-23T20:11:48.922+04:30| vthread-1| I125: Preprocessed UTS_RELEASE, got value "4.15.0-2-amd64". 2018-04-23T20:11:48.922+04:30| vthread-1| I125: The header path "/lib/modules/4.15.0-2-amd64/build/include" for the kernel "4.15.0-2-amd64" is valid. Whoohoo! 2018-04-23T20:11:48.925+04:30| vthread-1| I125: The GCC version matches the kernel GCC minor version like a glove. 2018-04-23T20:11:48.925+04:30| vthread-1| I125: Validating path "/lib/modules/4.15.0-2-amd64/build/include" for kernel release "4.15.0-2-amd64". 2018-04-23T20:11:48.925+04:30| vthread-1| I125: Failed to find /lib/modules/4.15.0-2-amd64/build/include/linux/version.h 2018-04-23T20:11:48.925+04:30| vthread-1| I125: /lib/modules/4.15.0-2-amd64/build/include/linux/version.h not found, looking for generated/uapi/linux/version.h instead. 2018-04-23T20:11:48.925+04:30| vthread-1| I125: using /usr/bin/gcc-7 for preprocess check 2018-04-23T20:11:48.937+04:30| vthread-1| I125: Preprocessed UTS_RELEASE, got value "4.15.0-2-amd64". 
2018-04-23T20:11:48.937+04:30| vthread-1| I125: The header path "/lib/modules/4.15.0-2-amd64/build/include" for the kernel "4.15.0-2-amd64" is valid. Whoohoo! 2018-04-23T20:11:48.937+04:30| vthread-1| I125: Using temp dir "/tmp". 2018-04-23T20:11:48.940+04:30| vthread-1| I125: Obtaining info using the running kernel. 2018-04-23T20:11:48.940+04:30| vthread-1| I125: Setting header path for 4.15.0-2-amd64 to "/lib/modules/4.15.0-2-amd64/build/include". 2018-04-23T20:11:48.940+04:30| vthread-1| I125: Validating path "/lib/modules/4.15.0-2-amd64/build/include" for kernel release "4.15.0-2-amd64". 2018-04-23T20:11:48.940+04:30| vthread-1| I125: Failed to find /lib/modules/4.15.0-2-amd64/build/include/linux/version.h 2018-04-23T20:11:48.940+04:30| vthread-1| I125: /lib/modules/4.15.0-2-amd64/build/include/linux/version.h not found, looking for generated/uapi/linux/version.h instead. 2018-04-23T20:11:48.940+04:30| vthread-1| I125: using /usr/bin/gcc-7 for preprocess check 2018-04-23T20:11:48.951+04:30| vthread-1| I125: Preprocessed UTS_RELEASE, got value "4.15.0-2-amd64". 2018-04-23T20:11:48.951+04:30| vthread-1| I125: The header path "/lib/modules/4.15.0-2-amd64/build/include" for the kernel "4.15.0-2-amd64" is valid. Whoohoo! 2018-04-23T20:11:49.171+04:30| vthread-1| I125: found symbol version file /lib/modules/4.15.0-2-amd64/build/Module.symvers 2018-04-23T20:11:49.171+04:30| vthread-1| I125: Reading symbol versions from /lib/modules/4.15.0-2-amd64/build/Module.symvers. 2018-04-23T20:11:49.196+04:30| vthread-1| I125: Read 20056 symbol versions 2018-04-23T20:11:49.196+04:30| vthread-1| I125: Invoking modinfo on "vmmon". 2018-04-23T20:11:49.200+04:30| vthread-1| I125: "/sbin/modinfo" exited with status 256. 2018-04-23T20:11:49.200+04:30| vthread-1| I125: Invoking modinfo on "vmnet". 2018-04-23T20:11:49.203+04:30| vthread-1| I125: "/sbin/modinfo" exited with status 256. 
2018-04-23T20:11:49.594+04:30| vthread-1| I125: Setting destination path for vmmon to "/lib/modules/4.15.0-2-amd64/misc/vmmon.ko". 2018-04-23T20:11:49.595+04:30| vthread-1| I125: Extracting the vmmon source from "/usr/lib/vmware/modules/source/vmmon.tar". 2018-04-23T20:11:49.606+04:30| vthread-1| I125: Successfully extracted the vmmon source. 2018-04-23T20:11:49.606+04:30| vthread-1| I125: Building module with command "/usr/bin/make -j4 -C /tmp/modconfig-stxrjw/vmmon-only auto-build HEADER_DIR=/lib/modules/4.15.0-2-amd64/build/include CC=/usr/bin/gcc IS_GCC_3=no" 2018-04-23T20:11:52.158+04:30| vthread-1| W115: Failed to build vmmon. Failed to execute the build command. 2018-04-23T20:11:52.161+04:30| vthread-1| I125: Setting destination path for vmnet to "/lib/modules/4.15.0-2-amd64/misc/vmnet.ko". 2018-04-23T20:11:52.161+04:30| vthread-1| I125: Extracting the vmnet source from "/usr/lib/vmware/modules/source/vmnet.tar". 2018-04-23T20:11:52.170+04:30| vthread-1| I125: Successfully extracted the vmnet source. 2018-04-23T20:11:52.170+04:30| vthread-1| I125: Building module with command "/usr/bin/make -j4 -C /tmp/modconfig-stxrjw/vmnet-only auto-build HEADER_DIR=/lib/modules/4.15.0-2-amd64/build/include CC=/usr/bin/gcc IS_GCC_3=no" 2018-04-23T20:11:56.805+04:30| vthread-1| I125: Successfully built vmnet. Module is currently at "/tmp/modconfig-stxrjw/vmnet.o". 2018-04-23T20:11:56.805+04:30| vthread-1| I125: Found the vmnet symvers file at "/tmp/modconfig-stxrjw/vmnet-only/Module.symvers". 2018-04-23T20:11:56.805+04:30| vthread-1| I125: Installing vmnet from /tmp/modconfig-stxrjw/vmnet.o to /lib/modules/4.15.0-2-amd64/misc/vmnet.ko. 2018-04-23T20:11:56.809+04:30| vthread-1| I125: Registering file "/lib/modules/4.15.0-2-amd64/misc/vmnet.ko". 2018-04-23T20:11:57.108+04:30| vthread-1| I125: "/usr/lib/vmware-installer/2.1.0/vmware-installer" exited with status 0. 2018-04-23T20:11:57.109+04:30| vthread-1| I125: Registering file "/usr/lib/vmware/symvers/vmnet-4.15.0-2-amd64". 
2018-04-23T20:11:57.404+04:30| vthread-1| I125: "/usr/lib/vmware-installer/2.1.0/vmware-installer" exited with status 0.

I tried to Google this but was unable to find a relevant post.
Issue At Hand

You are reporting that you are unable to run VMware on Kali Linux. According to the errors you have posted, your Operating System is missing the VMware kernel modules necessary to run. I will take this time to point out that Kali Linux is not meant as a general-purpose Operating System; you may continue to run into these kinds of errors using software not designed for it. Running virtualization or hypervisor software is not an intended function of Kali Linux. One possible solution to your issue would be to run your virtualization software on Ubuntu, Debian, or another general-purpose operating system instead.

If you wish to continue using Kali Linux, or encounter the same error on a different Operating System, the following steps may work as a possible solution to the above error.

Possible Solutions

I will be referencing this post as it contains a few different possible fixes. First, try running this command:

sudo vmware-modconfig --console --install-all

This should build and install all VMware modules, after which you should be able to run VMware as expected. Look over this VMware forum post as it covers additional scripts you may need to run to verify the install process.

Alternatively, you could try this first:

sudo apt-get install build-essential linux-headers-$(uname -r) open-vm-dkms
sudo ln -s /usr/src/linux-headers-$(uname -r)/include/generated/uapi/linux/version.h /usr/src/linux-headers-$(uname -r)/include/linux/version.h

After which run sudo vmware-config-tools.pl. It might be necessary to run sudo vmware-modconfig --console --install-all again after this is complete.

Starting from Scratch

You may need to start over with a fresh install of VMware. Purge the existing installation by running sudo vmware-installer -u vmware-player, then rerun the installer script, i.e. ./VMware-*.bundle. I would also verify that your graphics drivers and all other parts of your system are fully up to date.
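The failing pattern in the log (version.h missing from include/linux but present under generated/uapi) can be checked by hand before running any installer. Below is a minimal sketch of that check; it uses a throwaway directory as a stand-in for /lib/modules/$(uname -r)/build, so nothing here touches a real kernel tree:

```shell
# Stand-in for /lib/modules/$(uname -r)/build -- purely illustrative.
KDIR=/tmp/fake-kdir
mkdir -p "$KDIR/include/generated/uapi/linux"
touch "$KDIR/include/generated/uapi/linux/version.h"

if [ -f "$KDIR/include/linux/version.h" ]; then
    echo "legacy include/linux/version.h present"
elif [ -f "$KDIR/include/generated/uapi/linux/version.h" ]; then
    # Newer kernels only ship the generated/uapi copy; add a compat
    # symlink, like the ln -s step suggested for the real headers.
    mkdir -p "$KDIR/include/linux"
    ln -sf ../generated/uapi/linux/version.h "$KDIR/include/linux/version.h"
    echo "created compat symlink"
fi

[ -e "$KDIR/include/linux/version.h" ] && echo "version.h reachable"
```

On a real system you would point KDIR at /lib/modules/$(uname -r)/build; if only the generated/uapi copy exists there, the missing symlink (or missing matching linux-headers package) is exactly what the module build step is complaining about.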
Conclusion

Again, I suggest you use a different Operating System than Kali Linux to complete this task. Please read over this post in its entirety before going with a possible fix, and remember that you need the proper kernel headers installed for your kernel to get this to work. I am also including a link to a guide on installing VMware on Kali Linux; there are even some comments in that post on how to troubleshoot the issue further. I am also including a link to the official Kali Linux documentation on how to install VMware tools, as well as a link to another Stack Exchange post that appears to be related to this issue. Please comment if there are any questions about this answer. I appreciate corrections to any misconceptions and feedback on how to improve my posts. Best of luck!
Before you can run VMware several modules must be compiled
1,554,251,819,000
Let's say we want only user tutu to be able to read the file /home/grafh/file.txt. What configuration is needed in order to enable that? The file owner must stay root (and only user tutu may read the file).
You have two possibilities: using the classical DAC (Discretionary Access Control, the usual rwx rights) or using file ACLs (Access Control Lists).

Using DAC permissions

If tutu does not have its own group (check the output of groups tutu), you must create a new group and make tutu a member of it:

root@host:~# addgroup tutu
root@host:~# usermod -aG tutu tutu

(The -a is important: -G alone would replace tutu's other supplementary groups.) Then change the file's group and permissions to allow read access to members of the tutu group:

root@host:~# chgrp tutu /home/grafh/file.txt
root@host:~# chmod 640 /home/grafh/file.txt

This file will remain owned by root, but be readable (though not writable) by tutu and not by other users.

Using ACL permissions

ACLs are additional rights which come on top of the DAC permissions seen above. They are meant to solve situations which cannot easily be handled with the historical Unix DAC permission system. To allow tutu to read the file:

root@host:~# setfacl -m u:tutu:r /home/grafh/file.txt
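As a quick illustration of the DAC half of this, using a scratch file in /tmp rather than /home/grafh/file.txt (and without needing a real tutu user), the resulting permission bits can be verified with stat:

```shell
# Scratch file standing in for /home/grafh/file.txt.
f=/tmp/file.txt
echo 'secret' > "$f"

# Owner read/write, group read, others nothing.
chmod 640 "$f"

# Verify the resulting mode bits (GNU stat).
stat -c '%a %A' "$f"
```

The output should show 640 -rw-r-----: anyone in the file's group gets read access, everyone else gets nothing, which is exactly what the chgrp/chmod pair above arranges for tutu.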
Linux: how to allow only a specific user to read a file
1,554,251,819,000
I'm learning to use dd by experimentally playing with its arguments. I would like to create a 10-byte file. I thought the following would work: dd if=/dev/zero of=./foo count=1 bs=1 obs=9 seek=1 ...because of these comments from the man page: obs=BYTES write BYTES bytes at a time (default: 512) seek=N skip N obs-sized blocks at start of output ...but it does not; it creates a 2-byte file: >ls -l foo -rw-rw-r-- 1 user user 2 Mar 28 16:05 foo My workaround has been: dd if=/dev/zero of=./foo count=1 bs=1 obs=1 seek=9 But for my learning, I'd like to understand why the first version does not work. Thank you.
Your command

dd if=/dev/zero of=./foo count=1 bs=1 obs=9 seek=1

creates a two-byte file rather than a 10-byte file because of the poorly-defined interaction between bs and obs. (Call this a program bug if you like, but it's probably better described as a documentation bug.) You are supposed to use either bs alone, or ibs and obs. Empirically, bs overrides obs, so what gets executed is effectively dd if=/dev/zero of=./foo count=1 bs=1 seek=1, which creates a two-byte file as you have seen: one byte skipped, one byte written. If you had used

dd if=/dev/zero of=./foo count=1 ibs=1 obs=9 seek=1

you would have got a 10-byte file as expected: seek=1 skips one obs-sized block of 9 bytes, then one byte is written. As an alternative, if you just want to create a file of a given size without writing its data, you can use the counter-intuitively named truncate command:

truncate --size=10 foo
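Both behaviours are easy to reproduce in a scratch directory (GNU dd assumed; the file names here are just examples):

```shell
cd /tmp

# bs=1 silently overrides obs=9, so seek=1 skips one byte: 2-byte file.
dd if=/dev/zero of=dd-foo1 count=1 bs=1 obs=9 seek=1 2>/dev/null

# ibs=1 with obs=9: seek=1 skips one 9-byte output block: 10-byte file.
dd if=/dev/zero of=dd-foo2 count=1 ibs=1 obs=9 seek=1 2>/dev/null

stat -c '%n %s' dd-foo1 dd-foo2
```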
dd with obs and seek makes file of unexpected size
1,554,251,819,000
I'm using a Chromebook and would like to navigate inside the Android container via the shell. The container is mounted at the path /run/containers/android_XXXXX. When trying to cd into the directory, I'm told Permission Denied. I've tried running the command with sudo, but for some reason the cd command becomes inaccessible. I've run chmod u+x on the directory, but no dice. What steps can I take from here? I've run stat on the directory, which returns the following:

  File: ‘android_XXXXXX/’
  Size: 80         Blocks: 0          IO Block: 4096   directory
Device: fh/15d     Inode: 59640       Links: 3
Access: (0700/drwx------)  Uid: (655360/ UNKNOWN)   Gid: (655360/ UNKNOWN)
Context: u:object_r:tmpfs:s0
Access: 2016-10-31 04:04:52.680000040 +0000
Modify: 2016-10-31 04:04:52.200000040 +0000
Change: 2016-10-31 04:44:54.990001186 +0000
 Birth: -
The directory is drwx------ so only someone whose uid is 655360 (which is not listed in the password file) can read it or enter it.

sudo cd not being able to find the cd command is expected: cd is a builtin of the shell. If it weren't a builtin, it couldn't work. Say your current shell has a process ID of 54000 and you ran a hypothetical /bin/cd command as, say, PID 54309: it would change the directory for process 54309 and then exit, while process 54000 remained in its original directory.

chmod u+x alters the user (owner) permission, and the owner already has full access. What you want is:

sudo chmod go+rx /run/containers/android_XXXXX
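You can see the same effect without sudo: a child process can change its own working directory, but the change dies with it. A subshell makes this visible:

```shell
mkdir -p /tmp/cd-demo

# The subshell (a child process) changes directory...
( cd /tmp/cd-demo && pwd )

# ...but the parent shell is unaffected.
pwd
```

The first pwd prints /tmp/cd-demo; the second prints wherever you started. A standalone /bin/cd would behave exactly like the subshell, which is why cd has to be a builtin.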
Permission Denied: cd into directory
1,554,251,819,000
I recently had some trouble with a wireless card, and an online forum suggested searching dmesg with grep firmware. That helped; I was able to find the problem immediately! However, the previous hour was spent looking at dmesg, and I couldn't identify anything relevant due to the overwhelming quantity of information. How does one know what to grep for in dmesg? Sure, this was a hardware issue, but I myself would never have thought to grep for the string "firmware". For someone not intimately familiar with the output of dmesg, how might I make some educated guesses about what to grep for?
Something like this would be useful:

dmesg | grep -iC 3 "what you are looking for"

For example, if looking for your video card, you could try:

dmesg | grep -iC 3 "video"

Or:

dmesg | grep -iC 3 "graphics"

The -C 3 option prints 3 lines before and after the matched string, just to give you some context on what the results are. But as @tohecz said, there are thousands of possibilities. It all depends on what you are looking for: sound, wifi, usb, serial, card reader... If you expect a USB key to appear in there, you can try grepping for /dev/sd.

I just found this page, which contains sound advice on how to grep for things in there:

    Because of the length of the output of dmesg, it can be convenient to pipe its output to grep, a filter which searches for any lines that contain the string (i.e., sequence of characters) following it. The -i option can be used to tell grep to ignore the case (i.e., lower case or upper case) of the letters in the string. For example, the following command lists all references to USB (universal serial bus) devices in the kernel messages:

    dmesg | grep -i usb

    And the following tells dmesg to show all serial ports (which are represented by the string tty):

    dmesg | grep -i tty

    The dmesg and grep combination can also be used to show how much physical memory (i.e., RAM) is available on the system:

    dmesg | grep -i memory

    The following command checks to confirm that the HDD(s) is running in DMA (direct memory access) mode:

    dmesg | grep -i dma
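If you want to see what -C does before pointing it at dmesg, you can try it on a small sample file first (the log lines below are made up for the demonstration):

```shell
# Fabricated sample resembling kernel messages.
cat > /tmp/sample.log <<'EOF'
[    1.0] ACPI: init done
[    2.0] usb 1-1: new high-speed USB device
[    2.1] usb 1-1: device descriptor read ok
[    3.0] EXT4-fs: mounted filesystem
EOF

# -i: case-insensitive match, -C 1: one line of context either side.
grep -iC 1 usb /tmp/sample.log
```

Here both usb lines match, and with one line of context on each side the whole four-line sample is printed; on a real dmesg the context lines are often what actually identify the device or driver involved.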
How to know what to grep for in dmesg?
1,389,268,938,000
When I put a flash disk into a card reader and make an image with dd, I see the actual size of the disk, like 512483328 bytes in the following example: 1000944+0 records in 1000944+0 records out 512483328 bytes (512 MB) copied, 33.0091 s, 15.5 MB/s Is it possible to get the same number without actually copying the data?
Using sgdisk

You can use sgdisk to print detailed information:

sgdisk --print <device>
[…]
Disk /dev/sdb: 15691776 sectors, 7.5 GiB
Logical sector size: 512 bytes
[…]

When you multiply the number of sectors by the sector size you get the exact byte count, which should match the output of dd.

Using /sys directly

You can also get those numbers directly from /sys:

Number of sectors: /sys/block/<device>/size
Sector size: /sys/block/<device>/queue/logical_block_size

(Note that the kernel expresses the size file in 512-byte units regardless of the logical block size, so the two only differ on devices whose logical block size is not 512.) Here's a way of calculating the size:

sectors=$(cat /sys/block/sdb/size)
bs=$(cat /sys/block/sdb/queue/logical_block_size)
echo $(( $sectors * $bs ))
--- OR ---
echo "$sectors * $bs" | bc

Using udisks

udisks outputs the information directly; it is reported as size:

udisks --show-info <device> | grep size

Using blockdev

blockdev --getsize64 <device>

From /proc/partitions

grep ' sdb$' /proc/partitions

(numbers expressed in kibibytes).
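The sectors-times-block-size arithmetic can be dry-run with stand-in files that mimic the /sys/block layout (the numbers below are the ones from the sgdisk example above; a real run would read /sys/block/<device>/... instead):

```shell
# Mimic the /sys/block/<device> layout with scratch files.
d=/tmp/fake-block
mkdir -p "$d/queue"
echo 15691776 > "$d/size"                      # sector count, as in the sgdisk example
echo 512      > "$d/queue/logical_block_size"  # sector size

sectors=$(cat "$d/size")
bs=$(cat "$d/queue/logical_block_size")
echo $(( sectors * bs ))
```

With these inputs the result is 8034189312 bytes, i.e. the 7.5 GiB that sgdisk reports for the same device.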
How can I find the actual (dd) size of a flash disk?
1,389,268,938,000
I just tried burning both a Debian CD and a Debian DVD from their .iso files, and I've got a weird behavior: the checksum of the CD is correct but the one of the DVD isn't. Here's what's working:

downloaded the two .iso files
verified the checksum of the two .iso files
burned debian-7.1.0-amd64-CD-1.iso to a CD
verified that the CD is correct by issuing: dd if=/dev/sr0 | md5sum (or sha-1 or sha-256)

And this works fine: the checksum(s) I get from the CD by using dd and piping into md5, sha-1 or sha-256 do match the official checksums.

Now what I don't get is this: I did burn a DVD from the DVD .iso, and I know that the file was correctly downloaded, given that the .iso file checksum is correct. However, if I put the DVD in the drive and issue the same:

dd if=/dev/sr0 | md5sum (or sha-1 or sha-256)

then I get a bogus checksum. The DVD still looks correct in that the files all seem to be there. So here's my question: can I verify that a DVD has been correctly burned by using dd and piping its output into md5sum (or sha-1 or sha-256), or is there something "special" that would make dd work for verifying burned CDs but not burned DVDs?

(note that I used Disk Utility on OS X to burn both the CD and the DVD)
In addition to Gilles' answer: if you still have the ISO image, you could use cmp instead of checksums. It will tell you at which byte the first difference occurs. It also makes the check faster: if there is an error early on, it tells you right away, whereas a checksum always has to read the entire medium.

$ cmp /dev/cdrom /path/to/cdrom.iso

In case of error it should print something like this:

/dev/cdrom /path/to/cdrom.iso differ, byte 123456789, line 42

If the media is correct it should print nothing, or this:

cmp: EOF on /path/to/cdrom.iso

which means there is more data on /dev/cdrom than in the ISO, most likely zero-padding. Even before starting any comparison, you can check the sizes:

$ blockdev --getsize64 /dev/cdrom
123456999
$ stat -c %s /path/to/cdrom.iso
123456789

If they are identical, the checksums should match as well. If /dev/cdrom is larger, it should be zero-padded at the end. You can check that with hexdump, using the ISO size for the -s parameter:

$ hexdump -s 15931539256 -C /dev/cdrom
3b597ff38 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
3b597fff8 00 00 00 00 00 00 00 00 |........|

hexdump is also useful for having a look at a difference at any other position in a file, in case damage was caused deliberately by something.
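The zero-padding situation is easy to simulate with two small files: a short "image" and a padded "disc". Because the sizes differ, cmp reports EOF on the shorter file (the exact message wording varies between cmp versions):

```shell
# A 7-byte "image" and the same data zero-padded to 12 bytes.
printf 'ISODATA' > /tmp/img.iso
{ cat /tmp/img.iso; dd if=/dev/zero bs=1 count=5 2>/dev/null; } > /tmp/disc.bin

# The common prefix matches byte-for-byte; cmp only flags the extra tail.
cmp /tmp/disc.bin /tmp/img.iso 2>&1 || true

stat -c '%s' /tmp/img.iso /tmp/disc.bin
```

This mirrors a correctly burned but padded DVD: the checksum of the whole device differs from the ISO's, yet cmp confirms the data itself is intact up to the end of the image.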
How to take sha-1, sha-256 or MD5 of CDs / DVDs?
1,389,268,938,000
I'm using Ubuntu Lucid (10.04). In my shell script running as root I want to detect whether an audio cable (analog jack) is connected to the laptop or not. How do I do that? I don't need a portable solution, I need something, no matter how hacky, that works on my laptop.
One way could perhaps be to use amixer.

Jack plugged in:

$ amixer -c 0 contents
numid=29,iface=CARD,name='Front Headphone Jack'
  ; type=BOOLEAN,access=r-------,values=1
  : values=on
...

Jack not plugged in:

$ amixer -c 0 contents
numid=29,iface=CARD,name='Front Headphone Jack'
  ; type=BOOLEAN,access=r-------,values=1
  : values=off
...

So for that specific one I could do:

amixer -c 0 contents | \
awk -F"," '
$1 == "numid=29" { c=1 }
c && /: values/ {
    split($0, a, "=")
    print a[2]; exit
}'

giving output of on or off.

One can also use commands and specify by iface + name etc. Get the list with:

$ amixer -c 0 controls

where -c 0 specifies the card (not needed if it is the default or the only one). Then e.g.:

$ amixer -c 0 cget numid=29,iface=CARD
$ amixer -c 0 cget numid=29,iface=CARD | awk -F"=" 'NR == 3 {print $2;}'

I came across a piece of software once (I believe it used Tcl/Tk) that displayed pin power for all ports on the computer, plus lots of other hardware information. It was a nice piece of software, but I can't find it again; I have looked through 12 old HDDs with no luck. I used it to debug some jack ports. So yes, it is definitely possible to poll the status of a specific port.

"Jack" is a pain to search the web for due to JACK; it makes it close to impossible. I have some C code that does some sound-card information polling; I'll have to look and see if I can find it.
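The awk extraction can be tested against a captured copy of the amixer output. The snippet below uses a saved sample rather than live amixer, since the numid numbering and card layout vary per machine:

```shell
# Saved sample of `amixer -c 0 contents` (contents vary per machine).
cat > /tmp/amixer-sample.txt <<'EOF'
numid=29,iface=CARD,name='Front Headphone Jack'
  ; type=BOOLEAN,access=r-------,values=1
  : values=on
numid=30,iface=MIXER,name='Master Playback Switch'
  ; type=BOOLEAN,access=rw------,values=1
  : values=off
EOF

# Latch on the numid=29 header line, then print the value of the
# first ": values=..." line that follows it.
awk -F"," '
$1 == "numid=29" { c=1 }
c && /: values/ { split($0, a, "="); print a[2]; exit }' /tmp/amixer-sample.txt
```

With this sample it prints on; the exit keeps it from wandering into the next control's values line.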
How do I detect whether the audio cable is connected?
1,389,268,938,000
I understand that the /proc filesystem reflects the output of various kernel interfaces. Unfortunately, I have a proprietary (romdump) binary that expects the mount table to appear as /proc/mtd, while my Android device appears to output it as /proc/mounts. I've tried creating a symbolic link, but clearly this only works for actual files or directories. How can I fool this binary into reading the output from /proc/mounts instead of /proc/mtd?
The easiest way to do it would be to change the binary:

sed s-/proc/mtd-/tmp/mntx- < romdump > romdump.new
ln -s /proc/mounts /tmp/mntx
./romdump.new

The trick here, since you're editing a binary, is to make sure the original string /proc/mtd is the same length as the new string /tmp/mntx, so that you don't change the size or location of anything in the binary. This is not foolproof (it will not work if the binary builds up the path name in pieces rather than using a fixed string), but it's likely to do the trick.
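Here is the same-length constraint demonstrated on a stand-in file (plain text standing in for the binary, since sed just rewrites bytes either way; the file names are illustrative):

```shell
# Stand-in "binary" containing the hard-coded path.
printf 'config:/proc/mtd:end' > /tmp/romdump

# Replace /proc/mtd with an equal-length path so no offsets move.
sed s-/proc/mtd-/tmp/mntx- < /tmp/romdump > /tmp/romdump.new

cat /tmp/romdump.new; echo
stat -c '%s' /tmp/romdump /tmp/romdump.new   # sizes must be identical
```

Both files come out at exactly the same size, which is the property that keeps every other string and code offset in a real binary intact.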
Linking /proc/mnt to /proc/mounts
1,389,268,938,000
Without any DE or even X, I want to use ./my.exe to run mono my.exe, like it works with python scripts.
Bash has no such feature. Zsh does: you can set up aliases based on extensions:

alias -s exe=mono

This would only work in an interactive shell, however, not when one program invokes another.

Under Linux, you can set up execution of foreign binaries through the binfmt_misc mechanism; see Rolf Bjarne Kvinge's answer. Good Linux distributions set this up automatically as part of the mono runtime package.

If you can't use binfmt_misc because you don't have root permissions, you'll have to settle for wrapper scripts:

#!/bin/sh
exec /path/to/mono "$0.exe" "$@"

Put the wrapper script in the same directory as the .exe file, with the same name minus the .exe extension.
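The wrapper-script pattern can be tried out with a harmless stand-in for mono (echo here), just to see how $0.exe is derived from the script's own name:

```shell
# Create the wrapper, using echo as a stand-in for /path/to/mono.
cat > /tmp/my <<'EOF'
#!/bin/sh
exec echo "would run:" "$0.exe" "$@"
EOF
chmod +x /tmp/my

# The .exe the wrapper would hand to the runtime.
touch /tmp/my.exe

/tmp/my --flag
```

Because $0 is the path the wrapper was invoked as (/tmp/my), appending .exe points straight at the sibling /tmp/my.exe, and exec makes mono take over the wrapper's process rather than running as a child.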
How to set bash to run *.exe with mono?
1,389,268,938,000
I have been provided with a vendor-supplied minimal Linux installation. From an answer to a previous question I discovered that it is possible to build a kernel with or without module support. I have a CANBUS device that I need to attach, which comes with drivers in the form of .ko files. I would like to be able to install these with the provided install scripts, but first I need to know whether my kernel was built with module support. Is it possible to detect this from the command line? When I run lsmod it returns nothing, so I know that no modules are loaded at the moment, but does this mean that the kernel won't allow me to install a .ko file?
If you have a /proc filesystem, the file /proc/modules exists if and only if the kernel is compiled with module support. If the file exists but is empty, your kernel supports modules but none are loaded at the moment. If the file doesn't exist, your kernel cannot load any module.

It's technically possible to have loadable module support without /proc. In that case you can check for the presence of the init_module and delete_module system calls in the kernel binary. This may not be easy if you only have a compressed binary (e.g. vmlinuz or uImage); see How do I uncompress vmlinuz to vmlinux? for vmlinuz. Once you've managed to decompress the bulk of the kernel, search for the string sys_init_module.

Note that if modules are supported, you'll need additional files to compile your own modules anyway: kernel headers. These are C header files (*.h), some of which are generated when the kernel is compiled (so you can't just take them from the kernel source). See What does a kernel source tree contain? Is this related to Linux kernel headers?
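The /proc check is a one-liner in a script (with the caveat above: the file's absence can also just mean /proc isn't mounted):

```shell
if [ -e /proc/modules ]; then
    echo "module support: yes"
else
    echo "module support: no (or /proc not mounted)"
fi
```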
Can I detect if my custom made kernel was built with module support?
1,389,268,938,000
The Wikipedia entry for GNU gettext shows an example where the locale is just the language, "fr", whereas the 'i18n gettext() "hello world" example' on SO has a locale value with both the language and country, "es_MX". I have modified the "es_MX" example to use just the language, "es". This covers making an "es" rather than an "es_MX" message catalog and invoking the program with the environment variable LANG set to "es". But this produces the English text rather than the expected Spanish.

cat >hellogt.cxx <<EOF
// hellogt.cxx
#include <libintl.h>
#include <locale.h>
#include <iostream>
int main (){
    setlocale(LC_ALL, "");
    bindtextdomain("hellogt", ".");
    textdomain( "hellogt");
    std::cout << gettext("hello, world!") << std::endl;
}
EOF
g++ -ohellogt hellogt.cxx
xgettext -d hellogt -o hellogt.pot hellogt.cxx
msginit --no-translator -l es -o hellogt_spanish.po -i hellogt.pot
sed --in-place hellogt_spanish.po --expression='/#: /,$ s/""/"hola mundo"/'
sed --in-place hellogt_spanish.po --expression='s/PACKAGE VERSION/hellogt 1.0/'
mkdir -p ./es.utf8/LC_MESSAGES
msgfmt -c -v -o ./es.utf8/LC_MESSAGES/hellogt.mo hellogt_spanish.po
LANG=es.utf8 ./hellogt

According to Controlling your locale with environment variables:

    environment variable, LANGUAGE, which is used only by GNU gettext ... If defined, LANGUAGE takes precedence over LC_ALL, LC_MESSAGES, and LANG.

LANGUAGE=es.utf8 ./hellogt produces the expected Spanish text rather than English. But this does not explain why "LANG=es" does not work.
From Zac Thompson's link to GNU gettext utilities, section 2.3 "Setting the Locale through Environment Variables", sub-section "The LANGUAGE variable":

    In the LANGUAGE environment variable, but not in the other environment variables, 'll_CC' combinations can be abbreviated as 'll' to denote the language's main dialect. For example, 'de' is equivalent to 'de_DE' (German as spoken in Germany), and 'pt' to 'pt_PT' (Portuguese as spoken in Portugal) in this context.

This makes the point that "es" is an abbreviation that only LANGUAGE, but not LANG, supports.
Why does locale es_MX work but not es?
1,389,268,938,000
I'm working on an embedded ARM Linux system that boots using initramfs. (Here's some background from a couple earlier questions, if you're interested.) So far, thanks in part to help received here, I can boot the kernel via TFTP with an embedded initramfs. The MMC driver detects an SD card containing a new root filesystem, which I can then mount. However, I cannot get the final step, using busybox switch_root to switch to the filesystem on the SD card, to work. At the initramfs shell prompt, I think this should make the kernel switch to the new filesystem: switch_root -c /dev/console /mnt/root /sbin/init.sysvinit However, it just makes busybox (to which switch_root is aliased) print its man page, like this: / # switch_root -c /dev/console /mnt/root /sbin/init.sysvinit BusyBox v1.17.4 (2010-12-08 17:01:07 EST) multi-call binary. Usage: switch_root [-c /dev/console] NEW_ROOT NEW_INIT [ARGS] Free initramfs and switch to another root fs: chroot to NEW_ROOT, delete all in /, move NEW_ROOT to /, execute NEW_INIT. PID must be 1. NEW_ROOT must be a mountpoint. Options: -c DEV Reopen stdio to DEV after switch I think the -c option is correct, since it's the same as what is contained in the example, and /dev/console exists. / # ls -l /dev total 0 crw-r--r-- 1 0 0 5, 1 Jan 1 00:28 console brw-r--r-- 1 0 0 7, 0 Dec 21 2010 loop0 brw-r--r-- 1 0 0 179, 0 Dec 21 2010 mmcblk0 brw-r--r-- 1 0 0 179, 1 Dec 21 2010 mmcblk0p1 brw-r--r-- 1 0 0 179, 2 Dec 21 2010 mmcblk0p2 brw-r--r-- 1 0 0 179, 3 Dec 21 2010 mmcblk0p3 brw-r--r-- 1 0 0 179, 4 Dec 21 2010 mmcblk0p4 /mnt/root also exists. 
/ # ls /mnt/root bin etc linuxrc mnt sys var boot home lost+found proc tmp dev lib media sbin usr The init executable exists: / # ls -lh /mnt/root/sbin/ <snip> lrwxrwxrwx 1 0 0 19 Dec 21 2010 init -> /sbin/init.sysvinit -rwxr-xr-x 1 0 0 22.8K Dec 21 2010 init.sysvinit But here's something strange: /mnt/root/sbin # pwd /mnt/root/sbin /mnt/root/sbin # ls -l | grep init.sysvinit lrwxrwxrwx 1 0 0 19 Dec 21 2010 init -> /sbin/init.sysvinit -rwxr-xr-x 1 0 0 23364 Dec 21 2010 init.sysvinit /mnt/root/sbin # ./init.sysvinit /bin/sh: ./init.sysvinit: not found /mnt/root/sbin # /mnt/root/sbin/init.sysvinit /bin/sh: /mnt/root/sbin/init.sysvinit: not found That's totally mystifying. I'm not sure where I'm going wrong. I've looked at the source, which is at http://git.busybox.net/busybox/tree/util-linux/switch_root.c?id=1_17_4 It's not just the init.sysvinit executable. I can't execute anything from the SD card. For example: /mnt/root/bin # ./busybox /bin/sh: ./busybox: not found /mnt/root/bin # /mnt/root/busybox /bin/sh: /mnt/root/busybox: not found /mnt/root/bin # ls -l | grep "2010 busybox" -rwxr-xr-x 1 0 0 462028 Dec 21 2010 busybox Anyone have a clue what's amiss here? I thought it might be some problem with mounting the card noexec, but I believe exec is the default, and I tried passing the exec option explicitly at mount with no success.
The reason that switch_root is not working on the command line is this code in busybox: if (st.st_dev == rootdev || getpid() != 1) { // Show usage, it says new root must be a mountpoint // and we must be PID 1 bb_show_usage(); } You are not PID 1, so you fall into this bb_show_usage call. The implication is that your initramfs init script should run switch_root with exec, i.e. exec switch_root ... The other issue, your "not found" errors, is likely because the shared libraries needed by the executables are not found: the initramfs root filesystem does not have them. If you can get switch_root to work with exec, then it is likely the "not found" errors will go away.
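The PID requirement is also why the call must be exec'd: exec replaces the current process instead of forking a child, so the PID is preserved. A tiny runnable sketch (the nested shells here are only a stand-in for the init script):

```shell
#!/bin/sh
# `exec` replaces the running shell without creating a new process,
# so the PID stays the same -- the property switch_root relies on
# when it is exec'd from an init script running as PID 1.
sh -c 'echo "PID before exec: $$"; exec sh -c "echo \"PID after exec:  \$\$\""'
```

Both lines print the same PID; without exec, the second shell would have been a new process with a new PID.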
Trouble getting busybox switch_root to work
1,389,268,938,000
Environment: OS: CentOS 8 (generic/centos8 Vagrant box) Virtualization: VMware-Workstation 16.1.0 build-17198959 Steps to reproduce: Create a devices new policy cd /sys/fs/cgroup/devices mkdir custom_poc Verify which device is being used as tty (multiple methods): Using tty: root@centos8# tty /dev/pts/0 Getting the process STDIN: ls -l /proc/$$/fd/{0,1,2} lrwx------. 1 root root 64 Mar 5 11:25 /proc/2446/fd/0 -> /dev/pts/0 lrwx------. 1 root root 64 Mar 5 11:25 /proc/2446/fd/1 -> /dev/pts/0 lrwx------. 1 root root 64 Mar 5 11:25 /proc/2446/fd/2 -> /dev/pts/0 Add tty device to devices.deny: Check device major and minor numbers: ls -l /dev/pts/0 crw--w----. 1 vagrant tty 136, 0 Mar 5 11:28 /dev/pts/0 Deny access: root@centos8# echo 'c 136:0 w' > /sys/fs/cgroup/devices/custom_poc/devices.deny root@centos8# echo $$ > tasks root@centos8# echo 'a' > /dev/pts/0 -bash: /dev/pts/0: Operation not permitted However, my Bash terminal works just fine even after removing access to STDIN device. Here is the output of a simple whoami: root@centos8# whoami root
From the kernel doc: Implement a cgroup to track and enforce open and mknod restrictions on device files. The restrictions are only applied upon opening device files. That's the same as for most other access control permissions such as the standard Unix permissions. Once you have opened a file in read+write mode, you can read and write from it. Access control are not applied upon each read() and write(), that would add too much overhead, and that would likely cause very surprising behaviours in applications. It would be difficult to enforce as well when a file is mmapped() in memory for instance. In your case, the /dev/pts/0 was opened (likely by one of its parent, maybe a terminal emulator, maybe getty, maybe sshd..., and the shell inherited the file descriptor) before you applied the restriction. Similarly, upon forking a process to execute whoami, the child inherits the fds and those fds are preserved upon executing whoami and whoami just does a write(1, "root\n", 5) without ever opening the device file. In echo a > /dev/pts/0 however, the shell does try to open the file to perform that redirection and is being prevented from doing so at that point.
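The same open-time semantics can be seen with plain file permissions and no cgroups at all; this is only an analogy, but it is the same rule: checks happen at open(), not at each write():

```shell
#!/bin/sh
# Permission checks happen when a file is opened, not on every write.
# (Run as a non-root user to also see a fresh open fail after the chmod.)
f=$(mktemp)
exec 3>"$f"               # open for writing -- the check happens here
chmod 000 "$f"            # revoke all permissions afterwards
echo "still writable" >&3 # succeeds: fd 3 predates the chmod
exec 3>&-                 # close the fd
chmod 600 "$f"            # restore access so we can read the result back
cat "$f"                  # prints: still writable
rm -f "$f"
```

This mirrors what happened with /dev/pts/0: the fd was opened before the restriction, so writes through it keep working, while a new open (the shell redirection) is denied.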
How can Bash still write to the terminal when blocking access to tty with cgroups?
1,389,268,938,000
When Xviewer or VLC are in full-screen mode on Linux Mint, my laptop does not go to sleep. Some other apps, e.g. mpv in full screen, do not prevent sleep. There is no options in Xviewer GUI on keep awake status. How does Xviewer do prevention and how to turn in off? How to turn-on sleep prevention for apps where I consider this behavior beneficial, like Transmission for example?
Linux applications inhibit suspend through a D-Bus call to org.gnome.SessionManager.Inhibit Contrasting inhibit vs. prevent Note that inhibit is different than prevent. Inhibiting a screensaver, screen lock, or suspend only prevents the action from occurring when the computer is idle, not when it is manually activated by the user or another program. How specific applications inhibit suspend Transmission Transmission has a checkbox for inhibiting sleep in: Preferences-> Desktop tab -> Inhibit hibernation when torrents are active. I downloaded the source code of Transmission and saw that it calls a D-Bus method (org.gnome.SessionManager.Inhibit) to the Cinnamon D-Bus session. Firefox I used dbus-monitor to discover what Firefox does when a video is playing. When the video starts to play, then Firefox send two calls: $ dbus-monitor . . . # disable screensaver method call time=1523976795.844938 sender=:1.104 -> destination=org.freedesktop.ScreenSaver serial=9 path=/ScreenSaver; interface=org.freedesktop.ScreenSaver; member=Inhibit string "firefox" string "video-playing" # disable sleeping method call time=1523976795.893407 sender=:1.21 -> destination=:1.3 serial=61 path=/org/gnome/SessionManager; interface=org.gnome.SessionManager; member=Inhibit string "firefox" uint32 0 string "video-playing" uint32 8 . . . 
After calling the last method, the following inhibitor was created: $ dbus-send --session --type=method_call --print-reply --dest=org.gnome.SessionManager /org/gnome/SessionManager org.gnome.SessionManager.GetInhibitors method return time=1523969881.311742 sender=:1.3 -> destination=:1.188 serial=491 reply_serial=2 array [ object path "/org/gnome/SessionManager/Inhibitor6" ] When the video is stopped, the inhibitor is removed: $ dbus-send --session --type=method_call --print-reply --dest=org.gnome.SessionManager /org/gnome/SessionManager org.gnome.SessionManager.GetInhibitors method return time=1523969881.311742 sender=:1.3 -> destination=:1.188 serial=493 reply_serial=2 array [ ] VLC VLC inhibits sleep/suspend in the same way as Firefox: $ dbus-monitor . . . method call time=1523977809.526716 sender=:1.8017 -> destination=org.freedesktop.ScreenSaver serial=3 path=/ScreenSaver; interface=org.freedesktop.ScreenSaver; member=Inhibit string "vlc" string "Playing some media." method call time=1523977809.527152 sender=:1.21 -> destination=:1.3 serial=91 path=/org/gnome/SessionManager; interface=org.gnome.SessionManager; member=Inhibit string "vlc" uint32 0 string "Playing some media." uint32 8 . . . $ dbus-send --session --type=method_call --print-reply --dest=org.gnome.SessionManager /org/gnome/SessionManager org.gnome.SessionManager.GetInhibitors method return time=1523977813.424421 sender=:1.3 -> destination=:1.8018 serial=85789 reply_serial=2 array [ object path "/org/gnome/SessionManager/Inhibitor7750" ] Xviewer When you play a slideshow, Xviewer calls a D-Bus method similar to the above programs. How to manually inhibit suspend There are two popular applets for inhibiting sleep/suspend: Caffeine and the Inhibit Applet (a built-in Cinnamon applet: Right Click on Bottom Panel -> Add Applets to Panel -> Inhibit Applet).
But these applets manually turn on and off inhibit suspend functionality, rather than automatically turning it on and off when certain applications are running.
Prevent system from going to sleep/suspend - how Xviewer/VLC do it
1,389,268,938,000
For certain obvious usages, the grep command can output multiple complaints about search names being directories. Note the multiple "Is a directory" warnings, this typically happens with commands like: grep text * eg: /etc/webmin # grep "log=1" * grep: cluster-webmin: Is a directory config:log=1 grep: cpan: Is a directory grep: man: Is a directory miniserv.conf:log=1 miniserv.conf:syslog=1 grep: mon: Is a directory I know I can use the "-s" option to suppress these warnings about directories (and stderr can be redirected to nul, but that's even worse), but I don't like that because it's extra boilerplate which we have to remember every time, and it also suppresses all warnings, not just ones about directories. Is there some way, where this spurious warning can be suppressed forever and globally ? I'm most interested in Debian and Cygwin.
Depending on the way you want to handle directories contents, grep -d recurse will do it (handling recursively directories), or grep -d skip (ignoring directories and their content). You could have this be automatic, by adding it to ~/.profile or ~/.bashrc (one user) or /etc/profile or /etc/bashrc (all users) alias grep="/bin/grep -d skip"
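For example, with a throwaway directory layout standing in for /etc/webmin:

```shell
#!/bin/sh
# -d skip makes grep silently ignore directory operands.
tmp=$(mktemp -d)
mkdir "$tmp/cluster-webmin"        # a directory plain grep would complain about
printf 'log=1\n' > "$tmp/config"
cd "$tmp"
grep -d skip 'log=1' *             # the match is printed, no "Is a directory"
cd / && rm -rf "$tmp"
```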
suppress grep directory warnings globally
1,389,268,938,000
This tutorial says the following: For a directory, write permission allows a user to delete the directory, modify its contents (create, delete, and rename files in it), and modify the contents of files that the user can read. To test this, I created a directory named subdir that doesn't have the w permission bit, and I placed it inside a directory named dir that have the w and the x permission bits set: d-wx------ 3 robert robert 4096 2017-12-16 9:07 dir d--------- 3 robert robert 4096 2017-12-16 9:07 subdir And I was able to delete subdir from the robert account. So does the tutorial I have linked to give wrong information, or am I missing something?
It's wrong. To delete something you need write permissions on the directory containing it. That also goes for directories themselves: to delete a directory, you need (at a minimum) write permissions on the parent directory. You may need write permissions on the directory as well, but that's not enough by itself. Write permissions on the directory itself are needed when the directory is not empty. In that case, you need to clear out the directory first by deleting everything in it, so you need write permissions on all sub-directories as well (recursively). Then you can delete the directory itself if you have write permissions on the parent.
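A quick reproduction of the question's setup (run it as a regular user; root bypasses these permission checks entirely):

```shell
#!/bin/sh
# dir has write+execute, subdir has no permissions at all.
top=$(mktemp -d)
mkdir -p "$top/dir/subdir"
chmod 0300 "$top/dir"           # -wx on the parent, as in the question
chmod 0000 "$top/dir/subdir"
rmdir "$top/dir/subdir" && echo "subdir removed"   # parent's w+x is enough
chmod u+rwx "$top/dir"
rm -rf "$top"
```

Deleting a non-empty subdir would additionally need write permission on subdir itself, since its contents have to be removed first.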
Does the "w" permission bit on a directory allow you to delete the directory?
1,389,268,938,000
I’m looking for an open source software like Virtual-box that I can run on Linux but gives the possibility to program the BIOS (use a personal BIOS program). I want to understand (in a practical way) the process of computer boot up and have a deeper manipulation of the x86 real mode. I also want to understand the different mechanisms to communicate with the peripherals, i.e. controlling devices like the keyboard and the hard drive, and understand I/O modes and interrupts.
There are several virtual machine emulators that can emulate an x86 processor and peripherals. Each comes with a BIOS, several of them with an open-source BIOS. You should look at QEMU, which operates completely independently of the host (it can run on any machine, though it has mechanisms to run faster if the emulated machine is the same architecture as the host). QEMU comes with SeaBIOS, an open-source PC BIOS. If you want to work in x86 real mode, you can also take a look at Dosbox. Coreboot should also be of interest to you. It's an open source BIOS for x86. Looking at a BIOS will give you some insights on how an x86 processor boots, including all the quirks inherited from 30+ years of history with significant evolution in hardware capabilities. It isn't the best thing to look at if what you want to understand is how to communicate with peripherals. For that, look at the device drivers in an operating system kernel — either Linux, or simpler ones such as MINIX 3. I would also recommend taking a look at other CPU architectures such as ARM and MIPS, so that you get an idea of what's common in OS/hardware interactions and what's specific to the PC architecture.
Is there an open source software to simulate and virtually program a computer BIOS?
1,389,268,938,000
I have a Display that I want to write to. This is possible over the serial port. When I use a USB to RS-232 Converter, that thing works like a charm. I even tried using only the RX, TX and GND wires of the serial converter, and it still works. Now I want to use this Display in a small case paired with a Raspberry Pi, so i don't have any space left for the big USB-RS-232 converter. I have tried using the internal serial port of the Raspberry. It is set to 9600 baud using $ sudo stty -F /dev/ttyAMA0 9600. But when I connect it to the display, it only shows up garbage and normal control-commands (that were working using the RS-232 converter) don't work either. Using $ sudo minicom -b 9600 -o -D /dev/ttyAMA0 and looping the GPIOs TX to RX, it shows up the right characters in the minicom console. Now looping the GPIO-Serial-Port to the USB-RS-232 Converter's RX and TX pins and connecting ground and opening both ports in minicom with baud set to 9600, only sometimes shows some output on the other terminal, but when it shows any output, it is also just garbage.
I'm quite confident the problem is that the Pi does not have an RS232 interface, while the display has. The Pi has an (LV-)UART interface, its TX-pin outputs 0V for a logical 0 and 3.3V for a logical 1. This is quite easy to implement, since 3.3V is already available on the Pi. But this only works for communications on a single PCB or within a single device. For communication between devices over longer distances, a system less prone to interfering signals like RS232 is used. While the logical structure of the waveform (bitrate, timing, start-, stop-, parity- and data-bits) is the same as for UART, the voltage levels are -15V...-3V for a logical 1 and +15V...+3V for a logical 0. This means, there are not only higher (and negative) voltages, their meaning is also inverted. So, if the display expects RS232 levels and gets that 3.3V levels from the Pi, it mostly doesn't recognize the data, and if it does, it's often just garbage. And of course, if you connect RX and TX of the same interface, you get what you expect. But: If the RS232 TX output is not current limited, it could even damage your Pi! There are UART to RS232 converter boards out there, but if you like to solder, the boards just contain a MAX3232 (plus four capacitors). This IC also generates the higher (and negative) voltage levels from the 3.3V supply voltage from the Pi. The more common is the MAX232 (guess why it's called so), but it is for 5V, not 3.3V operation. Finally, because the UART and the RS232 use the same logical structure, it's often not distinguished between both of them, especially by software (programmers). They are often also just called "serial interface", though there are other interfaces like I²C and SPI, which are a type of serial interface, but never considered to be "the" serial interface.
RaspberryPi serial port
1,389,268,938,000
Is there a way to work out how long it would take to reboot a Linux server? To clarify, the time from the reboot command starts to when the server is back up and running (ie all services started, users can log on etc). I tried looking at syslog but that seems to get rotated too quickly. To the nearest minute would suffice. OS = CentOS & Ubuntu Update: if there isn't a simple way - perhaps what would be a way to capture this data for future use.
I'm going to assume you're on CentOS 7+ or Ubuntu 15.04+ which both come with systemd. Systemd has some great tools for figuring out how long your system took to boot along with some visualizations to see why. For the most basic output just run systemd-analyze and you'll get a nice summary like so Startup finished in 853ms (kernel) + 3min 50.610s (initrd) + 10.345s (userspace) = 4min 1.809s That can tell you how much time your last boot took once systemd was started. That doesn't take into account BIOS/hardware initialization or GRUB timeouts but should be accurate for actual OS boot time. If you want to figure out why the OS is taking so long try systemd-analyze blame which will give you a chart of services from longest running to shortest. eg from my system 3min 49.219s systemd-cryptsetup@luks\x2d62611c1c\x2d74ab\x2d4be9\x2d8990\x2d41c0fd863b5a.service 5.315s plymouth-quit-wait.service 3.084s systemd-udev-settle.service 2.275s plymouth-start.service 2.256s docker.service 1.819s powertop.service 778ms firewalld.service 676ms dev-mapper-fedora\x2droot.device 621ms abrtd.service 493ms lvm2-monitor.service Looks like 3 of the 4 minutes it takes to my boot my laptop is because I have an encrypted drive. Finally, with systemd-analyze critical-chain you can see a chain of events that are considered "critical" to get the system booted. From the man page critical means "time-critical chain of units". This is because systemd parallelizes a lot of services. This will list the units that have to wait for another unit and how long it took to start. The time after the unit is active or started is printed after the "@" character. The time the unit takes to start is printed after the "+" character. 
graphical.target @10.336s └─multi-user.target @10.323s └─docker.service @4.900s +2.256s └─network.target @4.868s └─wpa_supplicant.service @4.828s +14ms └─dbus.service @3.753s └─basic.target @3.749s └─sockets.target @3.749s └─docker.socket @3.741s +6ms └─sysinit.target @3.737s └─systemd-update-utmp.service @3.726s +10ms └─auditd.service @3.713s +9ms └─systemd-tmpfiles-setup.service @3.617s +82ms └─fedora-import-state.service @3.568s +36ms └─local-fs.target @3.560s └─run-user-42.mount @5.753s └─local-fs-pre.target @383ms └─systemd-tmpfiles-setup-dev.service @301ms +80ms └─kmod-static-nodes.service @268ms +10ms └─system.slice └─-.slice You can also do some cool stuff by exporting your boot tree to a picture to email it or plot it with svg. See the man page for more details or this related question for a few more details.
Time it takes to reboot a Linux server
1,389,268,938,000
Given in Linux environment exists lot of packages for manipulating strings (grep, awk, sed, ...), I would like a software to extract a capturing group in a php/preg like syntax. Maybe the most close one is grep -P but I don't get how it works. Stuff like cat file.txt | grep -P '/something="([\w]+)"/i' seems not to give me only the content inside the capturing group. Could someone provide me some working examples? Many please, with some variants and limits explained! EDIT: I saw somewhere used sed for doing this purpose but I'm still a bit confused about it's syntax.
pcregrep -io1 'something="(\w+)"' myfile.txt (-i for case insensitive matching, -o1 to print the first capture group). GNU grep supports a -P (if built with perl compatible regex support) and -o. However its -o is limited to printing the whole matched portions. You can however use perl look-around operators to work around that: grep -iPo '(?<=something=")\w+(?=")' myfile.txt (that is, a regexp that matches sequence of word component characters provided it follows something=" and is followed by "). Or with recent enough PCRE: grep -iPo 'something="\K\w+(?=")' myfile.txt (where \K resets the start of the matched string like vim's \zs; perl has no equivalent of vim's \ze to reset the end of the match, so we still need to use a look-ahead operator). But if you're going to use perl regexps, you might as well use perl: perl -C -lne 'print for /something="(\w+)"/ig' myfile.txt With GNU or BSD sed, to return only the right-most match per line: sed -nE 's/.*something="(\w+)".*/\1/pi' myfile.txt Portably (as extended regex support and case insensitive matching are non-standard extensions not supported by all sed implementations): sed -n 's/.*[sS][oO][mM][eE][tT][hH][iI][nN][gG]="\([[:alnum:]_]\{1,\}\)".*/\1/p' myfile.txt That one assumes uppercase i is I. That means that in locales where uppercase i is İ for instance, the behaviour will be different from the previous solution. A standard/portable solution that can find all the occurrences on a line: awk '{while(match(tolower($0), /something="[[:alnum:]_]+"/)) { print substr($0, RSTART+11, RLENGTH-12) $0 = substr($0, RSTART+RLENGTH-1)}}' myfile.txt That may not work correctly if the input contains text whose lower case version doesn't have the same length (in number of characters). Gotchas: There will be some variations between all those solutions on what \w (and [[:alnum:]_]) matches in locales other than the C/POSIX one. 
In any case it should at least include underscore, all the decimal arabic digits and the letters from the latin English alphabet (uppercase and lower case). If you want only those, fix the locale to C. As already mentioned, case insensitive matching is very much locale-dependent. If you only care about a-z vs A-Z English letters, you can fix the locale to C again. The . regexp operator, with GNU implementations of sed at least will never match sequences of bytes that are not part of a valid character. In a UTF-8 locale, for instance, that means that it won't match characters from a single-byte charset with the 8th bit set. Or in other words, for the sed solution to work properly, the character set used in the input file must be the same as the one in the user's locale. perl, pcregrep and GNU utilities will generally work with lines of any length, and containing any arbitrary byte value (but note the caveat above), and will consider the extra data after the last newline character as an extra line. Other implementations of those utilities may not. The patterns above are matched in turn against each line in the input. That means that they can't match more than one line of input. Not a problem for a pattern like something="\w+" that can't span over more than one line, but in the general case, if you want your pattern to match text that may span several lines like something=".*?", then you'd need to either: change the type of record you work on. grep --null, sed -z (GNU sed only), perl -0, awk -v RS='\0' (GNU awk and recent versions of mawk only) can work on NUL-delimited records instead of lines (newline delimited records), GNU awk can use any regexp as the record separator (with -v RS='regexp'), perl any byte value (with -0ooo). pcregrep has a -M multiline mode for that. use perl's slurp mode, where the whole input is the one record (with -0777) Then, for perl and pcre ones, beware that . 
will not match newline characters unless the s flag is enabled, for instance with pcregrep -Mio1 '(?s)something="(.*?)"' or perl -C -l -0777 -ne 'print for /something="(.*?)"/gis' Beware that some versions of grep and pcregrep have had bugs with -z or -M, and regexp engines in general can have some built-in limits on the amount of effort they may put into matching a regexp.
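As a self-contained run of the GNU grep variant (this assumes a grep built with PCRE support, as noted above; the sample file is made up):

```shell
#!/bin/sh
tmp=$(mktemp)
printf 'foo something="bar_baz" tail\nSOMETHING="Qux9" more\n' > "$tmp"
# \K discards everything matched so far; (?=") requires but doesn't print the quote.
grep -iPo 'something="\K\w+(?=")' "$tmp"
# prints: bar_baz
#         Qux9
rm -f "$tmp"
```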
How to extract under linux some capturing groups using command line in a php/preg fashion?
1,389,268,938,000
I was trying to fix my wifi on Kali Linux the other day and was following some tutorial. That didn't work, so I read somewhere that if I run this command iw dev wlan0 del After that command I can't seem to find my wlan device. When I type iwconfig it shows this: lo no wireless extensions. eth0 no wireless extensions. Anyone know what should I do now?
Easiest way is to just reboot. The type of configuration change you've made does not persist across reboots.
Accidentally deleted my wifi device wlan0
1,389,268,938,000
Does zswap compress pages, that are written to the swap device? Is it eligible to reduce swap IO?
At the LSFMM summit in 2013, there was no compression on pages written to the swap device. But it doesn't sound like there are any technical reasons why not, just that it would increase the complexity. Hugh [Dickins] added that compression of page cache (file) pages may be appealing, but the filesystem developers do not seem to be that interested in zcache in general. So he agreed that it might make better sense to start with zswap, perhaps adding zcache features over time. Dan [Magenheimer, zcache hacker] said that he would agree to merging zswap as long as there was an explicit understanding that zswap is not the end of development in this area; there is, he said, a lot more work to be done to gain the full benefits of in-kernel compression. In other words, he would plan to submit patches to increase the functionality of zswap over time. It sounds like it would make a lot of sense to add this, to save disk space and read/write times, but that it would require some more work and complexity. At a guess, it would take some work to be able to efficiently allocate space on the disk for compressed pages of variable size. (That's just my speculation.) We might hope to see this in future, but it obviously depends on the efforts of those who would develop on it. There doesn't seem to be much public discussion about it since 2014. But one way to keep an eye on this might be to monitor the commits made to mm/zswap.c in the kernel.
Is zswap eligible to reduce swap IO?
1,389,268,938,000
In the below 2.6.18 Linux Kernel (Red Hat) server, there is a lot of free memory, but I see some swap is used. I always thought of swap as an overflow when memory has been depleted. Why would it swap with about 7GB (50%) free memory? Swappiness is 60 (default). Meminfo output: MemTotal: 16436132 kB MemFree: 7507008 kB Buffers: 534804 kB Cached: 2642652 kB SwapCached: 39084 kB Active: 6001828 kB Inactive: 2532028 kB HighTotal: 0 kB HighFree: 0 kB LowTotal: 16436132 kB LowFree: 7507008 kB SwapTotal: 2097144 kB SwapFree: 1990096 kB Dirty: 236 kB Writeback: 0 kB AnonPages: 5353644 kB Mapped: 45764 kB Slab: 330660 kB PageTables: 34020 kB NFS_Unstable: 0 kB Bounce: 0 kB CommitLimit: 10315208 kB Committed_AS: 14836360 kB VmallocTotal: 34359738367 kB VmallocUsed: 264660 kB VmallocChunk: 34359472735 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 Hugepagesize: 2048 kB
Swapping only when there is no free memory is only the case if you set swappiness to 0. Otherwise, during idle time, the kernel will swap memory. In doing this the data is not removed from memory, but rather a copy is made in the swap partition. This means that, should the situation arise that memory is depleted, it does not have to write to disk then and there. In this case the kernel can just overwrite the memory pages which have already been swapped, for which it knows that it has a copy of the data. The swappiness parameter basically just controls how much it does this.
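To inspect or change the behaviour described above (vm.swappiness is the standard sysctl; persisting it via /etc/sysctl.conf is a common convention, not the only one):

```shell
#!/bin/sh
# Current value: 0 means "swap only under memory pressure",
# 60 is the usual default that permits idle-time swapping.
cat /proc/sys/vm/swappiness
# As root, to reduce opportunistic swapping for this boot:
#   sysctl -w vm.swappiness=10
# and add "vm.swappiness = 10" to /etc/sysctl.conf to keep it across reboots.
```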
Why is swap used when a lot of memory is still free? [duplicate]
1,389,268,938,000
To get the vim user-manual I am doing vi test and then, once the file is opened, I do :help user-manual This opens up a split window, which makes reading manual inconvenient to read. How do I only open the user-manual in the full window ?
:h user-manual | only only : Make the current window the only one on the screen. All other windows are closed. See: :h only Open from terminal: vi[m] -c 'h user-manual|only'
How do I get just the user-manual for Vim
1,389,268,938,000
I have a certain Linux executable that is called from a bigger application to perform some calculations. I want to modify the way the calculations are performed, for that I first need to understand how the executable is called and in what way the parameters and data are transferred. So, I'd like to monitor the command line parameters, stdin and stdout if this executable is called. The normal operation of the executable should not be disturbed. Is there an easy way to do that? Update: I tried the shell script placeholder idea to grab all the data using the following script: #!/bin/bash export LOGFILE=/tmp/logname echo `env` > $LOGFILE-env echo "$@" >> $LOGFILE-arguments tee -a $LOGFILE-stdin | /path/to/real/executable 2>&1 | tee -a $LOGFILE-stdout This works fine for test input, but it just hangs if it is called for real. Probably there is even more going on than I suspected.
Option 1 would be to modify the source code of the calling app and insert tee into the output pipeline to get a copy of the output for review at that stage. Option 2 would be to write a wrapper script around the executable in question. A quick script that passes on stdin and arguments to the real app, then tees the output to a location for you to review and also spits it back out the same way the app would should be just a couple of lines to whip up. Put it someplace special and add that location to the front of your PATH variable, then run your application. #!/bin/sh /path/to/realapp "$@" | tee /tmp/debug_output Note that "$@" is quoted so arguments containing whitespace survive intact, and stdin flows through to the real app on its own; no cat is needed.
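To illustrate, here is a toy end-to-end run; /tmp/realapp merely stands in for the actual executable, and all the log paths are arbitrary choices:

```shell
#!/bin/sh
# Fake "real" application that consumes arguments.
cat > /tmp/realapp <<'EOF'
#!/bin/sh
echo "result for: $*"
EOF
chmod +x /tmp/realapp

# Wrapper: record arguments, pass everything through, copy stdout to a log.
cat > /tmp/wrapper <<'EOF'
#!/bin/sh
echo "$@" >> /tmp/wrapper-args
/tmp/realapp "$@" | tee -a /tmp/wrapper-out
EOF
chmod +x /tmp/wrapper

/tmp/wrapper --mode fast        # prints: result for: --mode fast
cat /tmp/wrapper-args           # prints: --mode fast
rm -f /tmp/realapp /tmp/wrapper /tmp/wrapper-args /tmp/wrapper-out
```

The caller still sees the exact stdout it expects, while the wrapper's log files capture what was passed and produced.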
Intercept input and output from specific executable
1,389,268,938,000
I am dealing with an embedded system which has some memory that is accessible by a file descriptor (I have no idea what am I saying, so please correct me if I am wrong). This memory is 32 kB and I want to fill it with 0x00 to 0xFFFFFFFF. I know this for text files: exec {fh} >> ./eeprom; for i in {0..32767}; do echo $i >& $fh; done; $fh>&-; This will write ASCII characters 0 to 977. And if I do a hexdump eeprop | head I get: 0000000 0a30 0a31 0a32 0a33 0a34 0a35 0a36 0a37 0000010 0a38 0a39 3031 310a 0a31 3231 310a 0a33 0000020 3431 310a 0a35 3631 310a 0a37 3831 310a 0000030 0a39 3032 320a 0a31 3232 320a 0a33 3432 0000040 320a 0a35 3632 320a 0a37 3832 320a 0a39 0000050 3033 330a 0a31 3233 330a 0a33 3433 330a 0000060 0a35 3633 330a 0a37 3833 330a 0a39 3034 0000070 340a 0a31 3234 340a 0a33 3434 340a 0a35 0000080 3634 340a 0a37 3834 340a 0a39 3035 350a 0000090 0a31 3235 350a 0a33 3435 350a 0a35 3635 How can I fill each address with its uint32, not the ASCII representation?
perl -e 'print pack "L*", 0..0x7fff' > file Would write them in the local system's endianness. Use: perl -e 'print pack "L>*", 0..0x7fff' perl -e 'print pack "L<*", 0..0x7fff' To force big-endian or little-endian respectively regardless of the native endianness of the local system. See perldoc -f pack for details. With bash builtins specifically, you can write arbitrary byte values with: printf '\123' # 123 in octal printf '\xff' # ff in hexadecimal So you could do it by writing each byte of the uint32 numbers by hand with something like: for ((i = 0; i <= 32767; i++)); do printf -v format '\\x%x' \ "$(( i & 0xff ))" \ "$(( (i >> 8) & 0xff ))" \ "$(( (i >> 16) & 0xff ))" \ "$(( (i >> 24) & 0xff ))" printf "$format" done (here in little-endian). In any case, note that 32767 is 0x7fff, not 0xFFFFFFFF. uint32 numbers 0 to 32767 take up 128KiB, not 32kb. 0 to 0xFFFFFFFF would take up 16GiB. To write those 16GiB in perl, you'd need to change the code to: perl -e 'print pack "L", $_ for 0..0xffffffff' As otherwise it would try (and likely fail) to allocate those 16GiB in memory. On my system, I find perl writes the output at around 30MiB/s, while bash writes it at around 250KiB/s (so would take hours to complete). To write 32kb (32000 bits, 4000 bytes, 1000 uint32 numbers) worth of uint32 numbers, you'd use the 0..999 range. 0..8191 for 32KiB. Or you could write 0..16383 as uint16 numbers by replacing L (unsigned long) with S (unsigned short).
How to write binary values into a file in Bash instead of ASCII values
1,389,268,938,000
I'm starting two child processes from bash script and waiting both for completion using wait command: ./proc1 & pid1=$! echo "started proc1: ${pid1}" ./proc2 & pid2=$! echo "started proc2: ${pid2}" echo -n "working..." wait $pid1 $pid2 echo " done" This script is working fine in normal case: it's waiting both processes for completion and exit after it. But sometimes I need to stop this script (using Ctrl+C). But when I stop it, child processes are not interrupted. How can I kill them altogether with main script?
Set up a trap handling SIGINT (Ctrl+C). In your case that would be something like: trap "kill -2 $pid1 $pid2" SIGINT (signal 2 is SIGINT, so the children receive the same interrupt). Just place it after both PIDs have been captured and before the wait command.
Interrupt child processes from bash script on Ctrl+C
1,389,268,938,000
I'm trying to understand what the difference is between DRM (Direct Rendering Manager) and a graphics driver, such as AMD or Nvidia GPU drivers. Reading the DRM wiki[1], it seems to me like DRM is basically a graphics hardware driver, however this doesn't explain the existence of proprietary or FOSS graphics drivers for discrete GPUs. What then, is the difference, or use case, for DRM over mesa or Nvidia drivers? What happens with DRM when AMD drivers are installed? Are they used for different tasks? Are proprietary drivers built around DRM? [1]https://en.wikipedia.org/wiki/Direct_Rendering_Manager
"Graphics driver" can mean any number of things. The way X (the graphical windowing system) works is that there is a central X server, which can load modules ("X drivers") for different hardware. Like vesa, fbdev, nvidia, nouveau, amdgpu. Some of these drivers can work on their own (vesa). Some need linux kernel drivers. Many of these kernel drivers following the "direct rendering manager API", and therefore they are called "DRM drivers". Others, like the proprietary nvidia driver (which needs both an X driver and a kernel driver), don't. It gets more complicated: The hardware consists of parts that read out the framebuffer and display it at different resolutions etc. This is called "modesetting". Modern graphics card also have a GPU, which is used to accelerate 3D drawing (OpenGL). "DRM kernel drivers" provide an interface for both. "Mesa" is a software library that understands OpenGL, but does the rendering either on the CPU, or on some (but not all) GPUs (see here for a list). So the Mesa library can offer this functionality for graphics card that do not or do not sufficiently have hardware for this, or can serve as the OpenGL library for a few GPUs. You could probably make a case to call anything in this complex picture a "graphics driver".
What's the difference between DRM and a graphics driver?
1,389,268,938,000
I get an error when I try to upgrade my Debian-based distro:

After this operation, 347 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
sh: 0: getcwd() failed: No such file or directory
sh: 0: getcwd() failed: No such file or directory

If I cancel the process with Ctrl+C I get the following message:

E: Sub-process /usr/bin/apt-listchanges --apt || test $? -lt 10 received signal 2.
E: Failure running script /usr/bin/apt-listchanges --apt || test $? -lt 10
The error message is obvious: someone removed your current working directory (the directory from which you started the command). I recommend typing cd to go to your home directory and retrying the command. If someone removed your home directory too, cd to / or /tmp.
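The failure mode is easy to reproduce in a scratch directory, and the fix is just changing to any directory that still exists:

```shell
# Delete the shell's current working directory out from under it.
dir=$(mktemp -d)
cd "$dir"
rmdir "$dir"     # our cwd no longer exists
sh -c 'true'     # the shell typically warns: getcwd() failed: No such file or directory
cd /             # the fix: move to a directory that still exists
pwd
```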
Error sh: 0: getcwd() failed: No such file or directory when I try to upgrade Debian
1,389,268,938,000
I'm writing an initramfs script and want to detect USB sticks as fast as possible. When I insert a USB 2.0 stick, the detection of idVendor, idProduct and USB class happens within 100 ms. But the SCSI subsystem does not "attach" until about 1 s has passed, and it takes another 500 ms before the partition is fully recognized. I assume that the driver needs to read the partition table in order to detect partitions. Why does it take so long? I don't expect the URB send/receive time to be that long, or the access time of the flash to take so much time. I've tried 5 sticks from different vendors and the result is about the same.

[ 5731.097540] usb 2-1.2: new high-speed USB device number 7 using ehci-pci
[ 5731.195360] usb 2-1.2: New USB device found, idVendor=0951, idProduct=1643
[ 5731.195368] usb 2-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 5731.195372] usb 2-1.2: Product: DataTraveler G3
[ 5731.195376] usb 2-1.2: Manufacturer: Kingston
[ 5731.195379] usb 2-1.2: SerialNumber: 001CC0EC32BCBBB04712022C
[ 5731.196942] usb-storage 2-1.2:1.0: USB Mass Storage device detected
[ 5731.197193] scsi host9: usb-storage 2-1.2:1.0
[ 5732.268389] scsi 9:0:0:0: Direct-Access     Kingston DataTraveler G3  PMAP PQ: 0 ANSI: 0 CCS
[ 5732.268995] sd 9:0:0:0: Attached scsi generic sg2 type 0
[ 5732.883939] sd 9:0:0:0: [sdb] 7595520 512-byte logical blocks: (3.88 GB/3.62 GiB)
[ 5732.884565] sd 9:0:0:0: [sdb] Write Protect is off
[ 5732.884568] sd 9:0:0:0: [sdb] Mode Sense: 23 00 00 00
[ 5732.885178] sd 9:0:0:0: [sdb] No Caching mode page found
[ 5732.885181] sd 9:0:0:0: [sdb] Assuming drive cache: write through
[ 5732.903834] sdb: sdb1
[ 5732.906812] sd 9:0:0:0: [sdb] Attached SCSI removable disk

Edit

So I've found the delay_use module parameter that by default is set to 1 second, which explains the delay I'm seeing. But I'm wondering if someone can provide context as to why that parameter is needed. A comment suggested that for older USB sticks, delay_use might need to be set to as much as 5 seconds. What is it inside the USB stick that takes so much time: firmware initialization, reads from the flash? I find it hard to believe that we need delays as long as 1 second or more when the latency for accessing flash is in the order of tens of microseconds. I realize that this might be slightly off-topic for this site; if so, I'll go to electronics.stackexchange.com.
You can change the timeout by writing to /sys/module/usb_storage/parameters/delay_use. For older usb disks, a settle delay of 5 seconds or even more may be needed (and 5 was the default until it was reduced to 1 second in 2010), presumably because the controller is starved of power while the disk motors are initializing. Or possibly because the internal SCSI firmware takes time to boot up before it's responsive (can you tell I'm just speculating here?). For modern solid-state storage, it's probably not needed at all, and many people set it to 0. Unfortunately, it's a global parameter that applies to all devices, so if you have any slow devices at all, you have to endure the delay for every mass-storage USB device you use. It would be nice if it could be set per-device by udev, but that's not the case.
Why does it take so long to detect a USB stick?
1,389,268,938,000
In the output of iostat there is a steal field, which according to the man page is used to: Show the percentage of time spent in involuntary wait by the virtual CPU or CPUs while the hypervisor was servicing another virtual processor. But what does that mean? Does it mean the kernel itself is too busy to manage a CPU, causing the CPU to be idle?
The hypervisor is the layer that manages a virtual environment, like VMware, Xen or VirtualBox. So the steal field is an interesting field to monitor, to detect problems or oversubscription of a virtualised environment.

The field itself means the time the VM's CPU has to wait while other VMs (virtual machines) finish their turn (slice), or for a task of the hypervisor itself.

The st field is present in the iostat, vmstat, sar and top commands. However, this thread confirms the steal field is not supported in VMware VMs (I tested it in VMware 5.5 and can corroborate it). VirtualBox doesn't provide CPU steal time data either. It is supported by Xen and KVM virtual environments. vmstat also has the same field in the CPU area, but only since Debian 8. For sar to work, sysstat data collection has to be enabled.

As per man vmstat:

st: Time stolen from a virtual machine. Prior to Linux 2.6.11, unknown.

Related thread: Tools for Monitoring Steal Time (st)

Further reading: CPU Time stolen from a virtual machine? It’s the time the hypervisor scheduled something else to run instead of something within your VM. This might be time for another VM, or for the hypervisor host itself. If no time were stolen, this time would be used to run your CPU workload or your idle thread.
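On Linux, the raw counter behind that percentage is also exposed in /proc/stat: on the aggregate "cpu" line, the eighth value after the label is the steal time, measured in clock ticks. For example:

```shell
# Print the cumulative steal time since boot, in clock ticks.
# Field $1 is the "cpu" label, so steal (the 8th counter) is $9.
awk '/^cpu /{print "steal ticks:", $9}' /proc/stat
```

On bare metal or on hypervisors that don't report steal, this counter simply stays at 0.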
iostat - What does the 'steal' field mean?
1,389,268,938,000
Because I now and then need to use Scandinavian letters despite using US Dvorak as my layout, I would like to use Caps Lock as a compose key. (I don't need Caps Lock at all; I'm not a forum troll.) How would one accomplish this? Using Linux Mint 17 with xfce, if that makes a difference. For the record, I am the only user of this PC, and would prefer to have this be the system default, mainly in xorg, but also in the tty if that's not too much drudgery.
X11 (classic)

Run the program xev from a terminal to see the keycode sent by the CapsLock key. That's the number just after keycode on the third line from the KeyPress event line corresponding to pressing the key. On a PC, the keycode is 66. Create a file called .Xmodmap in your home directory and add the lines

keycode 66 = Mode_switch
clear Lock

Mode_switch is the weird name that X11 gives to Compose. clear Lock is necessary to avoid the key occasionally acting like Caps Lock even though it isn't Caps Lock (Lock is the Caps Lock modifier, and some applications behave a bit strangely when modifier declarations and keysym declarations aren't consistent). Alternatively, you can use the lines

keysym Caps_Lock = Mode_switch
clear Lock

which cause any key currently sending Caps Lock to be rebound to sending Compose instead. Either way, you need to arrange for the command xmodmap ~/.Xmodmap to be executed when your session starts. This is a common convention, but not all combinations of distribution/desktop environment do it automatically. If yours doesn't, add the command to the list of commands executed at the session start (in the XFCE4 configuration editor, go to “Session and Startup” → “Application Autostart” and add that command).

X11 (XKB)

XKB is neater and more powerful, but more cumbersome to use in general than xmodmap. There is a preset in the standard configuration to do what you want, so it's easy in your case: run the following command:

setxkbmap -option compose:caps

See the previous section for how to run this command when your session starts.

Linux console

Find out the keycode of the CapsLock key. Run showkey on a text console, press CapsLock, then wait 10 seconds for showkey to exit. On a PC, the keycode is 0x3a. You need to have the following line in your console keymap file:

keycode 0x3a = Compose

The default console keymap file is /etc/console/boottime.kmap.gz on Debian with the console-tools package. It may be a different file under Mint; this is the file that loadkeys is invoked on in the boot scripts. If you prefer, you can leave the distribution-provided files intact, create a file with the line above, and run loadkeys /path/to/your/file.kmap from /etc/rc.local.
Remapping Caps Lock to Compose
1,389,268,938,000
I have two PPP peers, dsl-line1 and dsl-line2, which are configured with pppd on Ubuntu (Server) Linux. They are brought up by the /etc/network/interfaces file with the auto thingy; however, each PPP connection chooses the name pppX where X varies depending on which comes up first. I would like to make it such that dsl-line1 comes up with a name such as "dsl0" and dsl-line2 with a name like "dsl1", so that I can create firewall rules more easily for each and set up routing (as well as having it easier to configure). My question is: how can I get pppd's interfaces to name themselves?

/etc/ppp/peers/dsl-line1 (dsl-line2 is basically the same apart from the default route being removed and the ethernet interface being different):

noipdefault
defaultroute
replacedefaultroute
hide-password
#lcp-echo-interval 30
#lcp-echo-failure 4
lcp-echo-interval 10
lcp-echo-failure 3
noauth
persist
#mtu 1492
#persist
#maxfail 0
#holdoff 20
plugin rp-pppoe.so eth1
user "[email protected]"

/etc/network/interfaces (the line1 part; again, 2 is very similar):

auto dsl0
iface dsl0 inet ppp
    pre-up /sbin/ifconfig eth1 up # line maintained by pppoeconf
    post-up /bin/sh /home/callum/ppp0_up.sh # Route everything
    post-up /bin/sh /etc/miniupnpd/ppp0_up.sh # Start miniupnpd (if not alr$
    provider dsl-line1

Thanks in advance.
The best bet I found was the "unit" option in the /etc/ppp/peers/... file. This option takes an integer, and the interface is then named pppX where X is the integer given after "unit". I ended up just naming the interfaces pppX in /etc/network/interfaces and using "unit" in the peers files to ensure they are named that way.
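For example (file names come from the question; the unit numbers are illustrative):

```
# /etc/ppp/peers/dsl-line1
unit 0        # pppd names this link ppp0

# /etc/ppp/peers/dsl-line2
unit 1        # and this one ppp1
```

with matching auto ppp0 / iface ppp0 inet ppp stanzas (and provider dsl-line1, etc.) in /etc/network/interfaces.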
Naming PPP interfaces
1,389,268,938,000
When I switch to a TTY and turn on Caps Lock, the Caps Lock LED on my keyboard doesn't turn on. On X it works fine. When I activate Caps Lock, then switch to a TTY and press Caps Lock there, the LED stays on (even though the TTY keeps its own track of the Caps Lock state). So it seems the TTYs don't care about the LED at all. Can I somehow enable the LED on TTYs? It's very annoying to be forced to type something whilst not knowing if Caps Lock is activated. I'm using Debian jessie (frequently updated), arch: amd64.
This is a long standing Debian bug. It seems to relate to an underlying kernel bug which has been long since fixed. The problem seems to have been that Caps_Lock did not work for non-ASCII characters, so the workaround was to map Shift_Lock or CtrlL_Lock to the caps lock key instead. On the Debian side the issue is created by ckbcomp which is used by console-setup to create the console keymap from the XKB keyboard description. Note that the original code referenced in the bug report using Shift_Lock seems to have been replaced by different code which switches for CtrlL_Lock instead. If you are interested you can search for usages of the broken_caps variable in the ckbcomp Perl script. I have no idea if the code is still necessary for any reason, maybe it is worth bumping the bug report. However, the workaround is to put the following line in /etc/kbd/remap and it should be fixed after a reboot: s/CtrlL_Lock/Caps_Lock/ Or for a temporary fix until the next reboot, run the following in a tty session: dumpkeys | sed s/CtrlL_Lock/Caps_Lock/ | sudo loadkeys Update It seems that /etc/kbd/remap is only actually used if setupcon is not available. A better workaround is just to put the following line in /etc/rc.local: dumpkeys | sed s/CtrlL_Lock/Caps_Lock/ | loadkeys
caps lock led not working on Linux console
1,389,268,938,000
In the output of the mount command on my system there are some lines like the following: /dev/sda2 on /var/log type ext3 (rw,relatime,errors=continue,barrier=1,data=ordered) And inside the parentheses, as you can see, it says errors=continue. What does this mean? Is there any error on the sda2 partition? Shall I be concerned, or can I just ignore it?
No; it means that if there is an error, the system will ignore it. There are three possible values for the errors option:

continue (ignore the error)
remount-ro (remount the filesystem read-only)
panic (kernel panic)

Read man 8 mount for more information.
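The option is chosen per filesystem, either on the mount command line (-o errors=...) or in /etc/fstab, e.g.:

```
# /etc/fstab: remount read-only on error instead of continuing
/dev/sda2  /var/log  ext3  rw,relatime,errors=remount-ro  0  2
```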
What does the errors=continue mount option mean?
1,389,268,938,000
I need a small distro that is stable. I don't need a full X server or window manager; I only need it to run one single application with a basic UI that consists of a viewport. I would like the distro to be as small as possible: 700 MB or less would be ideal. Is there a base distro of Ubuntu or similar that I can add whatever I need to from the command line, which basically is the kernel and some way of graphical output? I was thinking of putting DirectFB on it to render the application. Even a live distro would work.
Have a look at TinyCore Linux. It comes in two variants, one CLI and one including X. The X version including a window manager and a simple desktop is about 12MiB. If you don't need a window manager, you can just start X and your application. A window manager is not required.
Need small distro without a desktop or windows manager, just to run a single graphical app [duplicate]
1,389,268,938,000
top - 08:43:16 up 96 days, 22:16,  1 user,  load average: 4.03, 3.92, 3.98
Tasks: 199 total,   1 running, 198 sleeping,   0 stopped,   0 zombie
Cpu0  : 0.0%us, 0.5%sy, 50.0%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1  : 0.0%us, 3.9%sy, 46.8%ni, 49.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2  : 0.0%us, 3.0%sy, 47.5%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3  : 0.0%us, 5.0%sy, 45.5%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st

  PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
 3593 foldinga 39  19  276m  80m 2972 S  402  1.0 12:55.42 FahCore_a3.exe

Now, how come top says it's using 100% of the CPU (400% / 4 cores) while exactly half of the processors are idle??

processor       : 3
vendor_id       : GenuineIntel
cpu family      : 6
model           : 30
model name      : Intel(R) Xeon(R) CPU X3440 @ 2.53GHz
stepping        : 5
cpu MHz         : 2526.932
cache size      : 8192 KB
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4
apicid          : 6
initial apicid  : 6
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ida dts tpr_shadow vnmi flexpriority ept vpid
bogomips        : 5054.02
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

EDIT: In response to i_grok and Max Alginin, I made sure Hyperthreading was enabled on the server. Once I got it turned on, here are the results of top now. Notice, the same symptoms are evident.

top - 10:17:01 up 47 days, 10:28,  3 users,  load average: 7.93, 7.96, 8.02
Tasks: 150 total,   1 running, 149 sleeping,   0 stopped,   0 zombie
Cpu0  : 0.0%us, 2.8%sy, 42.0%ni, 55.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1  : 0.0%us, 2.2%sy, 42.5%ni, 55.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2  : 1.2%us, 3.7%sy, 95.1%ni,  0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3  : 0.0%us, 1.7%sy, 43.1%ni, 55.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu4  : 0.0%us, 1.1%sy, 43.6%ni, 55.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu5  : 0.0%us, 0.0%sy, 44.8%ni, 55.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6  : 0.0%us, 2.2%sy, 42.5%ni, 55.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu7  : 0.0%us, 1.7%sy, 43.1%ni, 55.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem:   8177700k total,  6258704k used,  1918996k free,    29248k buffers
Swap:        0k total,        0k used,        0k free,  5203172k cached

  PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 8132 foldinga 39  19  557m  99m 3060 S  796  1.2  1510:53 FahCore_a3.exe
It's been over four years, and to be fair, I completely forgot about this question and only came back because I saw that I had received a Notable Question badge for it. The problem was tangentially related to hyperthreading, but as I continually pointed out to responders, that was not the cause for the 50% idling. The cause had to do with an inappropriately configured kernel dynamic ticks configuration. I was running Gentoo and using a custom-built kernel. After I upgraded the kernel, sometime in mid-2012, the issue resolved itself.
Top says 100% CPU used, but 50% of cores idle?
1,389,268,938,000
I have downloaded the ChromePlus tarball and extracted it to my home directory. The extracted folder contains an executable that I can double-click to launch ChromePlus, so I assume I do not need to do anything extra to install it. I'm new to Linux. Where should I place the ChromePlus directory? It's currently sitting in my home directory, and it does not look neat. After googling, I thought about /bin, /usr/bin, and /usr/lib. Where is the best place?
By convention, /opt is used for manually installed programs with self contained directories. Programs in self contained directories will not show up in your PATH by default, but generally this is solved by creating symlinks in /usr/local/bin to any binaries under /opt. As implied above, /usr/local is the other location for manually installed files, but it's generally only used for programs that split their files (/usr/local/bin for executables, /usr/local/lib for libraries, etc.). Using /opt and /usr/local avoids potential conflicts between manually installed files and files installed by a package management system (yum, apt, etc. generally install files in /usr/bin, /usr/lib, etc.). Historically, conflicts tended to result in files being silently overwritten, causing all sorts of unexpected behaviour. Modern package management systems are better about this, but it's still best not to rely on automated conflict resolution that may or may not always do what you expect.
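A sketch of the move, using a scratch $PREFIX so it can be tried without root; on a real system you would drop $PREFIX and run the mv/ln under sudo, and the "chromeplus" file names here are illustrative:

```shell
PREFIX=$(mktemp -d)        # stands in for the filesystem root /
SRC=$(mktemp -d)           # stands in for your home directory
mkdir "$SRC/chromeplus" && touch "$SRC/chromeplus/chromeplus"

mkdir -p "$PREFIX/opt" "$PREFIX/usr/local/bin"
mv "$SRC/chromeplus" "$PREFIX/opt/"                 # self-contained tree under /opt
ln -s "$PREFIX/opt/chromeplus/chromeplus" \
      "$PREFIX/usr/local/bin/chromeplus"            # expose the binary on PATH
```

On the real system this reduces to sudo mv ~/chromeplus /opt/ followed by sudo ln -s /opt/chromeplus/chromeplus /usr/local/bin/chromeplus (substitute the actual executable name).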
Where should I place a downloaded tarball?
1,389,268,938,000
I use both man and --help in Bash programming to get help. For example, to get information about the usage of the ls command, I may use man ls or ls --help. Both give somewhat similar output. What is the difference between these two?
For one, --help is not a command, it is an argument that is often given to a command to get help using it. Meanwhile, man is a command, short for "manual". Manual pages are installed by many programs, and are a common way to find help about commands, as well as system calls, (e.g. fork()). If a program installs a manual page, it can always be accessed via the man command, whereas --help is just a common convention, but need not be enforced—it could be just (and only) -h. man also typically uses a pager, such as less, automatically, which can make viewing and searching through the information much easier. Finally, you mention Bash programming in your question—none of this is unique to Bash. Bash doesn't care about the commands themselves or their arguments for the most part.
Difference between 'man ls' and 'ls --help'?
1,389,268,938,000
I would like to setup a Linux system with ScrotWM as the window manager, but I noticed that X is aware of only a few fonts. I would like to have UTF-8 fonts that support multiple languages, including Asian languages like Japanese and Traditional Chinese. How do I install fonts so that X can show them? What kind of fonts would they be? And will the fonts be universally available to other programs like Firefox or OpenOffice once I install them? Or will applications maintain separate groups of fonts for their own use? Thanks.
Step-by-step instructions for installing multiple fonts from a specific folder on Linux:

Open the Terminal application and gain root privileges by typing su and the correct root password.

Go to the folder with the fonts using the cd command. E.g., supposing the user's font folder is in Downloads:

cd /home/user_name/Downloads/Fonts

Link the font files into the system-wide fonts directory /usr/share/fonts:

find . -iname '*.ttf' -exec ln -srvf {} /usr/share/fonts/ \;

Refresh the system font cache with fc-cache, e.g.:

fc-cache -v
How to install fonts for X?
1,389,268,938,000
Is it possible to set Linux kernel sysctl settings (those usually set in /etc/sysctl.d) using the kernel command line (those visible in /proc/cmdline)? (That is, using the grub config file /etc/default/grub and its variable GRUB_CMDLINE_LINUX="...".)
Sysctl parameters can be set via the kernel command-line starting with kernel version 5.8, thanks to Vlastimil Babka from SUSE. sysctl.*= [KNL] Set a sysctl parameter, right before loading the init process, as if the value was written to the respective /proc/sys/... file. Both '.' and '/' are recognized as separators. Unrecognized parameters and invalid values are reported in the kernel log. Sysctls registered later by a loaded module cannot be set this way. Example: sysctl.vm.swappiness=40
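Tying this back to the question's GRUB variable: the example parameter would go into /etc/default/grub like this, after which grub.cfg must be regenerated (e.g. with grub2-mkconfig -o /boot/grub2/grub.cfg, or update-grub on Debian-style systems):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX="sysctl.vm.swappiness=40"
```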
How to set sysctl using kernel command line parameter?
1,572,946,589,000
I have two questions about the Linux kernel. Specifically, does anybody know exactly what Linux does in the timer interrupt? Is there some documentation about this? And what is affected when changing the CONFIG_HZ setting when building the kernel? Thanks in advance!
The Linux timer interrupt handler doesn’t do all that much directly. For x86, you’ll find the default PIT/HPET timer interrupt handler in arch/x86/kernel/time.c: static irqreturn_t timer_interrupt(int irq, void *dev_id) { global_clock_event->event_handler(global_clock_event); return IRQ_HANDLED; } This calls the event handler for global clock events, tick_handler_periodic by default, which updates the jiffies counter, calculates the global load, and updates a few other places where time is tracked. As a side-effect of an interrupt occurring, __schedule might end up being called, so a timer interrupt can also lead to a task switch (like any other interrupt). Changing CONFIG_HZ changes the timer interrupt’s periodicity. Increasing HZ means that it fires more often, so there’s more timer-related overhead, but less opportunity for task scheduling to wait for a while (so interactivity is improved); decreasing HZ means that it fires less often, so there’s less timer-related overhead, but a higher risk that tasks will wait to be scheduled (so throughput is improved at the expense of interactive responsiveness). As always, the best compromise depends on your specific workload. Nowadays CONFIG_HZ is less relevant for scheduling aspects anyway; see How to change the length of time-slices used by the Linux CPU scheduler? See also How is an Interrupt handled in Linux?
Linux timer interrupt
1,572,946,589,000
I need to run different scripts simultaneously on different servers (+/- 2 seconds is not a problem). The condition is to launch the scripts by an invocation from the primary server, and NOT by a cron schedule. I tried SSH, but that way I must wait till the script finishes running on the remote server and only then drop the session. I'm looking for an alternative way that allows starting the script on a remote server without waiting for it to end. P.S. I'm not using the CLI in my use case, but an external script-invocation app that triggers the script on the primary server. Thanks.
GNU screen will allow you to execute remote jobs without staying connected to the server, except on systems where systemd kills all processes upon logout1. Specifically, you can use:

#!/bin/sh
ssh server1 screen -d -m command1
ssh server2 screen -d -m command2
# ...

Each local ssh session terminates immediately after launching the remote screen process, which in turn exits as soon as the executed command finishes. You can suffix each ssh line with & to make the connections in parallel. Excerpt from the manual, screen(1):

-d -m   Start screen in "detached" mode. This creates a new session but doesn't attach to it. This is useful for system startup scripts.

1 If you are on a system that uses systemd configured with KillUserProcesses=yes, you will need to replace screen with systemd-run --user --scope screen.
Run bash on a remote server without waiting till the end
1,572,946,589,000
According to this post, stat can be used to show the atime on Linux, but FreeBSD 10.1 doesn't have GNU stat. How do I list the atime for files?
ls -lu, where -l provides a long listing format and -u makes ls use each file's access time: with -l it is the time that gets displayed, and adding -t (ls -ltu) sorts by it as well.
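A quick illustration with a scratch file; -l, -u and -t here follow the POSIX semantics, so the same commands work with both FreeBSD's and GNU ls:

```shell
cd "$(mktemp -d)"
touch -a -t 202001011200 demo   # give the file a known access time
ls -lu demo                     # long listing, showing atime in the date column
ls -ltu                         # add -t to sort by access time, newest first
```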
How to list atime for files?
1,572,946,589,000
In dmesg I see: [sdb] Attached SCSI removable disk How does Linux decide what is removable and not removable? Is there a way I can look up whether a device is "removable" other than the log, for example somewhere in /sys or /proc?
All block devices have a removable attribute, among other block device attributes. These attributes can be read from userland in sysfs at /sys/block/DEVICE/ATTRIBUTE, e.g. /sys/block/sdb/removable. You can query this attribute from a udev rule, with ATTR{removable}=="0" or ATTR{removable}=="1". Note that removable (the device keeps existing but may have no media) isn't the same thing as hotpluggable (the device can come and go). For example, CD drives are removable but often not hotpluggable. USB flash drives are both, but hard disks in external enclosures are typically hotpluggable but not removable. If you want to find out the nitty-gritty of when a device is considered removable, you'll have to dig into the kernel source. Search for removable — there aren't too many spurious hits. For SCSI devices, the removable bit is read from the device in scsi_add_lun with a SCSI INQUIRY command.
How to tell if a SCSI device is removable?
1,572,946,589,000
I am trying to install a Linux kernel (3.8.1) from source on a Fedora distribution. The kernel is a vanilla one. I follow the kernel's build instructions closely, that is:

make menuconfig
make
sudo make modules_install install
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

Everything in /boot seems fine. I can see System.map, initramfs, and vmlinuz for the newly compiled kernel. The vmlinuz link points to vmlinuz-3.8.1. There are multiple other kernels installed, including an Ubuntu one. grub2 recognises all of them and I can boot each one of them. When I reboot I see all kernels as menu entries and choose 3.8.1. Then I see this message:

early console in decompress_kernel
decompressing Linux... parsing ELF ... done
Booting the kernel.
[1.687084] systemd [1]: failed to mount /dev: no such device
[1.687524] systemd [1]: failed to mount /dev: no such device

Solution: All three posted responses provide the solution. CONFIG_DEVTMPFS was in fact causing the issue. I copied a working kernel's /boot/config-… into the root of the source tree as .config and executed the standard commands for building the kernel, also shown above.
Easiest way to get a working kernel configuration is to just copy Fedora's .config over and then do a make oldconfig to configure it. The configuration is found at /boot/config-*
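Assuming the vanilla source tree is in linux-3.8.1 and Fedora's config for the running kernel is in /boot, that amounts to:

```shell
cd linux-3.8.1
cp /boot/config-"$(uname -r)" .config
make oldconfig        # only prompts for options new in 3.8.1
```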
Self-built kernel: failed to mount /dev: No such device
1,572,946,589,000
By doing this:

# btrfs subvolume snapshot /mnt/1 /mnt/1/snapshot
# tree /mnt/1
/mnt/1
├── a
├── snapshot
│   ├── a
│   └── subv
└── subv
    └── b

3 directories, 3 files

we can create a snapshot of /mnt/1 on Btrfs. My question is: what is the advantage of using a snapshot over simply using rsync to back up the filesystem?
Snapshotting can be seen as a special case of, but distinct from, copying. I'm not really familiar with the specifics of Btrfs, but the following applies to ZFS, from which Btrfs draws much inspiration. Apparently Btrfs snapshots are actually read/write, making them more similar to ZFS file system clones, but that does not change their relationship to file copies.

A snapshot is a read-only, point-in-time copy of the filesystem state. This works because both Btrfs and ZFS are so-called copy-on-write filesystems. Whenever a block of data is changed, the changed data is written to a location on disk different from the original copy. The primary upside of this is that it greatly increases reliability: because very little data needs to be overwritten in place, there is a greatly reduced possibility of a problem leading to data loss. However, there are also other advantages. One such advantage is that you can do filesystem-level snapshotting efficiently. A major downside is that as your storage fills up, it tends to greatly increase storage fragmentation as the block allocator struggles to find somewhere, anywhere, to physically store the copy. As a matter of fact, it is recommended to keep ZFS pool usage below 80%, presumably not least for that exact reason.

A snapshot basically tells the filesystem code "these blocks are still needed". Hence, they won't be reclaimed and potentially overwritten with new data. However, they still reference the same old data blocks.

Now, how is that different from simply making a copy using rsync, cp, cat or whatever? It's different because until the data actually changes, no additional physical copy of the data is made. It's like hardlinks on steroids: the same physical on-disk copy of the data is used when accessing a file under different names. The difference is that with hardlinks, a change to the file under one name propagates to every other copy, because they really reference the same data blocks. With copy-on-write and snapshotting, the changed blocks only show up in the place where they were changed. (With read-only snapshots, this means in the "current" version of the file.) You also only need to rewrite the blocks that have actually been changed; the remaining blocks are left exactly where they are. For doing things like snapshotting files containing VM disk images, for example, this can make a massive difference in the amount of data that needs to be stored on disk.

So, to recap:

Snapshotting only requires as much disk space as the changed blocks require. Copying requires the number of copies times the size of the file.
Snapshots are read-only or read/write, depending on file system design. Copies are read/write by design.
Copies are independent. Snapshots reference the same data blocks as the current version of the file, until the current version of the file changes (in whole or part).
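The hardlink comparison can be seen in a quick shell experiment with scratch files:

```shell
cd "$(mktemp -d)"
echo v1 > original
ln original hardlink    # second name for the same data blocks (same inode)
cp original copy        # an independent physical copy
echo v2 > original      # rewrite the data in place
cat hardlink            # prints v2: the hardlink sees the change
cat copy                # prints v1: the independent copy does not
```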
How are filesystem snapshots different from simply making a copy of the files?
1,572,946,589,000
I have a main script that I'm running, and from it I have a second "slow process" I want to kick off, and "do something" in the main script if it doesn't complete in the time limit, depending on whether it completed or not. N.B. If the "slow process" finishes before my time limit, I don't want to have to wait the entire time limit. I want the "slow process" to keep going so I can gather stats and forensics about its performance. I've looked into using timeout; however, it will kill the slow process when the limit expires. Consider this simplified example.

main.sh

result=`timeout 3 ./slowprocess.sh`
if [ "$result" = "Complete" ]
then
    echo "Cool it completed, do stuff..."
else
    echo "It didn't complete, do something else..."
fi

slowprocess.sh

#!/bin/bash
start=`date +%s`
sleep 5
end=`date +%s`
total=`expr $end - $start`
echo $total >> /tmp/performance.log
echo "Complete"

Here, because it uses timeout, the slow process dies and nothing winds up in /tmp/performance.log. I want slowprocess.sh to complete, but I want main.sh to go on to its next step even if slowprocess.sh doesn't finish in the 3 seconds.
With ksh/bash/zsh:

{
  (./slowprocess.sh >&3 3>&-; echo "$?") |
    if read -t 3 status; then
      echo "Cool it completed with status $status, do stuff..."
    else
      echo "It didn't complete, do something else..."
    fi
} 3>&1

We duplicate the original stdout onto fd 3 (3>&1) so we can restore it for slowprocess.sh (>&3), while stdout for the rest of the (...) subshell goes to the pipe to read -t 3.

Alternatively, if you want to use timeout (here assuming GNU timeout):

timeout --foreground 3 sh -c './slowprocess.sh;exit'

would avoid slowprocess.sh being killed (the ;exit is necessary for sh implementations that optimise by executing the last command in the shell process).
timeout without killing process in bash
1,572,946,589,000
This is a server that is running on Vmware ESXi: SERVER:/root # cat /etc/SuSE\-release SUSE Linux Enterprise Server 11 (x86_64) VERSION = 11 PATCHLEVEL = 2 SERVER:/root # rpm -qa|grep -i vmware vmware-open-vm-tools-common-8.0.3-258828.sles11sp1 vmware-open-vm-tools-nox-8.0.3-258828.sles11sp1 vmware-tools-nox-8.0.3-258828.sles11sp1 vmware-tools-common-8.0.3-258828.sles11sp1 SERVER:/root # How can I figure out how many physical CPUs are assigned to the vmware guest? I only have access to the guest, not the host
As far as I can tell, you can get this info only from the vSphere Client (but correct me if there is a way -- I asked many people, and that was the conclusion). P.S.: vmware-toolbox-cmd may be able to do this, but I can't see it on the servers: http://www.virtuallyghetto.com/2011/01/how-to-extract-host-information-from.html Where does vmware-toolbox-cmd get its information? Aren't there any alternatives for it?
How to get CPU info on a vmware guest
1,572,946,589,000
I was wondering if there is an easy way to find the maximum size that is supported by Linux sockets? (Is this configurable? If so where?) For example, most of the socket examples found on the web send "Hello Socket" or some such other small string, however if I put the whole of War And Peace into the socket, when does it break? As everything is a file, is it the maximum file size? How is it coordinated when sockets connect different file systems? I'm most interested in stream sockets.
net.core.rmem_max and net.core.wmem_max are your thing. You can examine their values with # sysctl net.core.rmem_max and set them with # sysctl -w net.core.rmem_max=8388608 These are the socket buffer sizes, when receiving and sending, respectively. They have default values, too, - rmem_default and wmem_default.
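The same limits are also exposed under /proc, which is handy in scripts. A minimal sketch (the paths are the standard Linux ones):

```shell
# Show the default and maximum socket buffer sizes, in bytes.
for f in rmem_default rmem_max wmem_default wmem_max; do
    printf '%-14s %s\n' "$f" "$(cat /proc/sys/net/core/$f)"
done

# A change made with `sysctl -w` lasts until reboot; to persist it, put
#   net.core.rmem_max = 8388608
# in /etc/sysctl.conf (or a file in /etc/sysctl.d/) and run `sysctl -p`.
```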
Size of data that can be written to / read from sockets
1,572,946,589,000
I am writing scripts to initialize and configure a large system with many components. Each component has its own log file. I would like to change the color of the component file name to red whenever an error occur on its installation/configuration. How can I do it?
Google will find you the answer. Print Hello world in red: echo -e "\033[0;31mHello world\033[0m" Explained <esc><attr>;<foreground>m <esc> = \033[ ANSI escape sequence, some environments will allow \e[ instead <attr> = 0 Normal text - bold possible with 1; <foreground> = 31 30 + 1 = Color red - obviously! m = End of sequence \033[0m Reset colors (otherwise following lines will be red too) Look at http://en.wikipedia.org/wiki/ANSI_escape_code for full list of colors and other functions (bold etc). The command tput, if available, will make life easier: echo -e "$(tput setaf 1)Hello world$(tput sgr0)" Can even save sequences in vars for simpler use. ERR_OPEN=$(tput setaf 1) ERR_CLOSE=$(tput sgr0) echo -e "${ERR_OPEN}Hello world${ERR_CLOSE}"
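Tying this back to the original question -- coloring a component's log file name red only when its step fails -- here is a small sketch. The function and file names are made up for illustration; printf is used to build the sequences so it also works in plain sh:

```shell
red=$(printf '\033[0;31m')
reset=$(printf '\033[0m')

# report <logfile> <exit-status>: print the name, in red on failure.
report() {
    if [ "$2" -ne 0 ]; then
        printf '%s%s%s\n' "$red" "$1" "$reset"
    else
        printf '%s\n' "$1"
    fi
}

report component-a.log 0    # normal color
report component-b.log 1    # red
```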
Change the color of the file name text
1,572,946,589,000
When I ssh as root, my shell is bash, but when it's a non-root user it is sh. How can I make them both to use bash? This actually goes against the logic in this question: Why root's default shell is configured differently with other normal user account's default shell?
Please see man usermod. An example would be sudo usermod -s /bin/bash username.
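To check which shell an account currently gets, you can read field 7 of its passwd entry -- a quick sketch (on LDAP/NIS systems, `getent passwd` is the more general source than reading /etc/passwd directly):

```shell
# Print the login shell recorded for a user in /etc/passwd.
shell_of() {
    awk -F: -v u="$1" '$1 == u { print $7 }' /etc/passwd
}

shell_of root    # e.g. /bin/bash
# After `sudo usermod -s /bin/bash username`, `shell_of username`
# should print /bin/bash (it takes effect on the next login).
```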
different shell for root and non-root user
1,572,946,589,000
I read many tutorials about the use of the kill command; they mostly show 3 approaches: kill -15 <pid> kill -SIGTERM <pid> kill -TERM <pid> For scripting purposes, and for portability with macOS too, the code numbers are not going to be used, because kill -l on macOS is different than on Linux. So here the signal names come into play. Question What is the correct or suggested approach to send the signal name through the kill command? I mean: SIGsomething or something? And why? These 2 approaches exist for a reason, right? Is there a mandatory reason to use one approach over the other? Environment This situation is for Ubuntu Desktop/Server and Fedora Workstation/Server
The most portable variant is kill -s TERM -- … That’s the form specified in POSIX with no extensions: -s followed by the signal’s name, as defined in signal.h, with no SIG prefix, or 0 for the “null” signal (used to check for the existence of a process with a given identifier). SIGTERM is the default signal, so sending that to processes or process groups can be done using the following canonical forms: kill <pid> kill -- -<pgid> On Linux in general, most implementations of kill (including shell builtins) accept signals as numbers or names with or without a SIG prefix; notable exceptions include the kill builtin of dash, which is the default /bin/sh in Debian-based distributions, and of the Schily Bourne Shell. There’s no “mandatory” reason to use one form rather than another, among whatever forms are supported by the tools you use. I would personally avoid numeric forms because they can appear to be ambiguous: is kill -9 -15 intended to send SIGTERM to process groups 9 and 15, or SIGKILL to process group 15? (It’s the latter, but some readers may wonder.) I would also omit the SIG prefix since that’s not recognised everywhere. Note that POSIX does specify a few numeric signal values: Number Signal 0 “Null” signal 1 HUP 2 INT 3 QUIT 6 ABRT 9 KILL 14 ALRM 15 TERM
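A short sketch of the portable form in action -- the child traps SIGTERM so we can see the name-based delivery working (the exit code in the trap is arbitrary):

```shell
# Start a child that catches SIGTERM, then stop it with the POSIX form.
sh -c 'trap "echo child: got TERM; exit 0" TERM; while :; do sleep 1; done' &
pid=$!

sleep 1                 # give the child time to install its trap
kill -s TERM "$pid"     # -s plus the unprefixed signal name
wait "$pid"
echo "child exit status: $?"
```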
What is correct or suggested approach to send the signal name through 'kill' command?
1,572,946,589,000
I am running a simple test case on a Linux operating system with the dd command. I create a simple disk image with this command: dd if=/dev/urandom of=disk.img bs=1M count=100 This is the first test. Now I run these commands: dd if=disk.img of=output_1 bs=1k count=4 dd if=disk.img of=output_2 bs=4k count=1 md5sum output_1 output_2 bc0245c10ff529042fd2a5335ed1573f output_1 bc0245c10ff529042fd2a5335ed1573f output_2 You can see output_1 and output_2 have exactly the same md5 hash. This is the second test. Now I add the skip parameter to the dd command: dd if=disk.img of=output_1 skip=1500 bs=1k count=4 dd if=disk.img of=output_2 skip=1500 bs=4k count=1 c5b0e8dde317c25011b31a5c48580477 output_1 4585d39fcf93cec4abc6c55094aac724 output_2 Why is the result different for these blocks? What part of my test is wrong?
The skip parameter is in (input) blocks, not in bytes, as written in the man page: skip=N skip N ibs-sized blocks at start of input In the first case the skipped part is: 1500 * 1KiB = 1536000 bytes The second case: 1500 * 4KiB = 6144000 bytes As long as the involved values divide exactly you can adjust. For the second case using 1500*1k/4k=375 (skip=375) will give back the same result as the first case.
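The arithmetic is easy to check on a scratch file: keep the byte offset constant by scaling skip with the block size. A sketch using GNU dd (status=none just silences the transfer summary; GNU dd also accepts iflag=skip_bytes if you prefer to give the offset in bytes):

```shell
# 8 MiB of random data to slice from.
f=$(mktemp)
dd if=/dev/urandom of="$f" bs=1M count=8 status=none

# Same byte offset (6144000) and length (4096) expressed two ways:
dd if="$f" of="$f.a" bs=4k skip=1500 count=1 status=none  # 1500 * 4k
dd if="$f" of="$f.b" bs=1k skip=6000 count=4 status=none  # 6000 * 1k

cmp "$f.a" "$f.b" && echo "identical"
rm -f "$f" "$f.a" "$f.b"
```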
Linux dd problem on difference result for same blocks
1,572,946,589,000
I stumbled upon Lynis - a security auditing tool for linux - and ran it on my Raspberry Pi to see if I could harden it a bit more. I got one warning in the Authentication group that confuses me. - sudoers file [ FOUND ] - Permissions for directory: /etc/sudoers.d [ WARNING ] - Permissions for: /etc/sudoers [ OK ] - Permissions for: /etc/sudoers.d/010_pi-nopasswd [ OK ] - Permissions for: /etc/sudoers.d/010_at-export [ OK ] - Permissions for: /etc/sudoers.d/README [ OK ] - Permissions for: /etc/sudoers.d/pihole [ OK ] However further down in the Result section it says: -[ Lynis 2.7.5 Results ]- Great, no warnings Suggestions (5): ---------------------------- [...] Also none of the Suggestions are referring to this Warning. These are the permissions for the mentioned directory: $ ls -l /etc/ | grep sudo -r--r----- 1 root root 669 Nov 13 2018 sudoers drwxr-xr-x 2 root root 4096 Dec 28 2018 sudoers.d/ Does anyone know why I get this warning and where I might find more detailed information?
Lynis expects /etc/sudoers.d to be unreadable by “others”, i.e. rwx[r-][w-][x-]---. If you run chmod 750 /etc/sudoers.d the warning will disappear. The information should have been logged in the Lynis log file...
Why do I get a warning for the sudoers.d when doing an audit with Lynis?
1,572,946,589,000
I'm looking for a file system that stores files by block content, so that blocks shared by similar files are only stored once. This is for backup purposes. It is similar to what block-level backup tools such as zbackup offer, but I'd like a Linux file system that does that transparently.
Assuming your question is about data deduplication, there are a few file systems which support that on Linux: ZFS, with online deduplication (so data is deduplicated as it is stored), but with extreme memory requirements which make the feature hard to use in practice; Btrfs, with “only” out-of-band deduplication, albeit with tightly-integrated processes which provide reasonably quick deduplication after data is stored; SquashFS, but that probably doesn’t fit your requirements because it’s read-only. XFS is supposed to get deduplication at some point, and Btrfs is also supposed to get online deduplication. Keep an eye on Wikipedia’s file system comparison to see when this changes.
Is there a block-level storage file system?
1,572,946,589,000
Recently there is a popular github repo called lsix, and it's using sixel graphics to display images inside a terminal. Currently I am using rxvt-unicode as my terminal emulator, but it seems not to work well with sixel. Anyone know how to make it support sixel? (I am using Ubuntu 18.04 LTS FYI)
Besides using the rxvt-unicode-sixel fork, it might be possible to implement sixel by writing a perl extension. Documentation for that is in the urxvtperl(3) manpage. I don't know much about sixel but I imagine it's a matter of: intercepting the sixel escape sequences, interpreting them and not letting them pass through to the main escape sequence interpreter. You can replace the sequence with newline characters to displace the correct number of lines to fit the image's height, maybe scaled to fit the width. drawing the image. You can get the correct window id via the API urxvt provides to extensions, and use regular xlib or xcb functions to draw the image if need be. watch out for events like scrolling to redraw the image as needed. I see many possibilities here that could be configurable, though I don't know if there are standards on sixel implementations. For example, what happens to images when resizing a terminal? Is it clipped? Is it scaled? only on creation or on every resize as well? what happens when the cursor is then moved over the image and one writes enough for text to wrap? what happens to the image and wrapped text when you resize then? etc. I think the ideal would be to initially draw it at the scale of what's smaller between the terminal width and the image size, and set that as the maximum size of the image. Rescale the image on terminal resize, while respecting the maximum size set. With respect to text drawn over it, it might get a little complicated to keep that text over the image on redrawing the image... Sorry, seems I got a little excited and got out of scope for your answer. Kind of wish I had the time to work on this. EDIT: To answer to skepticism in the comments on the ability to work with pixels from a urxvt perl extension, here is a proof of concept. 
It sets a pixel the color white on coordinate (10, 10) from the top-left: use strict; use warnings; use X11::Protocol; my $X = X11::Protocol->new; sub on_refresh_end { my $term = shift(@_); my $gc = $X->new_rsrc; $X->CreateGC($gc, $term->vt, foreground => $X->white_pixel); $X->PolyPoint($term->vt, $gc, 0, (10,10)); $X->flush; } To install this extension, put it in ~/.urxvt/ext/sixel-proof-of-concept, add it to ~/.Xresources (or ~/.Xdefaults if you use that) by adding the line URxvt.perl-ext-common: sixel-proof-of-concept, load that by doing xrdb ~/.Xresources, and make sure you have the X11::Protocol perl module installed.
Is there any way to let urxvt support sixel?
1,572,946,589,000
I'm working with Docker in action's book, and I have seen the term "process accounting" several times. I am in a containerization of the application context. I would like to know more about this concept of process accounting. Google found me some finance accounting articles; I am looking for the meaning related to computer systems. Would you please provide some explanation about this concept?
The Linux kernel has a built-in process accounting facility. It allows system administrators to collect detailed information in a log file each time a program is executed on a Linux system. Then the administrator can analyze the data in these log files and draw conclusions. To shed more light on this term, let me give a few examples: The administrator can collect information about who has been playing games on a Linux computer and for how long. One of the earliest uses of process accounting was to calculate the CPU time absorbed by users at computer installations and then bill users accordingly. As another example, process accounting can be turned on for a week to record the names of all the commands executed in a log file. The administrator can then parse the log file to find out which command was run most often. The most typical application of process accounting is as a supplement to system security measures. In the case of a break-in on a company server, the log files created by the process accounting facility are useful for collecting forensic evidence. Turning on process accounting requires significant disk space. For example, on a Pentium III system with Red Hat 7.2, each time a program is executed, 64 bytes of data are written to the process accounting log file. Process accounting commands are as follows: **Command Name** **Purpose** accton Enables or disables process accounting acctentries Counts the number of accounting entries in the log file accttrim Truncates the accounting file specified dumpacct Dumps the contents of the log file dump-acct Similar to dumpacct handleacct.sh Script to compress and back up logs and delete the oldest lastcomm Prints commands executed on the system, most recent first sa Summarize accounting information More information about installation and utilization of process accounting can be found in this Linux Journal article.
What does "process accounting" mean in Linux?
1,572,946,589,000
I am searching for files by finding a partial file name: find /script -name '*file_topicv*' /script/VER_file_topicv_32.2.212.1 It works, but not when the partial file name is a variable: var=file_topicv find reports that the file is not found (in spite of the file existing): find /script -name '*$var*' What is wrong here? I also tried these: find /script -name "*$var*" find /script -name "*\$var*" find /script -name "*\\$var*" but none of those works. Update: I think this is the problem: var=` find /tmp -name '*.xml' -exec sed -n 's/<Name>\([^<]*\)<\/Name>/\1/p' {} + | xargs ` echo $var generateFMN ls /script/VERSIONS | grep "$var" NO OUTPUT var=generateFMN ls /script/VERSIONS | grep "$var" VER_generateFMN_32.2.212.1 So why does $var from the find command cause the problem? (I removed the whitespace with xargs.)
The first double-quoted one should work: $ touch asdfghjkl $ var=fgh $ find -name "*$var*" ./asdfghjkl Within single quotes ('*$var*'), the variable is not expanded, and neither is it expanded when the dollar sign is escaped in double quotes ("*\$var*"). If you double-escape the dollar sign ("*\\$var*"), the variable is expanded but find gets a literal backslash, too. (But find seems to take the backslash as an escape again, so it doesn't change the meaning.) So, confusing though as it is, this also works: $ set -x $ find -name "*\\$var*" + find -name '*\fgh*' ./asdfghjkl You can try to run all the others with set -x enabled to see what arguments find actually gets. As usual, wrap the variable name in braces {}, if it's to be followed by letters, digits or underscores, e.g. "*${prefix}somename*".
How to use find command with variables
1,572,946,589,000
Hi there, I am having a problem connecting my Raspberry Pi to a WiFi dongle. I have followed a lot of tutorials from the internet but no success so far. My WiFi dongle can scan the networks but it's not connecting to any network. Here is what my configuration file looks like: ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev country=GB update_config=1 network={ ssid="noname" #psk="zong4gisbest" psk=ead5b0b7e82e1a68f01e9a17a2a7719ec24575c89bb5b5805e4ae49c80daa983 } Here are the results of my commands on Raspbian iwconfig wlan0 unassociated Nickname:"<WIFI@REALTEK>" Mode:Auto Frequency=2.412 GHz Access Point: Not-Associated Sensitivity:0/0 Retry:off RTS thr:off Fragment thr:off Power Management:off Link Quality:0 Signal level:0 Noise level:0 Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 eth0 no wireless extensions. lo no wireless extensions. lsusb Bus 001 Device 004: ID 0bda:0179 Realtek Semiconductor Corp. RTL8188ETV Wireless LAN 802.11n Network Adapter Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Can you please help me resolve the issue? Thanks.
Edit your wpa_supplicant.conf , change the following lines: network={ ssid "noname" psk"zong4gisbest" to: network={ ssid="noname" #psk="zong4gisbest" psk=ead5b0b7e82e1a68f01e9a17a2a7719ec24575c89bb5b5805e4ae49c80daa983 } save then run wpa_supplicant -iwlan0 -D wext -c/etc/wpa_supplicant/wpa_supplicant.conf -B dhclient wlan0 The error: nl80211: Driver does not support authentication/association or connect commands mean that the standard nl80211 doesn't support your device , you should use the old driver wext. To correctly set up your wpa_supplicant.conf , it is better to use the wpa_passphrase command: wpa_passphrase YOUR-SSID YOUR-PASSWORD >> /etc/wpa_supplicant/wpa_supplicant.conf To automatically connect to your AP after restart edit the wlan0 interface on your /etc/network/interfaces as follow: allow-hotplug wlan0 iface wlan0 inet dhcp wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
How to get the wifi working with r8188eu driver on my raspberry pi?
1,572,946,589,000
I need to print the 10 processes that are using the most CPU. Also I need to print their ID, and the command they were started with. What I've found is that the command ps -ax -u prints all the processes and their %CPU usage. The command ps -ax -u --sort pcpu prints all the processes sorted by the %CPU usage, from the least to the most, but I need to print only 10 processes from the most to the least. I have to use something like sort -r to make a reverse sorting, but the command ps -ax -u --sort -r pcpu produces an error. So, how can I make a reverse sorting and print only 10 of the processes?
To print the 10 processes that use the most CPU: ps -aux --sort -pcpu | head The sorting syntax is [+|-]key[,[+|-]key[,...]]. The "+" is optional since the default direction is increasing numerical or lexicographic order. (This long option is identical to the k option.) For example: ps jax --sort=uid,-ppid,+pid head will print the first/top 10 lines of a file or standard input by default.
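If you only want the three columns the question asks for (PID, %CPU, command), an explicit output format keeps the listing tidy. A sketch using procps-style -eo/--sort options; head -n 11 keeps the header line plus ten processes:

```shell
# PID, CPU usage and full command line of the top 10 CPU consumers.
ps -eo pid,pcpu,args --sort=-pcpu | head -n 11
```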
Print processes, sorted by usage of CPU
1,572,946,589,000
In particular, I'd like to add a fd flag and a branch in a couple of fd-handling syscalls that should be used if the flag is set instead of the current code. I think the only thing that matters here for the purposes of this question is that this should be a generic rather than hardware specific modification. How do I set things up so that I can rebuild the modified kernel and test the new feature quickly? I figure I need a basic setup that'll boot in a virtual machine and run my test code, which I guess could be simply in initram and the boot might not need to go any further (?) Are there any good guides on this or can you explain it in a single answer here?
eudyptula-boot is quite handy for this; its introductory blog post has more details, but basically it allows you to boot a VM using the kernel you wish to test, and your existing filesystems (using overlayfs). That way you can quickly check a kernel without rebooting, and you still have access to all your files. The only requirement on the kernel being tested is that it support overlayfs and 9p; these are easy to activate in the kernel configuration before building it.
How do I quickly build and test the kernel if I want to modify a system call
1,572,946,589,000
I would like to read a bit of the source code to try and understand how it all fits together, but cannot see where to start. Which file in the Linux source code is the main file used to compile the kernel? I was half expecting to find a kernel/main.c, but there are many files under kernel/ and I cannot see which one is the main one? Is it kernel/sys.c?
The handover from the bootloader to the kernel necessarily involves some architecture-specific considerations such as memory addresses and register use. Consequently, the place to look is in the architecture-specific directories (arch/*). Furthermore, handover from the bootloader involves a precise register usage protocol which is likely to be implemented in assembler. The kernel even has different entry points for different bootloaders on some architectures. For example, on x86, the entry point is in arch/x86/boot/header.S (I don't know of other entry points, but I'm not sure that there aren't any). The real entry point is the _start label at offset 512 in the binary. The 512 bytes before that can be used to make a master boot record for an IBM PC-compatible BIOS (in the old days, a kernel could boot that way, but now this part only displays an error message). The _start label starts some fairly long processing, in real mode, first in assembly and then in main.c. At some point the initialization code switches to protected mode. I think this is the point where decompression happens if the kernel is compressed; then control reaches startup_32 or startup_64 in arch/x86/kernel/head_*.S depending on whether this is a 32-bit or 64-bit kernel. After more assembly, i386_start_kernel in head32.c or x86_64_start_kernel in head64.c is invoked. Finally, the architecture-independent start_kernel function in init/main.c is invoked. start_kernel is where the kernel starts preparing for the real world. When it starts, there is only a single CPU and some memory (with virtual memory, the MMU is already switched on at that point). The code there sets up memory mappings, initializes all the subsystems, sets up interrupt handlers, starts the scheduler so that threads can be created, starts interacting with peripherals, etc.
The kernel has other entry points than the bootloader: entry points when enabling a core on a multi-core CPU, interrupt handlers, system call handlers, fault handlers, …
Looking at the source files, where does Linux start executing?
1,572,946,589,000
I'm trying to understand the Linux file system, and one of my questions is: 1- Why are there multiple folders for executable files: /usr/bin, /usr/sbin and /usr/local/bin? Are there any differences between them? 2- If I have an executable file and I want to add it to my system, which of these three locations is the best for me?
Run man hier from the command line to get the answer to your first question. It depends. See /usr/bin vs /usr/local/bin on Linux
Why there are multiple folders for executable files in Linux? [duplicate]
1,572,946,589,000
I have a read-only file, F. A program, P, that I'm not the author of, needs to read F. I want the content of F to come from another 'generator' program, G, whenever P tries to read F (taking F to be an ordinary file) and not any earlier. I tried doing the following: $ mkfifo /well-known/path/to/F # line #1 $ G > /well-known/path/to/F # line #2 Now, when P starts up and tries to read F, it appears to be able to read the output generated by G just as I wished it to. However, it can do so only once, since G after all gets to run only once! So, if P had a need to read F again later in its execution, it would end up blocking on the fifo! My question is, other than bracketing line #2 above in some sort of an infinite loop, is there any other (elegant) alternative for the above? What I'm hoping for is, some way of registering a 'hook' program into the file-open system call such that the file-open would result in the launching of the hook-program and the file-read in the reading of the hook-program output. Obviously the assumption here is: the read will happen sequentially from file beginning to file end, and never in random seeks.
FUSE + a soft-link (or a bind mount) is a solution, though I would not consider it "elegant", there's quite a lot of baggage. On *BSD you'd have the simpler option of portalfs, with which you could solve the problem with a symlink – there was a port of it to Linux many years ago, but it seems to have been dropped, presumably in favour of FUSE. You can quite easily inject a library to override the required open()/open64() libc call(s) that it makes. e.g.: #define _GNU_SOURCE #include <stdio.h> #include <fcntl.h> #include <string.h> #include <dlfcn.h> #include <stdarg.h> // gcc -Wall -rdynamic -fPIC -nostartfiles -shared -ldl -Wl,-soname,open64 \ // -o open64.so open64.c #define DEBUG 1 #define dfprintf(fmt, ...) \ do { if (DEBUG) fprintf(stderr, "[%14s#%04d:%8s()] " fmt, \ __FILE__, __LINE__, __func__, __VA_ARGS__); } while (0) typedef int open64_f(const char *pathname, int flags, ...); typedef int close_f(int fd); static open64_f *real_open64; static close_f *real_close; static FILE *mypipe=NULL; static int mypipefd=-1; //void __attribute__((constructor)) my_init() void _init() { char **pprog=dlsym(RTLD_NEXT, "program_invocation_name"); dfprintf("It's alive! argv[0]=%s\n",*pprog); real_open64 = dlsym(RTLD_NEXT, "open64"); dfprintf("Hook %p open64()\n",(void *)real_open64); if (!real_open64) printf("error: %s\n",dlerror()); real_close = dlsym(RTLD_NEXT, "close"); dfprintf("Hook %p close()\n",(void *)real_close); if (!real_close) printf("error: %s\n",dlerror()); } int open64(const char *pathname, int flags, ...) 
{ mode_t tmpmode=0; va_list ap; va_start(ap, flags); if (flags & O_CREAT) tmpmode=va_arg(ap,mode_t); va_end(ap); dfprintf("open64(%s,%i,%o)\n",pathname,flags,tmpmode); if (!strcmp(pathname,"/etc/passwd")) { mypipe=popen("/usr/bin/uptime","r"); mypipefd=fileno(mypipe); dfprintf(" popen()=%p fd=%i\n",mypipe,mypipefd); return mypipefd; } else { return real_open64(pathname,flags,tmpmode); } } int close(int fd) { int rc; dfprintf("close(%i)\n",fd); if (fd==mypipefd) { rc=pclose(mypipe); // pclose() returns wait4() status mypipe=NULL; mypipefd=-1; return (rc==-1) ? -1 : 0; } else { return real_close(fd); } } Compile and run: $ gcc -Wall -rdynamic -fPIC -nostartfiles -shared -ldl -Wl,-soname,open64 \ -o open64.so open64.c $ LD_PRELOAD=`pwd`/open64.so cat /etc/passwd 19:55:36 up 1110 days, 9:19, 55 users, load average: 0.53, 0.33, 0.29 Depending on exactly how the application works (libc calls), you may need to handle open() or fopen()/fclose() instead. The above works for cat or head, but not sort since it calls fopen() instead (it's straightforward to add fopen()/fclose() to the above too). You probably need more error handling and sanity checking than the above (especially with a long running program, to avoid leaks). This code does not correctly handle concurrent opens. Since a pipe and a file have obvious differences, there is a risk that the program will malfunction. Otherwise, assuming you have daemon and socat you can pretend you don't have an infinite loop: daemon -r -- socat -u EXEC:/usr/bin/uptime PIPE:/tmp/uptime This has the slight disadvantage (which should be evident here) of the provider program starting to write then blocking, so you see an old uptime, instead of it being run on-demand. Your provider would need to use non-blocking I/O in order to properly provide just-in-time data. (A unix domain socket would allow a more conventional client/server approach, but that's not the same as a FIFO/named pipe that you can just drop in.) 
Update see also this later question which covers the same topic, though generalised to arbitrary processes/readers rather than a specific one: How can I make a special file that executes code when read from (note that fifo-only answers won't reliably generalise to concurrent reads)
Dynamic file content generation: Satisfying a 'file open' by a 'process execution' [duplicate]
1,572,946,589,000
I'm currently attempting to remove the usbserial module in order to install a new driver module. When I attempt to remove the module I get the following issue: [root@localhost xr21v141x-lnx-3.0-pak]# modprobe -r usbserial FATAL: Module usbserial is builtin How can I remove the usbserial module?
That means the module was compiled into the kernel. If you want to be able to unload it, you will have to compile a new kernel and have it built as a dynamically (un)loadable module instead.
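You can confirm the "builtin" status yourself by searching the kernel's modules.builtin list. A sketch -- the helper below runs against a mock file so it's self-contained, but against the running kernel you would pass /lib/modules/$(uname -r)/modules.builtin:

```shell
# is_builtin <module> <modules.builtin file>: true if the module was
# compiled into the kernel image rather than built as a loadable .ko.
is_builtin() {
    grep -q "/$1\.ko\$" "$2"
}

# Self-contained demo against a small sample in the file's real format.
sample=$(mktemp)
cat > "$sample" <<'EOF'
kernel/drivers/usb/serial/usbserial.ko
kernel/fs/ext4/ext4.ko
EOF

is_builtin usbserial "$sample" && echo "usbserial is builtin"
is_builtin snd "$sample" || echo "snd is not builtin here"
rm -f "$sample"
```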
Removing builtin modules in Linux
1,572,946,589,000
I mounted a FAT32 drive onto my Linux computer using the following terminal command: > sudo mount /dev/sdb1 /media/exampleFolderName -o dmask=000,fmask=111 I did this so I could share / edit the files over a network connection. Unfortunately Linux doesn't support per-file permissions in FAT32 format, so this sets the entire drive to the right permissions whilst it's connected. If I understand mount correctly, I'll have to do this every time I plug the drive in, which I don't want to do. I've heard about: /etc/fstab So my question - how do I turn the above mount command into an fstab entry? If anyone could also explain what dmask and fmask mean, that would be appreciated.
You probably want to add a line like /dev/sdb1 /media/drive1 vfat dmask=000,fmask=0111,user 0 0 to /etc/fstab. The additional ,user in the options field allows any user to mount this filesystem, not just root. As for the masks: dmask and fmask are octal permission masks applied to directories and files respectively; any bit set in the mask is removed from the resulting permissions. dmask=000 leaves directories at 777 (rwxrwxrwx), and fmask=0111 clears the execute bits, leaving files at 666 (rw-rw-rw-).
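Before relying on the new line, it's worth a quick sanity check that it parses as fstab expects. A minimal sketch that just verifies the six-field shape (as root you can also test the entry live with `mount -a`, and util-linux systems have `findmnt --verify`):

```shell
# An fstab entry must have exactly six whitespace-separated fields.
line='/dev/sdb1 /media/drive1 vfat dmask=000,fmask=0111,user 0 0'

set -- $line    # deliberately unquoted: split on whitespace
if [ $# -eq 6 ]; then
    echo "device=$1 mountpoint=$2 type=$3"
    echo "options=$4 dump=$5 pass=$6"
else
    echo "malformed entry: expected 6 fields, got $#" >&2
fi
```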
Linux, fat32 and etc/fstab
1,572,946,589,000
What are some ways I could interest my friends, girlfriend, parents, etc. to learn and use Linux? It's usually not feasible for them to let me install Linux on their computers, even for a dual boot. Are there other things to get them hooked?
A good way to demonstrate Linux features and for others to play around with, is to boot off a live CD for your Linux distribution. That way no one has to worry about partitioning any hard drive or installing any (corrupted,evil) software until they eventually choose to go with it. The fact that it's for free to try out should make the choise easy. Make a couple of live CDs and distribute them. I suggest Debian or Fedora just show the basic features. About getting them hooked - I'd show off some fancy looking desktop environments e.g. GNOME3, Openbox w. Conky or Compiz just to make the visual gap between Windows/MacOS smaller - in terms of user friendliness. After all, many non-technical people get scared away once they see a terminal, so be aware. From there there are tons of free, educational, multimedia and everyday-use software to explore. Most people use their computer for simple tasks such as browsing the web and office tasks. Using propreitary for this is expensive and unnecessesary. Linux provides the all features most people may ever require of an operating system, plus an extra feature called freedom.
How to get someone interested in using Linux [closed]
1,572,946,589,000
I have a file named 'sourceZip.zip' This file ('sourceZip.zip') contains two files: 'textFile.txt' 'binFile.bin' I also have a file named 'targetZip.zip' This file ('targetZip.zip') contains one file: 'jpgFile.jpg' In linux, what bash command shall I use to copy both files ('textFile.txt', 'binFile.bin') from the source archive ('sourceZip.zip') straight into the second archive ('targetZip.zip'), so that at the end of the process, the second archive ('targetZip.zip') will include all three files? (ideally, this would be done in one command, using 'zip' or 'unzip')
Using the usual command-line zip tool, I don't think you can avoid separate extraction and update commands. source_zip=$PWD/sourceZip.zip target_zip=$PWD/targetZip.zip temp_dir=$(mktemp -dt) ( cd "$temp_dir" unzip "$source_zip" zip -g "$targetZip" . # or if you want just the two files: zip -g "$targetZip" textFile.txt binFile.bin ) rm -rf "$temp_dir" There are other languages with more convenient zip file manipulation libraries. For example, Perl with Archive::Zip. Error checking omitted. use Archive::Zip; my $source_zip = Archive::Zip->new("sourceZip.zip"); my $target_zip = Archive::Zip->new("targetZip.zip"); for my $member ($source_zip->members()) { # or (map {$source_zip->memberNamed($_)} ("textFile.txt", "binFile.bin")) $target_zip->addMember($member); } $target_zip->overwrite(); Another way is to mount the zip files as directories. Mounting either of the zip files is enough, you can use zip or unzip on the other side. Avfs provides read-only support for many archive formats. mountavfs target_zip=$PWD/targetZip.zip (cd "$HOME/.avfs$PWD/sourceZip.zip#" && zip -g "$target_zip" .) # or list the files, as above umountavfs Fuse-zip provides read-write access to zip archives, so you can copy the files with cp. source_dir=$(mktemp -dt) target_dir=$(mktemp -dt) fuse-zip sourceZip.zip "$source_dir" fuse-zip targetZip.zip "$target_dir" cp -Rp "$source_dir/." "$target_dir" # or list the files, as above fusermount -u "$source_dir" fusermount -u "$target_dir" rmdir "$source_dir" "$target_dir" Warning: I typed these scripts directly in my browser. Use at your own risk.
Copy a File From One Zip to Another?
1,572,946,589,000
Just installed Debian 12.0.0. Upgrade from 11.2 didn't work, so installed clean from Bookworm DVD. sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target DOES NOT prevent screen from going to black even though sudo systemctl status sleep.target suspend.target hibernate.target hybrid-sleep.target DOES show that they are all masked. This worked perfectly on Debian 11.2 on this machine, and is still working fine on my other Debian 11.2 server. Creating /etc/systemd/sleep.conf.d/nosuspend.conf AS: [Sleep] AllowSuspend=no AllowHibernation=no AllowSuspendThenHibernate=no AllowHybridSleep=no ALSO HAS NO EFFECT, even after reboot. This is running on a Dell XPS 8930 with an i7-9700 CPU @ 3.00 GHz, 32 GB RAM, a 1 TB SSD, and two 4 TB HDD's. Any advice?
This turns out to be a situation where the command line is apparently not the best way to go. These four steps worked: Click the far upper right corner power button. Click the settings gearwheel in the resulting dropdown box. Click "Power" in the left menu of the resulting popup settings window. Under "Power Saving Options" select "Never" instead of "5 minutes".
Disabling Suspend, etc. on Debian 12
1,572,946,589,000
In order to verify a password hash we can use openssl passwd as shown below and explained here:

openssl passwd $HASHING-ALGORITHM -salt j9T$F31F/jItUvvjOv6IBFNea/ $CLEAR-TEXT-PASSWORD

However, this will work only for the following algorithms: md5, crypt, apr1, aixmd5, SHA-256, SHA-512. How can I compute the password hash of a $CLEAR-TEXT-PASSWORD with a given salt using yescrypt, from bash, Python, or NodeJS?
perl's crypt() or python3's crypt.crypt() should just be an interface to your system's crypt() / crypt_r(), so you should be able to do:

$ export PASS=password SALT='$y$j9T$F31F/jItUvvjOv6IBFNea/$'
$ perl -le 'print crypt($ENV{PASS}, $ENV{SALT})'
$y$j9T$F31F/jItUvvjOv6IBFNea/$pCTLzX1nL7rq52IXxWmYiJwii4RJAGDJwZl/LHgM/UD
$ python3 -c 'import crypt, os; print(crypt.crypt(os.getenv("PASS"), os.getenv("SALT")))'
$y$j9T$F31F/jItUvvjOv6IBFNea/$pCTLzX1nL7rq52IXxWmYiJwii4RJAGDJwZl/LHgM/UD

(provided your system's crypt() supports the yescrypt algorithm with the $y$... salts)
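The verification logic itself is the same for every crypt(3) scheme: recompute the hash using the full stored string as the salt, and compare. A minimal sketch of that pattern — the crypt_fn parameter is my own abstraction, not part of the answer; on Python versions that still ship the crypt module you would pass crypt.crypt, which defers to the system libcrypt for yescrypt support:

```python
import hmac

def verify_password(password, stored_hash, crypt_fn):
    """Return True if password matches stored_hash.

    crypt_fn(password, salt) must behave like crypt(3): when handed the full
    stored hash as the salt, it reuses the embedded method, parameters and
    salt, so a correct password reproduces the stored hash exactly.
    """
    candidate = crypt_fn(password, stored_hash)
    # constant-time comparison, so the check doesn't leak where strings differ
    return hmac.compare_digest(candidate, stored_hash)
```

With the real crypt.crypt this is exactly the check login programs perform against /etc/shadow entries.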
Verifying a hashed salted password that uses yescrypt algorithm
1,572,946,589,000
I have a command to forward a port from my computer to a server, as follows:

ssh -L 8000:localhost:8888 myserver.com

I would like to run this command in the background. I don't need to enter a user and password since I already set up a public key. I tried adding & at the end, as follows:

ssh -L 8000:localhost:8888 myserver.com &

But I got the following error:

[1] + 30825 suspended (tty output)

I also tried nohup, as follows:

nohup ssh -L 8000:localhost:8888 myserver.com &
exit

but the port is not forwarding. Finally, I tried ssh -f, as follows:

ssh -f -L 8000:localhost:8888 myserver.com

and I got the following error:

Cannot fork into background without a command to execute.

My goal is to keep the ssh connection active in the background without keeping the terminal open. Any help?
You mention ssh -f, which is correct, but you missed -N, which is the remaining piece of the puzzle: -N Do not execute a remote command. This is useful for just forwarding ports. [...] So close! Try this ssh -fN -L 8000:localhost:8888 myserver.com
SSH Tunnel (Port forwarding) in background
1,572,946,589,000
I just got a new display (Samsung LC27JG50QQU, 1440p, 144hz) which is plugged into my AMD Radeon HD 6950 (DVI-D, DVI-I, HDMI 1.4, 2x Mini DisplayPort) graphics card using HDMI. However, it only lets me set 1080p max in my display settings. Cable and monitor were fine on 1440p with my MacBook Pro. I am running Linux Mint 19.1 Tessa This is the output xrandr gives: Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 16384 x 16384 DisplayPort-3 disconnected (normal left inverted right x axis y axis) DisplayPort-4 disconnected (normal left inverted right x axis y axis) HDMI-3 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 597mm x 336mm 1920x1080 60.00* 50.00 59.94 1680x1050 59.88 1600x900 60.00 1280x1024 75.02 60.02 1440x900 59.90 1280x800 59.91 1152x864 75.00 1280x720 60.00 50.00 59.94 1024x768 75.03 70.07 60.00 832x624 74.55 800x600 72.19 75.00 60.32 56.25 720x576 50.00 720x480 60.00 59.94 640x480 75.00 72.81 66.67 60.00 59.94 720x400 70.08 DVI-0 disconnected (normal left inverted right x axis y axis) DVI-1 disconnected (normal left inverted right x axis y axis) VGA-1-1 disconnected (normal left inverted right x axis y axis) HDMI-1-1 disconnected (normal left inverted right x axis y axis) DP-1-1 disconnected (normal left inverted right x axis y axis) HDMI-1-2 disconnected (normal left inverted right x axis y axis) HDMI-1-3 disconnected (normal left inverted right x axis y axis) DP-1-2 disconnected (normal left inverted right x axis y axis) DP-1-3 disconnected (normal left inverted right x axis y axis) lspci -k | grep -EA3 'VGA|3D|Display': 00:02.0 Display controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) Subsystem: Gigabyte Technology Co., Ltd 2nd Generation Core Processor Family Integrated Graphics Controller Kernel driver in use: i915 Kernel modules: i915 -- 01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. 
[AMD/ATI] Cayman PRO [Radeon HD 6950] Subsystem: Hightech Information System Ltd. Cayman PRO [Radeon HD 6950] Kernel driver in use: radeon Kernel modules: radeon glxinfo | grep -i vendor: server glx vendor string: SGI client glx vendor string: Mesa Project and SGI Vendor: X.Org (0x1002) OpenGL vendor string: X.Org EDID: 00ffffffffffff004c2d560f4d325530 071d0103803c22782a1375a757529b25 105054bfef80b300810081c081809500 a9c0714f0101565e00a0a0a029503020 350055502100001a000000fd00324b1b 5919000a202020202020000000fc0043 32374a4735780a2020202020000000ff 0048544f4d3230303034340a2020014d 02031bf146901f041303122309070783 01000067030c0010008032023a801871
First create the appropriate modeline with cvt:

$ cvt 2560 1440
# 2560x1440 59.96 Hz (CVT 3.69M9) hsync: 89.52 kHz; pclk: 312.25 MHz
Modeline "2560x1440_60.00"  312.25  2560 2752 3024 3488  1440 1443 1448 1493 -hsync +vsync

Then add the mode using xrandr --newmode:

$ xrandr --newmode "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync

Finally set your display to that particular mode:

$ xrandr --addmode HDMI-3 2560x1440_60.00
$ xrandr --output HDMI-3 --mode 2560x1440_60.00

EDIT 1: Going by the OP's EDID his monitor is reported as C27JG5x. edid-decode also reports the following:

EDID version: 1.3
Manufacturer: SAM Model f56
Serial Number 810889805
Made in week 7 of 2019
Digital display
Maximum image size: 60 cm x 34 cm
Gamma: 2.20
DPMS levels: Off
RGB color display
First detailed timing is preferred timing
Display x,y Chromaticity:
  Red: 0.6523, 0.3408
  Green: 0.3203, 0.6083
  Blue: 0.1455, 0.0654
  White: 0.3134, 0.3291
Established timings supported:
  720x400@70Hz 9:5 HorFreq: 31469 Hz Clock: 28.320 MHz
  640x480@60Hz 4:3 HorFreq: 31469 Hz Clock: 25.175 MHz
  640x480@67Hz 4:3 HorFreq: 35000 Hz Clock: 30.240 MHz
  640x480@72Hz 4:3 HorFreq: 37900 Hz Clock: 31.500 MHz
  640x480@75Hz 4:3 HorFreq: 37500 Hz Clock: 31.500 MHz
  800x600@56Hz 4:3 HorFreq: 35200 Hz Clock: 36.000 MHz
  800x600@60Hz 4:3 HorFreq: 37900 Hz Clock: 40.000 MHz
  800x600@72Hz 4:3 HorFreq: 48100 Hz Clock: 50.000 MHz
  800x600@75Hz 4:3 HorFreq: 46900 Hz Clock: 49.500 MHz
  832x624@75Hz 4:3 HorFreq: 49726 Hz Clock: 57.284 MHz
  1024x768@60Hz 4:3 HorFreq: 48400 Hz Clock: 65.000 MHz
  1024x768@70Hz 4:3 HorFreq: 56500 Hz Clock: 75.000 MHz
  1024x768@75Hz 4:3 HorFreq: 60000 Hz Clock: 78.750 MHz
  1280x1024@75Hz 5:4 HorFreq: 80000 Hz Clock: 135.000 MHz
  1152x870@75Hz 192:145 HorFreq: 67500 Hz Clock: 108.000 MHz
Standard timings supported:
  1680x1050@60Hz 16:10 HorFreq: 64700 Hz Clock: 119.000 MHz
  1280x800@60Hz 16:10
  1280x720@60Hz 16:9
  1280x1024@60Hz 5:4 HorFreq: 64000 Hz Clock: 108.000 MHz
  1440x900@60Hz 16:10 HorFreq: 55500 Hz Clock: 88.750 MHz
  1600x900@60Hz 16:9
  1152x864@75Hz 4:3 HorFreq: 67500 Hz Clock: 108.000 MHz
Detailed mode: Clock 241.500 MHz, 597 mm x 336 mm
  2560 2608 2640 2720 hborder 0
  1440 1443 1448 1481 vborder 0
  +hsync -vsync
  VertFreq: 59 Hz, HorFreq: 88786 Hz
Monitor ranges (GTF): 50-75Hz V, 27-89kHz H, max dotclock 250MHz
Monitor name: C27JG5x
Serial number: HTOM200044
Has 1 extension blocks
Checksum: 0x4d (valid)

While this error might just as likely be caused by the radeon driver (namely the drmmode_do_crtc_dpms cannot get last vblank counter message reported in Xorg.log; a fix I am in the process of putting together in EDIT 2), in OP's case the monitor might be able to produce an output with the following modeline as reported by edid-decode:

Modeline "2560x1440" 241.500 2560 2608 2640 2720 1440 1443 1448 1481 +hsync -vsync

and then again using xrandr as follows:

$ xrandr --newmode "2560x1440" 241.500 2560 2608 2640 2720 1440 1443 1448 1481 +hsync -vsync
$ xrandr --addmode HDMI-3 "2560x1440"
$ xrandr --output HDMI-3 --mode 2560x1440

This might very well work, as both cvt and gtf fail to produce a modeline limited by the EDID-reported max dotclock of 250MHz. My own monitor (only capable of 1080p) actually tries to produce the impossible 2560x1440 resolution when given a modeline properly limited by the EDID max dotclock, unlike when given the cvt modeline, which completely shuts down the monitor into standby mode with a message on the screen that says "input not available". In OP's case it was necessary to further drop the refresh rate by limiting the dotclock, so the following two modelines may need to be used instead of the one above.
xrandr --newmode "2560x1440_54.97" 221.00 2560 2608 2640 2720 1440 1443 1447 1478 +HSync -VSync
xrandr --newmode "2560x1440_49.95" 200.25 2560 2608 2640 2720 1440 1443 1447 1474 +HSync -VSync

One additional important point is to make sure that the GPU clock as specified by the driver is also capable of the chosen bandwidth, by checking the value reported by grep -iH PixClock /var/log/Xorg.*, and, even more importantly, that the cable standard you are using supports the required bandwidth (for HDMI, the maximum TMDS clock depends on the HDMI version of the cable and ports).
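A modeline can be sanity-checked arithmetically: the refresh rate is the pixel clock divided by the total pixel count (htotal × vtotal), and the horizontal frequency is the pixel clock divided by htotal. A quick helper for that check (my own, not part of the answer):

```python
def modeline_rates(pclk_mhz, htotal, vtotal):
    """Return (refresh_hz, hfreq_khz) for a modeline.

    pclk_mhz: pixel clock in MHz; htotal/vtotal: last numbers of the
    horizontal and vertical timing groups in the modeline.
    """
    pclk = pclk_mhz * 1e6
    return pclk / (htotal * vtotal), pclk / htotal / 1e3

# the cvt modeline above: 312.25 MHz with 3488 x 1493 total pixels
cvt_refresh, cvt_hfreq = modeline_rates(312.25, 3488, 1493)
# the EDID detailed mode: 241.5 MHz with 2720 x 1481 total pixels
edid_refresh, edid_hfreq = modeline_rates(241.5, 2720, 1481)
```

This confirms the cvt modeline at roughly 59.96 Hz / 89.5 kHz (over the 250 MHz dotclock cap) and the EDID detailed mode at roughly 59.95 Hz / 88.79 kHz, which stays under it.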
How to enable 2560x1440 option for display in Linux Mint?
1,572,946,589,000
Following on from another user's question I've bumped into a quirk of Linux filesystem permissions that I can't easily rationalize:

sudo mkdir ~/foo ~/foo/bar
sudo touch ~/baz
mkdir ~/my_dir
chmod 700 ~/my_dir

# this is fine
mv ~/baz ~/my_dir

# renaming is fine
mv ~/foo ~/bob

# Moving caused: Permission denied
mv ~/bob ~/my_dir/

For clarity: foo, foo/bar and baz are owned by root. my_dir is owned by my own user, and of course ~ is owned by my own user. I can rename and move a file owned by another user. I can rename a directory owned by another user, but I can't move a directory owned by another user. This seems a very specific restriction and I don't understand what danger is being protected against or what underlying mechanism means that it can only work this way. Why can other users' directories not be moved?
This is one of the situations documented to lead to EACCES: oldpath is a directory and does not allow write permission (needed to update the .. entry). You can’t write inside bob, which means you can’t update bob/.. to point to its new value, my_dir. Moving files doesn’t involve writing to them, but moving directories does.
Why can't you move another user's directory when you can move their file?
1,572,946,589,000
AFAIK, Linux has a page cache to improve performance (for example, if you open a file, Linux caches that file in RAM); then if the file is requested again and it's cached, the OS avoids reading the file from disk and returns it from the cache... My question is: if you have a file in tmpfs and you interact with that file (read it), does the file become duplicated in RAM (one copy in the tmpfs and one in the page cache)?
Does a tmpfs use Linux Page Cache? tmpfs and the page cache are two sides of the same coin. As described in https://www.kernel.org/doc/Documentation/filesystems/tmpfs.txt (emphasis mine) tmpfs puts everything into the kernel internal caches and grows and shrinks to accommodate the files it contains and is able to swap unneeded pages out to swap space. [...] Since tmpfs lives completely in the page cache and on swap, all tmpfs pages will be shown as "Shmem" in /proc/meminfo and "Shared" in free(1). So as such it would be very unexpected for this cache to be duplicated. It's already in the cache, tmpfs is just a front-end of sorts to the cache system. My question is: if you have a file in tmpfs and you interact with that file (read), does the file becomes duplicated in RAM (one in the tmpfs and one in the page cache?) This can be determined experimentally. # sync # echo 3 > /proc/sys/vm/drop_caches # free -m total used free shared buff/cache available Mem: 15940 2005 13331 264 603 13390 Swap: 0 0 0 So, I happen to have roughly ~13000 available memory, and no process running that would change it too drastically, and no swap. Let's burn ~6000 on a tmpfs: # mount -t tmpfs -o size=6000M none /mnt/tmp # dd if=/dev/urandom of=/mnt/tmp/big.file dd: writing to '/mnt/tmp/big.file': No space left on device 6291456000 bytes (6.3 GB, 5.9 GiB) copied So tmpfs filled with random data. What's free now? # free -m total used free shared buff/cache available Mem: 15940 1958 7347 6269 6633 7429 Swap: 0 0 0 So free went down from 13331 to 7347, while shared and buff/cache both went up by 6000. That's interesting, but it still only counts as one, guess that's why they call it shared -.-' Deliberately reading the file: # cat /mnt/tmp/big.file > /dev/null # free -m total used free shared buff/cache available Mem: 15940 2055 7237 6269 6647 7332 Swap: 0 0 0 Counts did not go up (not by the order of 6000 anyway). 
Deliberately reading something else:

# cat /some/other/file > /dev/null
# free -m
              total        used        free      shared  buff/cache   available
Mem:          15940        2011         157        6303       13771        7334
Swap:             0           0           0

...and now free is down to 157, cache pretty much full. So, to summarize: tmpfs itself already represents the page cache. When reading files in tmpfs, they are not duplicated by the page cache.
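The accounting used in the experiment can be scripted: tmpfs pages show up under Shmem in /proc/meminfo rather than as a second copy in Cached. A small parser sketch (my own helper; the sample text mimics the /proc/meminfo layout):

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of kB values."""
    values = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            values[key.strip()] = int(fields[0])  # first field is the kB count
    return values

# sample resembling the state after writing the ~6 GB tmpfs file
sample = """MemTotal:       16322560 kB
Cached:          6791232 kB
Shmem:           6419456 kB"""
```

On a live system you would feed it open('/proc/meminfo').read(); in the sample, Shmem already accounts for the tmpfs file, and Cached counts those same pages rather than duplicating them.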
Does a tmpfs (/dev/shm) use Linux Page Cache? If it does, then how?
1,572,946,589,000
Consider the following folder structure: . ├── test1 │   ├── nested1 │   ├── testfile11 │   └── testfile12 └── test2 ├── nested1 -> /path/to/dir/test1/nested1 └── testfile21 test2/nested1 is a symlink to the directory test1/nested1. I would expect, if it were the cwd, .. would resolve to test2. However, I have noticed this inconsistency: $ cd test2/nested1/ $ ls .. nested1 testfile11 testfile12 $ cd .. $ ls nested1 testfile21 touch also behaves like ls, creating a file in test1. Why does .. as an argument to cd refer to the parent of the symlink, while to (all?) others refers to the parent of the linked dir? Is there some simple way to force it to refer to paths relative to the symlink? I.e. the "opposite" of readlink? # fictional command ls $(linkpath ..) EDIT: Using bash
The commands cd and pwd have two operational modes. -L logical mode: symlinks are not resolved -P physical mode: symlinks are resolved before doing the operation The important thing to know here is that cd .. does not call the syscall chdir("..") but rather shortens the $PWD variable of the shell and then chdirs to that absolute path. If you are in physical mode, this is identical to calling chdir(".."), but when in logical mode, this is different. The main problem here: POSIX decided to use the less safe logical mode as default. If you call cd -P instead of just cd, then after a chdir() operation, the return value from getcwd() is put into the shell variable $PWD and a following cd .. will get you to the directory that is physically above the current directory. So why is the POSIX default less secure? If you crossed a symlink in POSIX default mode and do the following: ls ../*.c cd .. rm *.c you will probably remove different files than those that have been listed by the ls command before. If you like the safer physical mode, set up the following aliases: alias cd='cd -P' alias pwd='pwd -P' Since when using more than one option from -L and -P the last option wins, you still may be able to get the other behavior. Historical background: The Bourne Shell and ksh88 did get directory tracking code at the same time. The Bourne Shell did get the safer physical behavior while ksh88 at the same time did get the less safe logical mode as default and the options -L and -P. It may be that ksh88 used the csh behavior as reference. POSIX took the ksh88 behavior without discussing whether this is a good decision. BTW: some shells are unable to keep track of $PWD values that are longer than PATH_MAX and drive you crazy when you chdir into a directory with an absolute path longer than PATH_MAX. dash is such a defective shell.
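The logical/physical split can be reproduced outside the shell too: a lexical ".." (what cd's default -L mode does to $PWD) differs from a resolved one whenever a symlink is crossed. A short illustration of my own using Python's path functions:

```python
import os
import tempfile

def logical_parent(path):
    # what `cd ..` does in -L mode: trim the last component of the name
    return os.path.normpath(os.path.join(path, ".."))

def physical_parent(path):
    # what `cd -P ..` does: resolve symlinks first, then go up one level
    return os.path.dirname(os.path.realpath(path))

# recreate the question's layout: test2/nested1 -> test1/nested1
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "test1", "nested1"))
os.makedirs(os.path.join(root, "test2"))
link = os.path.join(root, "test2", "nested1")
os.symlink(os.path.join(root, "test1", "nested1"), link)
```

Here logical_parent(link) ends in test2 while physical_parent(link) ends in test1 — the same split the question observes between cd .. and ls ...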
How is .. (dot dot) resolved in bash when cwd is a symlink to a directory [duplicate]
1,572,946,589,000
The Linux man page for network namespaces(7) says: Network namespaces provide isolation of the system resources associated with networking: [...], the /sys/class/net directory, [...]. However, simply switching into a different network namespace doesn't seem to change the contents of /sys/class/net (see below for how to reproduce). Am I just mistaken here in thinking that the setns() into the network namespace is already sufficient? Is it always necessary to remount /sys in order to get the correct /sys/class/net matching the currently joined network namespace? Or am I missing something else here? Example to Reproduce Take an *ubuntu system, find the PID of the rtkit-daemon, enter the daemon's network namespace, show its network interfaces, and then check /sys/class/net: $ PID=`sudo lsns -t net -n -o PID,COMMAND | grep rtkit-daemon | cut -d ' ' -f 2` $ sudo nsenter -t $PID -n # ip link show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 # ls /sys/class/net docker0 enp3s0 lo lxcbr0 ... Please notice that while ip link show correctly only shows lo, /sys/class/net shows all network interfaces visible in the "root" network namespace (and "root" mount namespace). In the case of rtkit-daemon also entering the mount namespace of it doesn't make a difference: sudo nsenter -t $PID -n -m and then ls /sys/class/net still shows network interfaces not present in the network namespace. "Fix" Many kudos to @Danila Kiver for explaining what really is going on behind the Linux kernel scenes. Remounting sysfs while the correct network namespace is joined will show the correct entries in /sys/class/net: $ PID=`sudo lsns -t net -n -o PID,COMMAND | grep rtkit-daemon | cut -d ' ' -f 2` $ sudo nsenter -t $PID -n # MNT=`mktemp -d` # mount -t sysfs none $MNT # ls $MNT/class/net/ lo # umount $MNT # rmdir $MNT # exit So this now yields the correct results in /sys/class/net.
Let's look into man 5 sysfs: /sys/class/net Each of the entries in this directory is a symbolic link representing one of the real or virtual networking devices that are visible in the network namespace of the process that is accessing the directory. So, according to this manpage, the output of ls /sys/class/net must depend on the network namespace of the ls process. But... Actual behavior does not seem to be as described in this manpage. There is a nice kernel documentation about how it works. Each sysfs mount has a namespace tag associated with it. This tag is set when sysfs gets mounted and depends on the network namespace of the calling process. Each sysfs entry (e.g. an entry in /sys/class/net) also may have a namespace tag associated with it. When you iterate over the sysfs directory, the kernel obtains the namespace tag of the sysfs mount, and then it iterates over the entries, filtering out those which have different namespace tag. So, it turns out that the results of iterating over the /sys/class/net depend on the network namespace of the process which initiated /sys mount rather than on the network namespace of the current process, thus, you must always mount /sys in the current network namespace (from any process belonging to this namespace) to see the correct results.
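Whether two processes share a network namespace can also be checked directly, without touching /sys at all, by comparing their /proc/<pid>/ns/net links: each resolves to a net:[inode] tag identifying the namespace. A small helper of my own, assuming a Linux /proc:

```python
import os

def netns_id(pid):
    """Return the net:[inode] tag of a process's network namespace."""
    return os.readlink("/proc/%d/ns/net" % pid)

def same_netns(pid_a, pid_b):
    """True if both processes are in the same network namespace."""
    return netns_id(pid_a) == netns_id(pid_b)
```

For the rtkit-daemon example, same_netns(os.getpid(), PID) would be False even while a stale /sys mount misleadingly shows the same interfaces.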
Switching into a network namespace does not change /sys/class/net?
1,572,946,589,000
I'm trying to compile a demo project which uses OpenGL. I'm getting error messages of the form /usr/bin/ld: cannot find -lglut32 (and likewise for -lopengl32, -lglu32 and -lfreeglut), but the header files appear to be installed. What is happening? If I have all of the dependencies, why does it not compile? I use Solus 3.
The meaning of -lglut32 (as an example) is: link against the library glut32. The result of the ls you executed showed that you have the header file for glut32, not the library itself. In order to solve the problem of cannot find -l<library-name> you need:

To actually have the library on your computer
To help gcc/the linker find the library by providing the path to it:
  You can add -L<dir-name> to the gcc command
  You can add the library's location to the LIBRARY_PATH environment variable (LD_LIBRARY_PATH only affects lookup at run time, not at link time)
  Update the dynamic linker cache: sudo ldconfig

Note also that glut32, opengl32 and glu32 are the Windows names of these libraries; on Linux the equivalents are normally linked with -lglut, -lGLU and -lGL.

man gcc:

-l library
    Search the library named library when linking.
-L dir
    Add directory dir to the list of directories to be searched for -l.
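What -lfoo combined with -Ldir does can be modeled simply: the linker scans each directory for libfoo.so, then libfoo.a. A toy re-implementation of that search order (my own sketch, ignoring linker scripts and built-in default paths):

```python
import os

def find_library(name, search_dirs):
    """Mimic ld's -l<name> lookup: lib<name>.so before lib<name>.a, per dir."""
    for d in search_dirs:
        for suffix in (".so", ".a"):
            candidate = os.path.join(d, "lib" + name + suffix)
            if os.path.exists(candidate):
                return candidate
    return None  # ld would report: cannot find -l<name>
```

This also shows why -lglut32 fails on Linux: the linker is looking for a file literally named libglut32.so or libglut32.a, and the Linux packages install libglut.so instead.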
gcc /usr/bin/ld: cannot find -lglut32, -lopengl32, -lglu32, -lfreegut, but these are installed
1,572,946,589,000
I have the following file with variables and values:

# more file.txt
export worker01="sdg sdh sdi sdj sdk"
export worker02="sdg sdh sdi sdj sdm"
export worker03="sdg sdh sdi sdj sdf"

I source the file in order to read the variables:

# source file.txt

Example:

# echo $worker01
sdg sdh sdi sdj sdk

Until now everything is perfect, but now I want to read the variables from the file and print the values with a simple bash loop. I read the second field and try to print the value of the variable:

# for i in `sed s'/=/ /g' /tmp/file.txt | awk '{print $2}'`
do
  echo $i
  declare var="$i"
  echo $var
done

but it prints only the variable names and not the values:

worker01
worker01
worker02
worker02
worker03
worker03

Expected output:

worker01 sdg sdh sdi sdj sdk
worker02 sdg sdh sdi sdj sdm
worker03 sdg sdh sdi sdj sdf
You have export worker01="sdg sdh sdi sdj sdk", then you replace = with a space to get export worker01 "sdg sdh sdi sdj sdk". The space separated fields in that are export, worker01, "sdg, sdh, etc. It's probably better to split on =, and remove the quotes, so with just the shell: $ while IFS== read -r key val ; do val=${val%\"}; val=${val#\"}; key=${key#export }; echo "$key = $val"; done < vars worker01 = sdg sdh sdi sdj sdk worker02 = sdg sdh sdi sdj sdm worker03 = sdg sdh sdi sdj sdf key contains the variable name, val the value. Of course this doesn't actually parse the input, it just removes the double quotes if they happen to be there.
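If you'd rather not source or eval the file at all, the same parse can be written in Python; a minimal sketch of my own for the exact export name="value" layout shown above (no general shell-quoting support):

```python
def parse_exports(text):
    """Parse lines like: export name="some value"  ->  {name: value}."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("export "):
            continue  # skip blanks and anything that isn't an export line
        name, _, value = line[len("export "):].partition("=")
        result[name] = value.strip('"')  # drop the surrounding double quotes
    return result

sample = '''export worker01="sdg sdh sdi sdj sdk"
export worker02="sdg sdh sdi sdj sdm"
export worker03="sdg sdh sdi sdj sdf"'''
```

Looping over parse_exports(sample).items() and printing name and value reproduces the expected output from the question.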
bash + read variables & values from file by bash script
1,572,946,589,000
Why is there a type of $TERM called linux described in /etc/termcap? Why and when was it created, and what's the point of it? Couldn't we just have stayed with vt-generic? Is linux just a customary name and a set of capabilities for a Linux console (like, say, etc or home are customary names for directories under root and nobody cares), or were there technical reasons behind it when it was introduced?
The terminal description is named for Linux, which provides its own console emulator (as do several other kernels). Except for FreeBSD, all of the Linux- and modern BSD-platforms get "termcap" by deriving it from the terminfo database in ncurses. Console entries are specific to the systems in which they are implemented (unlike many terminal emulators, which run on more than one platform). A comment in ncurses 1.8.6 (October 1994) for the linux terminal description stated: # Note that the statdard Linux console is now called 'linux' instead # of 'console'. terminals with sizes other than 80x25 need to append # their size to linux and add an entry like the one for 132x43 below That was specific to Linux, but generalization followed as ncurses was ported. In the ncurses sources, this section of INSTALL is very old (seen in 1.9.7a in November 1995), but not outdated: Naming the Console Terminal In various systems there has been a practice of designating the system console driver type as `console'. Please do not do this! It complicates peoples' lives, because it can mean that several different terminfo entries from different operating systems all logically want to be called `console'. Please pick a name unique to your console driver and set that up in the /etc/inittab table or local equivalent. Send the entry to the terminfo maintainer (listed in the misc/terminfo file) to be included in the terminfo file, if it's not already there. See the term(7) manual page included with this distribution for more on conventions for choosing type names. Here are some recommended primary console names: linux -- Linux console driver freebsd -- FreeBSD netbsd -- NetBSD bsdos -- BSD/OS If you are responsible for integrating ncurses for one of these distributions, please either use the recommended name or get back to us explaining why you don't want to, so we can work out nomenclature that will make users' lives easier rather than harder. 
There is a section in the terminal database for these: ANSI, UNIX CONSOLE, AND SPECIAL TYPES, while there is no "vt-generic" description nor (given the differences across the variations), is there a plausible choice. If you look for "vt-generic", likely you will find that only in less prevalent implementations such as Informix (seen in this file): # @(#)/etc/termcap 0.0 # # Informix product aware termcap file # # Author: Marco Greco, <[email protected]>, Catania, Italy # # Initial release: Jun 97 # Current release: Jul 98 # # Absolutely no warranty -- use at your own risk # # Notes: Adapted from the default Slackware termcap file: # added gs, ge, gb, ZG, ZA capabilities, shifted function keys # down by one, added ki, kj, kf, kg # # Limit the size of each entry - 4gl apps core dump if applicable # entry too long # # Entries other than vt's, console & xterm *untested* # # From: [email protected] (Miquel van Smoorenburg) # # Okay guys, here is a shorter termcap that does have most # capabilities and is ncurses compatible. If it works for you # I'd like to hear about it. Further reading: tctest — termcap library checker list of capability codes for Informix 4GL termcap
Origin of /etc/termcap linux entry
1,433,997,497,000
I have switched recently to Fedora 22 from Ubuntu GNOME. In Ubuntu GNOME, when my Kodi media center used to hang while in full-screen, I used to press Ctrl+Alt+F1 to switch to a terminal, find the process ID with ps aux | grep process_name, kill the process, and use Ctrl+Alt+F7 to return to GNOME. What should I do in Fedora to switch to a terminal like that? I couldn't find anything in the keyboard shortcuts, and the defaults I mentioned above aren't working. Also, is there any other/better way to end unresponsive full-screen applications?
You can try Ctrl+Alt+F2, or F3 or F4... Unless it was changed in Fedora 22, the graphical server is started on the first virtual terminal in Fedora, instead of the 7th as in Ubuntu. P.S. If that works, use Ctrl+Alt+F1 to go back to the graphical server.
Switch to a text console in Fedora
1,433,997,497,000
CentOS Linux release 7.0.1406 (Core) / Linux 3.10.0-123.13.2.el7.x86_64 Last week, I noticed that when I tried to restart, there was an option to Install Updates & Restart. I do not recall manually installing any updates. Because this computer is for work, I would rather not upgrade software where a previous version is crucial for development... Or somehow make a mistake and take a day to fix it. PS: If needed, how do I rollback to a point before Update A was installed?
I found out that in CentOS 7 yum-cron has nothing to do with the "Install Updates & Restart" prompt. I don't need or want automatic updates either. After some tricky research I discovered this feature is provided by a GNOME package, "packagekit". Three solutions:

uninstall packagekit altogether (my favourite)
disable packagekit from running (see systemctl)
edit its configuration:
  find PackageKit.conf (in /etc/PackageKit/ on my system)
  find WritePreparedUpdates= in the file (last line on my system)
  set WritePreparedUpdates=false

Restart in all three cases (just to be on the safe side...). More at: http://www.itsprite.com/linuxhow-to-disable-packagekit-on-centos-fedora-or-rhel/
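The third option, editing PackageKit.conf, can be scripted. A cautious sketch of my own that rewrites (or appends) the WritePreparedUpdates line in the file's text; the file uses the simple key=value layout described above:

```python
def disable_prepared_updates(conf_text):
    """Set WritePreparedUpdates=false, updating the line in place if present."""
    lines = conf_text.splitlines()
    found = False
    for i, line in enumerate(lines):
        if line.strip().startswith("WritePreparedUpdates="):
            lines[i] = "WritePreparedUpdates=false"
            found = True
    if not found:
        # key absent: append it at the end of the file
        lines.append("WritePreparedUpdates=false")
    return "\n".join(lines) + "\n"
```

You would read /etc/PackageKit/PackageKit.conf, pass its text through this function, and write the result back (as root), then reboot.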
How to disable automatic updates in CentOS 7?
1,433,997,497,000
Trying lots of different Linux distributions on all kinds of hardware, I find myself typing commands like this quite often:

sudo dd if=xubuntu-13.10-desktop-amd64.iso of=/dev/sdc bs=10240

Needless to say, sooner or later I will mistype the destination and wipe a hard drive instead of the intended USB drive. I would like not to use sudo every time here. On my system, a fairly modern Ubuntu, permissions on /dev/sdc look like this (when a stick is present):

$ ls -al /dev/sdc*
brw-rw---- 1 root disk 8, 32 Apr 6 22:10 /dev/sdc

How do I grant my regular user write access to random USB sticks but not other disks present in my system?
I think you can use UDEV to do what you want. Creating a rules file such as /etc/udev/rules.d/99-thumbdrives.rules, you'd simply add a rule that will allow either a Unix group or user access to arbitrary USB thumb drives.

KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", OWNER="<user>", GROUP="<group>", MODE="0660"

would create the device using the user <user> and group <group>.

Example

After adding this line to my system:

KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", OWNER="saml", GROUP="saml", MODE="0660"

And reloading my rules:

$ sudo udevadm control --reload-rules

If I now insert a thumbdrive into my system, my /var/log/messages shows up as follows:

$ sudo tail -f /var/log/messages
Apr 13 11:48:45 greeneggs udisksd[2249]: Mounted /dev/sdb1 at /run/media/saml/HOLA on behalf of uid 1000
Apr 13 11:51:18 greeneggs udisksd[2249]: Cleaning up mount point /run/media/saml/HOLA (device 8:17 is not mounted)
Apr 13 11:51:18 greeneggs udisksd[2249]: Unmounted /dev/sdb1 on behalf of uid 1000
Apr 13 11:51:18 greeneggs kernel: [171038.843969] sdb: detected capacity change from 32768000 to 0
Apr 13 11:51:39 greeneggs kernel: [171058.964358] usb 2-1.2: USB disconnect, device number 15
Apr 13 11:51:46 greeneggs kernel: [171066.053922] usb 2-1.2: new full-speed USB device number 16 using ehci-pci
Apr 13 11:51:46 greeneggs kernel: [171066.134401] usb 2-1.2: New USB device found, idVendor=058f, idProduct=9380
Apr 13 11:51:46 greeneggs kernel: [171066.134407] usb 2-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Apr 13 11:51:46 greeneggs kernel: [171066.134410] usb 2-1.2: Product: USBDrive
Apr 13 11:51:46 greeneggs kernel: [171066.134412] usb 2-1.2: Manufacturer: JMTek
Apr 13 11:51:46 greeneggs kernel: [171066.135470] usb-storage 2-1.2:1.0: USB Mass Storage device detected
Apr 13 11:51:46 greeneggs kernel: [171066.136121] scsi17 : usb-storage 2-1.2:1.0
Apr 13 11:51:46 greeneggs mtp-probe: checking bus 2, device 16:
"/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2" Apr 13 11:51:46 greeneggs mtp-probe: bus: 2, device: 16 was not an MTP device Apr 13 11:51:47 greeneggs kernel: [171067.139462] scsi 17:0:0:0: Direct-Access JMTek USBDrive 7.77 PQ: 0 ANSI: 2 Apr 13 11:51:47 greeneggs kernel: [171067.140251] sd 17:0:0:0: Attached scsi generic sg2 type 0 Apr 13 11:51:47 greeneggs kernel: [171067.142105] sd 17:0:0:0: [sdb] 64000 512-byte logical blocks: (32.7 MB/31.2 MiB) Apr 13 11:51:47 greeneggs kernel: [171067.144236] sd 17:0:0:0: [sdb] Write Protect is off Apr 13 11:51:47 greeneggs kernel: [171067.145988] sd 17:0:0:0: [sdb] No Caching mode page found Apr 13 11:51:47 greeneggs kernel: [171067.145998] sd 17:0:0:0: [sdb] Assuming drive cache: write through Apr 13 11:51:47 greeneggs kernel: [171067.153721] sd 17:0:0:0: [sdb] No Caching mode page found Apr 13 11:51:47 greeneggs kernel: [171067.153728] sd 17:0:0:0: [sdb] Assuming drive cache: write through Apr 13 11:51:47 greeneggs kernel: [171067.159028] sdb: sdb1 Apr 13 11:51:47 greeneggs kernel: [171067.164760] sd 17:0:0:0: [sdb] No Caching mode page found Apr 13 11:51:47 greeneggs kernel: [171067.164768] sd 17:0:0:0: [sdb] Assuming drive cache: write through Apr 13 11:51:47 greeneggs kernel: [171067.164775] sd 17:0:0:0: [sdb] Attached SCSI removable disk Apr 13 11:51:47 greeneggs kernel: [171067.635474] FAT-fs (sdb1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck. Apr 13 11:51:47 greeneggs udisksd[2249]: Mounted /dev/sdb1 at /run/media/saml/HOLA on behalf of uid 1000 Now checking out the device files under /dev shows the following: $ ls -l /dev/sd* brw-rw----. 1 root disk 8, 0 Apr 13 09:17 /dev/sda brw-rw----. 1 root disk 8, 1 Apr 13 09:17 /dev/sda1 brw-rw----. 1 root disk 8, 2 Apr 13 09:17 /dev/sda2 brw-rw----. 1 saml saml 8, 16 Apr 13 11:51 /dev/sdb brw-rw----. 1 root disk 8, 17 Apr 13 11:51 /dev/sdb1 So it would seem to have worked. 
Being more explicit

The above will work, but the rule will likely get applied to every block device, which isn't quite what we want. To narrow its focus you can use ATTRS{..}==... attribute matches to restrict the rule to specific hardware. In my case I only want it applied to a single USB thumbdrive.

Step #1 - uniquely id device

To start, with the particular thumb drive inserted, we can use udevadm to scrutinize it, groping it for its particular attributes. Here I'm focusing on the "manufacturer" and "product" attributes.

    $ udevadm info -a -p $(udevadm info -q path -n /dev/sdb) | grep -iE "manufacturer|product"
        ATTRS{manufacturer}=="JMTek"
        ATTRS{idProduct}=="9380"
        ATTRS{product}=="USBDrive"
        ATTRS{idProduct}=="0020"
        ATTRS{manufacturer}=="Linux 3.13.7-100.fc19.x86_64 ehci_hcd"
        ATTRS{idProduct}=="0002"
        ATTRS{product}=="EHCI Host Controller"

NOTE: ATTRS{..}==.. expressions match attributes of the parent devices in the hierarchy from which this device's device file ultimately derives. In our case the block device being added, /dev/sdb, comes from a USB parent device, so we're looking for that parent's attributes, e.g. ATTRS{manufacturer}=="...".

So in this example I'm selecting the manufacturer "JMTek" and the product "USBDrive".

Step #2 - modify .rules file

With these additional bits in hand, let's add them to our original .rules file:

    KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ATTRS{manufacturer}=="JMTek", ATTRS{product}=="USBDrive", OWNER="saml", GROUP="saml", MODE="0660"

Step #3 - Trying it out

Now when we reload our rules and unmount/remove/reinsert our USB thumbdrive again, the rule takes effect:

    $ ls -l /dev/sdb*
    brw-rw----. 1 saml saml 8, 16 Apr 13 12:29 /dev/sdb
    brw-rw----. 1 root disk 8, 17 Apr 13 12:29 /dev/sdb1

However if I insert a completely different device:

    $ ls -l /dev/sdb*
    brw-rw----. 1 root disk 8, 16 Apr 13 12:41 /dev/sdb
    brw-rw----. 1 root disk 8, 17 Apr 13 12:41 /dev/sdb1

References

- ArchLinux Wiki UDEV Topic
- Writing raw images safely to USB sticks
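If you do this for many drives, the grep in step #1 can be tightened into a small script that pulls out just the closest parent's manufacturer/product pair. The snippet below is only a sketch: the `sample` variable holds attribute lines copied from the step #1 transcript, and in real use you would pipe in the live `udevadm info -a` output instead.

```shell
# Sketch: pick out the closest parent's manufacturer/product values from
# `udevadm info -a` output. Sample input copied from the step #1 transcript;
# in practice replace the variable with the output of:
#   udevadm info -a -p "$(udevadm info -q path -n /dev/sdb)"
sample='ATTRS{manufacturer}=="JMTek"
ATTRS{idProduct}=="9380"
ATTRS{product}=="USBDrive"
ATTRS{manufacturer}=="Linux 3.13.7-100.fc19.x86_64 ehci_hcd"
ATTRS{product}=="EHCI Host Controller"'

# udevadm prints attributes closest-parent first, so head -n1 keeps the
# match nearest the device itself.
manufacturer=$(printf '%s\n' "$sample" \
    | sed -n 's/.*ATTRS{manufacturer}=="\(.*\)"/\1/p' | head -n1)
product=$(printf '%s\n' "$sample" \
    | sed -n 's/.*ATTRS{product}=="\(.*\)"/\1/p' | head -n1)

# Print the two match expressions ready to paste into a .rules file.
printf 'ATTRS{manufacturer}=="%s", ATTRS{product}=="%s"\n' "$manufacturer" "$product"
```

Against the sample input this prints `ATTRS{manufacturer}=="JMTek", ATTRS{product}=="USBDrive"`, the same two matches used in step #2.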