A Plextor PX-750A on a Linux system has made many DVD+R DL coasters while trying to burn an 8GB DVD .iso file with growisofs and the Schilling cdrecord program.

I tried growisofs first:

growisofs --version
* growisofs by <[email protected]>, version 7.0,
  front-ending to genisoimage: genisoimage 1.1.8 (Linux)

The command line was:

growisofs -dvd-compat -Z /dev/sr1=SUU_14.03.00_A00.iso

Removal of the -dvd-compat option was also attempted.

Schilling cdrtools was also tried:

Cdrecord-ProDVD-ProBD-Clone 3.00 (i686-pc-linux-gnu) Copyright (C) 1995-2010 Jörg Schilling

The command line was:

/usr/local/bin/cdrecord -v dev=ATAPI:1,0,0 SUU_14.03.00_A00.iso

In both cases, the burn aborts halfway through, as though it writes one layer and croaks when it should move to the second layer.

growisofs:

4275175424/8434493440 (50.7%) @1.6x, remaining 22:03 RBU 100.0% UBU 94.4%
:-[ WRITE@LBA=1fdb40h failed with SK=3h/ASC=0Ch/ACQ=00h]: Input/output error
:-( write failed: Input/output error

cdrecord:

Track 01: 4205 of 8043 MB written (fifo 99%) [buf 97%] 2.4x.
/usr/local/bin/cdrecord: Input/output error. write_g1: scsi sendcmd: no error
CDB: 2A 00 00 20 DA 10 00 00 10 00
status: 0x2 (CHECK CONDITION)
Sense Bytes: 70 00 03 00 00 00 00 0A 00 00 95 00 0C 00 00 00 00 00
Sense Key: 0x3 Medium Error, Segment 0
Sense Code: 0x0C Qual 0x00 (write error) Fru 0x0
Sense flags: Blk 0 (not valid)
cmd finished after 0.019s timeout 200s
write track data: error after 4409294848 bytes
/usr/local/bin/cdrecord: A write error occured.
/usr/local/bin/cdrecord: Please properly read the error message above.

Looking at the media after the burn, it appears to have written data from the inside to the outside, giving the impression that it wrote one layer fully and then errored out when that layer ended.
In several attempts, each failure is at about 50% of the way through the burn of:

$ ls -lh SUU_14.03.00_A00.iso
-rw-rw-r-- 1 user group 7.9G 2014-05-14 07:53 SUU_14.03.00_A00.iso

Begin 2014/05/23 edit:

The cdrecord man page says:

Cdrecord functional options
...
driveropts=option list
...
layerbreak
    Switch a drive with DVD-R/DL medium into layer jump recording mode and use automatic layer-break position setup. By default, DVD-R/DL media is written in sequential recording mode that completely fills up both layers.

layerbreak=value
    Set up a manual layer-break value for DVD-R/DL and DVD+R/DL. The specified layer-break value must not be set to less than half of the recorded data size and must not be set to more than the remaining Layer 0 size of the medium. The manual layer-break value needs to be a multiple of the ECC sector size which is 16 logical 2048 byte sectors in case of DVD media and 32 logical 2048 byte sectors in case of HD-DVD or BD media. Cdrecord does not allow to write DL media in case that the total amount of data is less than the Layer 0 size of the medium except when a manual layer-break has been specified by using the layerbreak=value option.

Use of layerbreak without a manually computed breakpoint gives:

cdrecord -v driveropts=layerbreak dev=ATAPI:1,0,0 SUU_14.03.00_A00.iso
...
cdrecord: Bad layer break value ''.
...

After some research, I found a patch to an older cdrecord that contained an "optimal layer break computation". With a little experimentation, I found that I could compute the "optimal layer break" in a shell:

echo -e "a = $(isosize SUU_14.03.00_A00.iso)\nb = a / 2048 / 2\nb - 1 + 16 - ( b - 1 ) % 16\n" | bc

This gave a layerbreak value of 2059216. In fact, for grins, I tried adjusting the value up and down. For some layerbreak values (like 2059215) cdrecord would report:

cdrecord: Layer break at 2059215 is not properly aligned.
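The bc pipeline above can be wrapped in a small shell function (a sketch, not part of cdrecord itself); it converts the ISO byte size to sectors, halves it, and rounds up to the next multiple of 16, the ECC block size for DVD media:

```shell
# Compute an aligned layer-break value from an ISO size in bytes.
# Mirrors the bc arithmetic above: sectors = bytes / 2048, half that,
# then round (half - 1) up to a multiple of 16.
layerbreak() {
    bytes=$1
    b=$(( bytes / 2048 / 2 ))
    echo $(( b - 1 + 16 - (b - 1) % 16 ))
}

layerbreak 8434493440    # total size reported by growisofs above; prints 2059216
```

In practice you would call it as layerbreak "$(isosize file.iso)" and feed the result to driveropts=layerbreak=... as in the cdrecord invocation below.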
Hopeful, I tried:

cdrecord -v driveropts=layerbreak=2059216 dev=ATAPI:1,0,0 SUU_14.03.00_A00.iso

And still got a coaster and write error:

Track 01: 4205 of 8043 MB written (fifo 99%) [buf 97%] 2.4x.
/usr/local/bin/cdrecord: Input/output error. write_g1: scsi sendcmd: no error
CDB: 2A 00 00 20 DB 60 00 00 10 00
status: 0x2 (CHECK CONDITION)
Sense Bytes: 70 00 03 00 00 00 00 0A 00 00 95 00 0C 00 00 00 00 00
Sense Key: 0x3 Medium Error, Segment 0
Sense Code: 0x0C Qual 0x00 (write error) Fru 0x0
Sense flags: Blk 0 (not valid)
cmd finished after 0.022s timeout 200s
write track data: error after 4409982976 bytes
/usr/local/bin/cdrecord: A write error occured.
/usr/local/bin/cdrecord: Please properly read the error message above.

End of 2014/05/23 edit.

This issue occurred on a server where there were no GUI tools installed. I am looking for a way to burn dual layer DVDs from the command line.
Try another DVD writer. The failing commands functioned properly when used with a USB-attached LG M/N GE24LU20 DVD writer.

Check the DVD writer firmware revision, and upgrade the firmware if a newer revision is available. Both the growisofs and cdrecord commands above that fail on a Plextor PX-750A-UF 1.01 burner succeed with the same drive after a firmware update to PX-750A-UF 1.03:

/usr/local/bin/cdrecord -v dev=ATAPI:1,0,0 SUU_14.03.00_A00.iso
growisofs -dvd-compat -Z /dev/sr1=SUU_14.03.00_A00.iso

Other benefits of the firmware upgrade are probable. For instance, the drive with the PX-750A-UF 1.01 firmware wrote the dual layer DVD at about 2.4x, but with the 1.03 firmware it wrote the media at a 6x rate.

Concerning the layer break, it was interesting to observe cdrecord pausing a long time at the 4023 MB point (halfway) and right before the 4025 MB point where cdrecord failed, when working with the drive with the older firmware. It had the appearance that the delay may have been due to a layer switch operation.
Command line to burn DVD+R DL media on Linux?
(This is not about restricting client access, for which ext3 permissions do the trick) I'd like to encrypt the data on my NAS drive (Buffalo LinkStation Pro with SSH access enabled, if that matters) in a user-friendly way. Currently, a truecrypt container has to be manually mounted via SSH and also unmounted again (unless you solve my timeout question). Using a passwordless (but EFS encrypted) SSH key this is reduced to two PuTTY desktop shortcuts and entering the truecrypt password (until simplified further) for mounting. However, the ideal solution would be transparent. I first thought about trying to somehow have the share allow for EFS encryption, but that would probably involve more work and EFS for multiple users without an Active Directory server seems to be troublesome. But now my idea is an automated mount of e.g. an EncFS encrypted directory triggered automatically by a samba access from authorized users (using Windows clients). How can that be achieved? (Bonus points for displaying a honeypot share for unauthorized users...)
I'm seeing a sketch of a solution using Samba "logon scripts" - client-side code that runs after a samba login - but a complete solution needs to complete the sketch with details. Also related are "preexec scripts" - server-side code that runs during a samba login.

Referencing the smb.conf man page:

logon script (G)

    This parameter specifies the batch file (.bat) or NT command file (.cmd) to be downloaded and run on a machine when a user successfully logs in. The file must contain the DOS style CR/LF line endings. Using a DOS-style editor to create the file is recommended.

    The script must be a relative path to the [netlogon] service. If the [netlogon] service specifies a path of /usr/local/samba/netlogon, and logon script = STARTUP.BAT, then the file that will be downloaded is: /usr/local/samba/netlogon/STARTUP.BAT

    The contents of the batch file are entirely your choice. A suggested command would be to add NET TIME \\SERVER /SET /YES, to force every machine to synchronize clocks with the same time server. Another use would be to add NET USE U: \\SERVER\UTILS for commonly used utilities, or NET USE Q: \\SERVER\ISO9001_QA for example.

    Note that it is particularly important not to allow write access to the [netlogon] share, or to grant users write permission on the batch files in a secure environment, as this would allow the batch files to be arbitrarily modified and security to be breached.

    This option takes the standard substitutions, allowing you to have separate logon scripts for each user or machine.

and also:

preexec (S)

    This option specifies a command to be run whenever the service is connected to. It takes the usual substitutions. An interesting example is to send the users a welcome message every time they log in. Maybe a message of the day?
Here is an example:

preexec = csh -c 'echo \"Welcome to %S!\" | /usr/local/samba/bin/smbclient -M %m -I %I' &

In your case, though, you really want logon scripts (the unencrypted form is mounted on the client), so a solution sketch might involve:

- ensure that each computer has an EncFS equivalent installed
- write a logon script (.bat format) that calls encfs on the client and prompts the user at logon. The encfs command thus mounts the unencrypted form locally, with the remote store remaining encrypted.
- configure smb.conf so that the relevant users run the logon script, e.g. something like logon script = runencfs.bat

For bonus points, your logon script might automate / prompt installation of EncFS (from the samba share) and only run the mount if it's installed!

Client-side scripts, though, are bound to give you headaches because of the cmd language, ensuring installation of encfs, and working around Windows gotchas, like Windows 8.1 and up not running the logon scripts till five minutes later unless otherwise configured.
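To make the sketch slightly more concrete, here is a hypothetical smb.conf fragment (the share path and script name are assumptions, not taken from any real setup):

```
[global]
    ; run the per-user logon script on every successful login
    logon script = runencfs.bat

[netlogon]
    path = /usr/local/samba/netlogon
    read only = yes
```

and a correspondingly hypothetical runencfs.bat placed in that netlogon path (remember the CR/LF line endings required by the man page excerpt above; the install path and share name are made up for illustration):

```
@echo off
rem Hypothetical sketch: only attempt the mount if an EncFS port is installed.
if exist "C:\Program Files\encfs\encfs.exe" (
    "C:\Program Files\encfs\encfs.exe" "\\SERVER\encrypted" X:
) else (
    echo EncFS not installed - skipping encrypted mount.
)
```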
How to set up an encrypted directory to be mounted only during samba access?
I am trying to generate sound data, convert it and store it to a WAV format. I'm almost there - except I'd like to hear the generated sound while it is being "recorded".

This command just generates data and plays it back:

perl -e 'for ($c=0; $c<4*44100; $c++) { $k=1*sin((1500+$c/16e1)*$c*22e-6); print pack "f", $k; } ' | aplay -t raw -c 1 -r 44100 -f FLOAT_LE

(Note that if you press Ctrl-C here after the sound stops playing, aplay may segfault.)

Using sox and mplayer, I can record fine - but I can hear no sound at the same time:

perl -e 'for ($c=0; $c<4*44100; $c++) { $k=1*sin((1500+$c/16e1)*$c*22e-6); print pack "f", $k; } ' | sox -V -r 44100 -c 1 -b 32 -e floating-point -t raw - \
  -c 2 -b 16 -t wav - trim 0 3 gain -1 dither | mplayer - -cache 8092 -endpos 3 -vo null -ao pcm:waveheader:file=test.wav

Note here that play test.wav (where play is from the sox package, not alsa's aplay) will state "Duration: 00:00:03.00" for the test.wav file. Also, this process seems to run faster than realtime, i.e. completes in (apparently) less than 3 secs.

Trying to cheat by using tee to capture the stream to disk:

perl -e 'for ($c=0; $c<4*44100; $c++) { $k=1*sin((1500+$c/16e1)*$c*22e-6); print pack "f", $k; } ' | sox -V -r 44100 -c 1 -b 32 -e floating-point -t raw - \
  -c 2 -b 16 -t wav - trim 0 3 gain -1 dither | tee test.wav | aplay

Here apparently I get to hear the sound as it is generated - and test.wav is playable as well; however, play test.wav will report "Duration: unknown".

So I'd like to ask - is it possible to do something like the above "one-liner" command, to generate, play and record a sound at the same time - however, without the need to install jack?
PS: some relevant links:

- notes on linux audio file formats
- Mplayer doesn’t stream from stdin without cache setting - Dag Olav Prestegarden
- command line - How to convert 16bit wav to raw audio - Stack Overflow
- Old Nabble - linux-audio-dev - audio recording through pipe using mplayer and sox sometimes has incorrect speed
- How to redirect the ALSA output to a file? | Software View
- Can I setup a loopback audio device? - Unix and Linux - Stack Exchange
You can use tee(1) to multiplex the stream, e.g.

perl -e 'for ($c=0; $c<4*44100; $c++) { $k=1*sin((1500+$c/16e1)*$c*22e-6); print pack "f", $k; }' | tee >(sox -c1 -r44100 -t f32 - test.wav) \
  >(sox -c1 -r44100 -t f32 - -d) > /dev/null

You might also be interested in sox's synth effect, which can produce most tones and sweeps, e.g.

sox -n -r 44100 test.wav synth 4 sine 100:1000
Command line audio - piping for simultaneous playback and recording
What are the platforms that Linux is being used frequently on besides x86? I know that x86 dominates. But, what are other platforms that some people also use Linux for? Are there links for statistics about this?
ARM is huge for Linux. Aside from the Raspberry Pi and other hobbyist ARM SoCs, you have every Android phone and tablet and many of the Chromebooks running Linux on ARM. I couldn't find any hard numbers on total devices in use, but total Android activations number somewhere north of 1 billion. The Chromebooks are Amazon's best selling laptops, though not all of those are ARM based, and I'm not sure what the breakdown of sales is. Needless to say, ARM is one of Linux's bigger architectures as far as users go.
What are the platforms that Linux is being used frequently on besides x86?
For example, I have a directory with multiple files created this way:

touch files/{1..10231}_file.txt

I want to move them into a new directory new_files_dir. The simplest way to do this is:

for filename in files/*; do
    mv "$filename" -t "new_files_dir"
done

This script works for 10 seconds on my computer. It is slow. The slowness happens due to the execution of the mv command for every file.

###Edit start###

I have understood that in my example the simplest way would be just

mv files/* -t new_files_dir

or, if the "Argument list too long":

printf '%s\0' files/* | xargs -0 mv -t new_files_dir

but the aforementioned case is a part of a task. The whole task is in this question: Moving large number of files into directories based on file names in linux. So, the files must be moved into corresponding subdirectories, the correspondence of which is based on a number in the filename. This is the cause of the for loop usage and other oddities in my code snippets.

###Edit end###

There is a possibility to speed up this process by passing a bunch of files to the mv command instead of a single file, like this:

batch_num=1000

# Counting of files in the directory
shopt -s nullglob
file_list=(files/*)
file_num=${#file_list[@]}

# Every file's common part
suffix='_file.txt'

for ((from = 1, to = batch_num; from <= file_num; from += batch_num, to += batch_num)); do
    if ((to > file_num)); then
        to="$file_num"
    fi

    # Generating filenames by `seq` command and passing them to `xargs`
    seq -f "files/%.f${suffix}" "$from" "$to" | xargs -n "${batch_num}" mv -t "new_files_dir"
done

In this case the script works for 0.2 seconds. So, the performance has increased by 50 times. But there is a problem: at any moment the program can refuse to work due to "Argument list too long", because I can't guarantee that the length of a bunch of filenames is less than the maximum allowable length.

My idea is to calculate batch_num:

batch_num = "max allowable length" / "longest filename length"

and then use this batch_num in xargs.
Thus, the question: How can the max allowable length be calculated?

I have done something:

Overall length can be found this way:

$ getconf ARG_MAX
2097152

The environment variables contribute to the argument size too, so probably they should be subtracted from ARG_MAX:

$ env | wc -c
3403

Made a method to determine the max number of files of equal sizes by trying different amounts of files until the right value is found (binary search is used):

function find_max_file_number {
    right=2000000
    left=1
    name=$1
    while ((left < right)); do
        mid=$(((left + right) / 2))
        if /bin/true $(yes "$name" | head -n "$mid") 2>/dev/null; then
            left=$((mid + 1))
        else
            right=$((mid - 1))
        fi
    done
    echo "Number of ${#name} byte(s) filenames:" $((mid - 1))
}

find_max_file_number A
find_max_file_number AA
find_max_file_number AAA

Output:

Number of 1 byte(s) filenames: 209232
Number of 2 byte(s) filenames: 190006
Number of 3 byte(s) filenames: 174248

But I can't understand the logic/relation behind these results yet. I have tried values from this answer for calculation, but they didn't fit.

Wrote a C program to calculate the total size of passed arguments. The result of this program is close, but some non-counted bytes are left:

$ ./program {1..91442}_file.txt
arg strings size: 1360534
number of pointers to strings 91443

argv size: 1360534 + 91443 * 8 = 2092078
envp size: 3935

Overall (argv_size + env_size + sizeof(argc)): 2092078 + 3935 + 4 = 2096017
ARG_MAX: 2097152

ARG_MAX - overall = 1135 # <--- Enough bytes are
                         #      left, but no additional
                         #      filenames are permitted.
$ ./program {1..91443}_file.txt
bash: ./program: Argument list too long

program.c:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char *argv[], char *envp[])
{
    size_t chr_ptr_size = sizeof(argv[0]);

    // The arguments array total size calculation
    size_t arg_strings_size = 0;
    size_t str_len = 0;

    for (int i = 0; i < argc; i++) {
        str_len = strlen(argv[i]) + 1;
        arg_strings_size += str_len;
        // printf("%zu:\t%s\n\n", str_len, argv[i]);
    }

    size_t argv_size = arg_strings_size + argc * chr_ptr_size;

    printf(
        "arg strings size: %zu\n"
        "number of pointers to strings %i\n\n"
        "argv size:\t%zu + %i * %zu = %zu\n",
        arg_strings_size, argc,
        arg_strings_size, argc, chr_ptr_size, argv_size
    );

    // The environment variables array total size calculation
    size_t env_size = 0;
    for (char **env = envp; *env != 0; env++) {
        char *thisEnv = *env;
        env_size += strlen(thisEnv) + 1 + sizeof(thisEnv);
    }

    printf("envp size:\t%zu\n", env_size);

    size_t overall = argv_size + env_size + sizeof(argc);

    printf(
        "\nOverall (argv_size + env_size + sizeof(argc)):\t"
        "%zu + %zu + %zu = %zu\n",
        argv_size, env_size, sizeof(argc), overall);

    // Find ARG_MAX by system call
    long arg_max = sysconf(_SC_ARG_MAX);
    printf("ARG_MAX: %li\n\n", arg_max);
    printf("ARG_MAX - overall = %li\n", arg_max - (long) overall);

    return 0;
}

I have asked a question about the correctness of this program on StackOverflow: The maximum summarized size of argv, envp, argc (command line arguments) is always far from the ARG_MAX limit.
Just use a shell where mv is or can be made builtin, and you won't have the problem (which is a limitation of the execve() system call, so it only applies to external commands). It will also not matter as much how many times you call mv. zsh, busybox sh, and ksh93 (depending on how it was built) are some of those shells.

With zsh:

#! /bin/zsh -
zmodload zsh/files # makes mv and a few other file manipulation commands builtin
batch=1000
files=(files/*(N))
for ((start = 1; start <= $#files; start += batch)) {
  (( end = start + batch - 1))
  mkdir -p ${start}_${end} || exit
  mv -- $files[start,end] ${start}_${end}/ || exit
}

The execve() E2BIG limit applies differently depending on the system (and version thereof), and can depend on things like the stack size limit. It generally takes into account the size of each of the argv[] and envp[] strings (including the terminating NUL character), and often the size of those arrays of pointers (and terminating NULL pointer) as well (so it depends both on the size and number of arguments). Beware that the shell can set some env vars at the last minute as well (like the _ one that some shells set to the path of the command being executed).

It could also depend on the type of executable (ELF, script, binfmt_misc). For instance, for scripts, execve() ends up doing a second execve() with a generally longer arg list (["myscript", "arg", NULL] becomes ["/path/to/interpreter" or "myscript" depending on the system, "-<option>" if any on the shebang, "myscript", "arg"]).

Also beware that some commands end up executing other commands with the same list of args and possibly some extra env vars. For instance, sudo cmd arg runs cmd arg with SUDO_COMMAND=/path/to/cmd arg in its environment (doubling the space required to hold the list of arguments).
You may be able to come up with the right algorithm for your current Linux kernel version, with the current version of your shell and the specific command you want to execute, to maximise the number of arguments you can pass to execve(), but that may no longer be valid with the next version of the kernel/shell/command. A better approach would be to give enough slack to account for all those extra variations, or to use xargs.

GNU xargs has a --show-limits option that details how it handles it:

$ getconf ARG_MAX
2097152
$ uname -rs
Linux 5.7.0-3-amd64
$ xargs --show-limits < /dev/null
Your environment variables take up 3456 bytes
POSIX upper limit on argument length (this system): 2091648
POSIX smallest allowable upper limit on argument length (all systems): 4096
Maximum length of command we could actually use: 2088192
Size of command buffer we are actually using: 131072
Maximum parallelism (--max-procs must be no greater): 2147483647

You can see ARG_MAX is 2MiB in my case; xargs thinks it could use up to 2088192, but chooses to limit itself to 128KiB. Just as well, as:

$ yes '""' | xargs -s 230000 | head -1 | wc -c
229995
$ yes '""' | strace -fe execve xargs -s 240000 | head -1 | wc -c
[...]
[pid 25598] execve("/bin/echo", ["echo", "", "", "", ...], 0x7ffe2e742bf8 /* 47 vars */) = -1 E2BIG (Argument list too long)
[pid 25599] execve("/bin/echo", ["echo", "", "", "", ...], 0x7ffe2e742bf8 /* 47 vars */) = 0
[...]
119997

It could not pass 239,995 empty arguments (with a total string size of 239,995 bytes for the NUL delimiters, so fitting in that 240,000 buffer), so it tried again with half as many. That's a small amount of data, but you have to consider that the pointer list for those strings is 8 times as big, and if we add those up, we get over 2MiB.
When I did this same kind of test over 6 years ago in that Q&A here with Linux 3.11, I was getting a different behaviour which had already changed recently at the time, showing that the exercise of coming up with the right algorithm to maximise the number of arguments to pass is a bit pointless.

Here, with an average file path size of 32 bytes and a 128KiB buffer, that's still 4096 filenames passed to mv, and the cost of starting mv is already becoming negligible compared to the cost of renaming/moving all those files.

For a less conservative buffer size (to pass to xargs -s) that should still work for any arg list, with past versions of Linux at least, you could do:

$ (env | wc; getconf ARG_MAX) | awk '
  {env = $1 * 8 + $3; getline; printf "%d\n", ($0 - env) / 9 - 4096}'
228499

Where we compute a high estimate of the space used by the environment (the number of lines in the env output should be at least as big as the number of envp[] pointers we passed to env, and we count 8 bytes for each of those, plus their size (including the NULs which env replaced with NL)), subtract that from ARG_MAX, divide by 9 to cover the worst case scenario of a list of empty args, and add 4KiB of slack.

Note that if you limit the stack size to 4MiB or below (with limit stacksize 4M in zsh for instance), that becomes more conservative than GNU xargs's default buffer size (which remains 128K in my case and fails to pass a list of empty vars properly):

$ limit stacksize 4M
$ (env | wc; getconf ARG_MAX) | awk '
  {env = $1 * 8 + $3; getline; printf "%d\n", ($0 - env) / 9 - 4096}'
111991
$ xargs --show-limits < /dev/null |& grep actually
Maximum length of command we could actually use: 1039698
Size of command buffer we are actually using: 131072
$ yes '""' | xargs | head -1 | wc -c
65193
$ yes '""' | xargs -s 111991 | head -1 | wc -c
111986
How to calculate the number of files which can be passed as arguments to some command for batch processing?
I know that different distributions patch the packages that are available in their respective repositories, but I've never understood why there is a need to do so. I would appreciate it if somebody could explain or point me to the relevant documentation online. Thanks.
It took a few tries, but I think I comprehend what you're asking now. There are several possible reasons for a distribution to patch given software before packaging. I'll try and give a non-exclusive list; I'm sure there are other possible reasons. For purposes of this discussion, "upstream" refers to the original source code from the official developers of the software.

- Patches that upstream has not (or not yet) incorporated into their main branch for whatever reason or reasons. Usually because the distribution's package maintainer for that package believes that said patches are worthwhile, or because they're needed to keep continuity in the distribution. (Suppose you've got a webserver and, after a routine update to php, several functions you've been relying on don't work anymore, or it's unable to read a config file in the old style.)

- Distributions tend to like standardized patterns for their filesystem hierarchy in /etc/; every software developer may or may not have their own ideas of what constitutes proper standards. Therefore, one of the first things a distribution package maintainer tends to do is patch the build scripts to configure and expect said configuration files in a hierarchy pattern that corresponds to the rest of the distribution.

- Continuing on the topic of configuration, one of the first "patches" tends to be a set of default configuration files that will work with the rest of the distribution "out of the box", so to speak, allowing the end user to get started immediately after installing rather than having to manually sort out a working configuration.

That's just off the top of my head. There may very well be others, but I hope this gives you some idea.
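As an invented illustration of the filesystem-hierarchy point above, such a distribution patch is often nothing more than a small unified diff against the build system, retargeting a path (the file and variable names here are made up):

```
--- a/Makefile
+++ b/Makefile
@@ -1,2 +1,2 @@
-SYSCONFDIR = /usr/local/etc
+SYSCONFDIR = /etc/foo
 CC = gcc
```

The package build then applies a stack of such patches (with patch or quilt, depending on the distribution's packaging tooling) before compiling.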
Why do different Linux distributions need to patch packages?
How can I add line numbers to man pages or info pages in Linux? I want to use line numbers to navigate in man pages. I can write the man page to a file and then open it with Vim, but is there a better way?
Open a manpage, hit -N, then Enter (-, then Shift+N, then Enter). E.g., man man:

      1 MAN(1)                        Manual pager utils                       MA
      2
      3 NAME
      4        man - an interface to the system reference manuals
      5
      6 SYNOPSIS
      7        man [man options] [[section] page ...] ...
      8        man -k [apropos options] regexp ...
      9        man -K [man options] [section] term ...
     10        man -f [whatis options] page ...
     11        man -l [man options] file ...
     12        man -w|-W [man options] page ...

To remove the line numbers, type -n then Enter.

To avoid duplicate lines, set the MANWIDTH variable. With the LESS variable set to -N to print line numbers:

MANWIDTH=100 LESS=-N man man
How do I add line numbers to the man page?
The third party scheduling application our enterprise uses doesn't execute rm commands as expected. By this, I mean I expect the rm -f $filetoremove to complete and then continue to the next line of code in the script. But I need to get it to execute, preferably rm -f. Is there another method to remove a file without using rm? I tried > delete_foobar.file but it just empties the file without removing it.

Additional information: My work environment is a large enterprise. I write the .sh script, which I test outside the scheduling application. Outside the scheduling software, the rm -f $filetoremove command works with a return code of 0. However, the scheduling software does not register the 0 return code and immediately exits without running the remainder of the .sh script. This is problematic, and the vendor has acknowledged this defect. I'm not privy to the details of the automation software nor the exact return codes it receives. All I know is that my scripts don't run completely, when run via the automation software, if they contain rm. This is why I'm looking for alternatives to rm. Yes, it is important that I remove the file once I've completed processing it.
The unlink command is also part of POSIX: unlink <file>
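A minimal sketch of its behavior (the scratch file is arbitrary): unlink removes exactly one file per invocation, takes no options like -f or -r, and refuses directories, which makes it a close but not drop-in substitute for rm -f:

```shell
# Create a scratch file, remove it with unlink, and confirm it is gone.
tmpfile=$(mktemp)      # arbitrary scratch file
unlink "$tmpfile"      # exit status 0 on success
if [ ! -e "$tmpfile" ]; then
    echo "removed"
fi
```

Note one difference from rm -f: unlink fails (non-zero exit) if the file does not exist, so a wrapper may need to test -e first if the scheduler also objects to non-zero exits.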
How to remove a file without using rm? [closed]
I accidentally destroyed my cd command. I tried to automatically execute ls after cd is called. I found a post saying that I have to execute alias cd='/bin/cd && /bin/ls', but now I get -bash: /bin/cd: No such file or directory and can't change directory anymore.
Your system (like many Unix systems) does not have an external cd command (at least not at that path). Even if it had one, the ls would give you the directory listing of the original directory. An external command can never change directory for the calling process (your shell)1.

Remove the alias from the environment with unalias cd (and also remove its definition from any shell initialization files that you may have added it to).

With a shell function, you can get it to work as cd ordinarily does, with an extra invocation of ls at the end if the cd succeeded:

cd () {
    command cd "$@" && ls -lah
}

or,

cd () { command cd "$@" && ls -lah; }

This would call the cd command built into your shell with the same command line arguments that you gave the function. If the change of directory was successful, the ls would run. The command command stops the shell from executing the function recursively.

The function definition (as written above) would go into your shell's startup file. With bash, this might be ~/.bashrc. The function definition would then be active in the next new interactive shell session. If you want it to be active now, then execute the function definition as-is at the interactive shell prompt, which will define it within your current interactive session.

1 On systems where cd is available as an external command, this command also does not change directory for the calling process. The only real use for such a command is to provide POSIX compliance and for acting as a test of whether changing directory to a particular one would be possible.
-bash: /bin/cd: No such file or directory - automatically execute ls after cd
Does the command pwd in a shell script output the directory the shell script is in?
There are three independent "directories" at play here: your current shell's current working directory, the shell script's current working directory, and the directory containing the shell script.

To demonstrate that they are independent, you can write a shell script, saved to /tmp/pwd.sh, containing:

#!/bin/sh
pwd
cd /var
pwd

You can then change your pwd (#1 above) to /:

cd /

and execute the script:

/tmp/pwd.sh

which starts off by demonstrating your existing pwd (#1), then changes it to /var and shows it again (#2). Neither of those pwd's were "/tmp", the directory that contains /tmp/pwd.sh (#3).
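If what you actually want is the third directory (the one containing the script itself), a common POSIX sh idiom combines dirname with a subshell cd; a sketch:

```shell
#!/bin/sh
# Resolve the directory containing this script (not the caller's cwd):
# dirname strips the script name; the subshell cd + pwd normalizes it to an
# absolute path without changing the calling script's working directory.
# Clearing CDPATH avoids cd being diverted by that variable.
script_dir=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)
echo "$script_dir"
```

Saved as /tmp/where.sh and invoked from any directory, this prints /tmp regardless of what pwd #1 or #2 happen to be (though note it does not resolve a symlink to the script itself).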
What does pwd output?
I can't determine exactly what file is eating up my disk. First I used the df command to list my filesystems:

devtmpfs                 16438304        0  16438304   0% /dev
tmpfs                    16449868        0  16449868   0% /dev/shm
tmpfs                    16449868  1637676  14812192  10% /run
tmpfs                    16449868        0  16449868   0% /sys/fs/cgroup
/dev/mapper/fedora-root  51475068 38443612  10393632  79% /
tmpfs                    16449868      384  16449484   1% /tmp
/dev/sda3                  487652    66874    391082  15% /boot
/dev/mapper/fedora-home 889839636 44677452 799937840   6% /home

Then I ran du -h / | grep '[0-9\,]\+G'. The problem is I get everything, including other filesystems, so I need to look specifically at what is on /dev/mapper/fedora-root. But when I try du -h /dev/mapper/fedora-root | grep '[0-9\,]\+G' I get no results. I need to know what's eating up 79% of /. How can I solve this?
My magic command in such situations is:

du -m . --max-depth=1 | sort -nr | head -20

To use this:

- cd into the top-level directory containing the files eating space. This can be / if you have no clue ;-)
- run du -m . --max-depth=1 | sort -nr | head -20. This will list the 20 biggest subdirectories of the current directory, sorted by decreasing size.
- cd into the biggest directory and repeat the du ... command until you find the BIG file(s)
How to pinpoint the large files eating space in Fedora 18
How do I delete the files created on Aug 7 with the name DBG_A_sql* under /tmp, as in the following example:

-rw-r--r-- 1 root root 51091 Aug 7 11:22 DBG_A_sql.2135
-rw-r--r-- 1 root root 15283 Aug 7 11:22 DBG_A_sql.2373
-rw-r--r-- 1 root root 51091 Aug 7 11:22 DBG_A_sql.2278
-rw-r--r-- 1 root root  9103 Aug 7 11:22 DBG_A_sql.2485
-rw-r--r-- 1 root root  9116 Aug 7 11:22 DBG_A_sql.2573
-rw-r--r-- 1 root root  9140 Aug 7 11:22 DBG_A_sql.2679
-rw-r--r-- 1 root root 15695 Aug 7 11:22 DBG_A_sql.2897
You can use find. Calculate date according to your requirement and use, find /tmp -maxdepth 1 -mtime -1 -type f -name "DBG_A_sql*" -print After confirming it delete them, find /tmp -maxdepth 1 -mtime -1 -type f -name "DBG_A_sql*" -delete
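To match the specific calendar day from the question rather than "modified within the last day" (which is what -mtime -1 means), GNU find's -newermt can bracket the day. This is a sketch assuming GNU find and an explicit 2018 date, since the listing does not show a year:

```shell
# Sketch (GNU find): select files last modified on 2018-08-07 specifically.
# -newermt DATE means "modified after DATE"; negating the next day's bound
# closes the interval.
find /tmp -maxdepth 1 -type f -name 'DBG_A_sql*' \
    -newermt '2018-08-07' ! -newermt '2018-08-08' -print
```

As with the answer above, confirm the listing first and then replace -print with -delete.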
Finding and deleting files with a specific date
1,535,912,384,000
Objective: Check in /etc/shadow if a user's password is locked, i.e. if the first character of the 2nd field in /etc/shadow (which contains the user's hashed password) is an exclamation mark ('!'). Desired output: a variable named $disabled containing either 'True' or 'False'. The username is in the $uname variable and I do something like this:

disabled=`cat /etc/shadow |grep $uname |awk -F\: '{print$2}'`
# I now have the password and need one more pipe into the check for the character
# which is where I'm stuck.

I would like to do something like (in PHP syntax): | VARIABLE=="!"?"True":"False". This is a fragment of a script that will be run by Cron with root permissions, so there is access to all desirable information.
Why not just do it all with awk? awk -F: '/<username>/ {if(substr($2,1,1) == "!"){print "True"} else {print "False"}}' /etc/shadow
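To land the result in the $disabled variable as the question asks, the awk above can be wrapped in a command substitution. This sketch exact-matches the username in field 1 (safer than pattern-matching anywhere on the line) and runs against made-up sample data, since reading the real /etc/shadow requires root; with root you would feed /etc/shadow to awk instead:

```shell
# Sketch: set disabled=True/False from shadow-format data.
# The usernames and hashes below are invented sample data.
uname=alice
shadow_data='alice:!$6$somehash:17000:0:99999:7:::
bob:$6$otherhash:17000:0:99999:7:::'
disabled=$(printf '%s\n' "$shadow_data" |
    awk -F: -v u="$uname" \
        '$1 == u { print ((substr($2,1,1) == "!") ? "True" : "False") }')
echo "$disabled"
```

With the sample data above this prints True for alice and would print False for bob.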
BASH: Check in /etc/shadow if user password is locked
1,535,912,384,000
I am using Linux oess (CentOS). I am working on a VM: In the terminal, I'm trying to: ping 8.8.8.8 to see my connectivity. It says: Network is unreachable Then I typed: ifconfig: inet addr: 192.168.56.101 Then: sudo /sbin/route add -net 0.0.0.0 gw 192.168.56.101 eth0 Now I'm doing the same ping and it says: Destination host is unreachable for all the sequences. What is the source of the problem? route output:
First things first: can you ping 192.168.56.1? If so, then you have an IP connection to the router. Set this as your default route. Otherwise try pinging 192.168.56.255 (broadcast) to see on what address you might get replies. See arp -a to check what addresses you can find. Can you ping 8.8.4.4 (google) after changing the default route? If so,you have internet access. If not, check the router. Can you ping www.google.com? If not, you might have a DNS problem. Do you get results from nslookup www.google.com?
network: destination host unreachable
1,535,912,384,000
I have a file of patterns and I want to return all the line numbers where each pattern was found, but in a wide format rather than long/spread. Example:

fileA.txt
Germany
USA
UK

fileB.txt
USA
USA
Italy
Germany
UK
UK
Canada
Canada
Germany
Australia
USA

I have done something like this: grep -nf fileA.txt fileB.txt which returned:

1:USA
2:USA
4:Germany
5:UK
6:UK
9:Germany
11:USA

However, I want to have something like:

Germany 4 9
USA 1 2 11
UK 5 6
Using GNU datamash: $ grep -n -x -F -f fileA.txt fileB.txt | datamash -s -t : -g 2 collapse 1 Germany:4,9 UK:5,6 USA:1,2,11 This first uses grep to get the lines from fileB.txt that exactly matches the lines in fileA.txt, and outputs the matching line numbers along with the lines themselves. I'm using -x and -F in addition to the options that are used in the question. I do this to avoid reading the patterns from fileA.txt as regular expressions (-F), and to match complete lines, not substrings (-x). The datamash utility is then parsing this as lines of :-delimited fields (-t :), sorting it (-s) on the second field (-g 2; the countries) and collapsing the first field (collapse 1; the line numbers) into a list for each country. You could then obviously replace the colons and commas with tabs using tr ':,' '\t\t', or with spaces in a similar way. $ grep -n -x -f fileA.txt -F fileB.txt | datamash -s -t : -g 2 collapse 1 | tr ':,' '\t\t' Germany 4 9 UK 5 6 USA 1 2 11
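If datamash is not installed, the same long-to-wide collapse can be done with plain awk. A sketch, recreating the question's sample files so it runs standalone:

```shell
# Sketch: group the grep -n output by country with awk instead of datamash.
cd "$(mktemp -d)"
printf '%s\n' Germany USA UK > fileA.txt
printf '%s\n' USA USA Italy Germany UK UK \
              Canada Canada Germany Australia USA > fileB.txt

grep -n -x -F -f fileA.txt fileB.txt |
    awk -F: '{ n[$2] = n[$2] "\t" $1 } END { for (c in n) print c n[c] }' |
    sort
```

Each country accumulates its line numbers as tab-separated fields; the final sort only fixes the output order, since awk's for-in iteration order is unspecified.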
grep output from long to wide
1,535,912,384,000
I'm trying to remove a lot of files at once but need to be specific as to not remove any of the files I actually need. I have a ton of corrupt files that start master- but there are valid files that start with master-2018 So, I want to do something like rm -rf master-* --exclude master-2018* Is that I need possible?
Yes, you can use more than one pattern with find: $ find -name 'master-*' \! -name 'master-2018*' -print0 -prune | xargs -0 echo rm -fr (remove the echo if you're satisfied with the dry run) You should add a -maxdepth 1 predicate just after find if you only want to remove files from the current directory, i.e. master-1991 but not subdir/master-1991.
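For something closer to the rm one-liner wished for in the question, bash's extglob option lets the shell itself do the negation. A sketch run in a scratch directory with invented filenames; note this is bash-specific and only touches the current directory:

```shell
# Sketch (bash extglob): master-!(2018*) expands to every name starting with
# "master-" whose remainder does NOT start with "2018".
cd "$(mktemp -d)"
touch master-1999 master-2017-bad master-2018-keep
bash -O extglob -c 'rm -rf master-!(2018*)'
ls   # only the master-2018* files remain
```

As always with destructive globs, echo the pattern first to see what it expands to.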
Remove files that start with but don't contain
1,535,912,384,000
This example was in a Linux book:

$ cat sort-wc
#!/bin/bash
# Sort files according to their line count
for f
do
  echo `wc -l <"$f» lines in $f
done | sort -n
$ ./sort-wc /etc/passwd /ect/fstab /etc/motd

What I don't get is why there is only a single backtick, a single double quote and what the >> does. Isn't >> for writing to a file?
This is from page 121 of "Introduction to Linux for Users and Administrators" and that's a typographical error in the text. The script is also available in other texts from tuxcademy, with the same typographical error. The single » character is not the same as the double >> and it serves no purpose in a shell script. My guess is that the typesetting system used for formatting the text of the book got confused by "` for some reason and formatted it as a guillemet (angle-quote), or it's just a plain typo (the «...» quotes are used for quoting ordinary text elsewhere in the document). The script should read

#!/bin/bash
# Sort files according to their line count
for f
do
  echo `wc -l <"$f"` lines in $f
done | sort -n

... but would be better written

#!/bin/sh
# Sort files according to their line count
for f; do
  printf '%d lines in %s\n' "$(wc -l <"$f")" "$f"
done | sort -n

The backticks are an older form of $( ... ), and printf is better to use for outputting variable data. Also, variable expansions and command substitutions should be quoted, and the script uses no bash features, so it could just as well be executed by /bin/sh. Related:
Have backticks (i.e. `cmd`) in *sh shells been deprecated?
Why is printf better than echo?
Security implications of forgetting to quote a variable in bash/POSIX shells
Why does my shell script choke on whitespace or other special characters?
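A quick demonstration of the corrected script on three small sample files (the filenames are invented for the demo):

```shell
# Demonstrate the corrected sort-wc on sample files of 1, 2 and 3 lines.
cd "$(mktemp -d)"
cat > sort-wc <<'EOF'
#!/bin/sh
# Sort files according to their line count
for f; do
    printf '%d lines in %s\n' "$(wc -l <"$f")" "$f"
done | sort -n
EOF
chmod +x sort-wc
printf 'a\nb\nc\n' > three.txt
printf 'a\n'       > one.txt
printf 'a\nb\n'    > two.txt
./sort-wc three.txt one.txt two.txt
```

The files come out ordered one.txt, two.txt, three.txt regardless of the argument order.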
I don't understand what a single backtick and double quote and >> do in this script
1,535,912,384,000
I'm looking to find the lines between two matching patterns. If either the start or the end pattern is missing, the lines should not be printed. Correct input:

a
***** BEGIN *****
BASH is awesome
BASH is awesome
***** END *****
b

Output will be

***** BEGIN *****
BASH is awesome
BASH is awesome
***** END *****

Now suppose the END pattern is missing in the input:

a
***** BEGIN *****
BASH is awesome
BASH is awesome
b

The lines should not print. I have tried with sed: sed -n '/BEGIN/,/END/p' input but it prints all data up to the last line if the END pattern is missing. How can I solve this?
sed '/\*\*\*\*\* BEGIN \*\*\*\*\*/,/\*\*\*\*\* END \*\*\*\*\*/p;d' input |
  tac |
  sed '/\*\*\*\*\* END \*\*\*\*\*/,/\*\*\*\*\* BEGIN \*\*\*\*\*/p;d' |
  tac

It works by having tac reverse the lines so that sed can find both delimiters in both orders: the first sed keeps only BEGIN-to-END ranges (or BEGIN-to-EOF if END is missing), and the second sed, running on the reversed stream, keeps only END-to-BEGIN ranges, which discards any block that never reached its END marker.
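An alternative sketch with awk that avoids the double reversal: buffer lines from BEGIN onward and only print the buffer once the END marker is actually seen, so an unterminated block is silently dropped. The marker patterns are simplified here to /BEGIN/ and /END/ as in the question's own sed attempt:

```shell
# Sketch: print BEGIN..END blocks only when END is actually present.
cd "$(mktemp -d)"
printf '%s\n' a '***** BEGIN *****' 'BASH is awesome' '***** END *****' b > input
awk '
    /BEGIN/          { inblock = 1; buf = "" }   # start (or restart) a buffer
    inblock          { buf = buf $0 "\n" }       # accumulate, including markers
    /END/ && inblock { printf "%s", buf; inblock = 0 }
' input
```

With the END line removed from the input, the same command prints nothing.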
Print lines between start & end pattern, but if end pattern does not exist, don't print
1,535,912,384,000
I want to play with Linux to better understand how it works. Thus, I am looking for a very basic and small Linux to play with. I tried small Linux distributions (which copy themselves to RAM), but they have their own structure (like Live CD). Instead, I wish to have a minimal but standard Linux structure. I installed minimal version of Debian on USB and setup GRUB to separate this experiment from my main computer. However, Debian (even minimal) is far more advanced than what I need. What is the best method to copy a very minimal version of Linux on USB and boot with GRUB? Each distribution has its own features and options, but I prefer to be closer to the standard Linux (Linux kernel) without customization of a distribution.
Slackware should do. And to be honest, there is no "standard" Linux. You define your standard after you have defined what you need to do with it and what to expect from it. The low-level details (plug and play, device naming, network configuration, system configuration, detection of network services, hardening) are quite different between Linux distributions. Even init scripts and how they get processed during boot differ.
Very basic Linux for educational purposes
1,535,912,384,000
When I log in to SSH while forwarding my local port, it's 21 FTP port, with the command: ssh -R 2101:localhost:21 [email protected] -p 8288 After successfully logging in, I sent this command in the SSH: ftp ikiw@localhost -p 2101 The command runs normally, and I also successfully logged into FTP smoothly, but when in FTP, I want to see a list of files available with the command ls or dir and I get this error: ftp: Can't connect to '::1:27394': Connection refused What is wrong? Does it seem like FTP creates a new port randomly when I run the ls command? I want to forward my local FTP to my SSH/VPS, and run FTP from my SSH/VPS to my local machine normally, can someone help me and provide a solution? Thank you very much! :D
FTP is a horrible protocol. Yes, it uses multiple ports; there's the control port and then each data transfer (ls or get and so on) opens a second new random port. Worse, depending on if you're doing PASV or active mode FTP, the server could try to initiate the connection. FTP isn't easy to handle with forwarding like this. Since you have ssh connectivity, can't you use sftp? That's an FTP-like protocol that's built directly into ssh so no need to port forward.
Cannot do "ls" in FTP while port forwarding to SSH
1,535,912,384,000
Does the Linux kernel use original Unix code or do they share the idea? Since both are written in C, is that true?
In cases like this it helps to define Unix more precisely. In this response I will be talking specifically about AT&T's Unix. Linux is a Unix clone and shares no actual code. This is what allowed Linux to be licensed under the GPL and hence be free software. If it had inherited code, it would have been owned by the creator of that code and could not have been freely modified and used under the GPL as it is today. It is very likely that it would have enjoyed much more limited success had it not been so widely accessible. There are several competing free-software Unixes, such as FreeBSD, which came later and do in fact share code, though under a very different licensing scheme. There is far too much to the licensing history to properly cover it here, unfortunately.
Does Linux use original Unix code or do they share the idea? [duplicate]
1,535,912,384,000
So GNU/Linux is an operating system, consisting of several programs at a minimum: Linux kenel, gcc, gnu-binutils, Gnome desktop, etc. What makes a Linux distribution GNU? Is it the tools, with which the kernel was compiled? Is it the tools, with which the distribution is shipped? Do there exist fully functional, desktop operating systems, that are Linux based but not GNU?
An operating system is a combination of a kernel and a userland. Basically, the kernel manages the hardware while the userland provides a comprehensive interface to the users. In a common GNU/Linux distribution, Linux provides the kernel while the GNU project brings the userland tools. GNU was started way before Linux, and provides a large set of utilities for building a full operating system. However, they were missing a kernel. Although they had the Hurd kernel, it was taking too long to be ready. And then came Linux: helped by the big enthusiasm around it, it evolved faster than the Hurd. You now have a userland and a kernel from two different projects. And as each is essential to have an operating system, why not name the association GNU/Linux so each project is given its share of the credit? There are other userlands, like the BSD utils or BusyBox. However, they are less complete than the GNU utilities, and some software will work only with a GNU userland. For example, most of the BSD operating systems are using GCC as a compiler, while LLVM will soon change this situation. And as a universal operating system, you can run Debian with a FreeBSD kernel and a GNU userland.
What makes a distribution GNU and are there Linux distributions, that are not GNU? [duplicate]
1,535,912,384,000
Is there a command like vi > out vi | out That I could use to cause a watchdog reset of my embedded linux device?
If you have a watchdog on your system and a driver that uses /dev/watchdog, all you have to do is kill the process that is feeding it; if there is no such process, then you can touch /dev/watchdog once to turn it on, and if you don't touch it again, it will reset. You also might be interested in resetting the device using the "magic sysrq" way. If you have a kernel with the CONFIG_MAGIC_SYSRQ feature compiled in, then you can echo 1 > /proc/sys/kernel/sysrq to enable it, then echo b > /proc/sysrq-trigger to reboot. When you do this, it reboots immediately, without unmounting or syncing filesystems.
How do I cause a watchdog reset of my embedded Linux device
1,535,912,384,000
free -m
             total       used       free     shared    buffers     cached
Mem:         15708      15539        168        124          6       6272
-/+ buffers/cache:       9260       6447
Swap:            0 1759218604          0

sysctl vm.swappiness
vm.swappiness = 0

grep Swap /proc/meminfo
SwapCached:          0 kB
SwapTotal:           0 kB
SwapFree:           36 kB

I have set vm.swappiness=0 to disable swap, but the output of free -m shows swap "used" as 1759218604, a huge number. I think used swap memory should be 0; why is it not 0? CentOS version: 6.7, Linux kernel: 2.6
That's a very old RHEL/CentOS 6 kernel bug, you need to update to kernel-2.6.32-573.6.1.el6 (or newer). See this RH customer portal article (requires RH account) and this question on serverfault for more details. I would also recommend upgrading your system, CentOS 6 is no longer supported and 6.7 is not even the latest minor version (last was 6.10).
swap total is zero but used is too high
1,535,912,384,000
I am trying to use yocto poky environment. I did the following: #source environment-setup-cortexa9hf-neon-poky-linux-gnueabi Now, if I try to compile the program using: #$(CC) hello.c -o hello.elf it throws me error since $(CC) isn't defined. However, if I do $CC it works. I am confused on what is the fundamental difference between $(CC) and $CC?
I'm assuming that you've seen $(CC) in a Makefile where it serves as an expansion of the variable CC, which normally holds the name of the C compiler. The $(...) syntax for variable expansions in Makefiles is used whenever a variable with a multi-character name is expanded, as $CC would otherwise expand to the value of the variable C followed by a literal C ($CC would in effect be the same as $(C)C in a Makefile). In the shell though, due to having a different syntax, $(CC) is a command substitution that would be replaced by the output of running the command CC. If there is no such command on your system, you would see a "command not found" error. It's also possible that you've mistaken $(CC) for ${CC} which, in the shell, is equivalent to $CC under most circumstances. The curly braces are only needed if the variable's expansion is followed immediately by some other string that would otherwise be interpreted as part of the variable's name. An example of the difference may be seen in "$CC_hello" (expands the variable called CC_hello) and "${CC}_hello" (expands the variable CC and appends the string _hello to its value). In all other circumstances, ${CC} is equivalent to $CC. Note that using curly braces is not quoting the expansion, i.e. ${CC} is not the same as "$CC". If you have a shell or environment variable holding the name of the compiler that you're using for compiling C code, and you want to use that variable on the command line, then use "$CC", or just $CC if the variable's value does not contain spaces or shell globbing characters. $CC -o hello.elf hello.c
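The distinctions above can be seen in a few lines of shell (using gcc as a placeholder value):

```shell
# Demonstration: variable expansion versus command substitution in the shell.
CC=gcc
echo "$CC"            # expands the variable CC:        gcc
echo "${CC}_hello"    # braces delimit the name:        gcc_hello
echo "$CC_hello"      # expands CC_hello (unset):       empty line
# echo "$(CC)"        # would try to RUN a command named CC: "command not found"
```

In a Makefile, by contrast, $(CC) would expand the variable and $CC would mean $(C)C.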
What is the difference between $(CC) and $CC?
1,536,525,681,000
I'm having some trouble trying to remove write permission from the owning group (g), and to add read permission to others (o) at the same time. How would I remove some permission from the owning group and add some permission for others in the same line?
Noting the title of your question: Removing and adding permission using numerical notation on the same line With chmod from GNU coreutils, which you probably have on a Linux system, you could use $ chmod -020,+004 test.txt to do that. It works in the obvious way: middle digit for the group, 2 is for write; and last digit for "others", and 4 for read. Being able to use + or - with a numerical mode is a GNU extension, e.g. the BSD-based chmod on my Mac gives an error for +004: $ chmod +004 test.txt chmod: Invalid file mode: +004 So it would be simpler, shorter, more portable and probably more readable to just use the symbolic form: $ chmod g-w,o+r test.txt
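To see both forms act on a file, here is a sketch using GNU stat to inspect the resulting mode bits (stat -c is GNU-specific; test.txt is a scratch file created for the demo):

```shell
# Demonstration (GNU coreutils): numeric +/- versus symbolic modes.
cd "$(mktemp -d)"
touch test.txt
chmod 660 test.txt
stat -c '%a' test.txt     # 660: rw-rw----
chmod -020,+004 test.txt  # drop group write, add other read (GNU extension)
stat -c '%a' test.txt     # 644
chmod 660 test.txt
chmod g-w,o+r test.txt    # the portable symbolic equivalent
stat -c '%a' test.txt     # 644 again
```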
Removing and adding permission using numerical notation on the same line
1,536,525,681,000
I've accidentally created a file named °. Now I'm having trouble deleting it with bash. [/opt/etc/sudoers.d] # ls -l -r--r----- 1 admin administ 21 Feb 3 23:54 010-root -rw-r--r-- 1 admin administ 20 Feb 3 23:50 ° Typing rm ° seems to only move the caret to the beginning of the line, i.e. no character is entered. (For what it's worth I'm running bash 3.2.0 on a remote machine conntected with SSH using Mac OSX Terminal) Any ideas?
How about rm -i ? Here, ? is a shell glob that matches any filename that is exactly one character long, and -i makes rm prompt before each removal, so you can answer "y" only for the ° file. I think this should work...
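A non-interactive alternative is to spell out the character's bytes. The degree sign is the UTF-8 sequence c2 b0, which printf can produce even if your terminal won't; note that the ? glob only sees ° as a single character in a UTF-8 locale. A sketch in a scratch directory:

```shell
# Sketch: recreate the situation and remove the file by its exact bytes.
cd "$(mktemp -d)"
touch "$(printf '\302\260')" 010-root   # the degree sign is UTF-8 c2 b0
ls
rm -- "$(printf '\302\260')"            # no prompting, no glob needed
ls                                      # only 010-root remains
```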
How do I delete a file named "°" in bash
1,536,525,681,000
Given that /etc/sudoers and /etc/sudoers.d/ are not readable by regular users, is there a way that I, as a normal user, can find out what sudo permissions I have?
Run sudo -l. Depending on how sudo is configured on your system, it might request your password first, but it will display all sudo configuration options that apply to your account, and then all the permitted command definitions applicable to you.
linux sudo list permitted commands [duplicate]
1,536,525,681,000
A cpp file I'm working with creates a directory, i.e. mkdir( path, ... ), where path comes from an environment variable (e.g. getenv( "FOO" );). As an example, say $FOO is /foo, and path, created above, is `/foo/newPath/'. For my question scenario, it is possible that /foo/oldPath/ exists and has content (assume no further subdirectories), in which case I want to move files from /foo/oldPath/ to /foo/newPath. My question is: because /foo/newPath/ is created as a subdirectory of $FOO, i.e. /foo/newPath/ and /foo/oldPath/ have the same parent directory, is it then guaranteed that both directories are on the same "mounted file system"? My understanding of mount points and file systems on Linux is tenuous at best. The context behind this question is: if /foo/newPath/ and /foo/oldPath/ are guaranteed to be on the same mounted file system, I can use rename() to more easily perform the file movement than other alternatives. The man page of the function says that it will fail if oldPath and newPath are not on the same "mounted file system."
They are not guaranteed that. It is possible that /foo/oldPath is a mount point. This can, however, be easily checked by running mount | grep 'on /foo/oldPath'. No output should indicate that the oldPath directory is not a mount point. You will need to be more careful if you are using nested directories, since you can have a mount point anywhere. I'm not sure whether this is automated, but it's worth noting that the 3rd field of mount's output (space-separated) is the mount point for each line, so a cut -d ' ' -f 3 could be used to extract the path (should you need to verify it's not just a substring of another mount point, like /foo/oldPath/nested/mountPoint). If you'd like to translate this into C/C++ code, you may be able to use system("mount | grep 'on /foo/oldPath'"), but I won't swear by that. You may have better luck on Stack Overflow for more implementation detail if you need it.
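A more direct check than grepping mount output: two paths are on the same mounted filesystem exactly when they report the same device number. This sketch assumes GNU stat (the -c %d format); /foo/newPath and /foo/oldPath are the question's hypothetical paths. (From C, the equivalent is comparing st_dev from stat(2), or simply attempting rename() and checking for EXDEV.)

```shell
# Sketch (GNU stat): compare device numbers to decide whether rename() can work.
same_fs() {
    [ "$(stat -c %d -- "$1")" = "$(stat -c %d -- "$2")" ]
}
same_fs /foo/newPath /foo/oldPath \
    && echo "same filesystem: rename() is safe" \
    || echo "different filesystems (or path missing): fall back to copy+unlink"
```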
Are two subdirectories with the same root guaranteed to be on the same mounted file system?
1,536,525,681,000
I want to use aufs to combine a few disks.I am able to mount the aufs file system using the mount command from the command line. However, when trying to mount the same through an fstab entry, it fails. Google tells me that fstab does not mount file systems in the specified order, creating this problem. I also found recommendations to add the mount command in rc.local so the aufs is mounted after fstab. I am using archlinux which uses systemd, so how can I run the mount command at boot in systemd?
Systemd has native support for mounts (man systemd.mount). In fact systemd reads /etc/fstab, uses it to generate mount units, and mounts the filesystems itself. Rather than relying on fstab, it's also possible to create mount units by hand. This is how system mounts like /dev, /sys, /proc, etc. are handled (/usr/lib/systemd/system/*.mount). This method allows you to use systemd dependencies to ensure things get mounted in the right order. systemd mount unit files must be named after the mount point they control (man systemd.unit). As an example, I created a unit file to mount my USB backup drive to /mnt/backup. Following the naming convention, I've created /etc/systemd/system/mnt-backup.mount, with the following contents:

[Unit]
Description = USB backup disk

[Mount]
What = LABEL=david-usb-backup
Where = /mnt/backup
Type = ext4

[Install]
WantedBy = multi-user.target

I then run systemctl daemon-reload for systemd to load the unit. I can run systemctl start mnt-backup.mount to mount the disk, and/or systemctl enable mnt-backup.mount to have it started at boot. For dependencies, add Requires = some-other-mnt-point.mount under the [Unit] section. Optionally, you may use BindsTo rather than Requires; this will cause the unit to be unmounted if one of the dependencies disappears. However, Requires does not affect the order in which the disks are mounted, so to make sure that the disks are mounted before aufs, also use After. Edit: To expand on the use of Requires and After, the unit section might look like:

[Unit]
Description = USB backup disk
Requires = mnt-data01.mount
Requires = mnt-data02.mount
Requires = mnt-data03.mount
After = mnt-data01.mount
After = mnt-data02.mount
After = mnt-data03.mount
How to mount aufs file system on boot in archlinux?
1,536,525,681,000
If you want to read the single line output of a system command into Bash shell variables, you have at least two options, as in the examples below: IFS=: read user x1 uid gid x2 home shell <<<$(grep :root: /etc/passwd | head -n1) and IFS=: read user x1 uid gid x2 home shell < <(grep :root: /etc/passwd | head -n1) Is there any difference between these two? What is more efficient or recommended? Please note that, reading the /etc/passwd file is just for making an example. The focus of my question is on here strings vs. process substitution.
First note that using read without -r is for processing input where \ is used to escape the field or line delimiters, which is not the case for /etc/passwd. It's very rare that you would want to use read without -r. Now as to those two forms, note that neither is standard sh syntax. <<< is from zsh in 1991. <(...) is from ksh circa 1985, though ksh initially didn't support redirecting from/to it. $(...) is also from ksh, but has been standardised by POSIX (as it replaces the ill-designed `...` from the Bourne shell), so it is portable across sh implementations these days. $(code) interprets the code in a subshell with the output redirected to a pipe while the parent, at the same time, reads that output from the other end of the pipe and stores it in memory. Then once that command finishes, that output, stripped of the trailing newline characters (and with the NUL characters removed in bash), makes up the expansion of $(...). If that $(...) is not quoted and is in list context, it is subject to split+glob (split only in zsh). After <<<, it's not a list context, but still, older versions of bash would do the split part (not glob) and then join the parts with spaces. So if using bash, you'd likely want to also quote $(...) when used as the target of <<<. cmd <<< word in zsh and older versions of bash causes the shell to store word followed by a newline character into a temporary file, which is then made the stdin of the process that will execute cmd, with that tempfile deleted before cmd is executed. That's the same as what happens with << EOF from the Bourne shell from the 70s. Effectively, it is exactly the same as:

cmd << EOF
word
EOF

In 5.1, bash switched from using a temporary file to using a pipe as long as the word can fit whole in the pipe buffer (and falls back to using a tempfile if not, to avoid deadlocks) and makes cmd's stdin the reading end of the pipe, which the shell has seeded beforehand with the word.
So cmd1 <<< "$(cmd2)" involves one or two pipes, store the whole output of cmd2 in memory, storing it again in either another pipe or a tempfile and mangles the NULs and newlines. cmd1 < <(cmd2) is functionality equivalent to cmd2 | cmd1. cmd2's output is connected to the writing end of a pipe. Then <(...) expands to a path that identifies the other end, < that-path gets you a file descriptor to that other end. So cmd2 talks directly to cmd1 without the shell doing anything with the data. You see this kind of construct in the bash shell specifically because in bash, contrary to AT&T ksh or zsh, in: cmd2 | cmd1 cmd1 is run in a subshell¹, so if cmd1 is read for instance, read will only populate variables of that subshell. So here, you would want: IFS=: read -r user x1 uid gid x2 home shell rest_if_any_ignored < <( grep :root: /etc/passwd) The head is superfluous as with -r, read will only read one line anyway². I've added a rest_if_any_ignored for future proofing in case in the future a new field is added to /etc/passwd, causing $shell to contain /bin/sh:that-field otherwise. Portably (in sh), you can't do: grep :root: /etc/passwd | IFS=: read -r user x1 uid gid x2 home shell rest_if_any_ignored as POSIX leaves it unspecified whether read runs in a subshell (like in bash/dash...) or not (like zsh/ksh). You can however do: IFS=: read -r user x1 uid gid x2 home shell rest_if_any_ignored << EOF $(grep :root: /etc/passwd | head -n1) EOF (here restoring the head to avoid the whole of grep's output to be stored in memory and in the tempfile/pipe). Which is standard even if not as efficient (though as indicated by @muru, the difference for such a small input is likely negligible compared to the cost of running an external utility in a forked process). Performance, if that mattered here, could be improved by using builtin features of the shell to do grep's job. 
However, especially in bash, you'd only do that for very small input, as a shell is not designed for this kind of task and is going to be a lot worse at it than grep.

while IFS=: read <&3 -r user x1 uid gid name home shell rest_if_any_ignored
do
  if [ "$name" = root ]; then
    do-something-with "$user" "$home"...
    break
  fi
done 3< /etc/passwd

¹ except when the lastpipe option in bash is set and the shell is non-interactive, as in scripts
² see also the -m1 or --max-count=1 option of the GNU implementation of grep, which would tell grep itself to stop searching after the first match. Or the portable equivalent: sed '/:root:/!d;q'
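The subshell point from ¹ is easy to see directly. A small demonstration (wrapped in an explicit bash invocation, since < <(...) is bash syntax and would be a parse error in plain sh):

```shell
# Demonstration: in bash, read after a pipe runs in a subshell, so its
# variables vanish; with < <(...) it runs in the current shell and they stick.
bash <<'DEMO'
line='root:x:0:0:root:/root:/bin/bash'
printf '%s\n' "$line" | IFS=: read -r user rest
echo "after pipe:     user='$user'"    # empty: read ran in a subshell
IFS=: read -r user rest < <(printf '%s\n' "$line")
echo "after < <(...): user='$user'"    # root: read ran in the current shell
DEMO
```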
What is more efficient or recommended for reading output of a command into variables in Bash?
1,536,525,681,000
Given this input: # Lines starting with # stay the same # Empty lines stay the same # only lines with comments should change ls  # show all major directories               # and other things cd      # The cd command - change directory               # will allow the user to change between file directories touch             # The touch command, the make file command                 # allows users to make files using the Linux CLI #  example, cd ~ bar foo baz # foo foo foo I need to keep lines beginning with # and lines not containing any comments as is, but align all other comments on the same column. Desired Output: # Lines starting with # stay the same # Empty lines stay the same # Only lines with # in middle should change and be aligned ls              # show all major directories                 # and other things cd              # The cd command - change directory                   # will allow the user to change between file directories touch           # The touch command, the make file command                 # allows users to make files using the Linux CLI #  exmaple, cd ~ bar foo baz     # foo foo foo Here what I have so far: # Building an array out of input while IFS=$'\n' read -r; do lines+=("$REPLY") done # Looping through array and selecting elemnts that need change for i in "${lines[@]}" do if [[ ${i:0:1} == ';' || $i != *";"* ]]; then echo "DOESNT CHANGE: #### $i" else echo "HAS TO CHANGE: #### $i" array+=( "${i%%";"*}" ); array2+=("${i##";"}") fi done # Trying to find the longest line to decide how much space I need to add for each element max = ${array[0]} for n in "${array[@]}" ; do ((${#n} > max)) && max=${#n} echo "Length:" ${#n} ${n} done #Longest line echo $max # Loop for populating array for j in "${!array2[@]}" ; do echo "${array2[j]} " | sed -e "s/;/$(echo "-%20s ;") /g" done I feel like I'm doing too much. I think there should be an easier way to tackle this problem.
If all your commands and arguments do not contain #, and one other character (say the ASCII character given by byte 1), you can insert that other character as an extra separator and use column to align the comments (see this answer). So, something like: $ sed $'s/#/\001#/' input-file | column -ets $'\001' # Lines starting with # stay the same # Empty lines stay the same # only lines with comments should change ls # show all major directories # and other things cd # The cd command - change directory # will allow the user to change between file directories touch # The touch command, the make file command # allows users to make files using the Linux CLI # example, cd ~ bar foo baz # foo foo foo If your column doesn't support -e to avoid eliminating empty lines, you could add something to empty lines (for example, a space, or the separator character used above): $ sed $'s/#/\001#/;s/^$/\001/' input-file | column -ts $'\001' # Lines starting with # stay the same # Empty lines stay the same # only lines with comments should change ls # show all major directories # and other things cd # The cd command - change directory # will allow the user to change between file directories touch # The touch command, the make file command # allows users to make files using the Linux CLI # example, cd ~ bar foo baz # foo foo foo
Complex text alignment in bash
1,536,525,681,000
My data looks like:

$ cat input
1212103122
1233321212
0000022221

I want the output to look like:

$ cat output
1 2 1 2 1 0 3 1 2 2
1 2 3 3 3 2 1 2 1 2
0 0 0 0 0 2 2 2 2 1

I tried: sed -i 's// /g' input > output but it does not work. Any suggestions?
Here you are:

sed 's/\(.\{1\}\)/\1 /g' input > output

And if you want to save the changes in-place:

sed -i 's/\(.\{1\}\)/\1 /g' input

How it works: s/\(.\{1\}\)/\1 /g captures each single character and replaces it with itself followed by a space. For instance, if you wanted an output file like:

12 12 10 31 22
12 33 32 12 12
00 00 02 22 21

you could edit my answer to:

sed -i 's/\(.\{2\}\)/\1 /g'

so it will add a space after every 2 characters. In addition, /\1 / is the same as /& /, and will add one space; to add three, use /\1   / or /&   /. You have many more options to use; sed is a super-powerful tool. Also, yes, as @Law29 mentioned, this will leave a space at the end of each line. To remove those trailing spaces while still adding the others:

sed 's/./& /g; s/ $//'

I hope this could help.
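A quick run on the question's sample data, using the trailing-space-trimming variant:

```shell
# Demonstration: space out each digit and trim the trailing blank in one pass.
printf '%s\n' 1212103122 1233321212 0000022221 |
    sed 's/./& /g; s/ $//'
```

This prints exactly the desired output, e.g. the first line becomes "1 2 1 2 1 0 3 1 2 2".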
how to insert space between individual digits in a file?
1,536,525,681,000
I would like to know if a particular folder is present or not. I used the following command find /mnt/md0/ -maxdepth 1 -name 'dcn'||'DCN' I want to know if folder name is DCN or dcn . How would I do this ?
You're looking for the option -iname, which stands for "ignore case" on GNU find, along with the option -type d for selecting only directories:

find /mnt/md0/ -maxdepth 1 -type d -iname dcn

(This will match any case: dcn, DcN, DCn, ...) For a more detailed explanation of find switches, consult explainshell.com's explanation of find.

Edit 1: As stated in a comment by Olivier Dulac, with a non-GNU find or an old find version you could use:

find /mnt/md0 -type d -maxdepth 1 -print | grep -i '/dcn$'

See this answer for real compatibility with non-GNU and old find versions.
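A disposable demonstration of the case-insensitive match (GNU find assumed; the directory names are invented for illustration):

```shell
# Create a scratch tree with a mixed-case directory, then match it
cd "$(mktemp -d)"
mkdir DCN other
find . -maxdepth 1 -type d -iname dcn
# prints: ./DCN
```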
Using find command to find folder ignoring case
1,536,525,681,000
I've been looking for an all around Linux Programmers manual but there isn't one... So that leads me to ask if the Unix Programmers Manual is relevant for Linux? The manual is here: http://cm.bell-labs.com/7thEdMan/v7vol1.pdf
The Unix Programmers Manual you linked to is probably mostly relevant for Linux also. However, that manual was published in 1979. Things have changed since then in all descendants of the original Unix.
Is the Unix Programmers Manual relevant for Linux?
1,536,525,681,000
Possible Duplicate: How do I delete a file whose name begins with "-" (hyphen a.k.a. dash or minus)?

This is an awkward one. I have received some files from a Windows machine which have been named things like "----index.html". When I try to grep hello * in a directory containing these files I get grep errors, and when I try to mv ----index.html index.html there are similar errors:

mv: unrecognized option '----index.html'
Try `mv --help' for more information.

Can anyone shed any light on this? Thanks
mv -- ----index.html index.html

grep hello -- *
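Because -- only ends option parsing, a path prefix works just as well; a small sketch in a scratch directory:

```shell
cd "$(mktemp -d)"
touch -- ----index.html      # recreate the awkward name safely
mv ./----index.html index.html   # ./ means the name no longer starts with a dash
ls
# prints: index.html
```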
Linux Rename File Beginning with "--" [duplicate]
1,536,525,681,000
I need to find out what kind of script runs fsck during the boot on CentOS 7? I know that all scenarios are located in /etc/rc.d directory. But I haven't any idea about where is this script is located.
I know that all scenarios are located in /etc/rc.d directory. What you know is wrong. Welcome to CentOS 7. The world has changed. In particular, your base of Red Hat Enterprise Linux 7 has changed. You are using a systemd Linux operating system. A lot of the received wisdom about Linux is not true for such systems. fsck is not run by any script at all on systemd Linux operating systems. The native format for systemd is the unit, which can be amongst other things a service unit or a mount unit. systemd's service management proper operates solely in terms of those, which it reads from one of nine directories where (system-wide) .service and .mount files can live. /etc/systemd/system, /run/systemd/system, /usr/local/lib/systemd/system, and /usr/lib/systemd/system are four of those directories. Your /etc/fstab database is converted into mount units by a program named systemd-fstab-generator. This program is listed in the /usr/lib/systemd/system-generators/ directory and is thus run automatically by systemd early in the bootstrap process at every boot, and again every time that systemd is instructed to re-load its configuration later on. This program is a generator, a type of ancillary utility whose job is to create unit files on the fly, in a tmpfs where three more of those nine directories (which are intended to be used only by generators) are located. systemd-fstab-generator generates .mount units that mount the volumes. These in their turn reference .service units that run fsck. Those fsck service units don't themselves exist as files in the filesystem (not even in a tmpfs), and are not the products of a generator. They are instantiated by systemd from a template service unit file, named [email protected], using the device name as the service unit instance name. The instantiation happens because of the Requires= and After= references to systemd-fsck@device.service from the generated .mount units. 
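As an illustration only (this is a hand-written sketch, not actual systemd-fstab-generator output; the device and mount point are invented), a generated mount unit carries references of roughly this shape:

```
# boot.mount -- sketch of a generator-produced mount unit
[Unit]
Requires=systemd-fsck@dev-sda1.service
After=systemd-fsck@dev-sda1.service

[Mount]
What=/dev/sda1
Where=/boot
Type=ext4
```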
This instantiated template is a service that runs a program named systemd-fsck, which sets up a client-server connection for displaying progress information and then in its turn runs fsck. systemd-fsck is a compiled C program, not an interpreted script. Further reading "New Features: System and Services". Red Hat Enterprise Linux 7 Release Notes. Red Hat. Stephen Wadeley (2014). "8. Managing Services with systemd" Red Hat Enterprise Linux 7 System Administrators' Guide. Red Hat. systemd-fstab-generator. systemd manual pages. Freedesktop.org. [email protected]. systemd manual pages. Freedesktop.org. systemd.mount. systemd manual pages. Freedesktop.org. https://unix.stackexchange.com/a/204075/5132 https://unix.stackexchange.com/a/196014/5132
Fsck script location
1,536,525,681,000
I've somehow managed to create a file with the name "--help". If I try to remove the file using "rm", funny stuff happens. Please help. Here's a printout of the dir listing:

[pavel@localhost test]$ ls -la
total 3640
drwxrwxr-x. 5 pavel pavel    4096 Jun 19 18:33 .
drwxrwxr-x. 6 pavel pavel    4096 Jun  9 12:23 ..
-rw-rw-r--. 1 pavel pavel 1070592 Jun 12 09:40 --help
Either

rm ./--help

or

rm -- '--help'

See Utility Syntax Guideline 10 in the POSIX.1-2008 specification for a description of the end-of-options indicator, --.
How can I remove a file called "--help" with bash command line? [duplicate]
1,536,525,681,000
I'm running Debian Squeeze 6.0.5. Does the use of swap memory make my computer run slower? If so, how can I reduce the size of the swap memory after the system is already installed?
One doesn't always want to reduce it, but often to increase its lazy usage instead — the more clean pages are already in swap, the better: it means they can easily be evicted from RAM when free RAM is needed. The Linux VM, though, has some weird behavior regarding swapping — intensive disk I/O (like a huge file cp) can make your system swap heavily when you don't want it to. It can be mitigated to some degree by decreasing vm.swappiness and increasing vfs_cache_pressure, although the effect of such countermeasures doesn't always meet expectations. I think it also makes sense to mention zswap here — for some workloads it can be useful.
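For the tunables mentioned above, a persistent setting would go in a sysctl drop-in; the values below are purely illustrative, not recommendations:

```
# /etc/sysctl.d/99-swap.conf (illustrative values; tune for your workload)
vm.swappiness = 10
vm.vfs_cache_pressure = 200
```

After editing, running sysctl --system as root reloads the drop-in files.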
How to reduce size of swap after a system is already installed?
1,536,525,681,000
On my Archlinux, /dev/pts is mounted by the devpts, so who created the /dev/pts/ptmx device node? What's the purpose of this node? it's the same (Major=5 Minor=2) device node same as /dev/ptmx/, but with different access mode, for what?
The old AT&T System 5 mechanism for pseudo-terminal slave devices was that they were ordinary persistent character device nodes under /dev. There was a multiplexor master device at /dev/ptmx.

The old 4.3BSD mechanism for pseudo-terminal devices had parallel pairs of ordinary persistent master and slave device nodes under /dev. These were special device nodes on an ordinary disc filesystem.

On OpenBSD, some of this is still true nowadays. /dev is still a disc volume, and slave devices are still real on-disc nodes. They are created on demand, however. The kernel internally issues the relevant calls to create new device nodes there when issued a PTMGET I/O control on the /dev/ptm device.

On FreeBSD, none of this is still true. There isn't even a multiplexor device any more. /dev is not a disc volume at all. It is a devfs filesystem. Slave devices appear in the devfs filesystem under its pts/ directory in response to the posix_openpt() system call, which is an outright system call, not a wrapped ioctl() on an open file descriptor to some "multiplexor" device.

For a while on Linux, pseudo-terminal slave devices were persistent device nodes. What you are looking at is its "new" devpts filesystem (where "new" means introduced quite a few years ago, now) in conjunction with devtmpfs. This almost permits the same way of doing things as on FreeBSD with devfs. But there are some differences. In particular, there is still a "multiplexor" device. In the older "new" devpts system, this was a ptmx device in a different devtmpfs filesystem, with the devpts filesystem containing only the automatically created/destroyed slave device files. Conventionally the setup was /dev/ptmx and an accompanying devpts mount at /dev/pts. But Linux people wanted to have multiple wholly independent instances of the devpts filesystem, for containers and the like, and it turned out to be quite hard synchronizing the (correct) two filesystems when there were many devtmpfs and devpts filesystems.
So in the newer "new" devpts system all of the devices, multiplexor and slave, are in the one filesystem. For backwards compatibility, the default was for the new ptmx node to be inaccessible unless one set a new ptmxmode mount option. In backwards compatibility mode one could still run things the older single-instance way, and one did by default unless one used an explicit newinstance option when mounting a devpts. In the even newer still "new" devpts (that has been around since 2016) the per-instance multiplexor device in the devpts filesystem is now the primary multiplexor, and the ptmx in the devtmpfs is a shim provided by the kernel that tries to mimic a symbolic link, a bind mount, or a plain old actual symbolic link to pts/ptmx. The multiple-instance way is now the only way.

Further reading

https://unix.stackexchange.com/a/470853/5132
What would be the best way to work around this glibc problem?
https://unix.stackexchange.com/a/214685/5132
Documentation/filesystems/devpts.txt. Linux kernel.
Daniel Berrange (2009-05-20). /dev/pts must use the 'newinstance' mount flag to avoid security problem with containers. Red Hat bug #501718.
Eric W. Biederman (2015-12-11). devpts: Sensible /dev/ptmx & force newinstance. Linux kernel mailing list.
Eric W. Biederman (2016-04-08). devpts: Teach /dev/ptmx to find the associated devpts via path lookup. Linux kernel mailing list.
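The two nodes from the question can be inspected directly; a small sketch (GNU stat assumed, and exact output varies by distribution and kernel version):

```shell
# Both paths should resolve to the character device with major 5, minor 2
for f in /dev/ptmx /dev/pts/ptmx; do
  if [ -e "$f" ]; then
    stat -L -c '%n: %F, major:minor=%t:%T' "$f"
  fi
done
```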
Where does `/dev/pts/ptmx` come from? [duplicate]
1,536,525,681,000
I have a Dir1 with multiple subdir and files inside it. I intend to copy the Dir1 to Dir2 so that all the files in it will be just empty files but with same file name as Dir1. Then I intend to push the Dir2 to github to explain example data-structure and filenames to users. Is there a command to copy files in a way just to destination files are empty but with same filename?
Or, more complicatedly but with a single filesystem pass (note that ~ inside double quotes is not expanded, so spell it $HOME):

find . \( -type d -exec mkdir -p "$HOME/elsewhere/{}" \; \
       -o -type f -exec touch "$HOME/elsewhere/{}" \; \)

The complexity here is that of Boolean logic (which may be of some benefit to learn) and precedence (also good to know), and how find implements these concepts with an implicit AND between the -type test and the subsequent action, with OR making an appearance as -o.
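A disposable end-to-end demonstration (GNU find assumed, since expanding {} inside a larger argument is a GNU extension; the directory names are invented):

```shell
cd "$(mktemp -d)"
mkdir -p src/sub
echo data > src/sub/file.txt
cd src
# Mirror the tree into ../mirror: directories recreated, files touched empty
find . \( -type d -exec mkdir -p "../mirror/{}" \; \
       -o -type f -exec touch "../mirror/{}" \; \)
cd ..
ls -s mirror/sub/file.txt    # same name and path, zero size
```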
create empty files with same directory structure as reference directory
1,536,525,681,000
I may be being daft here, but if I want to do something while a process is running, why do I do:

while kill -0 $PID
do
    #...
done

instead of

until kill -0 $PID
do
    #...
done

? kill -0 exits with code 1 if the process is not found, and 0 if the process is found:

$ kill -0 123444
-bash: kill: (123444) - No such process
$ echo $?
1
$ screen -ls | grep rofl
    28043.rofl      (02/19/2015 02:27:56 PM)        (Detached)
$ kill -0 28043
$ echo $?
0

So if the process is running, wouldn't the while loop boil down to:

while 0
do
    #...
done

which would then never execute?
When dealing with return codes, "0" is a success and non-zero is failure. The syntax of a while loop is:

while COMMANDS; do ...; done

The while statement checks the return code of the last command in the provided list of commands. In your last example of while 0, this will attempt to execute a command called "0" and check its return code. A literal 0 is not special to bash outside of an arithmetic context. Inside that context, 0 is considered false. For example:

while (( 0 )); do
    ...  # never executes
done

This case is special, as the keyword (( is treated as a command which returns non-zero because its result is 0.
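To see the two keywords side by side, here is a minimal sketch using true and false in place of kill -0:

```shell
# while runs its body when the test command exits 0 (success) ...
while true;  do echo "while true ran";  break; done
# ... until runs its body when the test command exits non-zero (failure)
until false; do echo "until false ran"; break; done
```

So while kill -0 $PID keeps looping exactly as long as kill -0 keeps succeeding, i.e. as long as the process exists.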
Why is it "while kill -0 $PID" and not "until kill -0 $PID"?
1,536,525,681,000
At work, I spend a lot of time manipulating files on a networked computer that's running SME Server (but that's set up for Windows filesharing, if that somehow makes a difference). I have been wondering how to cd to the network drive's root from bash so that I don't have to keep calling up Finder / nautilus every time I want to copy a file. Any suggestions? In Ubuntu, I connect to the drive as a Windows share via Places - Connect to Server. In OSX, well, I logged in to the drive once, and it just shows up in Finder.
For OS X look for your share name under /Volumes (it may have a "-digit" at the end if you have many mounts with the same name). The same goes for mounted CD/DVDs and disk images.
How to cd to a Windows file share?
1,536,525,681,000
I have a Linux machine that only has very minimal cmds available on it; for example, /bin looks like this:

/bin# ls
ash chattr clockdiff dd dumpkmap fdflush gunzip linux32 ls mktemp mt pidof printenv rmdir setserial su tracepath umount watch
busybox chgrp cp df echo fgrep gzip linux64 lsattr more mv ping ps run-parts sh sync tracepath6 uname zcat
cat chmod cpio dmesg egrep getopt hostname ln mkdir mount netstat ping6 pwd sed sleep tar traceroute6 usleep
catv chown date dnsdomainname false grep kill login mknod mountpoint nice pipe_progress rm setarch stty touch true vi

Also, it doesn't have a /usr/share/zoneinfo dir. So how can I set a time zone on it with these cmds? I also need to sync the time zone and date on it remotely from another host. I tried the TZ env variable, but it doesn't work, e.g.:

root@xxx:/bin# date
Wed Aug 31 12:02:41 UTC 2023
root@xxx:/bin# TZ=America/New_York date
Thu Aug 31 12:03:50 America 2023
root@xxx:/bin# date
Thu Aug 31 12:04:58 UTC 2023

Notice that the time doesn't change when TZ is set.
If you can transfer files to it, then you can copy the required timezone definition from your local system's /usr/share/zoneinfo to /etc/localtime on the limited system. Testing this out in an Alpine container:

% docker run --rm -it alpine date
Fri Sep  1 07:15:13 UTC 2023
% docker run --rm -it -v /usr/share/zoneinfo/Asia/Calcutta:/etc/localtime alpine date
Fri Sep  1 12:45:17 IST 2023

Your list of commands includes ash and busybox. IIRC the common versions of the Almquist shell on Linux are dash and busybox sh, and it's possible that, like with Alpine, most of your commands are just symlinks to busybox. By using Alpine to test, I think it'll be representative of your system. If you have root SSH access, but it's constrained somehow to disallow SFTP and scp, you can try:

ssh limited-server 'cat > /etc/localtime' < /usr/share/zoneinfo/Some/Place
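If copying a zoneinfo file is impossible, the TZ variable should still work without any database at all when given in POSIX form, which encodes the offset and DST rules in the string itself (the rule below describes US Eastern time, as an example):

```shell
# No /usr/share/zoneinfo needed: everything is in the TZ string
TZ='EST5EDT,M3.2.0,M11.1.0' date
```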
How can I set a time zone on a Linux machine that has only very minimal cmds available?
1,536,525,681,000
I set up zram and made extensive tests inside my Linux machines to measure that it really helps in my scenario. However, I'm very confused that zram seems to use up memory of the whole uncompressed data size. When I type in "zramctl" I see this:

NAME       ALGORITHM DISKSIZE   DATA  COMPR  TOTAL STREAMS MOUNTPOINT
/dev/zram0                  2G 853,6M 355,1M 367,1M       4 [SWAP]

According to the help command of zramctl, DATA is the uncompressed size and TOTAL the compressed memory including metadata. Yet, when I type in swapon -s, I see this output:

Filename      Type       size     used    Priority
/dev/sda2     partition  1463292  0       4
/dev/zram0    partition  2024224  906240  5

906240 is the used memory in kilobytes, which translates to the 853,6M DATA value of zramctl. Which leaves the impression that the compressed zram device needs more memory than it saves. Once DATA is full, it actually starts swapping to the disk drive, so it must be indeed full. Why does zram seemingly occupy memory of the original data size? Why is it not the size of COMPR or TOTAL? It seems there is no source about that on the Internet yet, because I haven't found any information about this. Thank you!
So after more testing and observations, I made a few very interesting discoveries. DATA is indeed the uncompressed amount of memory that takes up the swap space. But at first glance it's very deceiving and confusing. When you set up zram and use it as swap, disksize does not stand for the total amount of memory that zram will consume for compressed data. Instead, it stands for the total amount of uncompressed data that zram will compress. So you could create a zram device with a size of 2 GB, but in practice zram will stop after the total compressed memory is around 500 - 1000 MB (depends on your scenario of course). Commands like swapon -s or Gnome's System Monitor show the uncompressed data size for the zram device, just like the DATA column of zramctl. Thankfully, in reality, zram does not actually use up the reported amount of memory. But this means that in practice, you actually have to create a zram disk size that equals the RAM you have + 50% to take real advantage of it, and not a disk size that equals half of the RAM size, like zram-config incorrectly does. But read on to find out more.

Here is the deeper background: Why am I so sure? Because I tested this with zswap as well. I have compiled my own kernel where I lowered the file_prio value inside mm/vmscan.c compared to anon_prio (in newer Linux 5.6 kernels the variables have been renamed to fp and ap respectively). The reduced file_prio value will make the kernel not discard valuable cache memory as much anymore. By default, even with vm.swappiness at 100, the kernel discards an insane amount of cached RAM data, both in standby memory and for active programs. The performance hit with the default configuration is extreme in memory-pressure situations when you actually want to make use of zram, because then you absolutely want the kernel to swap rarely used and highly compressible memory much more often. With more free memory, you have more space for cached data.
Then cached data won't be thrown away at a ridiculously high rate, and Linux won't have to reread certain purged program file cache repeatedly. When testing this on classic hard drives, you can easily verify the performance impact. Back to my zswap test: with my custom kernel, zswap got plenty of memory to compress once I hit the 50 - 70% memory mark. Gnome's System Monitor immediately shows a high swap data usage for the swap partition, but oddly enough, there was no hard drive paging at all! This is of course by design of zswap, in that it swaps least recently used memory on its own. But the interesting part is that the system reports such high swap usage for the swap partition anyway, so ultimately you are limited by the size of your swap partition or swap file. Even though all memory is compressed, you have to have at least the swap size of the uncompressed data. Therefore, even if in practice 4 GB of swapped memory in zswap uses up only 1 - 2 GB, your swap needs to have the size of the uncompressed data. The same goes for zram, but here the memory is at least not actually reserved. Well, unless you use zswap with a dynamically growing swap file, of course. As for zram again, there is also a very interesting detail that backs up the observation I made:

There is little point creating a zram of greater than twice the size of memory since we expect a 2:1 compression ratio. Note that zram uses about 0.1% of the size of the disk when not in use so a huge zram is wasteful.

This means that to make effective use of zram, you have to create a disk size that at least equals your installed RAM. Due to the high compression ratios, I would suggest to use your GB of RAM + 50%, but the quote above implies that it does not make much sense to go above +100%. Additionally, since we have to specify a disk size that matches the uncompressed data size, it is much harder to control and predict the actual real memory usage.
From the helpful official source above, we can limit the actual memory usage (which equals the TOTAL value of zramctl) with this command: echo 1G > /sys/block/zram0/mem_limit. But in reality, doing this will lock up the machine, because the system still tries to swap to it while zram imposes a limit, and the machine locks up with super high CPU usage. This behavior can't be intentional at all, which strengthens my impression that something about the whole story is very wonky.

To sum this up:

The disksize you set during zram device creation is basically a virtual disk size; it does not stand for the real RAM usage. You have to predict the actual RAM usage (compression ratio) for your scenario, or make sure that you never create a zram disk size that is too large. Your current RAM size + 50% should be nearly always fine in practice.

The default configuration of the Linux kernel is unfortunately totally unsuited for zram compression, even when setting vm.swappiness to 100. You need to build your own custom kernel to actually make real use of this handy feature, since Linux purges way too many file caches instead of freeing up memory by swapping the most compressible data much earlier. Ironically, a helpful patch to fix this situation was never accepted.

Using the zram limit echo 1G > /sys/block/zram0/mem_limit will lock up your system once the compressed data reaches that threshold. You are better off limiting zram usage with a well-predicted zram disksize, as it seems there is no other alternative for a limit.
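Following the RAM + 50% rule of thumb above, a suitable disksize can be derived from /proc/meminfo; this is a sketch (the 1.5x multiplier is my suggestion from the text, not an official recommendation):

```shell
# Suggest a zram disksize of 1.5 x installed RAM
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
zram_kb=$(( ram_kb * 3 / 2 ))
echo "suggested disksize: ${zram_kb}K"
# applied (as root) with e.g.: echo "${zram_kb}K" > /sys/block/zram0/disksize
```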
Why does zram occupy much more memory compared to its "compressed" value?
1,536,525,681,000
I'm looking for a "meta-command" xyz such that:

(echo "foo"; echo "bar") | xyz rev

will return:

foo oof
bar rab

I'd like to avoid temporary files, i.e. I'm looking for a solution neater than:

tempfile=$(mktemp)
cat > $tempfile
cat $tempfile | rev | paste $tempfile -

(And of course I want a general solution, for any command, not just rev; you can assume that the command outputs exactly one line for each input line.) A zsh solution would also be acceptable.
There will be a lot of problems in most cases due to the way that stdio buffering works. A workaround for Linux might be to use the stdbuf program and run the command with coproc, so you can explicitly control the interleaving of the output. The following assumes that the command will output one line after each line of input.

#!/bin/bash
coproc stdbuf -i0 -o0 "$@"
IFS=
while read -r in ; do
    printf "%s " "$in"
    printf "%s\n" "$in" >&${COPROC[1]}
    read -r out <&${COPROC[0]}
    printf "%s\n" "$out"
done

If a more general solution is needed (the OP only requires that each line of input eventually produce one line of output, not necessarily immediately), then a more complicated approach is required. Create an event loop using read -t 0 to try to read from stdin and from the co-process; if you have the same "line number" for both then output, otherwise store the line. To avoid using 100% of the CPU when neither stream was ready in a given round of the event loop, introduce a small delay before running it again. There are additional complications if the process outputs partial lines; these need to be buffered. If this more general solution is needed, I would write it using expect, as it already has good support for pattern matching on multiple input streams. However, that is not a bash/zsh solution.
A command to output each line forward then backwards
1,536,525,681,000
I'm not sure how this happened, but I have a number of files that have become symlinked to themselves. It seems likely that there won't be any way to restore the files, but hopefully there is. Here is what ls -l says:

lrwxrwxrwx 1 bob users 50 Sep  9 21:45 background.png -> /path/to/background.png

I tried unlinking one of the files, but unfortunately the file disappeared. I've also tried readlink. Readlink says that the path to the file is /path/to/background.png. Like I said, I really don't know how this happened. I am inheriting all these files from a previous admin. Is there any recourse?
If a file is symlinked to itself then there's no data present, and any attempt to access it will result in a loop and ultimately an error, e.g.:

$ ls -l myfile
lrwxrwxrwx 1 sweh sweh 19 Sep  9 22:38 myfile -> /path/to/here/myfile
$ cat myfile
cat: myfile: Too many levels of symbolic links

Since there's no data, deleting these symlinks won't lose any data, because there is no data to preserve. If you don't get the Too many levels of symbolic links error when you try to cat the file, then your file is not a link to itself.
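The situation can be reproduced safely in a scratch directory to convince yourself that removal loses nothing:

```shell
cd "$(mktemp -d)"
ln -s loop loop                        # a symlink whose target is its own name
cat loop || echo "no data behind it"   # fails with ELOOP
rm loop                                # nothing is lost; the link held no data
```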
How to unlink file symlinked to itself without destroying
1,536,525,681,000
I have one file called abc.csv, and its format is:

aa,size:12
bb,size:13
cc,size:3

I want to delete size:, and the file will become like this:

aa,12
bb,13
cc,3

Can anyone tell me how to use a shell script to perform this task?
You can use sed easily for this task:

sed -i -- 's/size://' abc.csv

s/size:// is a simple substitution that says replace size: with nothing.
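End to end, with the sample data from the question (GNU sed's -i shown; on BSD/macOS sed, use -i '' instead):

```shell
cd "$(mktemp -d)"
printf 'aa,size:12\nbb,size:13\ncc,size:3\n' > abc.csv
sed -i 's/size://' abc.csv
cat abc.csv
# prints:
# aa,12
# bb,13
# cc,3
```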
Linux shell script delete specific contents in file
1,536,525,681,000
How can I work out the link count of an inode number? If I know the inode number is, say, 592255 - what workings out can I do to find out the link count? I know directories have a link count of at least 2, but don't know how to work it out.
Finding the link count using the name

You can use the stat command to get a link count on a given file/directory:

$ stat lib/
  File: ‘lib/’
  Size: 4096          Blocks: 8          IO Block: 4096   directory
Device: fd02h/64770d    Inode: 11666186    Links: 3
Access: (0755/drwxr-xr-x)  Uid: ( 1000/ saml)   Gid: ( 1000/ saml)
Context: unconfined_u:object_r:user_home_t:s0
Access: 2014-03-21 18:16:10.521963381 -0400
Modify: 2014-01-13 17:16:49.438408973 -0500
Change: 2014-01-14 17:57:46.636255446 -0500
 Birth: -

Taking a look at the man page for stat:

%h     number of hard links
%i     inode number

So you can get just this value directly using stat's --printf or --format output capabilities:

$ stat --printf="%h\n" lib/
3
$ stat --format="%h" lib/
3
$ stat -c "%h" lib/
3

Finding the link count using the inode

If on the other hand you know the inode number only, you can work backwards like so:

$ ls -id lib
11666186 lib
$ find -inum 11666186 -exec stat -c "%h" {} +
3

References

Hard links and Unix file system nodes (inodes)
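A self-contained check of the directory rule mentioned in the question (a directory's count is 2, from its own name and its ., plus one per subdirectory's ..); GNU stat's -c is assumed:

```shell
cd "$(mktemp -d)"
mkdir -p lib/sub
stat -c 'links=%h inode=%i' lib                          # links=3 for one subdir
find . -inum "$(stat -c %i lib)" -exec stat -c %h {} +   # same count, via inode
```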
Work out the link count of inode number?
1,536,525,681,000
I'm running a proprietary program which generates text output, kind of like a logger:

logger

I'd like to both view this output on the terminal to watch the server breathe, and send it to a text file. So normally I'd do:

logger | tee /tmp/log.txt

But these log files can become large enough to overwhelm my VM's disk space. Instead of sending the text to an uncompressed text file, I'd like it to be compressed immediately. So I might do logger in one terminal and

logger | gzip > /tmp/log.gz

in another terminal. But this doesn't feel very *nixy. Is there a way I can accomplish this in one command, similar to using tee? Here is what I'm going for, which obviously won't work, but maybe you'll get the idea:

logger | tee gzip > /tmp/log.txt
With ksh93, zsh or bash:

logger | tee >(gzip > /tmp/log.txt.gz)

That uses the ksh feature called process substitution. >(gzip > /tmp/log.txt.gz) is substituted with the path of a file (typically something like /dev/fd/something, but that could also be a temporary named pipe) that refers to the writing end of a pipe. At the other (reading) end of that pipe, the shell connects the standard input of a new process (run in the background) that executes the gzip command. So, when tee writes to that file, it's actually feeding the data to gzip.

On systems with /dev/fd/n, you can do it manually (with any Bourne-like shell but ksh93) like:

{ logger | tee -a /dev/fd/3 | gzip > /tmp/log.txt.gz; } 3>&1

(though in that case /dev/fd/3 refers to the original stdout, which we've made available on file descriptor 3 with 3>&1, not to the pipe to gzip, which here is just connected to tee's stdout)

With zsh:

logger >&1 > >(gzip > /tmp/log.txt.gz)

That uses zsh's multios feature, whereby zsh implements a sort of tee internally to redirect the same output to more than one file (here the original stdout (>&1) and the pipe to gzip, using process substitution again). logger's stdout will actually be the writing end of a pipe. At the other end of the pipe is a shell process that reads it and distributes it to both outputs like tee would.
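The /dev/fd variant can be exercised in any POSIX shell on Linux; the file path below is just an example:

```shell
# tee copies the stream to fd 3 (the original stdout) while gzip compresses it
{ printf 'foo\nbar\n' | tee -a /dev/fd/3 | gzip > /tmp/log.txt.gz; } 3>&1
zcat /tmp/log.txt.gz    # the same two lines, recovered from the archive
```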
Send stdin to console and compressed file
1,536,525,681,000
I have one IBM AIX server (serverA) which is connected to the SAN storage. I have created a volume group and also a file system (jfs2) and mounted it on directory /profit. After that I created an NFS share for that directory and started the NFS daemon. Over at another server, which is IBM AIX also (serverB), I created a mount point /profit and mounted the NFS share from serverA to serverB using the below command:

mount 192.168.10.1:/profit /profit

On serverB, I am able to access the directory and list the files in it. But the strange thing is, on serverA, the directory and files are under the oracle user ownership, but on serverB I see them as a different user. When I touch a file in that directory at serverB, on serverA I see it as another user id. Any clue how I can fix this? Below is the file listing from serverB:

$ ls -l
total 0
-rwxrwxrwx 1 root    system    0 Mar 16 15:00 haha
-rwxrwxrwx 1 radiusd radiusd   0 Mar 16 15:19 haha2
-rwxrwxrwx 1 radiusd radiusd   0 Mar 16 15:31 haha3
-rw-r--r-- 1 oracle  oinstall  0 Mar 17 2011  hahah3
drwxrwxrwx 2 radiusd radiusd 256 Mar 16 14:40 lost+found

On serverA it looks like below:

# ls -l /profit
total 0
-rwxrwxrwx 1 root   system    0 Mar 16 15:00 haha
-rwxrwxrwx 1 oracle dba       0 Mar 16 15:19 haha2
-rwxrwxrwx 1 oracle dba       0 Mar 16 15:31 haha3
-rw-r--r-- 1 10     sshd      0 Mar 17 16:01 hahah3
drwxrwxrwx 2 oracle dba     256 Mar 16 14:40 lost+found

Below is the /etc/exports file from serverA:

# more /etc/exports
/profit -vers=3,sec=sys:krb5p:krb5i:krb5:dh,rw

Thanks.
Remember that each of the NFS client systems will determine the username by looking up the numerical UID locally, using the local system's /etc/passwd or your centralized user database. The NFS server only stores the UID in numerical format and does not know about usernames. This is also true for group names vs. GIDs. In your case, serverA and serverB must have different usernames listed in /etc/passwd.

To test this, use ls -n to display user and group IDs numerically, rather than converting to a user or group name as in long (-l) output. If the ls -n option is not available on AIX, consult the manpage for this feature.

To see the username-to-uid mapping, do one of the following on both serverA and serverB:

grep $THEUSERID /etc/passwd

Or, since it's a good habit to use getent (it works with /etc/passwd and with directory services such as LDAP):

getent passwd $THEUSERID

The UIDs should be the same on both systems, but the usernames will be different.
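The diagnosis can be sketched like this on each host (UID 0 is used here so the example is self-contained; on the servers in the question you would look up the UID shown by ls -n, e.g. the 10 visible on serverA):

```shell
ls -n /etc/passwd    # numeric UID/GID, no name translation
getent passwd 0      # how THIS host maps UID 0 back to a name
```

If the same UID resolves to different names on the two hosts, that is exactly the mismatch seen across the NFS mount.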
Why is file ownership inconsistent between two systems mounting the same NFS share?
1,410,375,331,000
Our Linux based Thecus N12000 NAS recently experienced this message in its dmesg log. [2014-05-21 11:34:56] ------------[ cut here ]------------ [2014-05-21 11:34:56] WARNING: at net/ipv4/tcp_input.c:2966 tcp_ack+0xd88/0x1a1c() [2014-05-21 11:34:56] Hardware name: IRONLAKE & IBEX PEAK Chipset [2014-05-21 11:34:56] Modules linked in: nfsd lockd nfs_acl auth_rpcgss sunrpc iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ntfs ses enclosure usblp usb_storage usbhid xhci_hcd uhci_hcd ehci_hcd usbcore sg be2net tehuti igb ixgbe dca e1000e drm_kms_helper drm video backlight sata_sil24 mpt2sas ahci libahci ata_piix [2014-05-21 11:34:56] Pid: 1710, comm: smbd Not tainted 2.6.38 #1 [2014-05-21 11:34:56] Call Trace: [2014-05-21 11:34:56] [<ffffffff8103118e>] ? warn_slowpath_common+0x78/0x8c [2014-05-21 11:34:56] [<ffffffff81391339>] ? tcp_ack+0xd88/0x1a1c [2014-05-21 11:34:56] [<ffffffff81392ca5>] ? tcp_rcv_established+0x780/0x9d1 [2014-05-21 11:34:56] [<ffffffff81392d42>] ? tcp_rcv_established+0x81d/0x9d1 [2014-05-21 11:34:56] [<ffffffff8139a52d>] ? tcp_v4_do_rcv+0x1a1/0x377 [2014-05-21 11:34:56] [<ffffffff8139a52d>] ? tcp_v4_do_rcv+0x1a1/0x377 [2014-05-21 11:34:56] [<ffffffff81413149>] ? _raw_spin_lock_bh+0x9/0x1f [2014-05-21 11:34:56] [<ffffffff8135374c>] ? release_sock+0x19/0x103 [2014-05-21 11:34:56] [<ffffffff81413149>] ? _raw_spin_lock_bh+0x9/0x1f [2014-05-21 11:34:56] [<ffffffff813537cd>] ? release_sock+0x9a/0x103 [2014-05-21 11:34:56] [<ffffffff8138a89a>] ? tcp_recvmsg+0x48f/0x9f5 [2014-05-21 11:34:56] [<ffffffff8138c24d>] ? tcp_sendpage+0x595/0x5a7 [2014-05-21 11:34:56] [<ffffffff81350048>] ? sock_sendmsg+0xc3/0xe0 [2014-05-21 11:34:56] [<ffffffff813a5f60>] ? inet_recvmsg+0x64/0x75 [2014-05-21 11:34:56] [<ffffffff8134f84e>] ? sock_sendpage+0x36/0x3d [2014-05-21 11:34:56] [<ffffffff8134f7aa>] ? sock_aio_read+0x126/0x13a [2014-05-21 11:34:56] [<ffffffff810a0f4d>] ? do_sync_read+0xb1/0xea [2014-05-21 11:34:56] [<ffffffff810a1921>] ? 
vfs_read+0xbd/0x12d [2014-05-21 11:34:56] [<ffffffff810a1a47>] ? sys_read+0x45/0x6e [2014-05-21 11:34:56] [<ffffffff810027fb>] ? system_call_fastpath+0x16/0x1b [2014-05-21 11:34:56] ---[ end trace cdaf61db513385a1 ]--- In researching this error message I've only found the following info: if (WARN_ON(!tp->sacked_out && tp->fackets_out)) tp->fackets_out = 0; I also found this similar error on the oops.kernel.org site, WARNING: at net/ipv4/tcp_input.c:2966 tcp_ack+0xdbe/0x1f80. Is this just a non-issue warning that we can ignore, or is it symptomatic of something else that I should be concerned with? Isn't this an appliance? NOTE: Though this is a Linux appliance, of sorts, it's actually based on CentOS. I've brought binaries built on CentOS 5 onto the box from time to time and they've run without issues. Tools such as df for example. $ uname -a Linux tank 2.6.38 #1 SMP Fri Oct 26 14:35:05 CST 2012 x86_64 GNU/Linux References User's Manual for N12000 N12000 product pages
You're correct about the location of the WARN; this code is from the upstream kernel tag v2.6.38: net/ipv4/tcp_input.c 2953 static void tcp_fastretrans_alert(struct sock *sk, int pkts_acked, int flag) 2954 { ... 2964 if (WARN_ON(!tp->sacked_out && tp->fackets_out)) 2965 tp->fackets_out = 0; 2966 This is discussed here and fixed with commit: commit 5b35e1e6e9ca651e6b291c96d1106043c9af314a Author: Neal Cardwell <[email protected]> Date: Sat Jan 28 17:29:46 2012 +0000 tcp: fix tcp_trim_head() to adjust segment count with skb MSS The date puts its fix in kernel 3.3. This fix was not backported to Red Hat's EL5 source (I checked the 5.11 kernel 2.6.18-398), so if your NAS is based off CentOS 5 then this is not fixed. It's worth noting there was never a 2.6.38 released for EL5, so this is not a Red Hat or CentOS kernel. I assume your NAS vendor has taken a later upstream kernel, maybe applied some patches, and provided that kernel in the firmware image of your NAS. If you want to fix this, you'll probably need to get the source for kernel 3.3 or later, apply your NAS vendor's patches, and build your own kernel. It's probably worth checking if this is fixed in ELRepo's kernel-lt, which is 3.2.63-1.el5; that's very close to 3.3. If not, you could use ELRepo's .config file and make oldconfig on the new kernel source to answer a minimum of questions. That being said, the bug isn't a huge deal anyway. The WARN occurs because of an accounting error in TCP. If I understand the patch correctly, the functions which account for data transmitted using TCP Segmentation Offloading make some incorrect assumptions, resulting in a garbage number of segments being counted under some conditions. The WARN fixes this by returning one of the segment counts to 0. I think the worst that can happen is that a little more data than necessary is retransmitted when there is packet loss. You may be able to work around this by disabling TSO.
Check whether you are using TSO with: ethtool -k ethX If so, disable it with: ethtool -K ethX tso off If that works, and your networking is controlled by the regular CentOS initscripts (/etc/init.d/network and friends), then you can write /sbin/ifup-local to have the change apply every time the interface starts, like so: #!/bin/bash if [ "$1" == "ethX" ]; then /sbin/ethtool -K "$1" tso off fi Replace ethX with the name of your network interface.
Is this kernel warning a major problem that needs my attention?
1,410,375,331,000
I've built a kernel module from source and now would like to load the module at boot. The .ko file is in the build directory in my user folder, and I know it works because running insmod ./vizzini.ko from the appropriate place works fine. I made the directory vizzini in /lib/modules/2.6.32-5-amd64/kernel/drivers/ and copied the .ko file there. Then I added vizzini to the end of /etc/modules. However, when I run modprobe vizzini, the module is not recognised. Do I need to restart my computer (log out, log in again?) Can I use any name for the folder containing the .ko file? What permissions do I need? Is what I've done right so far? What else do I need to do? Permissions are currently -rw-r--r-- root root
Everything seems fine so far. You just need to run depmod - then modprobe should find your module.
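Put together, the whole flow looks roughly like the sketch below (directory and module names are taken from the question; the commented commands need to be run as root on the real system):

```shell
# Sketch of installing an out-of-tree module so modprobe can find it.
KVER="2.6.32-5-amd64"                    # or: KVER="$(uname -r)"
DEST="/lib/modules/${KVER}/kernel/drivers/vizzini"
echo "module will live in: ${DEST}"
# install -D -m 644 vizzini.ko "${DEST}/vizzini.ko"  # copy the module in place
# depmod -a "${KVER}"                                # rebuild modules.dep etc.
# modprobe vizzini                                   # now resolvable by name
```

The key point is depmod: modprobe only looks at the modules.dep index, not at the directory tree itself, so a freshly copied .ko is invisible until the index is rebuilt.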
Integrating custom kernel module into Debian
1,410,375,331,000
We got a new server at work and I installed the hard drive from the old system. Ubuntu 8.04 boots up just fine. The only problem is: $ifup eth0 device not found $lspci Ethernet Controller Intel Pro(100/1000) $dmesg | grep eth (nothing) Should I add some sort of default ethernet module using modprobe or is there another way? $cat /etc/network/interfaces auto lo iface lo inet loopback auto eth0 iface eth0 inet static address xxx $lsmod [gist](https://gist.github.com/1004662)
Try modprobe e1000 or modprobe e1000e.
ethernet not found- load module into linux kernel?
1,410,375,331,000
I'm having problems setting up X.org on Gentoo. At the moment I have kernel 2.6.37-gentoo-r4 installed and have built X against this kernel. I have nouveau drivers installed and they seem to be working correctly, from what I can see in the terminal. I have an NVIDIA GeForce 9500M GS and it should be supported by the driver since it uses the NV84 (G84) core. When I try to startx, this is what I get: ZVEZDA ~ # startx hostname: Unknown host xauth: file /root/.serverauth.4316 does not exist X.Org X Server 1.9.4 Release Date: 2011-02-04 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.37-gentoo-r4 x86_64 Gentoo Current Operating System: Linux ZVEZDA 2.6.37-gentoo-r4 #2 SMP PREEMPT Mon Apr 11 14:00:26 CEST 2011 x86_64 Kernel command line: root=/dev/sda3 nouveau.modeset=1 Build Date: 11 April 2011 02:02:40PM Current version of pixman: 0.20.2 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Mon Apr 11 14:13:09 2011 (==) Using system config directory "/usr/share/X11/xorg.conf.d" (EE) Failed to load module "vesa" (module does not exist, 0) (EE) Failed to load module "fbdev" (module does not exist, 0) resize called 1920 1200 (EE) Logitech USB Receiver: failed to initialize for relative axes. (EE) SynPS/2 Synaptics TouchPad no synaptics event device found (EE) Query no Synaptics: 6003C8 (EE) SynPS/2 Synaptics TouchPad Unable to query/initialize Synaptics hardware. 
(EE) PreInit failed for input device "SynPS/2 Synaptics TouchPad" which: no keychain in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin:/usr/x86_64-pc-linux-gnu/gcc-bin/4.5.2) /etc/X11/xinit/xinitrc: line 61: xterm: command not found /etc/X11/xinit/xinitrc: line 63: exec: xterm: not found /etc/X11/xinit/xinitrc: line 59: twm: command not found xinit: connection to X server lost waiting for X server to shut down When I try to configure Xorg, I get: ZVEZDA Xorg # Xorg -configure X.Org X Server 1.9.4 Release Date: 2011-02-04 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.36-gentoo-r5 x86_64 Gentoo Current Operating System: Linux ZVEZDA 2.6.37-gentoo-r4 #1 SMP PREEMPT Mon Apr 11 13:37:39 CEST 2011 x86_64 Kernel command line: root=/dev/sda3 nouveau.modeset=1 Build Date: 18 March 2011 09:54:54PM Current version of pixman: 0.20.2 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Mon Apr 11 14:04:54 2011 List of video drivers: nouveau (++) Using config file: "/root/xorg.conf.new" (==) Using system config directory "/usr/share/X11/xorg.conf.d" (EE) [drm] No DRICreatePCIBusID symbol Number of created screens does not match number of detected devices. Configuration failed. 
The log says: [ 47.459] X.Org X Server 1.9.4 Release Date: 2011-02-04 [ 47.461] X Protocol Version 11, Revision 0 [ 47.461] Build Operating System: Linux 2.6.37-gentoo-r4 x86_64 Gentoo [ 47.462] Current Operating System: Linux ZVEZDA 2.6.37-gentoo-r4 #2 SMP PREEMPT Mon Apr 11 14:00:26 CEST 2011 x86_64 [ 47.463] Kernel command line: root=/dev/sda3 nouveau.modeset=1 [ 47.464] Build Date: 11 April 2011 02:02:40PM [ 47.465] [ 47.465] Current version of pixman: 0.20.2 [ 47.466] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 47.468] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 47.470] (==) Log file: "/var/log/Xorg.0.log", Time: Mon Apr 11 15:58:36 2011 [ 47.471] (II) Loader magic: 0x7d5140 [ 47.471] (II) Module ABI versions: [ 47.471] X.Org ANSI C Emulation: 0.4 [ 47.471] X.Org Video Driver: 8.0 [ 47.471] X.Org XInput driver : 11.0 [ 47.471] X.Org Server Extension : 4.0 [ 47.472] (--) PCI:*(0:1:0:0) 10de:0405:1025:011e rev 161, Mem @ 0xd2000000/16777216, 0xc0000000/268435456, 0xd0000000/33554432, I/O @ 0x00005000/128 [ 47.473] List of video drivers: [ 47.473] nouveau [ 47.474] (II) LoadModule: "nouveau" [ 47.474] (II) Loading /usr/lib64/xorg/modules/drivers/nouveau_drv.so [ 47.474] (II) Module nouveau: vendor="X.Org Foundation" [ 47.474] compiled for 1.9.4, module version = 0.0.16 [ 47.474] Module class: X.Org Video Driver [ 47.474] ABI class: X.Org Video Driver, version 8.0 [ 47.475] (II) NOUVEAU driver [ 47.475] (II) NOUVEAU driver for NVIDIA chipset families : [ 47.475] RIVA TNT (NV04) [ 47.475] RIVA TNT2 (NV05) [ 47.475] GeForce 256 (NV10) [ 47.475] GeForce 2 (NV11, NV15) [ 47.475] GeForce 4MX (NV17, NV18) [ 47.475] GeForce 3 (NV20) [ 47.475] GeForce 4Ti (NV25, NV28) [ 47.475] GeForce FX (NV3x) [ 47.475] GeForce 6 (NV4x) [ 47.475] GeForce 7 (G7x) [ 47.475] GeForce 8 (G8x) [ 
47.491] (++) Using config file: "/root/xorg.conf.new" [ 47.491] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 47.492] (==) ServerLayout "X.org Configured" [ 47.492] (**) |-->Screen "Screen0" (0) [ 47.492] (**) | |-->Monitor "Monitor0" [ 47.492] (**) | |-->Device "Card0" [ 47.492] (**) |-->Input Device "Mouse0" [ 47.492] (**) |-->Input Device "Keyboard0" [ 47.492] (==) Automatically adding devices [ 47.493] (==) Automatically enabling devices [ 47.493] (**) FontPath set to: /usr/share/fonts/misc/, /usr/share/fonts/TTF/, /usr/share/fonts/OTF/, /usr/share/fonts/Type1/, /usr/share/fonts/100dpi/, /usr/share/fonts/75dpi/, /usr/share/fonts/misc/, /usr/share/fonts/TTF/, /usr/share/fonts/OTF/, /usr/share/fonts/Type1/, /usr/share/fonts/100dpi/, /usr/share/fonts/75dpi/ [ 47.493] (**) ModulePath set to "/usr/lib64/xorg/modules" [ 47.493] (WW) AllowEmptyInput is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled. [ 47.493] (WW) Disabling Mouse0 [ 47.493] (WW) Disabling Keyboard0 [ 47.493] (EE) [drm] No DRICreatePCIBusID symbol [ 47.493] Number of created screens does not match number of detected devices. Configuration failed. There are some interesting lines in those files, for example this: [ 47.475] (II) NOUVEAU driver for NVIDIA chipset families : [ 47.475] RIVA TNT (NV04) [ 47.475] RIVA TNT2 (NV05) [ 47.475] GeForce 256 (NV10) [ 47.475] GeForce 2 (NV11, NV15) [ 47.475] GeForce 4MX (NV17, NV18) [ 47.475] GeForce 3 (NV20) [ 47.475] GeForce 4Ti (NV25, NV28) [ 47.475] GeForce FX (NV3x) [ 47.475] GeForce 6 (NV4x) [ 47.475] GeForce 7 (G7x) [ 47.475] GeForce 8 (G8x) and [ 47.493] (EE) [drm] No DRICreatePCIBusID symbol [ 47.493] Number of created screens does not match number of detected devices. Configuration failed. So what am I missing?
I also get the No DRICreatePCIBusID symbol error when I try to run X -configure on my system. Thankfully, I didn't really need to run it in order to make X run. These are the files inside my /etc/X11/xorg.conf.d/: 10-evdev.conf 10-monitor.conf 10-quirks.conf 20-nouveau.conf 10-evdev.conf and 10-quirks.conf came with the xorg-server package. 10-monitor.conf contains the config from the ArchWiki's Xorg page, without the Device section, and 20-nouveau.conf from the Nouveau page. 10-monitor.conf: Section "Monitor" Identifier "VGA-1" Option "PreferredMode" "1280x1024" EndSection Section "Monitor" Identifier "TV-1" EndSection Section "Screen" Identifier "Screen0" Device "NVIDIA Card" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection 20-nouveau.conf: Section "Device" Identifier "NVIDIA Card" Driver "nouveau" EndSection The following errors tell us that you haven't installed xterm and twm: /etc/X11/xinit/xinitrc: line 61: xterm: command not found /etc/X11/xinit/xinitrc: line 63: exec: xterm: not found /etc/X11/xinit/xinitrc: line 59: twm: command not found You might want to emerge them or create a ~/.xinitrc file to override the system-wide xinitrc file. You might want to post the log for when you are trying to run X normally (i.e. startx). The one you posted is the log after trying to run Xorg -configure.
Xorg -configure doesn't work with nouveau drivers
1,410,375,331,000
I am running Debian 6, and decided to install the 2.6.38 kernel from Unstable. I also installed the headers so that I can build modules later on: sudo apt-get install --target-release=unstable linux-image-2.6.38-2-686-bigmem linux-headers-2.6.38-2-686-bigmem I then re-installed virtualbox-ose-dkms so that the VirtualBox drivers for 2.6.38 can be rebuilt (so that I can use VirtualBox under 2.6.38), but I get this error: Building initial module for 2.6.38-2-686-bigmem Error! Bad return status for module build on kernel: 2.6.38-2-686-bigmem (i686) Consult the make.log in the build directory /var/lib/dkms/virtualbox-ose/3.2.10/build/ for more information. dpkg: error processing virtualbox-ose-dkms (--configure): subprocess installed post-installation script returned error exit status 10 configured to not write apport reports Errors were encountered while processing: virtualbox-ose-dkms E: Sub-process /usr/bin/dpkg returned an error code (1) Here's the contents of the file they asked me to look at: $ cat /var/lib/dkms/virtualbox-ose/3.2.10/build/make.log DKMS make.log for virtualbox-ose-3.2.10 for kernel 2.6.38-2-686-bigmem (i686) Sat Apr 9 14:11:57 SAST 2011 make: Entering directory `/usr/src/linux-headers-2.6.38-2-686-bigmem' LD /var/lib/dkms/virtualbox-ose/3.2.10/build/built-in.o LD /var/lib/dkms/virtualbox-ose/3.2.10/build/vboxdrv/built-in.o CC [M] /var/lib/dkms/virtualbox-ose/3.2.10/build/vboxdrv/linux/SUPDrv-linux.o In file included from /var/lib/dkms/virtualbox-ose/3.2.10/build/include/VBox/types.h:30, from /var/lib/dkms/virtualbox-ose/3.2.10/build/vboxdrv/linux/../SUPDrvInternal.h:35, from /var/lib/dkms/virtualbox-ose/3.2.10/build/vboxdrv/linux/SUPDrv-linux.c:33: /var/lib/dkms/virtualbox-ose/3.2.10/build/include/iprt/types.h:97:31: error: linux/autoconf.h: No such file or directory /var/lib/dkms/virtualbox-ose/3.2.10/build/vboxdrv/linux/SUPDrv-linux.c: In function ‘VBoxDrvLinuxInit’: /var/lib/dkms/virtualbox-ose/3.2.10/build/vboxdrv/linux/SUPDrv-linux.c:451: error: 
‘nmi_watchdog’ undeclared (first use in this function) /var/lib/dkms/virtualbox-ose/3.2.10/build/vboxdrv/linux/SUPDrv-linux.c:451: error: (Each undeclared identifier is reported only once /var/lib/dkms/virtualbox-ose/3.2.10/build/vboxdrv/linux/SUPDrv-linux.c:451: error: for each function it appears in.) /var/lib/dkms/virtualbox-ose/3.2.10/build/vboxdrv/linux/SUPDrv-linux.c:451: error: ‘NMI_IO_APIC’ undeclared (first use in this function) /var/lib/dkms/virtualbox-ose/3.2.10/build/vboxdrv/linux/SUPDrv-linux.c:465: error: ‘nmi_active’ undeclared (first use in this function) make[4]: *** [/var/lib/dkms/virtualbox-ose/3.2.10/build/vboxdrv/linux/SUPDrv-linux.o] Error 1 make[3]: *** [/var/lib/dkms/virtualbox-ose/3.2.10/build/vboxdrv] Error 2 make[2]: *** [_module_/var/lib/dkms/virtualbox-ose/3.2.10/build] Error 2 make[1]: *** [sub-make] Error 2 make: *** [all] Error 2 make: Leaving directory `/usr/src/linux-headers-2.6.38-2-686-bigmem'
autoconf.h moved from include/linux to include/generated in Linux 2.6.33. Authors of third-party modules must adapt their code; this has already been done upstream for VirtualBox. In the meantime, you can either patch the module source or create a symbolic link as a workaround. As for the NMI-related errors, the NMI watchdog has changed a lot between 2.6.37 and 2.6.38. This looks like it requires a nontrivial porting effort on the module source code. In the meantime, you might have some luck just patching out the offending code. The purpose of the NMI watchdog is to debug kernel lockups, so it's something you can live without.
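If you take the symlink route, the idea is simply to recreate the old include/linux/autoconf.h path pointing at the file's new home. A sketch, demonstrated in a throwaway directory so it can be run safely; on the real system HDRS would be /usr/src/linux-headers-$(uname -r) and the ln -s would be run as root:

```shell
# Demonstrate the autoconf.h symlink workaround in a scratch tree.
HDRS="$(mktemp -d)"
mkdir -p "${HDRS}/include/generated" "${HDRS}/include/linux"
touch "${HDRS}/include/generated/autoconf.h"   # stands in for the real header
ln -s "${HDRS}/include/generated/autoconf.h" "${HDRS}/include/linux/autoconf.h"
ls -l "${HDRS}/include/linux/autoconf.h"       # shows the link into include/generated
```

This only papers over the include-path change; it does nothing for the separate NMI watchdog errors, which need the source-level changes described above.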
I am failing to build VirtualBox driver for Linux 2.6.38
1,410,375,331,000
I have an older P5K asus mobo with: 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GT218 [GeForce 8400 GS Rev. 3] [10de:10c3] (rev a2) Subsystem: eVga.com. Corp. Device [3842:1302] that hangs (the entire system) if nvidiafb module (from kernel 6.6.5) is loaded. I found https://forums.gentoo.org/viewtopic-t-1044092-start-0.html but that is related to the drm driver. I'm wondering if there is a way around this other than blacklisting or removing that driver? And why do the nvidia drivers hang so many systems? TIA!!
nvidiafb has long been deprecated; it must be disabled or blacklisted, and nouveau should be used instead.
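For the blacklisting approach, a typical modprobe configuration fragment looks like this (the filename is an arbitrary example; any .conf file under /etc/modprobe.d/ works):

```
# /etc/modprobe.d/blacklist-nvidiafb.conf
blacklist nvidiafb
```

If the module is being loaded from the initramfs rather than from the running system, you may also need to regenerate the initramfs after adding the file (e.g. update-initramfs -u on Debian-style systems, dracut -f on others).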
Why does nvidiafb module hang the entire system?
1,410,375,331,000
I need to install the linux-header-* package for other kernel versions in order to compile a kernel module locally for a different system. Say, I want to compile for Debian 10, with a kernel version of 4.19.0-13-amd64, using Ubuntu 20.10, with a kernel version of 5.8.0-43-generic. In that case, is it possible to install the necessary linux-headers-4.19.0-13-amd64 package from the Ubuntu 20.10 machine? In particular, apt-cache search linux-headers-.* only shows 5.8.0-* versions on Ubuntu 20.10. If it is not possible to download the necessary kernel headers using apt-get, where can these be obtained? I don't want the complete Linux source, just the headers required for compiling the kernel module.
You can't install the Debian linux-headers package on Ubuntu, but you can download the source: Add only the Debian sources; it doesn't harm Ubuntu: printf "%s\n" "deb-src http://deb.debian.org/debian buster main" |\ sudo tee /etc/apt/sources.list.d/debian-src.list Add the gpg keys: sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 04EE7237B7D453EC 648ACFD622F3D138 DCC9EFBF77E11517 sudo apt-key update Download the source: apt source linux-headers-4.19.0-14-amd64 The linux-headers-4.19.0-13-amd64 package is available from the Debian snapshot archive.
How to obtain linux-headers-* for other kernel versions than the most current using `apt-get`?
1,410,375,331,000
I am trying to create a virtual webcam from OBS (26.1.1) so I can feed it to Zoom. I am on Linux Mint 20.1, Cinnamon 4.8.6, kernel 5.4.0-64-generic. I did: sudo apt-get install v4l2loopback-dkms sudo apt-get install v4l2loopback-utils but v4l2loopback is not showing up as an option in Zoom. I went to the v4l2loopback github page, where it suggested I should build it from scratch and install it into my kernel. I tried to build from scratch and immediately ran into problems with the make command. make -C /lib/modules/`uname -r`/build M=/home/berggren/Downloads/v4l2loopback-main modules make[1]: Entering directory '/usr/src/linux-headers-5.4.0-64-generic' Building modules, stage 2. MODPOST 1 modules make[1]: Leaving directory '/usr/src/linux-headers-5.4.0-64-generic' make -C utils make[1]: Entering directory '/home/berggren/Downloads/v4l2loopback-main/utils' cc -I.. v4l2loopback-ctl.c -o v4l2loopback-ctl v4l2loopback-ctl.c:1:10: fatal error: sys/types.h: No such file or directory 1 | #include <sys/types.h> | ^~~~~~~~~~~~~ compilation terminated. make[1]: *** [<builtin>: v4l2loopback-ctl] Error 1 make[1]: Leaving directory '/home/berggren/Downloads/v4l2loopback-main/utils' make: *** [Makefile:85: utils/v4l2loopback-ctl] Error 2 I didn't go further because I wasn't sure I was going in the right direction with that. Could someone explain the correct procedure for installing v4l2loopback?
Installing v4l2loopback-dkms will install the modules on your system (at least, if all goes well), but it will not load the modules for you. So, you need to manually load the module with something like modprobe v4l2loopback In order for Zoom to use the device, you will first have to attach OBS Studio to it. You probably need to pass the exclusive_caps=1 option when loading the module, in order for Zoom to recognise it.
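To make both the loading and the option permanent across reboots, something like the following two fragments should work (the filenames are arbitrary examples; card_label is an optional cosmetic v4l2loopback option):

```
# /etc/modprobe.d/v4l2loopback.conf — options applied whenever the module loads
options v4l2loopback exclusive_caps=1 card_label="OBS Virtual Camera"

# /etc/modules-load.d/v4l2loopback.conf — ask systemd to load the module at boot
v4l2loopback
```

With these in place, the one-off equivalent is sudo modprobe v4l2loopback exclusive_caps=1.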
How to best install v4l2loopback on Linux Mint?
1,410,375,331,000
I run Debian 10 with kernel 5.9.0-0.bpo.2. I installed v4l2loopback from the official repo, as in sudo apt install v4l2*, which gave: sudo apt install v4l2* Reading package lists... Done Building dependency tree Reading state information... Done Note, selecting 'v4l2loopback-source' for glob 'v4l2*' Note, selecting 'v4l2ucp' for glob 'v4l2*' Note, selecting 'v4l2loopback-dkms' for glob 'v4l2*' Note, selecting 'v4l2loopback-modules' for glob 'v4l2*' Note, selecting 'v4l2loopback-utils' for glob 'v4l2*' I have linux-headers-5.9.0-0.bpo.2-amd64 installed, and uname -a Linux debian 5.9.0-0.bpo.2-amd64 #1 SMP Debian 5.9.6-1~bpo10+1 (2020-11-19) x86_64 GNU/Linux When I try to modprobe for v4l2 though, this is what happens: sudo modprobe v4l2loopback modprobe: FATAL: Module v4l2loopback not found in directory /lib/modules/5.9.0-0.bpo.2-amd64 The folder exists, but I can't see this module in it. I tried purging v4l2, reinstalling, rebooting, nothing. Any help? Thanks! EDIT: when trying to install them I actually get an error; here is the full output sudo apt install v4l2loopback-dkms v4l2loopback-utils Reading package lists... Done Building dependency tree Reading state information... Done The following NEW packages will be installed: v4l2loopback-dkms v4l2loopback-utils 0 upgraded, 2 newly installed, 0 to remove and 1 not upgraded. Need to get 0 B/54.6 kB of archives. After this operation, 153 kB of additional disk space will be used. Selecting previously unselected package v4l2loopback-dkms. (Reading database ... 378603 files and directories currently installed.) Preparing to unpack .../v4l2loopback-dkms_0.12.1-1_all.deb ... Unpacking v4l2loopback-dkms (0.12.1-1) ... Selecting previously unselected package v4l2loopback-utils. Preparing to unpack .../v4l2loopback-utils_0.12.1-1_all.deb ... Unpacking v4l2loopback-utils (0.12.1-1) ... Setting up v4l2loopback-dkms (0.12.1-1) ... Loading new v4l2loopback-0.12.1 DKMS files... 
Building for 5.9.0-0.bpo.2-amd64 Building initial module for 5.9.0-0.bpo.2-amd64 Error! Bad return status for module build on kernel: 5.9.0-0.bpo.2-amd64 (x86_64) Consult /var/lib/dkms/v4l2loopback/0.12.1/build/make.log for more information. dpkg: error processing package v4l2loopback-dkms (--configure): installed v4l2loopback-dkms package post-installation script subprocess returned error exit status 10 Setting up v4l2loopback-utils (0.12.1-1) ... Processing triggers for man-db (2.8.5-2) ... Errors were encountered while processing: v4l2loopback-dkms E: Sub-process /usr/bin/dpkg returned an error code (1) The output is not very eloquent as to what the problem may be; I tried sudo dpkg --configure v4l2loopback-dkms but got the same error.
I'll answer my own question. My kernel 5.9.0-0.bpo.2 was installed from buster-backports, while v4l2loopback was installed from the plain buster repo, so the two were out of sync. I solved it by installing v4l2loopback from buster-backports as well, and it works fine.
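For reference, pinning the install to the same suite as the kernel looks roughly like this (the filename is an arbitrary example; skip the first part if backports is already in your sources):

```
# /etc/apt/sources.list.d/backports.list — enable buster-backports if needed
deb http://deb.debian.org/debian buster-backports main

# then install the DKMS package from the same suite as the running kernel:
#   sudo apt update
#   sudo apt install -t buster-backports v4l2loopback-dkms v4l2loopback-utils
```

The general rule: a DKMS module package must be new enough to build against the headers of the kernel you actually boot.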
modprobe: FATAL: Module v4l2loopback not found in directory
1,410,375,331,000
I'm trying to compile the kernel from source on the system CentOS 7. The output of uname -a is: Linux dbn03 3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux Here is how I download the source code and compile it: wget "http://vault.centos.org/7.6.1810/os/Source/SPackages/kernel-3.10.0-957.el7.src.rpm" rpm2cpio ./kernel-3.10.0-957.el7.src.rpm | cpio -idmv make menuconfig Device Drivers ->Multiple devices driver support (RAID and LVM) -><*> Block device as cache make bzImage make modules As you see, I just tried to compile the kernel with the module BCACHE. However, when I executed the commands above, I got the error as below: drivers/md/bcache/request.c:675:3: warning: passing argument 2 of ‘part_round_stats’ makes integer from pointer without a cast [enabled by default] part_round_stats(cpu, &s->d->disk->part0); ^ In file included from include/linux/blkdev.h:9:0, from include/linux/blktrace_api.h:4, from drivers/md/bcache/bcache.h:181, from drivers/md/bcache/request.c:9: include/linux/genhd.h:408:13: note: expected ‘int’ but argument is of type ‘struct hd_struct *’ extern void part_round_stats(struct request_queue *q, int cpu, struct hd_struct *part); ^ drivers/md/bcache/request.c:675:3: error: too few arguments to function ‘part_round_stats’ part_round_stats(cpu, &s->d->disk->part0); ^ In file included from include/linux/blkdev.h:9:0, from include/linux/blktrace_api.h:4, from drivers/md/bcache/bcache.h:181, from drivers/md/bcache/request.c:9: include/linux/genhd.h:408:13: note: declared here extern void part_round_stats(struct request_queue *q, int cpu, struct hd_struct *part); It seems that I got a warning and an error. I think I can ignore the warning but the error is fatal. In the header, the function part_round_stats is declared that three parameters are necessary, whereas in the file drivers/md/bcache/request.c, only two parameters are passed to the function part_round_stats. 
I've tried to google this issue but found nothing. So what kind of problem do I have here? Is the error coming from the Linux source code itself? (I don't think so...) Is it some kind of version mismatch? Or does the downloaded source code not support the BCACHE module, with the kernel developers having left a fatal error behind?
Try this instead: rpm -ivh kernel-3.10.0-957.el7.src.rpm cd ~/rpmbuild/SOURCES rpmbuild -bp kernel.spec cd ~/rpmbuild/BUILD/kernel-3.10.0-957.el7/linux-3.10.0-957.fc32.x86_64 make menuconfig make bzImage make modules
Compiling kernel from source got a fatal error: too few arguments to function 'part_round_stats'
1,410,375,331,000
Some two years ago, I was able to set a very dim backlight brightness by writing a non-integer value to /sys/class/backlight/intel_backlight/brightness. $ echo 0.3 > /sys/class/backlight/intel_backlight/brightness But now, it seems that there is some sanity check... so, the system complains: bash: echo: write error: Invalid argument Is there any way I can bypass such a sanity check? Is there a way to pass values directly to the driver? I believe the relevant driver is i915. Linux debiel 5.4.0-4-amd64 #1 SMP Debian 5.4.19-1 (2020-02-13) x86_64 GNU/Linux Please let me know if I should have given you any other useful information. I do not really know how to properly report the problem.
This sounds like an implementation detail of your specific hardware driver. Were you using the exact same hardware back when writing fractional dim values actually resulted in a less-lit display? Or do you now simply have a less finely configurable backlight? You could go to an Intel support forum and ask there about the backlight value stepping API; it would be interesting to hear what their officials say. At least Intel releases its own Linux hardware drivers, so these are officially specified APIs. As I like the Arch Linux Wiki for such information, I post a link to their API description here: https://wiki.archlinux.org/index.php/Backlight#Backlight_PWM_modulation_frequency_(Intel_i915_only) This link explains that i915 uses PWM to adjust the backlight more finely. Maybe PWM is disabled in your kernel, since it typically caused flickering on this hardware.
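Whatever the driver supports internally, the brightness sysfs file itself only accepts whole integers between 0 and the value in max_brightness, so a very dim setting has to be expressed as a small fraction of that integer range rather than as a sub-integer value. A sketch (the max value of 937 is just an example; on a real system read it from the file, and run the commented write as root):

```shell
# Integer-only brightness maths for the sysfs backlight interface.
B=/sys/class/backlight/intel_backlight
max=937                        # example; really: max=$(cat "$B/max_brightness")
target=$(( max * 3 / 1000 ))   # roughly 0.3% of full scale
echo "$target"
# echo "$target" > "$B/brightness"
```

On hardware with a large max_brightness (often thousands of steps), this gets you dimmer settings than writing small literal integers blindly.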
Non-integer value to (/sys/class/backlight/intel_backlight/brightness)
1,410,375,331,000
When starting Xubuntu 19.04 I get this in boot.log: [FAILED] Failed to start Load Kernel Modules. See 'systemctl status systemd-modules-load.service' for details. I run systemctl status systemd-modules-load.service which yields: Failed to find module 'nf_nat_proto_gre' With sudo modprobe nf_nat_proto_gre I get: modprobe: FATAL: Module nf_nat_proto_gre not found in directory /lib/modules/5.0.0-16-generic What is the problem and how should I fix it?
First, the obvious question should be: is this module needed? It exists to support using (probably multiple rather than just one) GRE tunnels behind NAT. If no GRE tunnel is used, the question becomes moot. Now what happened? Ubuntu 19.04 is using kernel 5.0, and a few netfilter reworks were started in this kernel to factorize some separate netfilter modules back into the core (i.e. not as a module), giving an overall gain in size and helping further netfilter features. This module was "axed" as part of this rework. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/net/ipv4/netfilter/nf_nat_proto_gre.c?h=v5.0 path: root/net/ipv4/netfilter/nf_nat_proto_gre.c Age Commit message (Expand) Author Files Lines 2018-12-17 netfilter: nat: remove nf_nat_l4proto struct Florian Westphal 1 -61/+0 2018-12-17 netfilter: nat: remove l4proto->manip_pkt Florian Westphal 1 -41/+0 2018-12-17 netfilter: nat: remove l4proto->nlattr_to_range Florian Westphal 1 -3/+0 2018-12-17 netfilter: nat: remove l4proto->in_range Florian Westphal 1 -1/+0 2018-12-17 netfilter: nat: remove l4proto->unique_tuple Of course the functionality is still there. Last commit comment, emphasis mine: netfilter: nat: remove nf_nat_l4proto struct This removes the (now empty) nf_nat_l4proto struct, all its instances and all the no longer needed runtime (un)register functionality. nf_nat_need_gre() can be axed as well: the module that calls it (to load the no-longer-existing nat_gre module) also calls other nat core functions. GRE nat is now always available if kernel is built with it. [...] So if Ubuntu had some hardcoded list of helper modules to load, the list wasn't updated to drop this one and a few others in the same case. You can safely ignore the error, or report it as a minor bug.
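If you would rather silence the message than ignore it, the first step is finding which configuration file still lists the now-removed module. A sketch, searching the directories that modules-load.d(5) reads from:

```shell
# Look for the stale nf_nat_proto_gre entry in the modules-load.d locations.
# -r recurses, -s silences errors for directories that don't exist;
# the fallback echo covers a system with no such entry.
grep -rs "nf_nat_proto_gre" /etc/modules-load.d /run/modules-load.d /usr/lib/modules-load.d \
  || echo "not listed in modules-load.d here"
```

Whichever file turns up, removing or commenting out that line (entries under /etc/ can simply be edited; entries under /usr/lib/ can be masked by an identically named file in /etc/) stops systemd-modules-load from trying, and failing, to load it.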
Failed to find module nf_nat_proto_gre
1,410,375,331,000
I'm trying to load an external kernel module permanently, at every boot. During boot, my module is loaded by the systemd-modules-load service so long as SELinux is permissive. But I want to keep Enforcing mode. I wasn't able to add my module to the SELinux policy with the semodule command. What else could I do? This is my environment: Fedora release 27 Kernel version 4.18.19-100.fc27.x86_64 rpm -qa 'selinux-*' output: selinux-policy-targeted-3.13.1-284.37.fc27.noarch selinux-policy-3.13.1-284.37.fc27.noarch sestatus output: SELinux status: enabled SELinuxfs mount: /sys/fs/selinux SELinux root directory: /etc/selinux Loaded policy name: targeted Current mode: enforcing Mode from config file: enforcing Policy MLS status: enabled Policy deny_unknown status: allowed Memory protection checking: actual (secure) Max kernel policy version: 31 systemctl status systemd-modules-load.service output: ● systemd-modules-load.service - Load Kernel Modules Loaded: loaded (/usr/lib/systemd/system/systemd-modules-load.service; static; vendor preset: disabled) Active: failed (Result: exit-code) since Fri 2018-12-14 09:50:42 CET; 19min ago Docs: man:systemd-modules-load.service(8) man:modules-load.d(5) Process: 4397 ExecStart=/usr/lib/systemd/systemd-modules-load (code=exited, status=1/FAILURE) Main PID: 4397 (code=exited, status=1/FAILURE) dic 14 09:50:42 localhost.localdomain systemd[1]: Starting Load Kernel Modules... dic 14 09:50:42 localhost.localdomain systemd-modules-load[4397]: Failed to insert 'hello': Permission denied dic 14 09:50:42 localhost.localdomain systemd[1]: systemd-modules-load.service: Main process exited, code=exited, status=1/FAILURE dic 14 09:50:42 localhost.localdomain systemd[1]: Failed to start Load Kernel Modules. dic 14 09:50:42 localhost.localdomain systemd[1]: systemd-modules-load.service: Unit entered failed state. dic 14 09:50:42 localhost.localdomain systemd[1]: systemd-modules-load.service: Failed with result 'exit-code'. 
ls -lZ /usr/lib/systemd/systemd-modules-load output:

-rwxr-xr-x. 1 root root system_u:object_r:systemd_modules_load_exec_t:s0 15576 4 mag 2018 /usr/lib/systemd/systemd-modules-load

/var/log/audit/audit.log:

type=SELINUX_ERR msg=audit(1533716850.521:304): op=security_bounded_transition seresult=denied oldcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 newcontext=unconfined_u:unconfined_r:thumb_t:s0-s0:c0.c1023
type=SELINUX_ERR msg=audit(1533716850.596:305): op=security_bounded_transition seresult=denied oldcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 newcontext=unconfined_u:unconfined_r:thumb_t:s0-s0:c0.c1023
type=SELINUX_ERR msg=audit(1533716851.081:306): op=security_bounded_transition seresult=denied oldcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 newcontext=unconfined_u:unconfined_r:thumb_t:s0-s0:c0.c1023
type=SELINUX_ERR msg=audit(1533716851.422:307): op=security_bounded_transition seresult=denied oldcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 newcontext=unconfined_u:unconfined_r:thumb_t:s0-s0:c0.c1023
..
type=SELINUX_ERR msg=audit(1533717134.510:310): op=security_bounded_transition seresult=denied oldcontext=system_u:system_r:init_t:s0 newcontext=system_u:system_r:fprintd_t:s0

journalctl -xeb -u systemd-modules-load.service:

The systemd-modules-load.service unit has begun starting up.
dic 13 16:38:08 localhost.localdomain systemd-modules-load[14937]: Failed to insert 'hello': Permission denied
dic 13 16:38:08 localhost.localdomain systemd[1]: systemd-modules-load.service: Main process exited, code=exited, status=1/FAILURE
dic 13 16:38:08 localhost.localdomain systemd[1]: Failed to start Load Kernel Modules.
-- Subject: The systemd-modules-load.service unit has failed

ls -Z output:

system_u:object_r:modules_object_t:s0 bls.conf
unconfined_u:object_r:modules_object_t:s0 modules.devname
system_u:object_r:modules_object_t:s0 build
system_u:object_r:modules_object_t:s0 modules.drm
system_u:object_r:modules_object_t:s0 config
system_u:object_r:modules_object_t:s0 modules.modesetting
system_u:object_r:modules_object_t:s0 extra
system_u:object_r:modules_object_t:s0 modules.networking
unconfined_u:object_r:modules_object_t:s0 hello.ko
system_u:object_r:modules_object_t:s0 modules.order
system_u:object_r:modules_object_t:s0 kernel
unconfined_u:object_r:modules_object_t:s0 modules.softdep
unconfined_u:object_r:modules_object_t:s0 modules.alias
unconfined_u:object_r:modules_object_t:s0 modules.symbols
unconfined_u:object_r:modules_object_t:s0 modules.alias.bin
unconfined_u:object_r:modules_object_t:s0 modules.symbols.bin
system_u:object_r:modules_object_t:s0 modules.block
system_u:object_r:modules_object_t:s0 source
system_u:object_r:modules_object_t:s0 modules.builtin
system_u:object_r:modules_object_t:s0 System.map
unconfined_u:object_r:modules_object_t:s0 modules.builtin.bin
system_u:object_r:modules_object_t:s0 updates
unconfined_u:object_r:modules_object_t:s0 modules.dep
system_u:object_r:modules_object_t:s0 vdso
unconfined_u:object_r:modules_object_t:s0 modules.dep.bin
system_u:object_r:usr_t:s0 vmlinuz

My module is in /lib/modules/$(uname -r)
Solved! After building the kernel module, I put it into the directory /lib/modules/$(uname -r)/kernel/drivers/net and then ran depmod, which resolved the issue. Now the module is loaded on every boot.
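The fix described above can be written out as a short shell sequence. This is a minimal sketch, assuming your module is called hello.ko and belongs under drivers/net (both names come from this thread; adjust to your module):

```shell
# Copy the freshly built module into the running kernel's module tree.
# (drivers/net is just where this particular module happened to belong.)
sudo cp hello.ko /lib/modules/"$(uname -r)"/kernel/drivers/net/

# Regenerate modules.dep and the alias maps so modprobe can find it.
sudo depmod -a

# Dry-run check that the module now resolves by name.
modprobe -n -v hello

# Have systemd-modules-load load it on every boot.
echo hello | sudo tee /etc/modules-load.d/hello.conf
```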
Selinux is blocking my external kernel module
1,410,375,331,000
I am unable to load the cryptoloop module on Kali 4.6. How can I install it?

# modprobe cryptoloop
modprobe: FATAL: Module cryptoloop not found in directory /lib/modules/4.6.0-kali1-amd64
You need to re-compile the kernel to be able to load the cryptoloop module. Edit your /etc/apt/sources.list, uncomment the deb-src lines, then run the following:

apt update
apt-cache search linux-source

Then install the appropriate kernel source (linux-source-4.6.0 is an example):

apt install linux-source-4.6.0

then:

mkdir ~/kernel; cd ~/kernel
tar -xvf /usr/src/linux-source-4.6.0.tar.xz
cd linux-source-4.6.0
cp /boot/config-4.6.0-kali1-amd64 ~/kernel/linux-source-4.6.0/.config

To enable cryptoloop support, see the Cryptoloop HOWTO. Run:

make menuconfig

and enable: Device Drivers -> Block Devices -> Loopback device support. Then compile the kernel.
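For completeness, the final "compile the kernel" step on a Debian-derived system such as Kali can be sketched as below. This is a rough outline under the assumption that the deb-pkg make target is available (it builds installable .deb packages); option names and versions may differ on your setup:

```shell
cd ~/kernel/linux-source-4.6.0

# Carry the old config forward, answering new questions with defaults,
# then enable the loopback/cryptoloop options interactively.
make olddefconfig
make menuconfig

# Build the kernel and its modules as Debian packages.
make -j"$(nproc)" deb-pkg

# Install the resulting image (and headers) and reboot into it.
sudo dpkg -i ../linux-image-*.deb ../linux-headers-*.deb
sudo reboot
```

After rebooting into the new kernel, modprobe cryptoloop should find the module.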
How to install cryptoloop module on Kali 4.6?
1,410,375,331,000
My machine: an Ubuntu Server 16.04 LTS on a PINE64 with an ARM 64-bit processor (Linux pine64 3.10.105-0-pine64-longsleep #3 SMP PREEMPT Sat Mar 11 16:05:53 CET 2017 aarch64 aarch64 aarch64 GNU/Linux). My goal is to get my PINE64 working as a VPN server with the L2TP/IPsec protocol via strongSwan. The problem is that my PINE64 doesn't have the kernel modules required by strongSwan, listed here: https://wiki.strongswan.org/projects/strongswan/wiki/KernelModules I tried sudo modprobe MODULE_NAME and only learned that my PINE64 has no such modules in the directory /lib/modules/3.10.105-0-pine64-longsleep. My questions are: Is there any way to install these missing modules on my PINE64? If so, how? Do you have any better workaround for making a VPN server on my PINE64? Any suggestions? Not only PINE64-specific answers but also general answers for Linux are appreciated.
You might be able to build them yourself from source. A similar question was asked here: https://askubuntu.com/questions/168279/how-do-i-build-a-single-in-tree-kernel-module
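Concretely, the "build a single in-tree module" approach linked above might look like this. It is a sketch assuming you have kernel source matching the running kernel unpacked in ~/linux, with net/xfrm standing in for whichever directory holds the modules strongSwan needs:

```shell
cd ~/linux

# Start from the running kernel's configuration, then mark the wanted
# features as modules (=m) via menuconfig or a scripted edit.
cp /boot/config-"$(uname -r)" .config
make olddefconfig
make modules_prepare

# Build only the modules under one directory instead of the whole tree.
make M=net/xfrm
sudo make M=net/xfrm modules_install
sudo depmod -a
```

One caveat: without the Module.symvers file from the original kernel build, modules built this way can fail to load with symbol-version errors, so you may still end up building the full tree once.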
How to install missing kernel modules? Is it possible?
1,410,375,331,000
I want to load the kernel modules ip_gre.ko and gre.ko on an embedded device running embedded Linux, to make the device support the GRE protocol. Since I do not want to change the device's kernel, I am trying to load kernel modules instead of re-installing the kernel. Fortunately, I have the device's kernel source code, so I could compile the ip_gre.ko and gre.ko modules. However, loading the modules with insmod on the device failed with the following messages:

$ insmod gre.ko
insmod: can't insert 'gre.ko': Resource temporarily unavailable
$ insmod ip_gre.ko
ip_gre: Unknown symbol gre_del_protocol
ip_gre: Unknown symbol gre_add_protocol
insmod: can't insert 'ip_gre.ko': unknown symbol in module, or unknown parameter
$ dmesg
GRE over IPv4 demultiplexor drvier
gre: can't add protocol
ip_gre: Unknown symbol gre_del_protocol
ip_gre: Unknown symbol gre_add_protocol

My device has enough memory to load modules (free showed 190700/239760 free). Could you please let me know why this happens, and possible solutions?

UPDATE: These are the differences between the .config of the running kernel (on the device) and that of the kernel compiled for the above two modules:

$ diff config_for_running_kernel config_for_kernel_compiled_for_modules
299c299,301
< # CONFIG_NET_IPGRE_DEMUX is not set
---
> CONFIG_NET_IPGRE_DEMUX=m
> CONFIG_NET_IPGRE=m
> CONFIG_NET_IPGRE_BROADCAST=y
963c965
< CONFIG_PPTP=y
---
> CONFIG_PPTP=m

Since CONFIG_PPTP depends on CONFIG_NET_IPGRE_DEMUX, I had to build it as a module to compile the kernel without errors. Do these differences cause the above error messages? If so, could you let me know how I can solve it? (If you have additional references that explain these problems and solutions, I would be very thankful.)
I solved the problem by analyzing and modifying the kernel module. Analyzing the kernel module source code showed that compatible kernel options must be taken into account when loading kernel modules, as Gilles commented. Loading the gre kernel module caused the problem because the existing pptp module already uses the protocol ID IPGRE_PROTO, which is the same ID the gre module tries to register. A kernel built without the gre module enabled uses IPGRE_PROTO as the pptp protocol ID.
Loading kernel modules that were compiled for another kernel failed
1,410,375,331,000
I use Fedora 19 (Schrödinger’s Cat) on most computers as I prefer it over others (I would probably use Fedora 20 if I could have GNOME 3.8 plus a few other packages). My problem is that I use the kmod-staging package for drivers (e.g. the SD card reader driver), provided by the RPMFusion repos. Now, when the system updates the kernel, the kmod-staging package (and the kmod-VirtualBox package[Note1]) need to be updated as well. So it updates, and when I reboot and it switches to the latest kernel, it throws errors as it can't load the drivers because they aren't there. I would need kmod-staging-3.14.8-100.fc19.x86_64 for it to match the kernel version - but it seems to have stopped receiving updates, since the latest build available seems to be kmod-staging-3.13.9-100.fc19.x86_64. So my questions are: Why is kmod-staging not updated on Fedora 19 anymore? It receives updates on Fedora 20 - Fedora 19 is still supported[Note2] and receives updates to other packages from the same repo, like kmod-VirtualBox. Is there any way I could get kmod-staging for the 3.14 kernel on Fedora 19 (other than installing the Fedora 20 kernel + kmod-staging)? If anything, I would like the first question answered - I don't mind too much about the second, as I'll just roll back to the 3.13.9 kernel. [Note1] - Which doesn't have this problem and is still being updated - it is even shown in the latest build list (here) [Note2] - According to the release schedule, it should be supported until a month after the release of the current version +2 (not before 2014-10-14), so it still has a while yet.
The RPMFusion project is run by volunteers, and this is likely just lagging due to either no time to do it or perceived low interest. From the FAQ:

excerpt #1 from FAQ

Q: What is RPM Fusion? RPM Fusion is a repository of add-on packages for Fedora and EL+EPEL maintained by a group of volunteers. RPM Fusion is not a standalone repository, but an extension of Fedora. RPM Fusion distributes packages that have been deemed unacceptable to Fedora.

excerpt #2 from FAQ

Q: I would like to see an RPM for package X. What should I do? A: Place a request in the wiki and hopefully a maintainer will decide to pick it up. If however you wish to see an additional feature added to an existing package, please file a bug against it in Bugzilla.

If I truly wanted this package, I'd download the source RPM (SRPM) of the one you mentioned above, modify it so that it includes the newer version, and rebuild it myself using rpmbuild --rebuild. It's typically quite trivial to do this and will give you what you want. If this seems too complicated, you could open a ticket in the wiki as described in the FAQ, but there are no guarantees when, if ever, anyone would get around to rebuilding this package.
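The rebuild workflow described above might look roughly like this; the spec file name and versions are guesses for illustration, not the actual RPMFusion file names:

```shell
# Fetch and unpack the source RPM (unpacks into ~/rpmbuild).
yumdownloader --source kmod-staging
rpm -ivh kmod-staging-*.src.rpm

# Edit the spec so it targets the 3.14.8 kernel instead of 3.13.9,
# then rebuild binary RPMs from it.
rpmbuild -ba ~/rpmbuild/SPECS/staging-kmod.spec

# Install the freshly built kmod package.
sudo yum localinstall ~/rpmbuild/RPMS/x86_64/kmod-staging-*.rpm
```

Or, as the answer says, rpmbuild --rebuild kmod-staging-*.src.rpm does the unpack-and-build in one step when no spec edits are needed.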
kmod-staging-3.14* on Fedora 19?
1,410,375,331,000
In order to build and install the V4L2 module, do I have to download it, or is it already part of the kernel (in which case all I have to do is select it in the kernel configuration in order to build it)? I'm running the Angstrom distribution [kernel 2.6.32.61]. Kernel configuration's result:

--- Multimedia support
      *** Multimedia core support ***
[*]   Video For Linux
[*]     Enable Video For Linux API 1 (DEPRECATED)
      *** Multimedia drivers ***
[*]   Video capture adapters  --->
[*]   Radio Adapters  --->
[ ]   DAB adapters
It is part of the vanilla linux source, and that should include 2.6.x. If you run make menuconfig and hit /, you get a search. For the 3.11 source, the V4L2 core is triggered by VIDEO_DEV which requires Device Drivers->Multimedia Support and either Device Drivers->Multimedia Support->Cameras/video grabbers or some other camera support; most people will probably want to access it via USB, and if you select Device Drivers -> Multimedia Support -> Media USB Adapters -> USB Video Class V4L2 is part of that. However, the options for 2.6.x may be slightly different. You probably do not need to build this into the kernel. If you can take your current configuration and add the required options as modules, then you should be able to make modules_install with INSTALL_MOD_PATH set (if not, they'll end up in /lib/modules/x.x.x) and copy them over to the target system's /lib/modules/x.x.x. You then need to run depmod from the target system (or see man depmod).
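The cross-install flow in the last paragraph might be sketched like this, assuming you configure and build on a host machine and then copy the modules over to the Angstrom target (host paths and the target hostname are examples):

```shell
# In the configured kernel source tree, with the V4L2 options set to =m:
make modules

# Install the modules into a staging directory instead of the host's
# /lib/modules, then ship them to the target.
make modules_install INSTALL_MOD_PATH=/tmp/staging
scp -r /tmp/staging/lib/modules/2.6.32.61 root@target:/lib/modules/

# Rebuild the dependency maps on the target so modprobe works.
ssh root@target depmod -a 2.6.32.61
```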
Install kernel module [V4L2]
1,379,978,403,000
I'm trying to install the webcam driver for my Logitech C210. After some googling, the Linux UVC driver seems to be what I need. I followed their typical-use instructions in the hope of getting it installed:

git clone git://linuxtv.org/media_build.git
cd media_build
./build
make install

Now, I get these errors when I try to ./build:

make -C /home/pi/media_build/v4l allyesconfig
make[1]: Entering directory `/home/pi/media_build/v4l'
make[2]: Entering directory `/home/pi/media_build/linux'
Applying patches for kernel 3.1.9+
patch -s -f -N -p1 -i ../backports/api_version.patch
patch -s -f -N -p1 -i ../backports/pr_fmt.patch
patch -s -f -N -p1 -i ../backports/v3.1_no_export_h.patch
patch -s -f -N -p1 -i ../backports/v3.1_no_pm_qos.patch
Patched drivers/media/dvb/dvb-core/dvbdev.c
Patched drivers/media/video/v4l2-dev.c
Patched drivers/media/rc/rc-main.c
make[2]: Leaving directory `/home/pi/media_build/linux'
./scripts/make_kconfig.pl /lib/modules/3.1.9+/build /lib/modules/3.1.9+/build 1
Preparing to compile for kernel version 3.1.9
File not found: /lib/modules/3.1.9+/build/.config at ./scripts/make_kconfig.pl line 33, <IN> line 4.
make[1]: *** [allyesconfig] Error 2
make[1]: Leaving directory `/home/pi/media_build/v4l'
make: *** [allyesconfig] Error 2
can't select all drivers at ./build line 451.

By the way, I'm trying to do this on a Raspberry Pi; the architecture is an ARM CPU.
You need to install the linux-headers package in order to compile additional modules. This package contains the .config file and other files that are generated during the compilation of the kernel. Pick the version of the package that matches your running kernel.
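On a stock Debian or Ubuntu system that is a one-liner; on the Raspberry Pi's 3.1.9+ kernel the matching headers package may not exist in the archive, in which case a tool such as rpi-source (which fetches headers for the running Raspbian kernel) is the usual workaround. A sketch:

```shell
# Plain Debian/Ubuntu: install headers matching the running kernel.
sudo apt-get install linux-headers-"$(uname -r)"

# Sanity check: media_build's make_kconfig.pl wants this file to exist.
ls /lib/modules/"$(uname -r)"/build/.config
```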
How can I install my webcam on Debian?
1,379,978,403,000
Following the instructions at http://lik.noblogs.org/post/2010/05/07/wacom-debian/ I downloaded and built linuxwacom-0.8.8-11.

./configure --enable-wacom --with-kernel=/usr/src/linux-headers-2.6.32-5-686

completes without any problem. Then I copy the module:

sudo cp src/2.6.30/wacom.ko /lib/modules/2.6.32-5-686/kernel/drivers/input/tablet/wacom.ko

After unplugging and replugging:

jcress@debian:~/Downloads/linuxwacom-0.8.8-11$ dmesg | grep wacom
[  878.879686] usbcore: registered new interface driver wacom
[  878.879694] wacom: v1.52:USB Wacom tablet driver
[  963.774142] usbcore: deregistering interface driver wacom
[ 1613.534147] usbcore: registered new interface driver wacom
[ 1613.534671] wacom: v1.52-pc-0.4:USB Wacom tablet driver

and dmesg | grep input doesn't list the wacom. Any advice? Does this mean the module isn't working?

EDIT: === fixed === I did a dist-upgrade to testing and it 'just works'.
I'm posting this as an answer, seeing as the OP has resolved his/her issue. Apparently, performing a dist-upgrade will fix this issue, although without more information we cannot really say how.
wacom pen & touch cth-460 on debian [closed]
1,379,978,403,000
Attempting to create an xfs filesystem on an NVMe disk in Ubuntu 20.04 on GCP throws this error:

ubuntu 20.04, nvme disk

root@my-euwe3a-vm:/home/myuser# mkfs.xfs /dev/myvg1/my_lv
meta-data=/dev/myvg1/my_lv       isize=512    agcount=4, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
mkfs.xfs: pwrite failed: Input/output error

Creating it as ext4 on the same logical volume (LV) works as expected. Please advise.

machine type: n2d-standard-4
OS image: ubuntu-pro-fips-2004-focal-v20220829
additional disk: NVMe

lvdisplay
  --- Logical volume ---
  LV Path                /dev/myvg1/my_lv
  LV Name                my_lv
  VG Name                myvg1
  LV UUID                YnXOLf-TORp-qB87-cwfy-KeEd-qHqy-dUngE5
  LV Write Access        read/write
  LV Creation host, time my-vm, 2022-10-06 12:42:20 +0000
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

pvs
  File descriptor 3 (/dev/urandom) leaked on pvs invocation. Parent PID 51678: bash
  PV           VG    Fmt  Attr PSize   PFree
  /dev/nvme0n2 myvg1 lvm2 a--  <32.00g 1020.00m
It seems the issue does not appear in Ubuntu 22.04 LTS, so it looks like an incompatibility between mkfs.xfs and NVMe SSDs in Ubuntu 20.04.
ubuntu 20.04 LTS mkfs.xfs problem creating xfs in nvme disk on GCE GCP
1,379,978,403,000
I'm writing a livepatch module to hook a function and replace it with one that causes the process to terminate. I can't call abort() because that calls BUG() and my kernel will panic on oops. Importantly, the function must terminate the process immediately and must not return.
Well, I'm dumb. Turns out I was still using do_exit_group() and forgot to switch to do_exit().
What is the proper way to exit the current process from a kernel module?
1,379,978,403,000
I have a USB device that I am using with Ubuntu 20.04.3. The manufacturer provided a library and a driver. I can install the driver and get the device to work, but every time the kernel updates I must reinstall the driver into the new tree. I understand that this is the job of dkms, and I started working on setting that up as it doesn't look too difficult; but taking a step back, I am wondering if there might be a simpler/better way to get the device to work, since I think most of their user base is on Windows, so not much dev time is given to the Linux side of things. In any case, I want to understand more about how Linux does things. The driver they supply is a slightly modified Exar driver. Looking at the diffs between their version and the vanilla Exar driver, basically all they did was add their vendor ID and product ID to the code so their device would be recognized as compatible with the driver and run the appropriate blocks of code (the same as for the specific Exar chip they use) when the device matches their device descriptor. Based on this very helpful page and other sources around the internet, my understanding is that there is a modules.alias file that essentially has rules on what types of devices each driver accepts. When the device is plugged in, the system pulls info from the device to build a modalias string for that device. Then the first, most specific driver whose entry in modules.alias matches the device's modalias is the driver that gets assigned to the device. So I imagine my device has an Exar chip and could use the Exar driver if the driver accepted it and recognized it as essentially an Exar chip. But since the manufacturer put in their own custom device descriptor, modprobe doesn't recognize the device as being compatible with the vanilla Exar driver.
Rather than copying the driver and modifying it ever so slightly to make the system honor it, as they did, is there a way to tell the system that this device really is compatible with the vanilla Exar driver, or has an Exar device hidden within as some kind of sub-device? Like an alias for device descriptors as far as modprobe is concerned? Or perhaps there's a way to write a different driver for this device that utilizes the underlying Exar driver and passes everything through to it rather than copying the code? Then I could get the benefit of updates from the chip manufacturer. Or should I continue with dkms because the way the manufacturer did it is the best way to do it? I have seen examples of how to bind and unbind drivers to devices with udev rules, but I think that if the driver doesn't match, it won't be bound anyway? And even if it did, would that be the "best" way?
As @ReedGhost so helpfully suggested in the comments (thanks!), this is exactly what I wanted to do. I found the built-in xr_serial module and echoed my device's vendor and product IDs to its new_id file. The built-in driver recognizes my device as compatible and is loaded when I plug it in. However, this did not solve my specific use case, because the built-in Exar driver is written for multiple devices with different product IDs. There is an if-then-else chain that specifies a certain chunk of code to run for each specific product ID. So some functionality still does not work, because my device's ID causes it to go to the else branch of the code when it should run one of the other branches. I realize now that I would really need to spoof or substitute my device's ID with the ID of the specific chip used, instead of just adding a new one to be recognized by the driver. EDIT: Actually, I will probably try to write a different driver for the device that utilizes the manufacturer-supplied driver under the hood, as that seems like the more correct way to do it and gets the benefit of updates. This question and this question seem like good starting points for anyone who finds this and has the same problem. In the meantime, I used dkms to just update the driver.
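For anyone trying the same thing, the dynamic-ID step looks roughly like this; 04e2:1411 is a made-up vendor/product pair, so substitute the IDs shown by lsusb for your device:

```shell
# Load the in-tree Exar driver.
sudo modprobe xr_serial

# Tell it to also claim this (hypothetical) USB ID; takes effect
# the next time the device is plugged in.
echo "04e2 1411" | sudo tee /sys/bus/usb-serial/drivers/xr_serial/new_id

# Confirm which driver bound the device.
lsusb -t
```

Note that IDs added this way do not survive a reboot, so the echo is typically put in a udev rule or a small boot-time unit.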
Is it possible to set up a new device with an existing driver on linux when the modalias doesn't match?
1,379,978,403,000
I have written a "device driver" (see source code here: https://bitbucket.org/wothke/websid/src/master/raspi/websid_module/ ) that runs fine most of the time (see https://www.youtube.com/watch?v=bE6nSTT_038 ) but still seems to have the potential to crash the device randomly on occasion. The "device driver" starts a kthread that performs a simple but timing-critical playback loop in which it controls a connected audio chip via multiple GPIO pins. This kthread is run (using kthread_bind) on an "isolated" CPU core which should be largely exempted from regular kernel use (see details on the kernel configuration below). The kthread is given high priority via sched_set_fifo. The kthread makes no subroutine calls and does not require any memory that has not already been allocated in the kernel. (The thread also temporarily disables anything that might interfere with its timing, using get_cpu, local_irq_save & local_bh_disable. However, these do not seem to be the root cause of the sporadic crashes, since crashes could be reproduced even when that disabling was not used.) I have compiled a regular "Raspberry OS" "Desktop" kernel, but I specifically activated NO_HZ_FULL (i.e. "Full dynticks system (tickless)"). Also, I am specifically isolating core #3 via cmdline.txt with: isolcpus=3 rcu_nocbs=3 rcu_nocb_poll=3 nohz_full=3 (which seems to keep most IRQs off CPU core #3, as intended, so my kthread should be pretty much alone on that core). The usual suspect would be the "shared kernel memory" buffer that is used for all communication between the above "playback" kthread and the producer of the data, which lives in "userland". I have already taken all the precautions that I could think of to avoid potential race conditions, but maybe there is some kind of CPU cache effect, or something else that I am overlooking. The "shared buffer" contains 4 page-aligned areas that are set up and used in a way that should ensure safe communication/synchronization.
1. The 1st page only contains one 32-bit flag that is accessed as a u32 or uint32_t (this should be naturally atomic). The kthread only updates this flag when it is 0, and it only sets it to something non-0. The userland code only resets this flag to 0, and only if it had some non-0 value - thereby acknowledging that it received the non-0 value set by the kthread.
2. The 2nd page contains a similar flag to 1), but for the opposite direction, i.e. here it is the kthread that will receive something non-0 from "userland".
3. The 3rd (and following) page(s) contain the 1st buffer, which is used for a simple double-buffering scheme. This buffer is exclusively written to by the "userland" producer and exclusively read by the kthread. The "ping/pong" protocol implemented via the 2 flags is meant to ensure that the buffer is never used concurrently: the kthread initiates a sequence by signalling that one of the buffers can be filled, and later the "userland" side signals back after it has completed filling the respective buffer, i.e. the kthread only starts reading from a buffer after it has seen the signal from the producer that it is now safe to do so (before the "userland" producer gives that signal, it uses msync(start_page, len, MS_INVALIDATE) to report which parts of the shared memory area it has updated).
4. The n-th (and following) page(s) contain the 2nd buffer (everything said in 3) applies here as well).

But even if something went wrong in the above, that might then block the kthread or the respective userland process... I don't see why it should crash the whole system.
The most plausible explanation for me would be if the "shared buffer" got randomly relocated (thus leading to random memory corruption), but I would think that this should not happen to a buffer allocated via:

_raw_buffer = kmalloc(AREA_SIZE + 2*PAGE_SIZE, GFP_KERNEL & ~__GFP_RECLAIM & ~__GFP_MOVABLE);

Or if there were some kernel function that specifically went into a blocking wait for something from core #3 (which might not be happening due to my kthread starving everything else on that CPU). However, I'd be surprised if such a problem only struck sporadically instead of crashing the machine all the time. Any ideas?
After having failed to improve the situation by adding "memory barriers" at every reasonable point in my code I finally found a workaround that works. The problem does not seem to be linked to the shared memory at all. Instead it seems to be triggered by the scheduler and adding calls to "schedule()" in my long running kthread does seem to avoid the system freezes. Unfortunately this workaround is not a viable solution for me and I've created a separate thread to further explore the direction that this is taking: Is there a way to use a long running kthread without calling schedule()?
Am I making invalid assumptions with regard to my kernel module's shared memory?
1,379,978,403,000
During my Linux studies, I came up with this question, to which I've thus far been unable to find a satisfactory answer. Suppose I have a computer and I've just installed a Linux OS. A certain piece of hardware is not working because the required module is not in the kernel. I have the hardware information, but how do I find out the identity of the missing module? I wondered if there might be an online resource which lists all hardware with their respective modules, but I haven't been able to find anything like that. Is the situation, then, that I would have to tackle each hardware/module problem on a case-by-case basis?
Sometimes the install media will have many modules available, so while you've booted the install media you can run lspci -k to show the name of the driver associated with the piece of hardware:

$ /sbin/lspci -k
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
        Subsystem: VMware Virtual Machine Chipset
     -> Kernel driver in use: agpgart-intel

If you've already finished the installation, you can always boot the install media again — just don't perform the installation again — to see if this command reports a driver for your hardware. If that doesn't list a kernel driver in use, you can use the other associated information from the output of that command to search for an appropriate driver. That said, I'm not familiar with a particular source, other than simply to use Google, to find the driver in that case.
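Another angle on the same question: every device directory under /sys exports a modalias string, and modprobe can resolve such a string against the kernel's modules.alias database even when no driver is currently bound. A sketch for PCI devices:

```shell
# For each PCI device, print its address and the module(s) whose alias
# patterns match it (modprobe -R / --resolve-alias does the lookup).
for dev in /sys/bus/pci/devices/*; do
    alias=$(cat "$dev/modalias")
    printf '%s -> %s\n' "$(basename "$dev")" \
        "$(modprobe --resolve-alias "$alias" 2>/dev/null || echo 'no module found')"
done
```

If a device prints "no module found", the installed kernel simply has no driver claiming that hardware, and you would have to look up the vendor/device IDs from lspci -nn by hand.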
Is there a command line option or single resource to determine the module(s) required for a piece of hardware?
1,379,978,403,000
Some time ago, I installed a GeForce GTX 970 in my computer running Fedora 20. An important thing to know is that I'm using the card only as an accelerator (not for graphics). Until recently, it worked fine, but then I faced the following problem when trying to launch a .cu executable:

modprobe: FATAL: Module nvidia not found.
bug.cu (16): no CUDA-capable device is detected in cudaMalloc((void **)&p, sizeof(int))

I've googled for similar cases and found out that the message can be interpreted as the inability of modprobe (which manages so-called loadable kernel modules) to find one particular LKM - nvidia - or, to put it even more simply, there is something wrong with the drivers. I then investigated further:

$ lspci -k | grep -A 2 -i "VGA"
01:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
        Subsystem: Micro-Star International Co., Ltd. [MSI] Device 3171
        Kernel modules: nouveau

On a forum I read that two more NVIDIA LKMs exist (and, possibly, should be present in the above output): nvidia and nvidiafb, which, as you can see, are missing from my system. And here's my question: does that necessarily mean that I don't have these modules at all? Or might it be the case that they don't have to be there all the time and get linked into the kernel only when necessary? Should I reinstall my drivers? Or did those modules simply get disabled somehow, so I should just activate them in some way?
I fixed my problem by reinstalling the drivers. First I tried to reinstall them with yum (because they were initially installed that way), but that didn't help. So I removed them and downloaded the drivers from the official NVIDIA site. Installation was done according to this instruction. After that, everything worked. As for the LKMs:

$ lspci -k | grep -A 2 -i "VGA"
01:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
        Subsystem: Micro-Star International Co., Ltd. [MSI] Device 3171
        Kernel driver in use: nvidia
NVIDIA drivers as LKMs in Unix: Module nvidia not found
1,428,446,776,000
I want to ring the bell in Bash. I've set:

lsmod | grep pcs
pcspkr 1987 0

xset b

and finally

echo -e \\a

But no ring.
Solution found: modprobe pcspkr followed by echo -e "\a" works... but only on a tty, not on a pts.
Slackware won't ring my bell
1,428,446,776,000
I'm compiling third-party kernel modules. Their build system goes to /usr/src/linux-headers-[version] (of a custom kernel chroot) and runs make from there. I want to find out which files - sources and headers - have been used for the compilation, and which have not. The standard scripts/Makefile.build creates *.d files for each compiled source, and I'd like to use that... but these files are deleted after a short processing step. (That is the rule_cc_o_c definition in Makefile.build.) What could be a way to collect these files with minimal modifications to the standard build system?
Try using libtrashcan. After compiling and installing it, preload the library into your process. For example, the following will create a test file and then try to remove it; but because of libtrash the unlink system call is replaced by a move, so the file ends up in ~/Trash:

export LD_PRELOAD=/usr/local/lib/libtrash.so.3.3
touch testfile
rm testfile
How to preserve .d files after kernel compilation?
1,428,446,776,000
I have been trying to set up iptables on my Arch Linux server; however, when I run iptables -nvL I receive the error:

iptables v1.4.20: can't initialize iptables table 'filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.

Having tried to load the modules and failing, I checked to see if they were installed using modinfo, and they could not be found. I was trying to load these modules: x_tables, ip_tables, ip_filter, iptable_filter, xt_tcpudp, nf_conntrack, xt_state, nf_conntrack_ipv4. Does anybody know how to solve this problem?

EDIT: Having done some more research on the problem, I think I may need to install the necessary modules manually. Is this something which is possible over ssh? I am not sure how to go about rebuilding the kernel.
As I run a custom kernel, I do not know if the module needed for the filter table is included in the stock arch kernel. But perhaps you do not even need exactly this table. On my system, I normally just use the nat table. Pass the parameter -t to all iptables invocations to select a given table, for example trying iptables -nvLt nat
Archlinux not configured with iptables
1,428,446,776,000
I just compiled and installed the new 3.0-rc2 kernel from kernel.org on my Fedora 15 system. Everything seems to work fine and I can successfully boot into the system. However, this broke my previously installed NVIDIA driver, so I need to compile a new one. I downloaded the installer from nvidia.com, but I am having trouble with the installation. To compile the kernel I unzipped the kernel archive to my home directory, then simply reused my Fedora config for the new kernel. Everything resides in ~/linux_build/linux-3.0-rc2. After booting to runlevel 3 I get an error with the NVIDIA installer: ERROR: If you are using a Linux 2.4 kernel, please make sure you either have configured kernel sources matching your kernel or the correct set of kernel headers installed on your system. If you are using a Linux 2.6 kernel, please make sure you have configured kernel sources matching your kernel installed on your system. If you specified a separate output directory using either the "KBUILD_OUTPUT" or the "O" KBUILD parameter, make sure to specify this directory with the SYSOUT environment variable or with the equivalent nvidia-installer command line option. Depending on where and how the kernel sources (or the kernel headers) were installed, you may need to specify their location with the SYSSRC environment variable or the equivalent nvidia-installer command line option. I ran the installer like this: bash NVIDIA-Linux-x86_64-270.41.19.run --kernel-source-path=/home/tja/linux_build/linux-3.0-rc2 Usually this was solved by installing the kernel headers from yum, but here I am using a new kernel with no RPM available. How do I manually install the needed headers/source files for the NVIDIA installer?
Well, I wasn't going crazy. The NVIDIA installer needed to be patched. Kernel version 2.7.0 was hardcoded as the upper bound. That was bumped up to 3.1.0 with a simple patch. Here is the patch file: nvidia-patch @ fedoraforum.org --- conftest.sh.orig 2011-05-30 12:24:39.770031044 -0400 +++ conftest.sh 2011-05-30 12:25:49.059315428 -0400 @@ -76,7 +76,7 @@ } build_cflags() { - BASE_CFLAGS="-D__KERNEL__ \ + BASE_CFLAGS="-O2 -D__KERNEL__ \ -DKBUILD_BASENAME=\"#conftest$$\" -DKBUILD_MODNAME=\"#conftest$$\" \ -nostdinc -isystem $ISYSTEM" --- nv-linux.h.orig 2011-05-30 12:27:09.341819608 -0400 +++ nv-linux.h 2011-05-30 12:27:28.854951411 -0400 @@ -32,7 +32,7 @@ # define KERNEL_2_4 #elif LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 0) # error This driver does not support 2.5 kernels! -#elif LINUX_VERSION_CODE < KERNEL_VERSION(2, 7, 0) +#elif LINUX_VERSION_CODE < KERNEL_VERSION(3, 1, 0) # define KERNEL_2_6 #else # error This driver does not support development kernels! Then you need to extract the files from the nvidia installer: ./NVIDIA-Linux-x86_64-270.41.19.run -x Then, inside the 'kernel' directory are the files to be patched: cd NVIDIA-Linux-x86_64-270.41.19/kernel/ patch -p0 < kernel-3.0-rc1.patch.txt Once that's done, simply supply the kernel sources as a parameter to the installer: ./nvidia-installer --kernel-source-path /home/tja/linux/linux-3.0-rc2 ...and it builds fine! Now I am up and running Linux 3.0 with a proper NVIDIA driver.
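The version check the patch relaxes can be reproduced numerically; a quick Python sketch of the kernel's KERNEL_VERSION macro shows why 3.0 tripped the old upper bound:

```python
def kernel_version(major, minor, patch):
    # mirrors the kernel's KERNEL_VERSION macro: (a << 16) + (b << 8) + c
    return (major << 16) + (minor << 8) + patch

linux_3_0 = kernel_version(3, 0, 0)

# Old check: LINUX_VERSION_CODE < KERNEL_VERSION(2, 7, 0) -> fails for 3.0,
# so the #else branch ("does not support development kernels") was hit.
print(linux_3_0 < kernel_version(2, 7, 0))  # False
# Patched check: LINUX_VERSION_CODE < KERNEL_VERSION(3, 1, 0) -> passes,
# so KERNEL_2_6 gets defined and the build proceeds.
print(linux_3_0 < kernel_version(3, 1, 0))  # True
```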
NVIDIA installer can't find kernel source/build files (compiled from kernel.org)
1,428,446,776,000
When I do dpkg --get-selections or less /proc/modules I see two different lists. I don't understand the difference between the elements of each...
The two commands are not related in any way. dpkg --get-selections returns the selection state of available packages. From man dpkg: --get-selections [package-name-pattern...] Get list of package selections, and write it to stdout. Without a pattern, non-installed packages (i.e. those which have been previously purged) will not be shown. The selection state is one of: install The package is selected for installation. hold A package marked to be on hold is not handled by dpkg, unless forced to do that with option --force-hold. deinstall The package is selected for deinstallation (i.e. we want to remove all files, except configuration files). purge The package is selected to be purged (i.e. we want to remove everything from system directories, even configuration files). /proc/modules, on the other hand, is the list of available kernel modules (you can consider these as equivalent to dll files in the Windows world). While some modules will have been installed as packages, others are included in the kernel. So, looking at the list of modules, you will see some overlap with the dpkg command above if you have installed certain modules that are not part of the kernel. For example, on my system, I have installed the fuse package which provides the fuse kernel module. I therefore have an entry for fuse in both lists: $ dpkg --get-selections | grep -P 'fuse\t' fuse install $ grep 'fuse' /proc/modules fuse 67503 3 - Live 0xffffffffa1140000 Conversely, the package firefox does not provide any modules so it is only listed in the dpkg output: $ dpkg --get-selections | grep -P 'firefox\t' firefox install $ grep 'firefox' /proc/modules $
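As a side note, the columns of a /proc/modules line can be pulled apart with standard shell tools; a small sketch using the fuse line from above (the size and address values will differ per system):

```shell
# One line of /proc/modules: name size refcount users state address
line='fuse 67503 3 - Live 0xffffffffa1140000'
set -- $line
name=$1 size=$2 refcount=$3 state=$5   # $4 is "-" when no other module uses it
echo "module=$name size=$size refcount=$refcount state=$state"
```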
What is the difference between a linux package and a linux module?
1,428,446,776,000
I am reading through Salzman's Linux Kernel Module Programming Guide, and I was wondering about where the file linux/kernel.h is located. I couldn't find it with find. Or rather the files I found did not have any printk priority macros in them.
The linux/kernel.h header which gets used for module builds is the header which is part of the kernel source. When modules are built in the kernel source tree, that’s the version which is used. For external module builds, the build process looks for the header in /lib/modules/$(uname -r)/build/include/linux/kernel.h. That file is provided by kernel header packages, e.g. on Debian derivatives, the linux-headers-$(uname -r) package. The /usr/include/linux/kernel.h header is intended for user processes, not for kernel modules. The printk priority macros now live in linux/printk.h and linux/kern_levels.h. I’m guessing you’re reading the original guide, which is based on the 2.6 kernel series; for modern kernels you should read the updated guide (currently for 5.6.7).
Where exactly is the file linux/kernel.h?
1,428,446,776,000
I've an RTL8153-based USB ethernet adapter, which uses the cdc_ether driver by default. I want to use the r8152 driver, which could be loaded by creating a custom udev rule, as present in Realtek's linux driver source. But here's the confusing part: when I plug in the adapter, both the cdc_ether and r8152 modules are loaded. My questions are: Why? How can I find the udev rule responsible for loading cdc_ether? How can I stop loading that module, since it's not necessary to load two modules in this case? A line of the udev rule: ACTION=="add", DRIVER=="r8152", ATTR{idVendor}=="2357", ATTR{idProduct}=="0601", ATTR{bConfigurationValue}!="$env{REALTEK_NIC_MODE}", ATTR{bConfigurationValue}="$env{REALTEK_NIC_MODE}" The DRIVER== part is not necessary.
ACTION=="add", DRIVER=="r8152", ATTR{idVendor}=="2357", ATTR{idProduct}=="0601", ATTR{bConfigurationValue}!="$env{REALTEK_NIC_MODE}", ATTR{bConfigurationValue}="$env{REALTEK_NIC_MODE}" The meaning of this udev rule is as follows: "When a device with idVendor value 2357 and idProduct value 0601 (and managed by driver "r8152") is added to the system, if its bConfigurationValue is not whatever value is defined in environment variable REALTEK_NIC_MODE, set its bConfigurationValueto that value." In other words, this udev rule is not loading the r8152 driver, it's switching the device to the correct mode for that driver if necessary. When a new device is added, Linux kernel basically runs modprobe with the hardware IDs (and some other things) of the device encoded in the "name" of the module it requests. This "name" is then compared by modprobe to wildcard strings embedded into each module as module aliases. The depmod command gathers up these alias names and stores them into /lib/modules/<kernel version>/modules.alias[.bin] for quick searching. You can view the alias strings embedded into kernel modules with the modinfo command. For your USB ethernet adapter, the "name" is something like usb:v2357p0601d.... Unfortunately, the cdc_ether module has a wildcard alias that will match it too. Any aliases defined in /etc/modprobe.d will take precedence over aliases embedded into modules themselves. So, you could probably specify an alias that will match your ethernet adapter and causes the r8152 module to be loaded instead. Try adding this as /etc/modprobe.d/usbnic.conf: alias usb:v2357p0601d*dc*dsc*dp*ic*isc*ip*in* r8152 Then run depmod -a as root, unplug the USB ethernet adapter, unload both the r8152 and cdc_ether modules, plug the ethernet adapter back in and see what happens. If only the r8152 module is loaded, good. If the cdc_ether still gets loaded too, the alias might need to be more specific (i.e. 
one or more of the asterisks in it needs to be replaced with actual values, whatever they might be) in order for this alias to be the most specific and thus the "best" match. Update: Here is a description of the module alias format: http://people.skolelinux.org/pere/blog/Modalias_strings___a_practical_way_to_map__stuff__to_hardware.html
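As an aside, the vendor/product fields such an alias must match can be picked out of a modalias string with plain shell parameter expansion; a sketch using a made-up full modalias for this adapter (only v2357 and p0601 come from the question, the remaining fields are invented):

```shell
# Hypothetical full modalias; the d.../dc.../ic... values are invented
modalias='usb:v2357p0601d3000dc00dsc00dp00ic02isc06ip00in00'
tmp=${modalias#usb:v}       # drop the "usb:v" prefix
vendor=${tmp%%p*}           # everything before the first "p" -> idVendor
tmp=${tmp#*p}               # step past the "p"
product=${tmp%%d*}          # everything before the next "d" -> idProduct
echo "vendor=$vendor product=$product"
```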
Why Udev is loading two kernel modules for a single USB device?
1,428,446,776,000
In the distro I'm most familiar with -- ALTLinux, a command like apt-get install kernel-image-std-def#3.0.... would install another kernel, and update the initramfs and the bootloader config accordingly; an even better command would be the specific update-kernel, which would also install the accompanying kernel modules for the new version of the kernel (the ones that are installed in the running system, so that the support for the hardware of this system is not lost in the case of the new kernel). (A short manual on this topic for ALT (in Russian).) Now I want to upgrade the kernel in my Ubuntu 12.04 system on a Toshiba AC100 (ARM). What would be the command in Ubuntu to install the new kernel, so that all the required things are done: initramfs is generated, the bootloader is updated, and no required module is lost? I'm especially interested in a command that would ensure that everything is done correctly, because I don't understand the peculiar boot process on this computer very well.
Simply installing the new kernel package will handle everything for you. Most of the time, you want to go ahead and upgrade all of your packages, as it will mostly include bug fixes and security updates: apt-get update apt-get upgrade If you call "install" on a package that is already installed, it will be upgraded if an upgrade is available. apt-get update apt-get install linux-image-ac100
The command to upgrade to a new kernel from Ubuntu repositories
1,428,446,776,000
I assembled a new machine on Asus P9X79 motherboard, using its RAID controller to create a RAID1 array of two 500Gb drives. When booting Arch from an external drive I am able to work fine with /dev/md126 which corresponds to the array. This way I created the partitions and filesystems on it, then chrooted and installed Arch Linux on the drive. However, I am not able to boot from RAID successfully: /, /boot and /home cannot be remounted in read-write mode (mount returns 32), and I end up in emergency console. Trying to remount from there also fails, saying the drive is write-protected. I figured that some necessary kernel modules were not loaded at boot and played with mkinitcpio.conf. I have mdadm_udev as a hook (after udev, before filesystems). To my understanding, this should be enough, but I also tried adding raid1, raid456, ext2 and ext4 to the MODULES array, this didn't change anything. The RAID1 itself is recognized in the initial environment thanks to the mdadm_udev hook (the devices are there). raid1 is, I think, also loaded by this hook automatically. I am still able to boot from external drive and mount the RAID1 device just fine; so I did lsmod on it and compared to lsmod on the "native" system. Nothing seems suspicious to me: $ diff <(sort lsmod.old | cut -f1 -d ' ') <(sort lsmod.new | cut -f1 -d ' ') 7a8,12 > async_memcpy > async_pq > async_raid6_recov > async_tx > async_xor 13a19,20 > drm > drm_kms_helper 24a32 > i2c_algo_bit 43c51 < nvidia --- > nouveau 48a57,58 > raid456 > raid6_pq 64a75,76 > ttm > uas 71a84 > xor old is the one that works. As you can see, the only line starting with < is < nvidia. So all the necessary modules are loaded (some of the additional modules in new are dependencies of raid456 that I tried loading). What am I missing here? What can be the possible differences between two systems? Kernel versions are the same: 3.6.8. 
(BTW the installation medium I first tried to use had 3.6.6 and didn't work with this array; all operations ended up hanging endlessly).
(Lev found the solution; I'm doing exegesis to explain why it works.) using its RAID controller to create a RAID1 array That's a bad sign: you're using fakeraid — a RAID implementation which is mostly implemented by the Windows driver with a little help from the firmware. You get all the downsides of hardware RAID (dependency on the firmware) with all the downsides of software RAID (no performance advantage). The RAID metadata is handled by the firmware. (Metadata is the extra data that needs to be stored somewhere and that's not part of the filesystem or partition stored on the RAID device: things like where each block of data should be stored, extra data to handle resynchronization and so on.) With Linux's implementation (at least for this driver), this is not handled by the kernel alone; the mdmon utility is also needed. When your system boots, at first, there is only the kernel and an initial RAM drive (an initramfs). This initial RAM drive must contain all the loadable modules and programs that are necessary to mount the root filesystem. Because that has to fit in RAM, most distributions generate the initramfs on demand, based on the drivers that are necessary on your system. This is typically done on each kernel upgrade. It seems that Arch Linux's initramfs generation scripts did not detect that you needed the mdmon program at boot time, and so generated a non-working initramfs. Forcing mdmon to be present in the initramfs made the initramfs work.
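On Arch, one way to force mdmon into the image is via /etc/mkinitcpio.conf; a sketch only, since the exact hook names and array syntax vary between mkinitcpio versions:

```shell
# /etc/mkinitcpio.conf (fragment) -- regenerate afterwards, e.g. mkinitcpio -p linux
BINARIES=(mdmon)   # force the userspace RAID metadata handler into the initramfs
HOOKS=(base udev autodetect modconf block mdadm_udev filesystems fsck)
```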
Can't remount local file systems for read-write (RAID1)
1,428,446,776,000
Quite a lot has been going on to advance inter-process security (same UID) and privilege dropping in userland, yet it is common that proprietary Linux kernel components are used (it seems that GPLv2 does not really solve the closed-source kernel module issue, sadly). My question is about a concept (existing or in development) to "sandbox" an otherwise closed-source kernel module. It seems to me that in times of justified paranoia (WikiLeaks, post-Snowden), people have been looking for ways to prevent a potential backdoor in a proprietary kernel module, right?
Yes, there is a sandboxing concept for proprietary drivers. It's called userland drivers. If you run code inside the kernel, it has access to everything, so it's impossible to sandbox it. (Impossible with respect to the Linux system — that system could run in a virtual machine, and then the VM would be doing the sandboxing.) Userland drivers are possible for some kinds of peripherals. For example, some USB peripherals can be driven from userland via libusb and usbfs. Filesystems can be implemented in userland via FUSE. Given that a malicious driver for a peripheral can usually leverage its access to the peripheral to access the rest of the system (e.g. by configuring the peripheral for DMA and thus accessing arbitrary memory), there isn't much point in attempting to sandbox a driver. If you don't trust the driver, don't use it. It's possible to do some sandboxing by running the driver inside a virtual machine, and configuring the hypervisor to allow the VM to access only a specific peripheral. This is only useful if the peripheral itself only has access to a specific part of memory, which can be done with an IOMMU (the IOMMU has to remain under control of the hypervisor, of course). Not all systems support such sandboxing — once again, if you don't trust the peripheral, why would you have it in your computer?
Are there sandboxing concepts for proprietary binary kernel modules in linux?
1,428,446,776,000
In the linux kernel, there is a section "Library routines" with a snippet shown below: Library routines ---> <M> CRC-CCITT functions <M> CRC ITU-T V.41 functions <M> CRC7 functions <M> CRC32c (Castagnoli, et al) Cyclic Redundancy-Check <M> CRC8 function ... ... I have most of the options compiled in as "module", but these modules never get loaded. I'm curious to know what these modules are used for and in which situation I would need them? The Kernel Config help is not very illuminating: This option is provided for the case where no in-kernel-tree modules require <XYZ> functions, but a module built outside the kernel tree does. Such modules that use library <XYZ> functions require M here.
CCITT stands for "Comité Consultatif International Téléphonique et Télégraphique" and ITU for "International Telecommunication Union". These modules have to do with error detection for telephone-modem connections. Since even old-style high-end modems (with which you would normally communicate via a real, hardware serial port) do things like CRC themselves, my guess is those modules are for low-end hardware, where a large part of the handling was done by the CPU: so-called softmodems. So unless you have, and use, that kind of simple modem hardware, your kernel is unlikely to load those modules.
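Whatever subsystem ends up needing them, these library modules just compute checksums. A Python sketch of the bit-by-bit idea behind one common variant (CRC-16/CCITT-FALSE: polynomial 0x1021, initial value 0xFFFF; note the in-kernel crc_ccitt() uses a different, reflected convention, so this is an illustration of the technique, not a reimplementation of that function):

```python
def crc16_ccitt_false(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16, polynomial 0x1021, init 0xFFFF, no reflection."""
    for byte in data:
        crc ^= byte << 8                 # fold the next byte into the high bits
        for _ in range(8):
            if crc & 0x8000:             # top bit set: shift and xor the polynomial
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# The published check value for this variant over "123456789":
print(hex(crc16_ccitt_false(b"123456789")))  # 0x29b1
```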
Library routines in linux kernel
1,428,446,776,000
What is the difference between 'm' and 'y'? I am reading a guide, and the first step is to make sure that my kernel supports PPP and MPPE. It should be: # cat /boot/config-`uname -r` | grep G_PPP= CONFIG_PPP=y # cat /boot/config-`uname -r` | grep MPPE CONFIG_PPP_MPPE=y I get: root@N550JV:~# cat /boot/config-`uname -r` | grep G_PPP= CONFIG_PPP=y root@N550JV:~# cat /boot/config-`uname -r` | grep MPPE CONFIG_PPP_MPPE=m root@N550JV:~# My uname -r: 3.8.0-39-generic
Kernel features can be compiled into the kernel or compiled as loadable modules. When y is specified, the feature will be compiled into the kernel. When m is specified, the feature will be compiled as a loadable kernel module. Reference docs: PPP: http://tldp.org/HOWTO/PPP-HOWTO/ MPPE: http://mppe-mppc.alphacron.de/
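The practical difference shows up at runtime: a =y feature is always available, while a =m feature must be loaded (e.g. modprobe ppp_mppe) before use. A small shell sketch of reading such config lines, using an inline sample rather than a real /boot/config-* file:

```shell
# Sample of the relevant lines from a /boot/config-* file
config='CONFIG_PPP=y
CONFIG_PPP_MPPE=m'

printf '%s\n' "$config" | while IFS='=' read -r opt val; do
    case $val in
        y) echo "$opt: built into the kernel" ;;
        m) echo "$opt: built as a loadable module (needs modprobe)" ;;
    esac
done
```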
CONFIG_PPP_MPPE=m vs CONFIG_PPP_MPPE=y
1,428,446,776,000
When running the lsmod command, the output lists: dm_mirror 21715 0 dm_region_hash 15984 1 dm_mirror dm_log 18072 2 dm_region_hash,dm_mirror I have tried to search for "dm-mirror module" and found, for example, http://en.wikibooks.org/wiki/Linux_Guide/Modules as a result. But there were no details. What is the dm_mirror module, and what is its purpose?
dm_mirror is a Linux kernel module. Therefore better search the kernel.org website. kernel.org "dm_mirror site:kernel.org" returns loads of less relevant results. The search query "dm_mirror -bugzilla site:kernel.org" does a better job. One of these search engine results links to https://www.kernel.org/doc/menuconfig/frv.html. That document links to https://www.kernel.org/doc/menuconfig/drivers-md-Kconfig.html#DM_MIRROR and there is an explanation of the purpose of the dm-mirror kernel module: Mirror target Allow volume managers to mirror logical volumes, also needed for live data migration tools such as 'pvmove'.
What is the purpose of kernel module dm_mirror a.k.a. dm-mirror?
1,428,446,776,000
I have recently acquired a device driver for an integrated watchdog timer on an x86 board on which I am running a minimal Linux system. The kernel is 3.6.11 and it has been built using buildroot. My installation does not run udev, so I am required to manually perform an insmod and mknod for any drivers that I require. I have managed to do this for a CAN driver. For this watchdog driver, I am able to cross-compile the source code for the target and I am successfully able to insmod the resulting .ko file. After this there are no errors generated, and a call to lsmod reports that the module is loaded. The problem I am having is that I need to create a device node in /dev for this driver and I am not sure how to proceed. I do not know how to obtain device major and minor numbers like I can for char devices. The source of this driver suggests it is a platform device driver, but I am not sure what that even means. I have only heard of character and block devices, so: is the notion of major and minor numbers relevant to platform devices? If so, how can I obtain this information? There is no entry within /proc for this device driver name and I am unsure how to proceed.
If it uses the normal kernel watchdog interface, that's at /dev/watchdog, which is 10, 130 here. It may also export another one (/dev/watchdog0, etc.). You can find that by querying sysfs: $ cat /sys/class/watchdog/watchdog0/dev 253:0 $ cat /sys/class/watchdog/watchdog0/uevent MAJOR=253 MINOR=0 DEVNAME=watchdog0 And indeed: $ ls -l /dev/watchdog0 crw------- 1 root root 253, 0 May 17 18:26 /dev/watchdog0 That number may be dynamically allocated (I'm not sure), so it could be different on your machine. (Platform devices probably also have something in /sys/devices/platform, which may let you set various parameters) edit: You can create a character device with mknod like this (as root): mknod -m 0600 /dev/watchdog c 10 130 -m sets the mode (file permissions, you have to use octal here); /dev/watchdog is the name; c means its a character device (as opposed to block); 10 is the major number; 130 is the minor.
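Following on from the sysfs query above, the MAJOR:MINOR string can be split mechanically to build the matching mknod command; a small sketch (253:0 is just the value from the example and will differ on your board):

```shell
dev='253:0'              # contents of /sys/class/watchdog/watchdog0/dev
major=${dev%%:*}         # part before the colon
minor=${dev##*:}         # part after the colon
echo "mknod -m 0600 /dev/watchdog0 c $major $minor"
```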
Installing a platform driver
1,428,446,776,000
I just downloaded and compiled tg3.ko kernel module. Where should I put it on a Debian system? There is one in /lib/modules/2.6.32-5-xen-amd64/kernel/drivers/net/tg3.ko already. Ideally, I would like to leave the original one where it is, and "bump the priority" for mine. So if mine doesn't get loaded or disappears, the original is still there as a fallback. The only way I know to do it is dpkg-divert, but I feel a slight shiver in my stomach when I use it. It is especially scary to do it on a server, with the network module. :)
Place your module in /lib/modules/2.6.32-5-xen-amd64/updates/ (make the directory if it doesn't exist) and re-run dpkg-reconfigure linux-image-2.6.32-5-xen-amd64 (or just run depmod if you know how). Check that the new driver is found with modprobe -l tg3. Read man 5 depmod.conf for more details.
proper way to overwrite debian kernel modules
1,428,446,776,000
I am trying to install a Cisco VPN client under Ubuntu 10.04 LTS. In the extracted folder I type: alex@alex-laptop:~/Downloads/cisco4.8/vpnclient$ sudo ./vpn_install and there is output of activity which appears to end unsuccessfully with: /home/alex/Downloads/cisco4.8/vpnclient/linuxcniapi.c:458: error: ‘struct sk_buff’ has no member named ‘nh’ make[2]: *** [/home/alex/Downloads/cisco4.8/vpnclient/linuxcniapi.o] Error 1 make[1]: *** [_module_/home/alex/Downloads/cisco4.8/vpnclient] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-2.6.32-33-generic' make: *** [default] Error 2 Failed to make module "cisco_ipsec.ko". How can I find the problems in this installation? What actions should I take to correct the installation procedure?
Your kernel is too recent for the Cisco VPN client. You'll need to downgrade your kernel to a 2.6.30 version or below. See the release notes.
Kernel module for Cisco VPN client doesn't compile under ubuntu 10.04 LTS?
1,428,446,776,000
I need to know what config and data files are pulled in to make the initrd.img-xxx when update-initramfs (mkinitramfs) is executed. I am having a video driver problem that I have narrowed down to the generation of the initrd.img-xxx after kernel updates. I only get low-resolution single-screen VESA when I should have two screens at 1080p. Debian 12 Bookworm, but it's an old install that has been upgraded from earlier versions of Debian. I still have a working fallback kernel from 2 months ago, so I set it as manually installed and held back from upgrades for now. I created a fresh installation of Debian on a spare drive with its own EFI boot sector and grub and it has no issues. I have, as best as I can query, the same graphics drivers and firmware installed in both installs, and I purged all of them from the old install and reinstalled with apt to get fresh configs if any. I also purged and reinstalled the kernel metapackage and initramfs tools. I have two identical kernel builds installed in both the old and new installs. I copied the initrd.img-123 from the new install to the old install. The old install boots correctly with correct graphics using the initrd.img-123 from the new install. The initrd.img files of the new and old installs are of different file types when inspected with file initrd.img-XXX, and they don't unpack the same way when I attempt to decompress them. The new install produces zstd files while the old system's appear as cpio. (The older fallback kernel also appears to have a cpio initrd.img but doesn't have problems.) I have mounted both root partitions and done diff -r on /boot and /etc and cleaned up the most obvious differences with apt-get, purging old packages and doing some manual housekeeping. But there is still a lot of noise due to heirloom configurations and settings, much of which I would like to keep if this doesn't drag on too long.
If you run update-initramfs with a "verbose" option, e.g. update-initramfs -u -v, it will display the name of every file it adds to the initramfs, and every hook script it executes.
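Related to the zstd-vs-cpio difference noted in the question: an initramfs image may be a bare cpio archive or a compressed one, and the outer format can be told apart by its magic bytes. A shell sketch (writing a sample zstd header to a temp file rather than reading a real image):

```shell
# zstd frames start with bytes 28 b5 2f fd; gzip with 1f 8b; "newc" cpio with ASCII "070701"
printf '\050\265\057\375' > /tmp/sample-img   # write a zstd magic as the sample
magic=$(head -c 4 /tmp/sample-img | od -An -tx1 | tr -d ' \n')
case $magic in
    28b52ffd)  echo "zstd-compressed" ;;
    1f8b*)     echo "gzip-compressed" ;;
    30373037*) echo "uncompressed cpio" ;;
    *)         echo "other: $magic" ;;
esac
```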
What files are pulled in by update-initramfs?
1,428,446,776,000
I'm on CentOS 8 stream. My IP is 2001:570:1:b86::12, and I ran this: ip -6 route add local 2001:570:1:b86::/64 dev lo and built and ran one of these, and I can now connect (locally) to my server on any of those 18,446,744,073,709,551,615 IP addresses and it all works. I can also connect from a remote machine to the existing 2001:570:1:b86::12 and it works fine over the internet as well. However, I cannot connect from remote to any other of my IPs... $ ping6 -c 1 2001:570:1:b86:1234:2345:3456:6789 PING6(56=40+8+8 bytes) 2001:8000:1ced:6d00:f507:cb71:703f:afe1 --> 2001:570:1:b86:1234:2345:3456:6789 --- 2001:570:1:b86:1234:2345:3456:6789 ping6 statistics --- 1 packets transmitted, 0 packets received, 100.0% packet loss And looking at this capture, my box doesn't appear to be replying to the ARP? # tcpdump -i eno1 -n -nn -vvv -XX proto 58 dropped privs to tcpdump tcpdump: listening on eno1, link-type EN10MB (Ethernet), capture size 262144 bytes 00:11:47.354817 IP6 (class 0xc0, hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::629c:9fff:fe86:c00 > ff02::1:ff56:6789: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001:570:1:b86:1234:2345:3456:6789 source link-address option (1), length 8 (1): 60:9c:9f:86:0c:00 0x0000: 609c 9f86 0c00 0x0000: 3333 ff56 6789 609c 9f86 0c00 86dd 6c00 33.Vg.`.......l. 0x0010: 0000 0020 3aff fe80 0000 0000 0000 629c ....:.........b. 0x0020: 9fff fe86 0c00 ff02 0000 0000 0000 0000 ................ 0x0030: 0001 ff56 6789 8700 f8a7 0000 0000 2001 ...Vg........... 0x0040: 0570 0001 0b86 1234 2345 3456 6789 0101 .p.....4#E4Vg... 0x0050: 609c 9f86 0c00 `..... 
00:11:48.389831 IP6 (class 0xc0, hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::629c:9fff:fe86:c00 > ff02::1:ff56:6789: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001:570:1:b86:1234:2345:3456:6789 source link-address option (1), length 8 (1): 60:9c:9f:86:0c:00 0x0000: 609c 9f86 0c00 0x0000: 3333 ff56 6789 609c 9f86 0c00 86dd 6c00 33.Vg.`.......l. 0x0010: 0000 0020 3aff fe80 0000 0000 0000 629c ....:.........b. 0x0020: 9fff fe86 0c00 ff02 0000 0000 0000 0000 ................ 0x0030: 0001 ff56 6789 8700 f8a7 0000 0000 2001 ...Vg........... 0x0040: 0570 0001 0b86 1234 2345 3456 6789 0101 .p.....4#E4Vg... 0x0050: 609c 9f86 0c00 `..... 00:11:49.386308 IP6 (class 0xc0, hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::629c:9fff:fe86:c00 > ff02::1:ff56:6789: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001:570:1:b86:1234:2345:3456:6789 source link-address option (1), length 8 (1): 60:9c:9f:86:0c:00 0x0000: 609c 9f86 0c00 0x0000: 3333 ff56 6789 609c 9f86 0c00 86dd 6c00 33.Vg.`.......l. 0x0010: 0000 0020 3aff fe80 0000 0000 0000 629c ....:.........b. 0x0020: 9fff fe86 0c00 ff02 0000 0000 0000 0000 ................ 0x0030: 0001 ff56 6789 8700 f8a7 0000 0000 2001 ...Vg........... 0x0040: 0570 0001 0b86 1234 2345 3456 6789 0101 .p.....4#E4Vg... 0x0050: 609c 9f86 0c00 `..... How do I tell the kernel to instruct the router that my box is the one that accepts those packets? If this is something to do with tproxy, I do have that (not sure what to do in my firewall to get those ARPs working though) # lsmod | grep tproxy nf_tproxy_ipv6 16384 0 Suggestions?
OP's method is possible since Linux 2.6.37, but requires additional settings. The IPv6 equivalent for ARP is NDP (which is using ICMPv6 multicast/unicast instead of ARP's dedicated L2 protocol with broadcast/unicast). proxy_ndp doesn't behave exactly as Proxy ARP here (it still requires per-IP settings, which we don't want here) and won't help. Instead a dedicated daemon called ndppd which listens to NDP requests to (usually) answer on behalf of other systems can manage this case. It must be set to not attempt to query a backend system before answering, since there is no such other system. Assuming here that: main interface is called eth0 system is alone in this /64 (except possibly an optional global address for its router). system isn't supposed to be set as router. See minor caveat. Enable EPEL (package epel-release), install ndppd, and use a configuration similar to this one in /etc/ndppd.conf: proxy eth0 { router no rule 2001:570:1:b86::/64 { static } } static makes the daemon answer immediately without querying any backend system, which is what has to be done for this case, since all addresses belong to (or more exactly here, all queries should reach) the host. Caveats: ndppd will generate a warning when starting because the netmask is large. This matters when the system's router wasn't explicitly set up for routing this /64 block via 2001:570:1:b86::12 or (much better) via the link local address on system's eth0 interface. If a network scan from remote is done on the block without the router properly set up, this router would allocate an NDP entry for each new address seen in this /64 scan. Small (home) routers not designed to be robust and evict old entries fast enough might not cope well with this and go into out of memory/high CPU use conditions (ie: DoS). since the actual host's IPv6 address is among the /64, querying this address will elicit two NDP answers: one by the kernel and one by ndppd. 
If the system is actually a router, this could cause intermittent routing issues. In that case, router yes could be considered. ndppd's configuration doesn't appear to let a smaller subnet override a larger network that includes it, as would happen in a routing table.
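The ff02::1:ff56:6789 destination seen in the question's capture is the solicited-node multicast address for the target: the low 24 bits of the unicast address appended to ff02::1:ff00:0/104, which is where NDP neighbor solicitations are sent. A Python sketch of the mapping:

```python
import ipaddress

def solicited_node(addr: str) -> str:
    """Solicited-node multicast address (RFC 4291): ff02::1:ff00:0/104 + low 24 bits."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(base | low24))

print(solicited_node("2001:570:1:b86:1234:2345:3456:6789"))  # ff02::1:ff56:6789
```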
I've "bound" an entire IPv6 /64 - now how do I get my kernel to respond to ARP to accept packets?
1,428,446,776,000
Where does qemu pull modules from when using a custom-built kernel (using -kernel)? Will the kernel try to find them in the guest FS or is the whole linux/qemu setup smart enough to realize that modules should be pulled from the custom-built kernel set up on the host?
-kernel only says where to load a kernel from, nothing else. It's like telling the bootloader in real hardware "load this kernel file". Once the guest kernel has booted, it is what makes the decision about where to look for modules (or even whether to look for modules at all). So the modules have to be in the guest filesystem. Personally I usually try to use a non-modules kernel if I'm doing development and booting a kernel with -kernel.
qemu - where are modules pulled from if using -kernel?
1,428,446,776,000
In a call trace we see:

WARNING: CPU: 3 PID: 123456 at xxxxxxxx
Modules linked in: cmac md4 cifs ccm ipt_MASQUERADE nf_nat_masquerade_ipv4 xt_conntrack ipt_REJECT nf_reject_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack ip6table_filter iptable_filter bridge stp llc cdc_acm i2c_mux_ltc4306 i2c_mux cdc_ether usbnet mii amd64_edac_mod edac_mce_amd edac_core xhci_pci gq(O) kvm_amd pcspkr sha3_generic xhci_hcd i2c_piix4 evdev acpi_cpufreq sch_fq_codel i2c_via_ipmi(O) autofs4
Call trace:
xxxxxxxx

What does "Modules linked in" mean? Does it list the modules related to (or called from) this call trace?
“Modules linked in” lists all the modules currently loaded, along with their taint flags if any. If modules have been loaded and then unloaded, the last unloaded module is listed too. If any modules are being loaded or unloaded, they are marked with + or - respectively. The list isn’t limited to modules involved in the trace. See the kernel bug-hunting guide for details.
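As a purely illustrative sketch (not an official tool), the list can be picked apart into module names and per-module taint flags. Using a shortened version of the line from the question, gq(O) and i2c_via_ipmi(O) carry the O flag, meaning an out-of-tree (externally built) module:

```python
import re

# Shortened "Modules linked in" line from the question's trace
line = ("Modules linked in: cmac md4 cifs gq(O) kvm_amd "
        "i2c_via_ipmi(O) autofs4")

modules = {}
for token in line.split(":", 1)[1].split():
    # A token looks like "name" or "name(FLAGS)", e.g. "gq(O)"
    m = re.fullmatch(r"(\w+)(?:\(([^)]*)\))?", token)
    if m:
        modules[m.group(1)] = m.group(2) or ""

print(modules["gq"])            # 'O' -> out-of-tree module
print(modules["i2c_via_ipmi"])  # 'O'
print(modules["kvm_amd"])       # ''  -> no taint flag
```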
What does 'modules linked in' mean in call trace?
1,428,446,776,000
I am trying to debug my kernel module. When I run it I get the following kernel warnings, but there seems to be no informative message like other warnings I've seen. Is it possible to get any useful info out of this? Some more info:

- The module is called firewall; it diverts tcp packets to a proxy server in user space, and the proxy then sends the tcp data it receives to the intended destination. This happens when processing an http response by simply receiving all the data on one socket and calling sendall on another.
- The warning doesn't happen when the whole response comes in one packet, but does when the http payload data is segmented into several tcp packets.
- The proxy is written in python. It seems strange to me that the warning says "python tainted". Can userspace applications cause kernel warnings?
- I tried only receiving a large file in the proxy but not sending it, and did not get any errors; the system didn't freeze at any point. The problem happens only on calling socket.sendall/socket.send; reducing the read buffer size and then sending smaller chunks causes the system to lock up faster.
Turning off both gso and tso with ethtool prevents the error messages, but the system still locks up after the same amount of time, making me wonder if the warnings are even tied to the lockup.

[16795.153478] ------------[ cut here ]------------
[16795.153489] WARNING: at /build/buildd/linux-3.2.0/net/core/dev.c:1970 skb_gso_segment+0x2e9/0x360()
[16795.153492] Hardware name: VirtualBox
[16795.153495] e1000: caps=(0x40014b89, 0x401b4b89) len=2948 data_len=0 ip_summed=0
[16795.153497] Modules linked in: firewall(O) vesafb vboxsf(O) snd_intel8x0 snd_ac97_codec ac97_bus snd_pcm snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq snd_timer snd_seq_device joydev psmouse snd soundcore serio_raw i2c_piix4 snd_page_alloc vboxguest(O) video bnep mac_hid rfcomm bluetooth parport_pc ppdev lp parport usbhid hid e1000 [last unloaded: firewall]
[16795.153529] Pid: 7644, comm: python Tainted: G W O 3.2.0-37-generic-pae #58-Ubuntu
[16795.153532] Call Trace:
[16795.153540] [<c105a822>] warn_slowpath_common+0x72/0xa0
[16795.153544] [<c14ad2b9>] ? skb_gso_segment+0x2e9/0x360
[16795.153548] [<c14ad2b9>] ? skb_gso_segment+0x2e9/0x360
[16795.153551] [<c105a8f3>] warn_slowpath_fmt+0x33/0x40
[16795.153555] [<c14ad2b9>] skb_gso_segment+0x2e9/0x360
[16795.153561] [<c14b01ce>] dev_hard_start_xmit+0xae/0x4c0
[16795.153568] [<f9a6f2fd>] ? divertPacket+0x7d/0xe0 [firewall]
[16795.153574] [<c14c8151>] sch_direct_xmit+0xb1/0x180
[16795.153578] [<f9a6f941>] ? hook_localout+0x71/0xe0 [firewall]
[16795.153582] [<c14b06d6>] dev_queue_xmit+0xf6/0x370
[16795.153586] [<c14c6459>] ? eth_header+0x29/0xc0
[16795.153590] [<c14b73f0>] neigh_resolve_output+0x100/0x1c0
[16795.153594] [<c14c6430>] ? eth_rebuild_header+0x80/0x80
[16795.153598] [<c14dec62>] ip_finish_output+0x152/0x2e0
[16795.153602] [<c14df75f>] ip_output+0xaf/0xc0
[16795.153605] [<c14dd340>] ? ip_forward_options+0x1d0/0x1d0
[16795.153609] [<c14deec0>] ip_local_out+0x20/0x30
[16795.153612] [<c14defee>] ip_queue_xmit+0x11e/0x3c0
[16795.153617] [<c10841c5>] ? getnstimeofday+0x55/0x120
[16795.153622] [<c14f4de7>] tcp_transmit_skb+0x2d7/0x4a0
[16795.153626] [<c14f5786>] tcp_write_xmit+0x146/0x3a0
[16795.153630] [<c14f5a4c>] __tcp_push_pending_frames+0x2c/0x90
[16795.153634] [<c14e7d2b>] tcp_sendmsg+0x71b/0xae0
[16795.153638] [<c104a33d>] ? update_curr+0x1ed/0x360
[16795.153642] [<c1509c23>] ? inet_recvmsg+0x73/0x90
[16795.153646] [<c1509ca0>] inet_sendmsg+0x60/0xa0
[16795.153650] [<c149ae27>] sock_sendmsg+0xf7/0x120
[16795.153655] [<c1044648>] ? ttwu_do_wakeup+0x28/0x130
[16795.153660] [<c1036a98>] ? default_spin_lock_flags+0x8/0x10
[16795.153667] [<c149ce7e>] sys_sendto+0x10e/0x150
[16795.153672] [<c1117e7f>] ? handle_pte_fault+0x28f/0x2c0
[16795.153675] [<c111809e>] ? handle_mm_fault+0x15e/0x2c0
[16795.153679] [<c15ab9c7>] ? do_page_fault+0x227/0x490
[16795.153681] [<c149cefb>] sys_send+0x3b/0x40
[16795.153684] [<c149d842>] sys_socketcall+0x162/0x2c0
[16795.153687] [<c15af55f>] sysenter_do_call+0x12/0x28
[16795.153689] ---[ end trace 3170256120cbbc8f ]---
Have you tried working backwards from the end of the stack trace with addr2line? For example, looking at the last line:

    sysenter_do_call+0x12/0x28

it tells us that the offset into the function is 0x12 and the function's length is 0x28:

    $ addr2line -e [path-to-kernel-module-with-issue] 0xc15af55f

and so on. gdb is another alternative for breaking the stack trace down into source lines.

However, I am not completely sure how you are arriving at a kernel panic, as all I am seeing in the log excerpt you provided is a warning. Does it result in a crash/kernel-panic message after the stack trace you posted?

As far as the stack trace itself: it has to do with generic segmentation offload and the skbuff not being happy with the ip_summed checksum. Disabling large/generic receive offload with

    $ ethtool -K [NIC] lro off
    $ ethtool -K [NIC] gro off

might be a possible workaround. Also, skipping the checksum check with skb->ip_summed = CHECKSUM_UNNECESSARY might solve this issue, depending on the purpose of the setup.
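The symbol+offset/length notation can be unpacked mechanically. A small helper (purely illustrative, not part of any kernel tooling) for frames such as sysenter_do_call+0x12/0x28:

```python
def parse_frame(sym):
    """Split 'name+0xOFF/0xLEN' into (name, offset, function_length)."""
    name, rest = sym.split("+")
    off, length = rest.split("/")
    return name, int(off, 16), int(length, 16)

name, off, length = parse_frame("sysenter_do_call+0x12/0x28")
print(name, off, length)  # sysenter_do_call 18 40

# The warning fired `off` bytes into a function that is `length` bytes long;
# addr2line -e <binary> (or gdb) can map that back to a source line.
```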
Understanding a dmesg kernel warning message
1,428,446,776,000
My question is mostly about hardware, specifically the Intel i5-2500K CPU, which Intel describes as having

    # of Cores    4
    # of Threads  4

Linux shows me 4 processors:

    $ cat /proc/cpuinfo | grep ^processor
    processor : 0
    processor : 1
    processor : 2
    processor : 3

Nevertheless, I've written a little kernel module that shows me 8 processors:

    $ cat show_cpus_mod.c
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/init.h>
    #include <linux/version.h>

    #define CLASS_NAME "show_cpus_mod"

    #define dbg( format, arg... )  do { if ( debug ) pr_info( CLASS_NAME ": %s: " format , __FUNCTION__ , ## arg ); } while ( 0 )
    #define err( format, arg... )  pr_err( CLASS_NAME ": " format, ## arg )
    #define info( format, arg... ) pr_info( CLASS_NAME ": " format, ## arg )
    #define warn( format, arg... ) pr_warn( CLASS_NAME ": " format, ## arg )

    MODULE_DESCRIPTION( "shows all cpus" );
    MODULE_VERSION( "0.1" );
    MODULE_LICENSE( "GPL" );
    MODULE_AUTHOR( "author <[email protected]>" );

    static int show_cpus_mod_init( void )
    {
        int cpu;

        info( "Start loading module show_cpus_mod.\n" );
        for_each_possible_cpu( cpu ) {
            info( "cpu = %d\n", cpu );
        }
        return 0;
    }

    static void show_cpus_mod_exit( void )
    {
        info( "Module show_cpus_mod unloaded\n" );
    }

    module_init( show_cpus_mod_init );
    module_exit( show_cpus_mod_exit );

Building:

    $ cat Makefile
    CURRENT = $(shell uname -r)
    KDIR    = /lib/modules/$(CURRENT)/build
    PWD     = $(shell pwd)
    TARGET  = show_cpus_mod
    obj-m  := $(TARGET).o

    default:
    	$(MAKE) -C $(KDIR) M=$(PWD) modules

    clean:
    	@rm -f *.o .*.cmd .*.flags *.mod.c *.order
    	@rm -f .*.*.cmd *.symvers *~ *.*~ TODO.*
    	@rm -fR .tmp*
    	@rm -rf .tmp_versions

Inserting:

    # make
    # cp show_cpus_mod.ko /lib/modules/4.14.0-kali3-amd64/
    # depmod
    # modprobe show_cpus_mod

syslog:

    localhost kernel: [67596.578805] show_cpus_mod: Start loading module show_cpus_mod.
    localhost kernel: [67596.578808] show_cpus_mod: cpu = 0
    localhost kernel: [67596.578809] show_cpus_mod: cpu = 1
    localhost kernel: [67596.578810] show_cpus_mod: cpu = 2
    localhost kernel: [67596.578811] show_cpus_mod: cpu = 3
    localhost kernel: [67596.578811] show_cpus_mod: cpu = 4
    localhost kernel: [67596.578812] show_cpus_mod: cpu = 5
    localhost kernel: [67596.578812] show_cpus_mod: cpu = 6
    localhost kernel: [67596.578813] show_cpus_mod: cpu = 7
    localhost kernel: [67607.725738] show_cpus_mod: Module show_cpus_mod unloaded

What am I missing in Intel's description? Why 8? Or what is wrong with my kernel module?
You should use for_each_online_cpu or for_each_present_cpu instead of for_each_possible_cpu. That will limit the output to CPUs which are really online or present, respectively.
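The difference is also visible from userspace: /sys/devices/system/cpu/{possible,present,online} hold CPU list strings such as 0-7 or 0-3. A small parser (the range values below are examples matching the question, not read from a live system):

```python
def expand_cpu_list(s):
    """Expand a sysfs CPU list like '0-3' or '0,2-3' into a list of ints."""
    cpus = []
    for part in s.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

# Hypothetical values for the i5-2500K in the question:
print(expand_cpu_list("0-7"))  # possible: [0, 1, 2, 3, 4, 5, 6, 7]
print(expand_cpu_list("0-3"))  # present/online: [0, 1, 2, 3]
```

"Possible" can exceed the physical CPU count because it reflects what the kernel reserved room for (e.g. for CPU hotplug), not what is actually installed.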
Kernel module shows me 8 processors instead of 4 for Intel i5-2500K
1,428,446,776,000
This is a fresh installation of Manjaro, installed using Manjaro Architect on an existing LVM partition inside a LUKS-encrypted partition and using a separate unencrypted boot partition. I've reinstalled several times, with the same result every time.

The problem manifests itself on the second and subsequent boots, in which systemd-modules-load.service will fail, although the boot will continue for a bit before it hangs without any further error messages. Fortunately, after a while I can switch to another tty to examine the issue, and what I've discovered so far is this:

- The boot hangs because Xorg fails to load, because the nvidia driver fails to load, because the nvidia kernel module has not been loaded.
- systemd-modules-load.service seems to fail because /usr/lib/modules does not exist. It succeeds on first boot, but after the first boot the directory is gone and subsequent boots will fail.
- I can recover by reinstalling the kernel (linux417) and nvidia driver (linux417-nvidia), which will work for exactly one boot before /usr/lib/modules disappears again.

So my questions are:

- What could possibly be causing this during the boot process?
- How can I proceed to find more clues?
System:    Kernel: 4.17.0-2-MANJARO x86_64 bits: 64 compiler: gcc v: 8.1.1 Desktop: Gnome 3.28.2 Distro: Manjaro Linux 17.1.10 Hakoila
Machine:   Type: Desktop Mobo: ASUSTeK model: P8Z77-V v: Rev 1.xx serial: <filter> BIOS: American Megatrends v: 0906 date: 03/26/2012
CPU:       Topology: Quad Core model: Intel Core i5-3570K bits: 64 type: MCP arch: Ivy Bridge rev: 9 L2 cache: 6144 KiB
           flags: lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 27288
           Speed: 1605 MHz min/max: 1600/3800 MHz Core speeds (MHz): 1: 1605 2: 1605 3: 1605 4: 1605
Graphics:  Card-1: NVIDIA GK104 [GeForce GTX 670] driver: nvidia v: 396.24 bus ID: 01:00.0
           Display: x11 server: N/A driver: nvidia resolution: <xdpyinfo missing>
           OpenGL: renderer: GeForce GTX 670/PCIe/SSE2 v: 4.6.0 NVIDIA 396.24 direct render: Yes
Audio:     Card-1: Intel 7 Series/C216 Family High Definition Audio driver: snd_hda_intel v: kernel bus ID: 00:1b.0
           Card-2: NVIDIA GK104 HDMI Audio driver: snd_hda_intel v: kernel bus ID: 01:00.1
           Sound Server: ALSA v: k4.17.0-2-MANJARO
Network:   Card-1: Intel 82579V Gigabit Network Connection driver: e1000e v: 3.2.6-k port: f040 bus ID: 00:19.0
           IF: eno1 state: up speed: 1000 Mbps duplex: full mac: <filter>
           Card-2: Qualcomm Atheros AR9485 Wireless Network Adapter driver: ath9k v: kernel bus ID: 06:00.0
           IF: wlp6s0 state: down mac: <filter>
Drives:    HDD Total Size: 588.83 GiB used: 306.90 GiB (52.1%)
           ID-1: /dev/sda vendor: Samsung model: SSD 830 Series size: 119.24 GiB
           ID-2: /dev/sdb vendor: Samsung model: SSD 850 EVO 500GB size: 465.76 GiB
Partition: ID-1: / size: 31.25 GiB used: 10.00 GiB (32.0%) fs: ext4 dev: /dev/dm-2
           ID-2: /boot size: 487.9 MiB used: 66.7 MiB (13.7%) fs: ext4 dev: /dev/sdb1
           ID-3: /home size: 410.58 GiB used: 296.83 GiB (72.3%) fs: ext4 dev: /dev/dm-3
           ID-4: swap-1 size: 16.00 GiB used: 0 KiB (0.0%) fs: swap dev: /dev/dm-1
Info:      Processes: 201 Uptime: 54m Memory: 15.62 GiB used: 2.46 GiB (15.7%) Init: systemd Compilers: gcc: 8.1.1 clang: 6.0.0 Shell: zsh v: 5.5.1 inxi: 3.0.10
Manjaro includes a kernel-alive package which provides a systemd service called linux-module-cleanup that is supposed to clean up old kernel modules, but apparently has a bug in it that causes it to just wipe the entire /usr/lib/modules directory... The solution is to just disable the service with systemctl disable linux-module-cleanup.service, or you could probably also just remove the kernel-alive package. Credit goes to jonathon on the Manjaro forums for suggesting this.
/usr/lib/modules getting deleted on boot